Data on the climate data factory are derived from the original model data available on the official IPCC data portal (ESGF). These data went through either light processing (Raw data) or advanced processing (Ready to use data). A summary of the processing is presented in this article; for more details, download the full technical report available on ResearchGate.

Short description of the processing chain

Data sourcing

  • Original data: searching for and downloading original model data from the ESGF archive

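The ESGF archive exposes a public RESTful search API for locating datasets by facets such as project, variable and experiment. As a minimal sketch (the endpoint node and the facet values below are illustrative, not the ones used in our production chain), a search query URL can be assembled like this:

```python
from urllib.parse import urlencode

# Illustrative ESGF search node; any ESGF index node exposes the same API.
ESGF_SEARCH = "https://esgf-node.llnl.gov/esg-search/search"

def build_esgf_query(project="CMIP5", variable="tas",
                     experiment="rcp85", limit=10):
    """Build an ESGF search URL for datasets matching the given facets."""
    params = {
        "project": project,        # e.g. CMIP5 or CMIP6
        "variable": variable,      # e.g. tas (near-surface air temperature)
        "experiment": experiment,  # e.g. rcp85
        "type": "Dataset",
        "format": "application/solr+json",
        "limit": limit,
    }
    return ESGF_SEARCH + "?" + urlencode(params)

print(build_esgf_query())
```

The JSON response returned by such a URL lists matching datasets and the download URLs of their NetCDF files.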
Spatial interpolation

  • Input quality control: checking integrity of original data
  • Remapping: interpolating data onto a common grid
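Because each climate model uses its own native grid, fields must be interpolated onto a common grid before they can be compared or combined. A minimal sketch of bilinear remapping for a single 2-D field, assuming regular, increasing latitude/longitude axes (production remapping tools handle curvilinear grids, masks and conservative schemes):

```python
import numpy as np

def remap_bilinear(field, src_lat, src_lon, dst_lat, dst_lon):
    """Bilinearly interpolate a (lat, lon) field onto a destination grid.

    Assumes both grids are regular and monotonically increasing.
    """
    # Fractional positions of the destination points in source-index space
    yi = np.interp(dst_lat, src_lat, np.arange(len(src_lat)))
    xi = np.interp(dst_lon, src_lon, np.arange(len(src_lon)))
    y0 = np.clip(np.floor(yi).astype(int), 0, len(src_lat) - 2)
    x0 = np.clip(np.floor(xi).astype(int), 0, len(src_lon) - 2)
    wy = (yi - y0)[:, None]        # weight toward the next latitude row
    wx = (xi - x0)[None, :]        # weight toward the next longitude column
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    return ((1 - wy) * (1 - wx) * field[Y0, X0]
            + (1 - wy) * wx * field[Y0, X0 + 1]
            + wy * (1 - wx) * field[Y0 + 1, X0]
            + wy * wx * field[Y0 + 1, X0 + 1])
```

For a field that varies linearly in latitude and longitude, this interpolation reproduces the exact values at the destination points.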

Adjustment

  • Bias-adjustment: adjusting variables against observations to correct systematic biases
  • Standardization: rewriting files according to climate community standards
  • Output quality control: checking bias-adjusted variables and metadata
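One widely used family of bias-adjustment methods is quantile mapping: each model value is mapped to the observed value occupying the same quantile, so the adjusted distribution matches the observed one. The sketch below is a minimal empirical version for illustration only; it is not a description of the exact method used in our chain, which is detailed in the technical report.

```python
import numpy as np

def quantile_map(model, model_hist, obs):
    """Empirical quantile mapping (minimal sketch).

    For each value in `model`, find its quantile in the historical
    model distribution `model_hist`, then return the observed value
    (`obs`) at that same quantile.
    """
    q = np.searchsorted(np.sort(model_hist), model) / len(model_hist)
    return np.quantile(obs, np.clip(q, 0.0, 1.0))
```

For a model series that is simply the observed series shifted by a constant bias, this mapping removes the shift (up to empirical quantile estimation error).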

Operational

  • File merging: merging the multiple 10-year data files into a single 150-year set
  • Spatial extraction: producing country- and city-level data sets instead of global ones
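Conceptually, these two operational steps amount to concatenating the decadal slices along the time axis and then selecting the grid cells inside a geographic bounding box. A minimal sketch with arrays shaped (time, lat, lon); the function names and box coordinates are illustrative:

```python
import numpy as np

def merge_time_slices(slices):
    """Concatenate decadal (time, lat, lon) arrays along the time axis."""
    return np.concatenate(slices, axis=0)

def extract_box(field, lat, lon, lat_bounds, lon_bounds):
    """Keep only the grid cells inside a lat/lon bounding box."""
    la = (lat >= lat_bounds[0]) & (lat <= lat_bounds[1])
    lo = (lon >= lon_bounds[0]) & (lon <= lon_bounds[1])
    return field[:, la][:, :, lo]
```

In practice the merging and extraction are done on NetCDF files with metadata-aware tools, but the array operations are the same.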

Figure 1. Summary of the data processing chain

The table below summarizes what the climate data factory provides compared with ESGF.


To learn more about our transparent processing chain, read the article "The data processing".
