The eddy covariance (EC) method is employed worldwide to measure greenhouse gas (GHG) and energy fluxes between ecosystems and the atmosphere. These unique data are used by global and regional networks to investigate the interaction between the atmosphere and ecosystems, calculate GHG budgets, and study the effects of climate change, management and disturbances on ecosystem functioning. The essential components of the EC method are a sonic anemometer and an infrared gas analyser collecting data at 10 to 20 Hz. Today, sonic anemometers and gas analysers with different designs and characteristics are available on the market. These options, together with the different data-processing strategies in common use, are a source of uncertainty that may affect comparability among sites. Fluxes are calculated from the covariance between vertical wind speed and gas concentration, so the performance and specifications of a single sensor do not always reflect the specifications and uncertainty of the final data. In addition, the measurements are unique: there are no equivalent measurements (even complex ones) that could be used for validation. A possible strategy to address these sources of uncertainty, especially within long-term research infrastructures (e.g., ICOS or NEON), is to standardize the technique at each step, from sensor selection to instrument setup and data processing (including both flux calculation and quality filtering). On the other hand, no sensor is perfect, i.e., free of potential biases and suited to all environmental conditions. In this synthesis study, the effect of standardization is analysed using data from 15 sites covering different climates and ecosystems, where a standard (ICOS) setup runs in parallel with different instrumentation. The data are then processed both by the individual station teams and centrally.
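The covariance calculation at the core of the method can be sketched as follows. This is a minimal illustration on synthetic 10 Hz data, not the ICOS processing chain (which also includes steps such as coordinate rotation, detrending, spectral corrections and quality filtering); the variable names and the simulated correlation are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12000  # 20 min of 10 Hz data

# Synthetic high-frequency series: vertical wind speed (m s-1) and a
# scalar concentration partially correlated with it (assumed values).
w = rng.normal(0.0, 0.3, n)
c = 15.0 + 0.5 * w + rng.normal(0.0, 0.2, n)

# Reynolds decomposition: subtract the averaging-period means,
# then average the product of the fluctuations.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)  # raw eddy flux = covariance of w and c
```

With these synthetic data the covariance recovers the imposed coupling (about 0.5 times the variance of `w`); in practice the raw covariance is only the starting point for the corrected flux.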
Our aim is to analyse the differences in the calculated fluxes (CO2, latent heat, and sensible heat) associated with 1) different instrumental setups and 2) different processing methods. Results show that the range of variability due to different instrumental setups, processing, and methods of correction and filtering of EC data is site dependent, and that both setup and processing play a role. Differences and variability were quantified using reduced major axis regression and mean absolute error, comparing fluxes from different setups (standardized vs non-standardized) and processing methods (standardized vs the principal investigators', PIs'). With ICOS processing, the effect of sensor standardization was 16 % for the CO2 flux (Fc), 11 % for latent heat (LE) and 7 % for sensible heat (H); with the PIs' processing it was, on average, 10 % for Fc, 19 % for LE and 5 % for H. The variability due to different processing methods was 9 % for Fc, 14 % for LE and 10 % for H with the standardized setup, and 17 % for Fc, 16 % for LE and 12 % for H with the non-standardized setup. In conclusion, given the complexity of the EC method and the number of steps involved (from sensor selection to setup, calculation, and processing), it is difficult to identify a single, main common factor explaining the variability and differences among sites. Standardization helps to minimize differences when small changes must be detected (in time and among sites), but it does not ensure the correctness of the absolute values. Proper storage of all raw data and complete metadata is, however, the only way to allow future reanalysis and the correct interpretation of differences in the data.

Key words: standardization, eddy covariance, flux measurements, variability, metadata.
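The two comparison metrics named above can be sketched as follows. This is a minimal implementation assuming the standard geometric-mean form of reduced major axis regression (slope = sign(r) · sd(y)/sd(x), line through the means); the function names are hypothetical, not from the study's code.

```python
import numpy as np

def rma_slope_intercept(x, y):
    """Reduced major axis (geometric mean) regression of y on x."""
    r = np.corrcoef(x, y)[0, 1]
    # Slope is the ratio of standard deviations, signed by the correlation
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    # The fitted line passes through the centroid of the data
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

def mean_absolute_error(x, y):
    """Mean absolute difference between paired flux estimates."""
    return np.mean(np.abs(y - x))

# Example: a perfectly proportional pair of flux series
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
slope, intercept = rma_slope_intercept(x, y)  # slope 2.0, intercept 0.0
mae = mean_absolute_error(x, y)               # 2.5
```

Unlike ordinary least squares, RMA treats both variables as subject to error, which is appropriate when comparing two flux estimates that are both uncertain.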
Topic : Theme 2: State of play in integrated approaches for advanced GHG emission estimates and the way forward to operational services.
Reference : T2-B21