Observational Perspectives from U.S. Climate Reference Network (USCRN) and Cooperative Observer Program (COOP) Network: Temperature and Precipitation Comparison

Leeper, R.D., J. Rennie and M.A. Palecki, 2015: Observational Perspectives from U.S. Climate Reference Network (USCRN) and Cooperative Observer Program (COOP) Network: Temperature and Precipitation Comparison. Journal of Atmospheric and Oceanic Technology, 32. https://doi.org/10.1175/JTECH-D-14-00172.1

The U.S. Cooperative Observer Program (COOP) network was formed in the early 1890s to provide daily observations of temperature and precipitation. However, manual observations from naturally aspirated temperature sensors and unshielded precipitation gauges often led to uncertainties in atmospheric measurements. Advancements in observational technology (ventilated temperature sensors, well-shielded precipitation gauges) and measurement techniques (automation and redundant sensors), which improve observation quality, were adopted by NOAA’s National Climatic Data Center (NCDC) in establishing the U.S. Climate Reference Network (USCRN). USCRN was designed to provide high-quality, continuous observations for monitoring long-term temperature and precipitation trends, and to provide an independent reference against which other networks can be compared. The purpose of this study is to evaluate how the diverging technological and operational choices of the USCRN and COOP programs impact temperature and precipitation observations. Naturally aspirated COOP sensors generally had warmer daily maximum (+0.48°C) and cooler daily minimum (−0.36°C) temperatures than USCRN, with considerable variability among stations. For precipitation, COOP reported slightly (1.5%) more precipitation overall, with network differences varying seasonally. Unshielded COOP gauges were sensitive to wind biases, which are enhanced in winter, when COOP observed 10.7% less precipitation than USCRN. Conversely, wetting factor and gauge evaporation, which dominate in summer, were sources of bias for USCRN, leading to wetter COOP observations over the warmer months. Inconsistencies in COOP observations (e.g., multiday observations, time shifts, recording errors) complicated the network comparison and led to unique bias profiles that evolved over time with changes in instrumentation and primary observer.
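
A paired-network bias of the kind reported above (e.g., the +0.48°C COOP warm bias in daily maxima) amounts to a mean daily difference over the days both records cover. A minimal sketch, with an illustrative function name and made-up values rather than data from the study:

```python
# Sketch of a paired-station comparison: difference the daily extremes
# from co-located COOP and USCRN records and average over common days.
# The station values below are invented for illustration.

def network_bias(coop, uscrn):
    """Mean daily difference (COOP - USCRN) over days present in both records."""
    common = coop.keys() & uscrn.keys()
    diffs = [coop[d] - uscrn[d] for d in sorted(common)]
    return sum(diffs) / len(diffs)

# Illustrative daily maximum temperatures (°C), keyed by date string.
coop_tmax  = {"2010-07-01": 31.2, "2010-07-02": 29.8, "2010-07-03": 30.5}
uscrn_tmax = {"2010-07-01": 30.7, "2010-07-02": 29.4, "2010-07-03": 30.0}

print(round(network_bias(coop_tmax, uscrn_tmax), 2))  # → 0.47 (positive: COOP warmer)
```

In the study itself the comparison is of course done per station pair and then summarized across the network, which is what produces the considerable station-to-station variability mentioned above.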

Study type: Validation, Inhomogeneities

Inhomogeneities metadata
Study type: Network comparison
Instrument type: Liquid in Glass thermometer, Thermistor
Screen type (including Wild screen): Multiplate screen, Stevenson screen
Screen class (including early screens): Stevenson screen, Multiplate screen
Analyzed: Temperature, Precipitation
Causes: Season, Wind speed, Observer, Temperature, Precipitation intensity, Instrumental error, Ground cover, Snow, Ground albedo
Additional measurements: Solar radiation, Surface wind speed, Surface infrared temperature, Relative humidity, Soil moisture and temperature at various depths
Observation type: Manual, Automatic
Period: 8 Years
No. of locations: 12
Temporal resolution: Annual, Monthly, Daily, Hourly
Validation metadata
Validation type: Comparison with data of another quality

Tags: MMTS

The influence of station density on climate data homogenization

Gubler, S., Hunziker, S., Begert, M., Croci-Maspoli, M., Konzelmann, T., Brönnimann, S., Schwierz, C., Oria, C. and Rosas, G., 2017: The influence of station density on climate data homogenization. Int. J. Climatol., 37: 4670–4683. doi: 10.1002/joc.5114.

Abstract. Relative homogenization methods assume that measurements of nearby stations experience similar climate signals and therefore rely on dense station networks with high temporal correlations. In developing countries such as Peru, however, networks often suffer from low station density. The aim of this study is to quantify the influence of network density on homogenization. To this end, the homogenization method HOMER was applied to an artificially thinned Swiss network.

Four homogenization experiments, reflecting different homogenization approaches, were examined. These approaches differ in the level of interaction of the homogenization operators with HOMER and in how metadata are applied. To evaluate the performance of HOMER in the sparse networks, a reference series was built by applying HOMER under the best possible conditions.

Applied in completely automatic mode, HOMER decreases the reliability of temperature records. Therefore, automatic use of HOMER is not recommended. If HOMER is applied in interactive mode, the reliability of temperature and precipitation data may be increased in sparse networks. However, breakpoints must be inserted conservatively. Information from metadata should be used only to determine the exact timing of statistically detected breaks. Insertion of additional breakpoints based solely on metadata may lead to harmful corrections due to the high noise in sparse networks.

Evaluating the impact of U.S. Historical Climatology Network homogenization using the U.S. Climate Reference Network

Hausfather, Z., K. Cowtan, M. J. Menne, and C. N. Williams Jr., 2016: Evaluating the impact of U.S. Historical Climatology Network homogenization using the U.S. Climate Reference Network. Geophys. Res. Lett., 43, 1695–1701, doi: 10.1002/2015GL067640.

Abstract. Numerous inhomogeneities including station moves, instrument changes, and time of observation changes in the U.S. Historical Climatology Network (USHCN) complicate the assessment of long-term temperature trends. Detection and correction of inhomogeneities in raw temperature records have been undertaken by NOAA and other groups using automated pairwise neighbor comparison approaches, but these have proven controversial due to the large trend impact of homogenization in the United States. The new U.S. Climate Reference Network (USCRN) provides a homogeneous set of surface temperature observations that can serve as an effective empirical test of adjustments to raw USHCN stations. By comparing nearby pairs of USHCN and USCRN stations, we find that adjustments make both trends and monthly anomalies from USHCN stations much more similar to those of neighboring USCRN stations for the period from 2004 to 2015 when the networks overlap. These results improve our confidence in the reliability of homogenized surface temperature records.
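
The trend comparison described above can be sketched as fitting a least-squares slope to the monthly anomalies of a USHCN station and its USCRN neighbour and differencing the two. All series and names below are synthetic placeholders, not data from the paper:

```python
# Minimal sketch of a paired-station trend comparison: a small trend gap
# indicates the adjusted USHCN series tracks its USCRN neighbour.

def ols_slope(y):
    """Least-squares slope of y against its index (units per time step)."""
    n = len(y)
    xm = (n - 1) / 2
    ym = sum(y) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

ushcn_adj = [0.00, 0.05, 0.11, 0.14, 0.21, 0.24]   # adjusted anomalies (°C), synthetic
uscrn     = [0.01, 0.06, 0.10, 0.15, 0.20, 0.25]   # neighbouring USCRN anomalies, synthetic

trend_gap = ols_slope(ushcn_adj) - ols_slope(uscrn)
print(abs(trend_gap) < 0.01)  # → True
```

Repeating this for raw versus adjusted USHCN series against the same USCRN neighbours is, in spirit, the empirical test of homogenization the paper performs.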

Benchmarking homogenization algorithms for monthly data

Venema, V., O. Mestre, E. Aguilar, I. Auer, J.A. Guijarro, P. Domonkos, G. Vertacnik, T. Szentimrey, P. Stepanek, P. Zahradnicek, J. Viarre, G. Müller-Westermeier, M. Lakatos, C.N. Williams, M.J. Menne, R. Lindau, D. Rasol, E. Rustemeier, K. Kolokythas, T. Marinova, L. Andresen, F. Acquaotta, S. Fratianni, S. Cheval, M. Klancar, M. Brunetti, Ch. Gruber, M. Prohom Duran, T. Likso, P. Esteban, and Th. Brandsma, 2012: Benchmarking homogenization algorithms for monthly data. Climate of the Past, 8, 89–115, doi: 10.5194/cp-8-89-2012.

Abstract. The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline, at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed with a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates, and (iii) traditional contingency skill scores. The metrics were computed both for the individual station series and for the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
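
Metric (i) above, the centered root mean square error, removes each series' mean before differencing, so a constant offset between the homogenized and true series does not count as error. A minimal sketch with synthetic series (in the actual benchmark the truth was only revealed after the submission deadline):

```python
# Sketch of the centered RMSE between a homogenized series and the known
# truth, as used to score contributions in the blind benchmark.

def centered_rmse(hom, truth):
    """RMSE after removing each series' mean, so constant offsets don't count."""
    hm = sum(hom) / len(hom)
    tm = sum(truth) / len(truth)
    sq = [((h - hm) - (t - tm)) ** 2 for h, t in zip(hom, truth)]
    return (sum(sq) / len(sq)) ** 0.5

truth       = [0.1, 0.3, 0.2, 0.5, 0.4]
homogenized = [0.2, 0.4, 0.2, 0.6, 0.5]   # close to the truth up to an offset

print(round(centered_rmse(homogenized, truth), 3))  # → 0.04
```

The same function applied to station series versus network-average regional series gives the two averaging scales mentioned above.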

Break detection of annual Swiss temperature series

Kuglitsch, F. G., R. Auchmann, R. Bleisch, S. Brönnimann, O. Martius, and M. Stewart, 2012: Break detection of annual Swiss temperature series. J. Geophys. Res., 117, D13105, doi: 10.1029/2012JD017729.

Abstract. Instrumental temperature series are often affected by artificial breaks (“break points”) due to, for example, changes in station location, land use, or instrumentation. The Swiss climate observation network offers a high number and density of stations, many long and relatively complete daily to sub-daily temperature series, and well-documented station histories (i.e., metadata). However, for many climate observation networks outside of Switzerland, detailed station histories are missing, incomplete, or inaccessible. To correct these records, reliable statistical break detection methods are necessary. Here, we apply three statistical break detection methods to high-quality Swiss temperature series and use the available metadata to assess the methods. Owing to the complex terrain in Switzerland, we are able to assess these methods under specific local conditions such as Foehn or crest situations. We find that the temperature series of all stations are affected by artificial breaks (on average, one break point per 48 years), with discrepancies in the abilities of the methods to detect breaks. However, by combining the three statistical methods, almost all of the detected break points are confirmed by metadata. In most cases, these break points are ascribed to a combination of factors in the station history.
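
A classical test in the family of statistical break detection methods discussed above is the standard normal homogeneity test (SNHT). The abstract does not name the three methods used, so the sketch below is purely illustrative: the SNHT statistic is maximized over candidate break positions in a single series with a synthetic +1°C shift.

```python
# Illustrative single-series break detection with the SNHT statistic:
# T(k) = k*z1^2 + (n-k)*z2^2, where z1 and z2 are the means of the
# standardized series before and after candidate break position k.

def snht(series):
    """Return (best break index, max T statistic) for a single series."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    z = [(x - mean) / sd for x in series]
    best_k, best_t = 0, 0.0
    for k in range(1, n):                    # candidate break after position k-1
        z1 = sum(z[:k]) / k
        z2 = sum(z[k:]) / (n - k)
        t = k * z1 ** 2 + (n - k) * z2 ** 2
        if t > best_t:
            best_k, best_t = k, t
    return best_k, best_t

series = [9.8, 10.1, 9.9, 10.0, 11.0, 11.2, 10.9, 11.1]  # shift inserted at index 4
k, t = snht(series)
print(k)  # → 4
```

Operational homogenization works on difference series against neighbouring stations rather than raw series, precisely so that the shared climate signal cancels and artificial breaks like the one above stand out.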