This chapter discusses aspects of the air quality monitoring networks concerned and their QA/QC activities in the light of the requirements for using air quality data in the health impact assessment (HIA) of air pollution with the AirQ software. Requirements for network design and "good practice in air quality monitoring" for HIA are described in WHO (1999). Results of the evaluation of network status against the WHO/ECEH requirements for HIA are summarised in Table 19.
As stated in Chapter 1, the aim of this report is to support those countries of the WHO European Region that are not EU Member States on their way towards harmonisation of air quality monitoring. International commitments to improve air quality require data of the same quality for evaluation, i.e. comparability of data. This, in turn, requires well-developed and harmonised QA/QC programmes, since the networks have established different methods for data acquisition, data validation and data reporting.
Although the selected networks are national ones (see Table 2), many of them are embedded in international programmes like EUROAIRNET, EMEP and GAW. This is another reason for comprehensive QA/QC programmes (EUROAIRNET 1999a, EUROAIRNET 1999b, EMEP 1995).
It is recommended to define the overall monitoring objectives of an air quality monitoring network thoroughly (WHO 1999). This should be the first step in the design of a network, and it has direct implications for the evaluation of a QA/QC programme.
Among the possible monitoring objectives, "population exposure and health impact assessment" is of specific interest to WHO. WHO (1999) discusses that the use of monitored and reported air quality data for HIA is sometimes restricted, especially where population exposure to air pollution is not explicitly addressed in the design of an air quality monitoring network.
From Table 4 it can be derived that about 75% of the networks include the objective "population exposure and HIA". As a first step, these networks might be identified as a target group for the WHO European Programme on Health Impact Assessment of Air Pollution (Table 19).
Table 19: Evaluation of Network Status Regarding Requirements for HIA according to WHO/ECEH (WHO 1999)
For health impact assessment, relevant air monitoring stations have to be selected, i.e. stations representative of areas where most of the population is exposed. A high share of exposure-relevant stations, located in the urban background and in suburban/residential areas, indicates a network's relevance for HIA (WHO 1997). In many networks, a surprisingly low number of stations is situated at these locations (see Table 5) compared to the total number of stations in the network (Table 19).
A detailed description of criteria for the classification of sites can be found in EUROAIRNET (1999a). The criteria developed herein are harmonised with the criteria of WHO (1997). This concerns station type (traffic, industrial, background), type of zone (urban, suburban, rural) and characterisation of zone (residential, commercial, industrial, agricultural, natural and combinations of these characterisations). The criteria for stations measuring natural background (remote stations) and rural background (regional stations) were adopted in EUROAIRNET (1999a) from requirements of EMEP (1996).
One important prerequisite for assessing the environmental health impacts of air pollutants is knowledge of the actual concentration levels prevailing over time. The use of aggregated pollution indices is not recommended for HIA, because the pollutants combined in such indices have divergent health effects (WHO 1997).
Table 20 analyses in a pollutant-specific way the percentage of monitoring stations (from Table 6) which are located at exposure-relevant sites.
Table 20: Pollutant-specific Evaluation of Monitoring Stations Regarding Requirements for HIA
In many networks, a high percentage of stations is not located at exposure-relevant sites. For NO2, for instance, only three networks have more than 50% of their stations at measurement sites suitable for exposure assessment of the broad population.
In an earlier investigation of the WHO CC (Mücke and Turowski 1995), the networks of BG, CZ 1, CZ 2, H, HR and SLO, among others, had already been considered. A comparison of the components measured in this report with those in the 1995 report indicates certain trends in the measuring programmes of the networks in question. New components (O3, BTX, PM10) were introduced into the measuring programme of BG. In the network of CZ 1 the number of monitoring stations was reduced for NO2 and increased for SO2 and TSP. The network of CZ 2 increased the number of monitoring stations for CO, SO2 and NO2; furthermore, the pollutants O3 and PM10 were introduced. No major changes were made in the network of H. The network of SLO increased the number of stations for CO, NO2 and O3 and introduced the monitoring of PM10.
The shift from monitoring TSP to monitoring PM10, which can already be noticed in some networks, will certainly continue over the coming years. This can be explained by the increasing knowledge about the more severe adverse health effects of the fine fraction of particulate matter (WHO 2000) and the entry into force of Council Directive 1999/30/EC of the European Communities, which also includes limit values for PM10 (EC 1999). Thus, an accelerated change from TSP measurement to PM10 measurement can be recommended in order to address this health-relevant component sufficiently. The accession countries to the European Union will be required to comply with EU legislation in the near future.
The variability of measurement methods, minimum averaging times and measurement frequencies found within the various networks (Table 7) limits the direct use of AirQ for HIA in those networks where insufficient data are available (Table 19). A health-based assessment of CO and NO2 requires one-hour averages. For the assessment of O3, the maximum one-hour and maximum eight-hour averages of a day are required. Daily (24-hour) averages are needed for SO2, TSP and PM10 (WHO 1999).
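As a minimal sketch, the daily O3 statistics required for HIA can be derived from a day of hourly values as follows. The hourly concentrations used here are hypothetical example values, not network data:

```python
# Sketch: deriving the HIA-relevant daily O3 statistics from hourly data.
# Hypothetical hourly concentrations (e.g. in ug/m3); a real application
# would use validated network data.

def daily_max_1h(hourly):
    """Maximum one-hour average of the day."""
    return max(hourly)

def daily_max_8h(hourly):
    """Maximum running eight-hour average of the day (24 hourly values)."""
    windows = [hourly[i:i + 8] for i in range(len(hourly) - 7)]
    return max(sum(w) / 8 for w in windows)

hourly_o3 = [30, 28, 25, 24, 26, 35, 48, 60, 72, 85, 95, 102,
             110, 115, 112, 105, 96, 84, 70, 58, 48, 40, 35, 32]

print(daily_max_1h(hourly_o3))              # highest hourly value
print(round(daily_max_8h(hourly_o3), 1))    # highest running 8-hour mean
```

Analogous daily 24-hour means would be computed for SO2, TSP and PM10, and simple hourly averages retained for CO and NO2.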
As a consequence, the sampling strategy of three to four samples per day, found in many networks using manual methods for CO, SO2, NO2 and O3, cannot fulfil the above criteria for HIA. Nor do these measurements provide a basis for detecting exceedances of the WHO guideline values for CO, NO2 and O3, which require a higher time resolution of measurement.
In some cases, estimations and dispersion modelling may help to improve the information available for HIA. Nevertheless, the introduction of automatic methods for CO, NO2 and O3 is essential for the validation and use of these models. Some data gaps for areas under surveillance could be filled even with manual methods, provided measurement campaigns were performed with the necessary measurement frequency.
At EU level, standardised reference methods for sampling, calibration and analysis are required for each pollutant (EC 1999). This provision will gain importance for the accession countries, as other methods are allowed only if they achieve results comparable to the reference method.
The number of documented on-site visits required to achieve reliable results strongly depends on the monitoring devices used in the networks. Nevertheless, recommendations for the frequencies of certain on-site checks are given in WHO (1999). Automatic monitors should be calibrated (span gas, zero gas) every 24 hours. Active sampling systems (pumps) for subsequent manual analysis should be checked and calibrated at every site visit. For this and the other site visit operations listed in Table 8, a weekly to monthly frequency is recommended, bearing in mind geographical constraints and the sufficient availability of qualified personnel. The review of Table 8 reveals gaps in some of these QA/QC activities of the networks.
Audits and intercalibrations are essential in a QA/QC system, especially for large networks. A frequency of at least once a year is recommended for audits. Intercalibrations should be performed every 3 to 6 months, depending on the network type, to establish a direct measurement traceability chain to primary standards (WHO 1999).
The EU Framework Directive (EC 1996) requires regular participation in QA/QC programmes. EU accession countries are invited to intercomparisons of automatic analysers conducted by the EU Joint Research Centre/European Reference Laboratory for Air Pollution (JRC/ERLAP) in Ispra, Italy, in October 2000.
Compared to the recommended frequencies of regular QA/QC operations, gaps in the QA/QC systems are apparent in most of the networks (Table 10). Problems in data comparability will arise in networks where little or no attention is paid to the conduct of intercomparisons. This is the case in about two thirds of all networks surveyed. At a minimum, the networks using manual methods should perform intercomparisons of the calibration standards used on a laboratory scale. Automatic networks should conduct intercomparisons with the complete measurement system at each site.
The involvement of subordinated branches and external organisations in on-site operations (Table 9) and in audits and intercalibrations (Table 11) clearly shows the need to include these institutions in a QA/QC programme, e.g. by means of a QA/QC handbook and training components (Table 12), under the responsibility of the central laboratory. Outsourcing audits provides the advantage of a control function; nevertheless, external organisations are involved in this task only to a small extent (Table 11).
Table 19 attempts a rough classification of the status of the networks' QA/QC activities by evaluating Tables 8, 10 and 12.
Indicators for the evaluation of the site visit functions (Table 8) are:
Indicators for the evaluation of audits and intercalibrations of the whole network (Table 10) are:
An indicator for the evaluation of training courses for subordinated branches and laboratories (Table 12) is:
As mentioned before, our evaluation is based on the requirements and recommendations of WHO (WHO 1997 and WHO 1999). The indicator for our evaluation of QA/QC measures is set as follows:
The status of the QA/QC measures of a network is in good compliance ("+") if not more than one of the criteria above is missed. "+/-" is given if not more than two of the criteria are missed, and "-" if more than two criteria are missed (Table 19).
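The rating rule just stated can be sketched as a small function. The thresholds follow the text; how the individual criteria are counted is left to the evaluator:

```python
# Sketch of the QA/QC compliance rating described above: the input is the
# number of evaluation criteria a network misses.

def qaqc_rating(criteria_missed):
    """Map the number of missed QA/QC criteria to the report's rating."""
    if criteria_missed <= 1:
        return "+"    # good compliance
    if criteria_missed <= 2:
        return "+/-"  # partial compliance
    return "-"        # more than two criteria missed

print(qaqc_rating(0))  # "+"
print(qaqc_rating(2))  # "+/-"
print(qaqc_rating(4))  # "-"
```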
From Table 13 it can be concluded that the intercomparison workshops of the WHO CC are of vital importance for the quantitative assessment of data in many networks. As the participating countries are not EU Member States, ERLAP has not been responsible for them. However, ERLAP has expanded its activities to the EU accession countries from the year 2000 onwards. Workshops of the WHO CC will still be required by the Member States of the WHO European Region.
Good accuracy and precision of data are the key to reliable and comparable results. This survey indicates that considerable effort should be made to define accuracy and precision levels as a data quality objective where they are not already in place. Even if the actual variations of the measured data are unknown and method-specific values remain to be defined, the broad differences in the set values, ranging from 3 to 25 % for precision (Table 15), indicate a large variation in the comparability of data among the air quality monitoring networks.
In EUROAIRNET, an accuracy and precision of < 10 %, and additionally a precision of < 2 ppb, are required as the overall uncertainty (EUROAIRNET 1999a). These data quality objectives are stricter than the criteria of EMEP and of EC (1999). EMEP (1996) requires an accuracy of < 10 % for SO2 and NO2, < 15 % for other components, and an overall uncertainty, combining sampling and chemical analysis, of 15 to 25 %. These uncertainties are regarded as sufficient to provide a valid basis for the validation of dispersion models of air pollutants. To meet the data quality objective "detection of exceedances of limit values", the Council Directive (EC 1999) specifies a combined accuracy and precision of 15 % for SO2 and NO2, and 25 % for PM and lead.
From the standpoint of HIA and the experience of our WHO CC intercomparison workshops, the EC criteria may be proposed as sufficient and reasonably achievable.
From Table 16 it can be derived that the data quality objective "capture rate" still needs to be introduced in many networks, especially those equipped with automatic methods. The minimum requirements should be the 50 % criterion for the 24-hour and annual averages and the 75 % criterion for the one-hour and eight-hour averages, as presented in WHO (1999).
Compared to EUROAIRNET, where an annual capture rate of > 90 % is proposed (EUROAIRNET 1999a), and to EC (1999) and EMEP (1996), which require > 90 %, the WHO criteria are less strict.
The appropriate format of reported data is the last prerequisite for HIA discussed in this status report. One-hour averages are needed for CO, NO2 and O3, an eight-hour average for O3, and 24-hour averages for SO2, TSP and PM10. For all components, annual arithmetic means and the 98th percentile should be reported (WHO 1999). Inconsistent data formats, found in most networks surveyed (Table 18), hinder the use of the AirQ calculation model proposed for HIA.
The last column of Table 19 shows a comparison of the networks with respect to the WHO requirements for data management (Tables 15, 16 and 18).
Indicators for the evaluation of the data quality objectives accuracy and precision (Table 15) are:
Indicators for the evaluation of the capture rates (Table 16) and reported formats (Table 18) are:
- 75% of the one-hour averages for CO
- 75% of the one-hour averages for NO2
- 75% of the one-hour averages for O3
- 75% of the eight-hour averages for O3
- 50% of the 24-hour averages for SO2
- 50% of the 24-hour averages for TSP
- 50% of the 24-hour averages for PM10.
Important note: a component that is not measured in the network is not considered in the above performance rating.
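The capture-rate criteria listed above can be sketched as a simple check. The pollutant/averaging keys are illustrative; the thresholds are those of WHO (1999):

```python
# Sketch of the WHO (1999) minimum capture-rate check: one-hour and
# eight-hour averages need at least 75 % valid data, 24-hour averages
# at least 50 %.

MIN_CAPTURE = {
    ("CO", "1h"): 0.75,
    ("NO2", "1h"): 0.75,
    ("O3", "1h"): 0.75,
    ("O3", "8h"): 0.75,
    ("SO2", "24h"): 0.50,
    ("TSP", "24h"): 0.50,
    ("PM10", "24h"): 0.50,
}

def capture_ok(pollutant, averaging, valid, expected):
    """True if the share of valid averages meets the WHO minimum.

    A component not measured in the network (no expected values) is
    excluded from the rating, as noted above.
    """
    if expected == 0:
        return None  # component not measured: not considered
    return valid / expected >= MIN_CAPTURE[(pollutant, averaging)]

print(capture_ok("NO2", "1h", 7000, 8760))  # ~80 % valid -> True
print(capture_ok("SO2", "24h", 150, 365))   # ~41 % valid -> False
```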
The criteria for our evaluation of the status of data management are set as follows:
Good compliance ("+") is given if both the data quality objectives and the capture rate/formats of data are fulfilled with good performance; "+/-" if one, and "-" if both of these criteria are not fulfilled with good performance (Table 19).