Atmospheric Environment (v.44, #25)

Editorial board (pp. i).

Validation of road vehicle and traffic emission models – A review and meta-analysis by Robin Smit; Leonidas Ntziachristos; Paul Boulter (pp. 2943-2953).
Road transport is often the main source of air pollution in urban areas, and there is an increasing need to estimate its contribution precisely so that pollution-reduction measures (e.g. emission standards, scrappage programs, traffic management, ITS) are designed and implemented appropriately. This paper presents a meta-analysis of 50 studies dealing with the validation of various types of traffic emission model, including ‘average speed’, ‘traffic situation’, ‘traffic variable’, ‘cycle variable’, and ‘modal’ models. The validation studies employ measurements in tunnels, ambient concentration measurements, remote sensing, laboratory tests, and mass-balance techniques. One major finding of the analysis is that several models are only partially validated or not validated at all. The mean prediction errors are generally within a factor of 1.3 of the observed values for CO2, within a factor of 2 for HC and NOx, and within a factor of 3 for CO and PM, although differences as high as a factor of 5 have been reported. A positive mean prediction error for NOx (i.e. overestimation) was established for all model types and practically all validation techniques. In the case of HC, model predictions have been moving from underestimation to overestimation since the 1980s. The large prediction error for PM may be associated with different PM definitions between models and observations (e.g. size, measurement principle, exhaust/non-exhaust contribution). Statistical analyses show that the mean prediction error is generally not significantly different (p < 0.05) when the data are categorised according to model type or validation technique. Thus, there is no conclusive evidence that more complex models systematically perform better in terms of prediction error than less complex models. In fact, less complex models appear to perform better for PM. Moreover, the choice of validation technique does not systematically affect the result, with the exception of a CO underprediction when the validation is based on ambient concentration measurements and inverse modelling. The analysis identified two vital elements currently lacking in traffic emission modelling: 1) guidance on allowable error margins for different applications/scales, and 2) estimates of prediction errors. It is recommended that current and future emission models incorporate the capability to quantify prediction errors, and that clear international guidelines are developed with respect to expected accuracy.
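The factor-based error criterion used in the abstract above can be made concrete with a short sketch. The helper names and the sample predicted/observed pairs below are hypothetical, purely for illustration:

```python
# Sketch of a "within a factor of N" prediction-error check, as used in
# model-validation meta-analyses. The data pairs are made up.

def within_factor(predicted, observed, n):
    """True if predicted is within a factor of n of observed."""
    ratio = predicted / observed
    return 1.0 / n <= ratio <= n

def mean_prediction_ratio(pairs):
    """Mean of predicted/observed ratios; > 1 indicates overestimation."""
    return sum(p / o for p, o in pairs) / len(pairs)

# Hypothetical NOx validation pairs (predicted, observed), arbitrary units
pairs = [(1.4, 1.0), (0.9, 1.0), (1.6, 1.0)]
print(mean_prediction_ratio(pairs))                    # > 1: overestimation
print(all(within_factor(p, o, 2) for p, o in pairs))   # within a factor of 2
```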

Keywords: Road traffic; Emission model; Accuracy; Validation; Error; Greenhouse gas


Prediction of the adsorption capability onto activated carbon of a large data set of chemicals by local lazy regression method by Beilei Lei; Yimeng Ma; Jiazhong Li; Huanxiang Liu; Xiaojun Yao; Paola Gramatica (pp. 2954-2960).
Accurate quantitative structure–property relationship (QSPR) models based on a large data set containing a total of 3483 organic compounds were developed to predict chemicals’ adsorption capability onto activated carbon in the gas phase. Both the global multiple linear regression (MLR) method and the local lazy regression (LLR) method were used to develop QSPR models. The results showed that LLR has prediction accuracy 10% higher than that of the MLR model. By applying the LLR method, the test set (787 compounds) was predicted with a Q2ext of 0.900 and a root mean square error (RMSE) of 0.129. An accurate model based on this large data set could be useful for predicting the adsorption properties of new compounds, since it covers a highly diverse structural space.
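The core idea of local lazy regression is to build a fresh local model around each query point instead of one global fit. A minimal sketch follows, using a distance-weighted neighbour average as a simple stand-in for the local fit; the data, descriptors, and k are illustrative, not the paper's 3483-compound set:

```python
# Minimal local-lazy-regression-style predictor: each query is answered
# from its k nearest neighbours in descriptor space. Illustrative only.
import math

def llr_predict(X, y, query, k=3):
    """Predict y at `query` from the k nearest training points.

    X: list of descriptor vectors, y: list of responses. Uses an
    inverse-distance-weighted average of the neighbours' responses.
    """
    dists = [(math.dist(x, query), yi) for x, yi in zip(X, y)]
    dists.sort(key=lambda t: t[0])
    nearest = dists[:k]
    # Inverse-distance weights; a small epsilon guards exact matches.
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * yi for w, (_, yi) in zip(weights, nearest)) / sum(weights)

X = [[0.0], [1.0], [2.0], [10.0]]   # hypothetical 1-D descriptors
y = [0.0, 1.0, 2.0, 10.0]
print(llr_predict(X, y, [1.5], k=2))
```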

Keywords: Activated carbon adsorption capability; Quantitative structure–property relationship (QSPR); Genetic algorithm (GA); Local lazy regression (LLR)


Testing DayCent and DNDC model simulations of N2O fluxes and assessing the impacts of climate change on the gas flux and biomass production from a humid pasture by M. Abdalla; M. Jones; J. Yeluripati; P. Smith; J. Burke; M. Williams (pp. 2961-2970).
Simulation models are one of the approaches used to investigate greenhouse gas emissions and potential effects of global warming on terrestrial ecosystems. DayCent, the daily time-step version of the CENTURY biogeochemical model, and DNDC (the DeNitrification–DeComposition model) were tested against observed nitrous oxide flux data from a field experiment on cut and extensively grazed pasture located at the Teagasc Oak Park Research Centre, Co. Carlow, Ireland. The soil was classified as a free-draining sandy clay loam with a pH of 7.3 and mean organic carbon and nitrogen contents at 0–20 cm of 38 and 4.4 g kg⁻¹ dry soil, respectively. The aims of this study were to validate the DayCent and DNDC models for estimating N2O emissions from fertilized humid pasture, and to investigate the impacts of future climate change on N2O fluxes and biomass production. Measurements of N2O flux were carried out from November 2003 to November 2004 using static chambers. Three climate scenarios were investigated: a baseline of measured climatic data from the weather station at Carlow, and high and low temperature sensitivity scenarios predicted by the Community Climate Change Consortium For Ireland (C4I) based on the Hadley Centre Global Climate Model (HadCM3) and the Intergovernmental Panel on Climate Change (IPCC) A1B emission scenario. DayCent predicted cumulative N2O flux and biomass production under fertilized grass with relative deviations of +38% and −23% from the measured values, respectively. However, DayCent performed poorly for the control plots, with a flux relative deviation of −57% from the measured value. Comparison between simulated and measured fluxes suggests that both the DayCent model’s response to N fertilizer and its simulated background flux need to be adjusted. DNDC overestimated the measured flux, with relative deviations of +132% and +258%, due to overestimation of the effects of SOC. DayCent, though requiring some calibration for Irish conditions, simulated N2O fluxes more consistently than did DNDC. We used DayCent to estimate future fluxes of N2O from this field. No significant differences were found between cumulative N2O fluxes under climate change and baseline conditions. However, above-ground grass biomass increased significantly from the baseline of 33 t ha⁻¹ to 45 (+34%) and 50 (+48%) t dry matter ha⁻¹ for the low and high temperature sensitivity scenarios, respectively. The increase in above-ground grass biomass was mainly due to the combined effects of higher precipitation, temperature and CO2 concentration. Our results indicate that because of the high N demand of the vigorously growing grass, cumulative N2O flux is not projected to increase significantly under climate change unless more N is applied; this held for both the high and low temperature sensitivity scenarios.
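The relative deviations quoted above follow the usual definition of (simulated − measured) / measured, expressed as a percentage. A one-line sketch (function name ours, values illustrative):

```python
# Relative-deviation metric used to compare simulated and measured fluxes.
def relative_deviation(simulated, measured):
    """Percent deviation of a simulation from the measurement."""
    return (simulated - measured) / measured * 100.0

# e.g. a simulated flux of 1.38 units against a measured 1.0 deviates by +38%
print(relative_deviation(1.38, 1.0))
```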

Keywords: DayCent; DNDC; Nitrous oxide; Pasture


Measurements of nitrogen oxides from Hudson Bay: Implications for NOx release from snow and ice covered surfaces by S.J. Moller; J.D. Lee; R. Commane; P. Edwards; D.E. Heard; J. Hopkins; T. Ingham; A.S. Mahajan; H. Oetjen; J. Plane; H. Roscoe; A.C. Lewis; L.J. Carpenter (pp. 2971-2979).
Measurements of NO and NO2 were made at a surface site (55.28°N, 77.77°W) near Kuujjuarapik, Canada during February and March 2008. NOx mixing ratios ranged from near zero to 350 pptv, with emission from snow believed to be the dominant source. The amount of NOx was observed to depend on the terrain over which the air mass had passed before reaching the measurement site. The 24 h average NOx emission rates necessary to reproduce the observations were calculated using a zero-dimensional box model, giving rates ranging from 6.9 × 10⁸ to 1.2 × 10⁹ molecule cm⁻² s⁻¹ for trajectories over land and from 3.8 × 10⁸ to 6.6 × 10⁸ molecule cm⁻² s⁻¹ for trajectories over sea ice. These emissions are higher than those suggested by previous studies and indicate the importance of lower-latitude snowpack emissions. The difference in emission rate for the two types of snow cover shows the importance of snow depth and underlying surface type for the emission potential of snow-covered areas.
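A zero-dimensional box model of the kind mentioned above treats the boundary layer as a single well-mixed box with a surface source and a first-order loss. The sketch below is a generic illustration of that idea; the mixing height, loss rate, and integration settings are assumptions, not the study's values:

```python
# Hedged sketch of a zero-D box model: a well-mixed layer of height h
# receives a constant surface flux F and loses the species with a
# first-order rate k. Parameters below are illustrative assumptions.

def box_model(F, h_cm, k_loss, c0=0.0, dt=1.0, steps=3600):
    """Integrate dC/dt = F/h - k*C with forward Euler.

    F: surface flux (molecule cm^-2 s^-1), h_cm: mixing height (cm),
    k_loss: first-order loss rate (s^-1). Returns C (molecule cm^-3).
    """
    c = c0
    for _ in range(steps):
        c += (F / h_cm - k_loss * c) * dt
    return c

# With a loss term, C relaxes toward the steady state F / (h * k)
c = box_model(F=6.9e8, h_cm=2.0e4, k_loss=1e-3, steps=20000)
print(c)
```

Inverting this relation (choosing F so that modelled C matches observed mixing ratios) is the kind of step by which an emission rate can be inferred from concentration measurements.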

Keywords: NOx; Snow–atmosphere interactions; COBRA; NOx emission; Box modelling; Photochemistry


Variability of atmospheric dust loading over the central Tibetan Plateau based on ice core glaciochemistry by Shichang Kang; Yulan Zhang; Yongjun Zhang; Bjorn Grigholm; Susan Kaspari; Dahe Qin; Jiawen Ren; Paul Mayewski (pp. 2980-2989).
A Mt. Geladaindong (GL) ice core spanning the period 1940–2005 AD was recovered from the central Tibetan Plateau (TP). High-resolution major ion (Na⁺, K⁺, Ca²⁺, Mg²⁺, Cl⁻, SO₄²⁻, NO₃⁻) time series are used to investigate variations in atmospheric dust loading through time. The crustal-source ions vary seasonally, with peaks in dust concentrations occurring during winter and spring, consistent with atmospheric dust observations at local meteorological stations. However, both similarities and dissimilarities appear between the decadal variation of atmospheric dust in the GL core and dust observation records from meteorological stations, which can be attributed to local environmental effects at the stations. This paper compares the 1980s and 1970s as case periods for low and high atmospheric dust loading, respectively: two periods reflecting shifts in spring atmospheric circulation (a weakening of zonal and meridional winds) from the 1970s (a period of enhanced dust aerosol transport to the central TP) to the 1980s (a period of diminished dust aerosol transport to the central TP), with a particularly significant decrease in meridional wind speeds in the 1980s. GL ice core dust proxies (Ca²⁺ and K⁺) are correlated with Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) data in spring over the TP and in northwestern China (especially for K⁺). Thus the variability of crustal ions in the central TP ice core provides a proxy for reconstructing a history of atmospheric dust loading not only on the TP, but also in northwestern China.
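The proxy–AI comparison above rests on a simple correlation of two annual series. A minimal Pearson correlation sketch, with made-up series standing in for the ice-core proxy and the TOMS AI record:

```python
# Pearson correlation of two equal-length series, as used to relate
# ice-core dust proxies (e.g. Ca2+) to Aerosol Index data. The example
# series below are invented for illustration.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

proxy = [1.2, 0.8, 1.5, 0.6, 1.1]   # hypothetical proxy anomalies
ai = [0.9, 0.7, 1.3, 0.5, 1.0]      # hypothetical AI anomalies
print(pearson_r(proxy, ai))
```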

Keywords: Atmospheric dust; Glaciochemistry; Atmospheric circulation; Aerosol Index; Tibetan Plateau


Can secondary organic aerosol formed in an atmospheric simulation chamber continuously age? by Li Qi; Shunsuke Nakao; Quentin Malloy; Bethany Warren; David R. Cocker III (pp. 2990-2996).
This work investigates the oxidative aging of SOA derived from selected aromatic (m-xylene) and biogenic (α-pinene) precursors within an environmental chamber. Simultaneous measurements of SOA hygroscopicity, volatility, particle density, and elemental chemical composition (C:O:H) reveal only slight particle aging during the first 16 h of formation. The chemical aging observed is consistent with SOA that is decreasing in volatility and increasing in O/C and hydrophilicity. Even after aging, the O/C (0.25 and 0.40 for α-pinene and m-xylene oxidation, respectively) was below the OOAI and OOAII ambient fractions measured by high-resolution aerosol mass spectra coupled with Positive Matrix Factorization (PMF). The rate of increase in O/C does not appear sufficient to achieve OOAI or OOAII levels of oxygenation within the duration of a typical chamber experiment. No chemical aging was observed for SOA during dark α-pinene ozonolysis with a hydroxyl radical scavenger present. This finding is consistent with observations by other groups that SOA from this system is composed of first-generation products.

Keywords: Secondary organic aerosol; Aging process; Hygroscopicity; Volatility; Elemental analysis


Air pollution impacts of speed limitation measures in large cities: The need for improving traffic data in a metropolitan area by José M. Baldasano; María Gonçalves; Albert Soret; Pedro Jiménez-Guerrero (pp. 2997-3006).
Assessing the effects of air quality management strategies in urban areas is a major concern worldwide because of the large health impacts caused by exposure to air pollution. This work analyses the changes in urban air quality due to the introduction of an 80 km h⁻¹ maximum speed limit on the motorways of a large city, using a novel methodology that combines traffic data assimilation and modelling systems implemented in a supercomputing facility. Although the methodology was developed generically and can be extrapolated to any large city or megacity, the case study of Barcelona is presented here. Hourly simulations cover the entire year 2008 (when the 80 km h⁻¹ limit was introduced) versus the traffic conditions of 2007. The data were assimilated in an emission model that considers hourly variable speeds and hourly traffic intensity in the affected area, taken from long-term measurement campaigns for the aforementioned years; this also allows the effect of traffic congestion to be taken into account. Overall, emissions are reduced by up to 4%; locally, however, the reduction has an important impact in the areas adjacent to the roadways, reaching 11%. The speed limitation thus yields improvements in air quality levels (5–7%) for primary pollutants over the area, directly improving the welfare of 1.35 million inhabitants (over 41% of the population of the Metropolitan Area) and affecting 3.29 million dwellers who potentially benefit from this air quality management strategy (reducing mortality rates in the area by 0.6%).

Keywords: Urban air quality; Air quality modelling; Traffic emissions; Atmospheric management; Barcelona Metropolitan Area


Trends in anthropogenic mercury emissions estimated for South Africa during 2000–2006 by K. Elizabeth Masekoameng; Joy Leaner; J. Dabrowski (pp. 3007-3014).
Recent studies suggest an increase in mercury (Hg) emissions to the global environment, particularly as a result of anthropogenic activities. This has prompted many countries to complete Hg emission inventories based on country-specific Hg sources. In this study, information on annual coal consumption and Hg-containing commodities produced in South Africa was used to estimate Hg emissions during 2000–2006. The UNEP toolkit was then used to estimate the amount of Hg released to air and to general waste from each activity, using South Africa-specific and toolkit-based emission factors. For both atmospheric and solid waste releases, coal-fired power plants were estimated to be the largest contributors of Hg emissions, viz. 27.1 to 38.9 tonnes y⁻¹ to air and 5.8 to 7.4 tonnes y⁻¹ to waste. Cement production was estimated to be the second largest atmospheric Hg contributor (2.2–3.9 tonnes y⁻¹), while coal gasification was estimated to be the second largest contributor in terms of general waste releases (2.9–4.2 tonnes y⁻¹). Overall, total atmospheric Hg emissions from all activities increased from ca. 34 tonnes in 2000 to 50 tonnes in 2006, with some fluctuations between years. Similarly, total Hg emissions released to general waste were estimated at 9 tonnes in 2000, rising to 12 tonnes in 2006.
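Toolkit-style inventories of this kind multiply activity data by pathway-specific emission factors and sum over sources. The sketch below illustrates the structure only; the activity level and emission factors are hypothetical placeholders, not South African statistics or UNEP toolkit values:

```python
# Activity-data x emission-factor inventory sketch. All numbers are
# hypothetical placeholders for illustration.

def hg_emissions(activities, emission_factors):
    """Total Hg released per pathway, in tonnes.

    activities: {source: annual activity (e.g. tonnes coal burned)}
    emission_factors: {source: {pathway: g Hg per unit activity}}
    """
    totals = {}
    for source, level in activities.items():
        for pathway, ef in emission_factors[source].items():
            totals[pathway] = totals.get(pathway, 0.0) + level * ef / 1e6
    return totals

activities = {"coal_power": 1.0e8}                 # tonnes coal, hypothetical
efs = {"coal_power": {"air": 0.3, "waste": 0.06}}  # g Hg/tonne, hypothetical
print(hg_emissions(activities, efs))
```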

Keywords: Mercury; Emissions; Coal combustion


An enhanced PM2.5 air quality forecast model based on nonlinear regression and back-trajectory concentrations by W. Geoffrey Cobourn (pp. 3015-3023).
An enhanced PM2.5 air quality forecast model based on nonlinear regression (NLR) and back-trajectory concentrations has been developed for use in the Louisville, Kentucky metropolitan area. The model is designed for use in the warm season, from May through September, when PM2.5 air quality is more likely to be critical for human health. It consists of a basic NLR model, developed for use with an automated air quality forecast system, plus an additional parameter based on upwind PM2.5 concentration, called PM24. The PM24 parameter is determined manually by synthesizing backward air trajectory and regional air quality information to compute 24-h back-trajectory concentrations, and may be used by air quality forecasters to adjust the forecast provided by the automated system. In this study of the 2007 and 2008 forecast seasons, the enhanced model performed well using forecasted meteorological data and PM24 as input. It was compared with three alternatives: the basic NLR model, the basic NLR model with a persistence parameter added, and the NLR model with both persistence and PM24. The two models that included PM24 were of comparable accuracy, and had lower mean absolute errors and higher rates of detecting unhealthy PM2.5 concentrations than the models without back-trajectory concentrations.
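The enhancement described above amounts to combining an automated regression forecast with an upwind back-trajectory concentration. The sketch below illustrates that combination generically; the averaging of trajectory samples, the blend weight, and all numbers are assumptions, not the paper's model:

```python
# Sketch of augmenting a base forecast with an upwind PM24 term.
# The blending scheme and values are hypothetical illustrations.

def pm24(trajectory_hours):
    """24-h back-trajectory concentration: mean of hourly PM2.5 values
    sampled along the backward trajectory (ug m^-3)."""
    return sum(trajectory_hours) / len(trajectory_hours)

def enhanced_forecast(base_forecast, pm24_value, weight=0.4):
    """Blend the automated NLR forecast with the upwind PM24 signal.

    weight is an assumed mixing coefficient, not a fitted parameter.
    """
    return (1 - weight) * base_forecast + weight * pm24_value

upwind = [22.0, 25.0, 28.0]   # hypothetical hourly upwind PM2.5 samples
print(enhanced_forecast(18.0, pm24(upwind)))
```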

Keywords: Air pollution; Air quality forecast; Particulate matter; PM2.5; Nonlinear regression; Trajectory analysis; Empirical air quality model
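The model structure described, a nonlinear regression on forecast meteorology plus an additive upwind-concentration term, can be sketched as below. The functional form, predictors, and synthetic data are assumptions for illustration only, not the paper's fitted model:

```python
# Illustrative sketch of an NLR-plus-PM24 forecast model. The functional form
# and synthetic data are assumptions, not the fitted model from the paper.
import numpy as np
from scipy.optimize import curve_fit

def nlr_model(X, a, b, c, d):
    temp, wind, pm24 = X
    # hypothetical form: exponential in temperature, damped by wind speed,
    # plus a linear term in the 24-h back-trajectory concentration PM24
    return a * np.exp(b * temp) / (1.0 + c * wind) + d * pm24

rng = np.random.default_rng(0)
temp = rng.uniform(15, 35, 200)   # forecast daily max temperature, deg C
wind = rng.uniform(0.5, 8, 200)   # forecast wind speed, m/s
pm24 = rng.uniform(5, 40, 200)    # upwind 24-h back-trajectory PM2.5, ug/m3
pm25 = (5 * np.exp(0.05 * temp) / (1 + 0.3 * wind)
        + 0.5 * pm24 + rng.normal(0, 1, 200))   # synthetic "observations"

popt, _ = curve_fit(nlr_model, (temp, wind, pm24), pm25,
                    p0=[1.0, 0.01, 0.1, 0.1], maxfev=10000)
```

Dropping the `d * pm24` term recovers the basic NLR model, which is how the comparison between the basic and enhanced models is set up.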


Smoke emissions from biomass burning in a Mediterranean shrubland by C.A. Alves; C. Gonçalves; C.A. Pio; F. Mirante; A. Caseiro; L. Tarelho; M.C. Freitas; D.X. Viegas (pp. 3024-3033).
Gaseous and particulate samples were collected from the smoke of prescribed burnings of a shrub-dominated forest with some pine trees in Lousã Mountain, Portugal, in May 2008. From the gas-phase Fourier transform infrared (FTIR) measurements, an average modified combustion efficiency of 0.99 was obtained, suggesting a very strong predominance of flaming combustion. Gaseous compounds whose emissions are promoted in fresh plumes and during the flaming phase, such as CO2, acetylene and propene, produced emission factors higher than those proposed for savannah and tropical forest fires. Emission factors of species favoured by the smouldering phase (e.g. CO and CH4) were below the values reported in the literature for biomass burning in other ecosystems. The chemical composition of fine (PM2.5) and coarse (PM2.5–10) particles was determined using ion chromatography (water-soluble ions), instrumental neutron activation analysis (trace elements) and a thermal–optical transmission technique (organic and elemental carbon). Approximately 50% of the particulate mass was carbonaceous, with a clear dominance of organic carbon. Organic carbon-to-elemental carbon ratios of up to 300, or even higher, measured in the present study largely exceeded those reported for fires in savannah and tropical forests. More than 30 trace elements and ions were determined in the smoke aerosols, together contributing on average about 7% of the PM10 mass.

Keywords: Forest fires; Greenhouse gas emissions; Particulate emissions; Organic and elemental carbon; Chemical elements; Water-soluble ions
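The modified combustion efficiency quoted above is conventionally computed from excess (above-background) mixing ratios as MCE = ΔCO2/(ΔCO2 + ΔCO); values near 1 indicate flaming combustion. A one-line sketch with illustrative numbers:

```python
# Modified combustion efficiency from excess (above-background) mixing ratios:
# MCE = dCO2 / (dCO2 + dCO). Values near 1 indicate flaming combustion;
# values around 0.8 indicate smouldering. Inputs below are illustrative.
def mce(d_co2, d_co):
    return d_co2 / (d_co2 + d_co)

flaming = mce(d_co2=396.0, d_co=4.0)     # -> 0.99, flaming-dominated
smoulder = mce(d_co2=80.0, d_co=20.0)    # -> 0.80, smouldering-dominated
```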


Measurements of aerosol optical properties in central Tokyo during summertime using cavity ring-down spectroscopy: Comparison with conventional techniques by Tomoki Nakayama; Rie Hagino; Yutaka Matsumi; Yosuke Sakamoto; Masahiro Kawasaki; Akihiro Yamazaki; Akihiro Uchiyama; Rei Kudo; Nobuhiro Moteki; Yutaka Kondo; Kenichi Tonokura (pp. 3034-3042).
A highly sensitive cavity ring-down spectrometer (CRDS) was used to monitor the aerosol extinction coefficient at 532 nm. The performance of the spectrometer was evaluated using measurements of nearly monodisperse polystyrene particles with diameters between 150 and 500 nm. By comparing the observed results with those determined using Mie theory, the accuracy of the CRDS instrument was determined to be >97%, while the upper limit for the precision of the instrument was estimated to be 0.6–3.5% (typically 2%), depending on the particle number concentration, which was in the range of 30–2300 particles cm−3. Simultaneous measurements of the extinction (bext), scattering (bsca) and absorption (babs) coefficients of ambient aerosols were performed in central Tokyo from 14 August to 2 September 2007 using the CRDS instrument, two nephelometers and a particle/soot absorption photometer (PSAP), respectively. The value of bext measured using the CRDS instrument was compared with the sum of the bsca and babs values measured with a nephelometer and a PSAP, respectively. Good agreement between the bext and bsca + babs values was obtained except for data on days when high ozone mixing ratios (>130 ppbv) were observed. During the high-O3 days, the values for bsca + babs were ∼7% larger than the value for bext, possibly because the value for babs measured by the PSAP was overestimated due to interference from coexisting non-absorbing aerosols such as secondary organic aerosols.

Keywords: Aerosol optical property; Extinction coefficient; Cavity ring-down spectroscopy (CRDS); Nephelometer; Particle soot absorption photometer (PSAP)
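In CRDS, extinction is conventionally derived from the ring-down times measured with and without aerosol in the cavity. A sketch of that standard relation with illustrative numbers (the paper's instrument and calibration details are not reproduced here):

```python
# Standard CRDS extinction relation (illustrative numbers, not the paper's
# calibration): b_ext = (R_L / c) * (1/tau - 1/tau0), where tau and tau0 are
# ring-down times with and without aerosol in the cavity, and R_L is the ratio
# of cavity length to the length occupied by the sample.
C_LIGHT = 299_792_458.0   # speed of light, m/s

def b_ext_m1(tau_s, tau0_s, r_l=1.0):
    """Aerosol extinction coefficient in m^-1."""
    return (r_l / C_LIGHT) * (1.0 / tau_s - 1.0 / tau0_s)

# illustrative ring-down times: 28 us with aerosol, 30 us for the empty cavity
ext_Mm1 = b_ext_m1(28e-6, 30e-6) * 1e6   # convert m^-1 to Mm^-1
```

Because only two time constants enter, the precision of the retrieval is set directly by how reproducibly tau and tau0 can be measured, which is what the polystyrene-sphere evaluation quantifies.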


Urban tracer dispersion experiments during the second DAPPLE field campaign in London 2004 by D. Martin; C.S. Price; I.R. White; G. Nickless; K.F. Petersson; R.E. Britter; A.G. Robins; S.E. Belcher; J.F. Barlow; M. Neophytou; S.J. Arnold; A.S. Tomlin; R.J. Smalley; D.E. Shallcross (pp. 3043-3052).
As part of the DAPPLE programme, two large-scale urban tracer experiments were performed using multiple simultaneous releases of cyclic perfluoroalkanes from fixed-location point sources. The receptor concentrations, along with the relevant meteorological parameters measured, are compared with three screening dispersion models to assess how well each predicts the decay of concentration with distance from the source. It is shown that the simple dispersion models tested can provide a reasonable upper-bound estimate of the maximum concentrations measured, with an empirical model derived from field observations and wind tunnel studies providing the best estimate. An indoor receptor was also used to assess indoor concentrations and their pertinence to commonly used evacuation procedures.

Keywords: DAPPLE; Dispersion
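A screening dispersion model of the kind compared in the paper reduces, in its simplest form, to a power-law decay of concentration with distance from the source. The inverse-square form and the constant below are illustrative assumptions, not the paper's fitted empirical model:

```python
# Minimal screening-model sketch: power-law decay of concentration with
# distance from a point source. The x**-2 form and the constant k are
# illustrative assumptions, not the empirical model fitted in the paper.
def screening_conc(q_g_s, u_m_s, x_m, k=10.0):
    """Rough upper-bound concentration (g m^-3) at distance x_m downwind of
    a source of strength q_g_s (g s^-1) in a mean wind u_m_s (m s^-1)."""
    return k * q_g_s / (u_m_s * x_m ** 2)

# with an inverse-square law, doubling the distance quarters the estimate
near = screening_conc(1.0, 2.0, 100.0)
far = screening_conc(1.0, 2.0, 200.0)
```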


Retrospective prediction of intraurban spatiotemporal distribution of PM2.5 in Taipei by Yu Hwa-Lung; Wang Chih-Hsin (pp. 3053-3065).
Numerous studies have shown that fine airborne particulate matter (PM2.5) is more dangerous to human health than coarse particles such as PM10. Assessment of the human health or ecological effects of long-term PM2.5 exposure is often limited by a lack of PM2.5 measurements. In Taipei, PM2.5 was not systematically observed until August 2005. Taipei is the largest metropolitan area in Taiwan, where a variety of industrial and traffic emissions are continuously generated and distributed across space and time. PM-related data, i.e. PM10 and Total Suspended Particles (TSP), are collected independently and systematically by different central and local government institutes. In this study, retrospective prediction of the spatiotemporal distribution of monthly PM2.5 over Taipei is performed using the Bayesian Maximum Entropy (BME) method to integrate (a) the spatiotemporal dependence among PM measurements (i.e. PM10, TSP, and PM2.5), (b) the site-specific information of PM measurements, which can be certain or uncertain, and (c) empirical evidence about the PM2.5/PM10 and PM10/TSP ratios. The performance of the retrospective prediction was assessed over space and time during 2003–2004 by comparing the posterior pdf of PM2.5 with the observations. Results show that incorporating PM10 and TSP observations with the BME method can effectively improve spatiotemporal PM2.5 estimation, in the sense of a lower mean and standard deviation of the estimation errors. Moreover, the spatiotemporal retrospective prediction with PM2.5/PM10 and PM2.5/TSP ratios can provide good estimates of the range of PM2.5 levels over space and time during 2003–2004 in Taipei.

Keywords: PM2.5; Spatiotemporal modeling; BME; Retrospective prediction
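Part of the evidence such a framework ingests is empirical PM ratios. The simplest version of that idea, bounding historical PM2.5 from a co-located PM10 record via assumed ratios, looks like this (the ratio bounds are hypothetical, not the study's fitted values):

```python
# Illustrative version of the ratio-based evidence used alongside BME:
# historical PM2.5 bounded from a co-located PM10 record via assumed
# PM2.5/PM10 ratios. The ratio bounds here are hypothetical.
import statistics

def pm25_range_from_pm10(pm10_series, ratio_low=0.4, ratio_high=0.7):
    """Return (low, high) bounds on mean PM2.5 implied by a PM10 record."""
    mean_pm10 = statistics.fmean(pm10_series)
    return mean_pm10 * ratio_low, mean_pm10 * ratio_high

low, high = pm25_range_from_pm10([40, 55, 60, 45])   # ug/m3, illustrative
```

BME goes beyond this by treating such ratio-derived values as uncertain ("soft") data and fusing them with the spatiotemporal covariance of the hard measurements.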


On the contribution of black carbon to the composite aerosol radiative forcing over an urban environment by A.S. Panicker; G. Pandithurai; P.D. Safai; S. Dipu; Dong-In Lee (pp. 3066-3070).
This paper discusses the contribution of black carbon (BC) radiative forcing to the total aerosol atmospheric radiative forcing over Pune, an urban site in India. Collocated measurements of aerosol optical properties, chemical composition and BC were carried out over the site for six months (October 2004 to May 2005). The observed aerosol chemical composition, in terms of water-soluble, insoluble and BC components, was used in the Optical Properties of Aerosols and Clouds (OPAC) package to derive the optical properties of the composite aerosol, and the BC fraction alone was used in OPAC to derive the optical properties of BC aerosols. These optical properties were then used separately in the SBDART model to derive the direct aerosol radiative forcing due to composite and BC aerosols. The atmospheric radiative forcing for composite aerosols was found to be +35.5, +32.9 and +47.6 W m−2 during the post-monsoon, winter and pre-monsoon seasons, respectively. The average BC mass concentration was found to be 4.83, 6.33 and 4 μg m−3 during these seasons, contributing around 2.2–5.8% of the total aerosol load. The atmospheric radiative forcing due to BC aerosols was estimated at +18.8, +23.4 and +17.2 W m−2, respectively, for the same seasons. The study suggests that even though BC contributes only 2.2–6% of the total aerosol load, it contributes on average around 55% of the total lower-atmospheric aerosol forcing, owing to its strong radiative absorption, thus enhancing greenhouse warming.

Keywords: Aerosols; Urban; Black carbon; Pune; India; Radiative forcing
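The BC share of the lower-atmospheric forcing can be checked directly from the seasonal values quoted in the abstract: the mean of the seasonal BC/composite ratios comes out a little above 50%, consistent with the stated "around 55%":

```python
# Arithmetic check of the BC share of lower-atmospheric forcing, using the
# seasonal values quoted in the abstract (W m^-2).
composite = {"post-monsoon": 35.5, "winter": 32.9, "pre-monsoon": 47.6}
bc = {"post-monsoon": 18.8, "winter": 23.4, "pre-monsoon": 17.2}

shares = {season: bc[season] / composite[season] for season in composite}
mean_share = sum(shares.values()) / len(shares)   # roughly half, ~0.53
```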


Optimal reduction of the ozone monitoring network over France by Lin Wu; Marc Bocquet; Matthieu Chevallier (pp. 3071-3083).
Ozone is a harmful air pollutant at ground level, and its concentrations are measured with routine monitoring networks. Because ozone fields are heterogeneous, the spatial distribution of the ozone concentration measurements is very important, and the evaluation of distributed monitoring networks is of both theoretical and practical interest. In this study, we assess the efficiency of the ozone monitoring network over France (BDQA) by investigating a network reduction problem: how well can a subset of the BDQA network represent the full network? The performance of a subnetwork is taken to be the root mean square error (rmse) of the hourly mean ozone concentration estimates over the whole network given the observations from that subnetwork. Spatial interpolations are conducted for the ozone estimation taking the spatial correlations into account. Several interpolation methods, namely ordinary kriging, simple kriging, kriging about the means, and consistent kriging about the means, are compared for a reliable estimation, with exponential models employed for the spatial correlations. It is found that statistical information about the means significantly improves the kriging results, and that the correlation model needs to be hourly-varying and daily stationary. The network reduction problem is solved using a simulated annealing algorithm, and significant improvements can be obtained through these optimizations. For instance, optimally removing half the stations leads to an estimation error of the order of the standard observational error (10 μg m−3). The resulting optimal subnetworks are dense in the urban agglomerations around Paris (Île-de-France) and Nice (Côte d’Azur), where high ozone concentrations and strong heterogeneity are observed. The optimal subnetworks are probably dense near frontiers because beyond these frontiers there are no observations to reduce the uncertainty of the ozone field.
For large rural regions, the stations are uniformly distributed. The proportions of urban, suburban and rural stations are rather constant for optimal subnetworks of larger size (beyond 100 stations). By contrast, for smaller subnetworks, urban stations dominate.

Keywords: Air quality; Ozone monitoring; Network design; Geostatistics
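The combinatorial step, choosing which stations to keep, is the part solved by simulated annealing. A minimal, self-contained sketch of that optimisation with a stand-in cost function (the paper's actual cost is the kriging rmse over the full network):

```python
# Minimal simulated-annealing sketch for subnetwork selection: keep p of n
# stations so as to minimise a cost function. The cost used here (a sum of
# per-station "errors") is a stand-in for the paper's kriging rmse.
import math
import random

def anneal(n, p, cost, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    current = set(rng.sample(range(n), p))
    best, best_cost = set(current), cost(current)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        # propose a swap: drop one kept station, add one dropped station
        out_s = rng.choice(sorted(current))
        in_s = rng.choice([i for i in range(n) if i not in current])
        cand = (current - {out_s}) | {in_s}
        delta = cost(cand) - cost(current)
        # accept improvements always, worsenings with Metropolis probability
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < best_cost:
                best, best_cost = set(current), cost(current)
    return best, best_cost

# toy separable cost: prefer stations with low "error" values (illustrative)
errors = [abs(math.sin(i)) for i in range(20)]
subset, c = anneal(n=20, p=10, cost=lambda s: sum(errors[i] for i in s))
```

With a separable toy cost like this, any non-optimal subset admits an improving swap, so the annealer reliably finds the best p stations; the interesting behaviour in the paper comes from the non-separable kriging cost, where annealing's occasional uphill moves matter.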

