Analytical and Bioanalytical Chemistry (v.380, #3)
A closer look at analytical signals
by Klaus Danzer (pp. 376-382).
Analytical chemistry will be considered here as the measurement science of chemistry, which generates, treats, and evaluates signals that contain information about the composition and structure of samples. Analytical information is always obtained from signals. Characteristics and peculiarities of analytical signals will be considered from a very general point of view. A mathematical model of how signals are influenced will be given, from which essential performance characteristics of analytical methods can be derived, e.g. sensitivity, cross-sensitivity, specificity, selectivity, and ruggedness (robustness).
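The kind of signal model the abstract alludes to can be sketched compactly; this is a hedged illustration with notation chosen here, not necessarily the author's. Each signal depends on all analyte contents, and the performance characteristics follow from partial derivatives:

```latex
y_i = f_i(x_1, \dots, x_n), \qquad S_{ij} = \frac{\partial y_i}{\partial x_j}
```

In such a scheme the diagonal elements $S_{ii}$ are the sensitivities, the off-diagonal elements $S_{ij}$ ($i \neq j$) are the cross-sensitivities, and full specificity or selectivity corresponds to a (near-)diagonal sensitivity matrix $S$.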
Keywords: Analytical signal; Sensitivity; Cross-sensitivity; Selectivity; Specificity; Robustness
Different approaches to multivariate calibration of nonlinear sensor data
by Frank Dieterle; Stefan Busche; Günter Gauglitz (pp. 383-396).
In this study, different approaches to the multivariate calibration of the vapors of two refrigerants are reported. As the relationships between the time-resolved sensor signals and the concentrations of the analytes are nonlinear, the widely used partial least-squares (PLS) regression fails. Therefore, different methods are used that are known to be able to deal with nonlinearities present in data. First, the Box–Cox transformation, which transforms the dependent variables nonlinearly, was applied. The second approach, implicit nonlinear PLS regression, accounts for nonlinearities by adding squared terms of the independent variables to the original independent variables. The third approach, quadratic PLS (QPLS), uses a nonlinear quadratic inner relationship for the model instead of the linear relationship of PLS. Tree algorithms, which split a nonlinear problem into smaller subproblems that are modeled using linear methods or discrete values, are also used. Finally, neural networks, which can model any relationship, are applied. Special implementations, such as genetic algorithms with neural networks and growing neural networks, are also used to prevent overfitting. Among the faster and simpler algorithms, QPLS shows good results. Different implementations of neural networks show excellent results; among these, the most sophisticated and computing-intensive algorithms (growing neural networks) perform best. Thus, the optimal method for the data set presented is a compromise between quality of calibration and complexity of the algorithm.
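The second approach lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation, with simulated data standing in for the refrigerant sensor signals:

```python
# Implicit nonlinear PLS as described above: augment the independent
# variables with their squared terms, then fit an ordinary linear PLS
# model on the augmented matrix. Data are simulated for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 20))                     # stand-in sensor signals
y = 2.0 * X[:, 0] ** 2 + X[:, 1] + rng.normal(0, 0.05, 100)   # nonlinear response

X_aug = np.hstack([X, X ** 2])                                # append squared terms
pls = PLSRegression(n_components=5).fit(X_aug, y)
print("R^2 on training data:", round(pls.score(X_aug, y), 3))
```

The augmented model can capture the squared dependence while keeping the linear PLS machinery intact.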
Keywords: Multivariate calibration; Nonlinear relationships
Selection of useful predictors in multivariate calibration
by M. Forina; S. Lanteri; M. C. Cerrato Oliveros; C. Pizarro Millan (pp. 397-418).
Ten techniques used for the selection of useful predictors in multivariate calibration, and in other cases of multivariate regression, are described and discussed in terms of their performance (ability to detect useless predictors, predictive power, number of retained predictors) with real and artificial data. The techniques studied include classical stepwise ordinary least squares (SOLS), techniques based on genetic algorithms, and a family of methods based on partial least-squares (PLS) regression and on the optimization of predictive ability. A short introduction presents the evaluation strategies, a description of the quantities used to evaluate the regression model, and the criteria used to define the complexity of PLS models. The selection techniques can be divided into conservative techniques, which try to retain all the informative, useful predictors, and parsimonious techniques, whose objective is to select a minimum but sufficient number of useful predictors. Some combined techniques, in which a conservative technique is used for a preliminary selection before a parsimonious technique is applied, are also presented. Among the conservative techniques, the Westad–Martens uncertainty test (MUT) used in Unscrambler and the uninformative variable elimination (UVE) developed by Massart et al. seem the most efficient. The old SOLS can be improved to become the most efficient parsimonious technique by plotting the F-statistic values of the entered predictors and comparing them with parallel results obtained from a matrix of random data; this procedure correctly indicates how many predictors can be accepted and substantially reduces the possibility of overfitting. A possible alternative to SOLS is iterative predictor weighting (IPW), which automatically selects a minimum set of informative predictors. The use of an external evaluation set, with objects never used in the elimination of predictors, or of “complete validation” is suggested to avoid overestimation of the prediction ability.
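The improved SOLS procedure can be sketched in a few lines; this is an illustrative reconstruction under our own assumptions, not the authors' code. The F value of each newly entered predictor is recorded so that it can be compared with values obtained from a matrix of random predictors:

```python
# Forward stepwise OLS with an F-to-enter statistic. At each step the
# candidate predictor giving the lowest residual sum of squares is
# entered, and the F value for that entry is computed against the
# previous model. Data and names are invented for illustration.
import numpy as np

def forward_sols(X, y, max_vars=5):
    n = len(y)
    selected, f_values = [], []
    for _ in range(max_vars):
        best = None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if best is None or rss < best[1]:
                best = (j, rss)
        A0 = np.column_stack([np.ones(n)] + ([X[:, selected]] if selected else []))
        beta0, *_ = np.linalg.lstsq(A0, y, rcond=None)
        rss0 = np.sum((y - A0 @ beta0) ** 2)
        df = n - len(selected) - 2                 # residual df of the new model
        F = (rss0 - best[1]) / (best[1] / df)
        selected.append(best[0]); f_values.append(F)
    return selected, f_values

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 30))
y = X[:, 3] + 0.5 * X[:, 17] + rng.normal(0, 0.2, 50)
sel, F = forward_sols(X, y)
print(sel, np.round(F, 1))   # F drops sharply once uninformative predictors enter
```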
Keywords: Multivariate calibration; Predictor selection
Using chemometrics for navigating in the large data sets of genomics, proteomics, and metabonomics (gpm)
by Lennart Eriksson; Henrik Antti; Johan Gottfries; Elaine Holmes; Erik Johansson; Fredrik Lindgren; Ingrid Long; Torbjörn Lundstedt; Johan Trygg; Svante Wold (pp. 419-429).
This article describes the applicability of multivariate projection techniques, such as principal-component analysis (PCA) and partial least-squares (PLS) projections to latent structures, to the large-volume high-density data structures obtained within genomics, proteomics, and metabonomics. PCA and PLS, and their extensions, derive their usefulness from their ability to analyze data with many, noisy, collinear, and even incomplete variables in both X and Y. Three examples are used as illustrations: the first example is a genomics data set and involves modeling of microarray data of cell cycle-regulated genes in the microorganism Saccharomyces cerevisiae. The second example contains NMR-metabonomics data, measured on urine samples of male rats treated with either of the drugs chloroquine or amiodarone. The third and last data set describes sequence-function classification studies in a set of G-protein-coupled receptors using hierarchical PCA.
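As a minimal, hedged illustration of why such projection methods suit wide, collinear data (scikit-learn used for brevity; the matrix is simulated, not the article's microarray or NMR data):

```python
# PCA on a wide, collinear matrix: many variables, few samples, a handful
# of latent factors. The scores t summarize 500 variables in 3 components;
# the explained variance ratio shows how much structure is captured.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
scores_true = rng.normal(size=(60, 3))                        # 3 latent factors
loadings = rng.normal(size=(3, 500))
X = scores_true @ loadings + rng.normal(0, 0.1, (60, 500))    # 60 samples x 500 vars

pca = PCA(n_components=3)                                     # centers X internally
t = pca.fit_transform(X)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```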
Keywords: PCA; PLS; Hierarchical modeling; Multivariate analysis; Omics data analysis
Total ranking models by the genetic algorithm variable subset selection (GA–VSS) approach for environmental priority settings
by M. Pavan; A. Mauri; R. Todeschini (pp. 430-444).
Total order ranking (TOR) strategies, which are mathematically based on elementary methods of discrete mathematics, seem to be attractive and simple tools for performing data analysis. Moreover, order-ranking strategies seem to be very useful not only for data exploration but also for developing order-ranking models, a possible alternative to conventional quantitative structure–activity relationship (QSAR) methods. In fact, when the data material is characterised by uncertainties, order methods can be used as an alternative to statistical methods such as multilinear regression (MLR), because they do not require specific functional relationships between the independent and dependent variables (responses). A ranking model is a relationship between a set of dependent attributes, which are experimentally investigated, and a set of independent model attributes, which are calculated. As in regression and classification models, variable selection is one of the main steps in finding predictive models. In this work, the genetic algorithm–variable subset selection (GA–VSS) approach is proposed as the variable selection method for searching for the best ranking models within a wide set of variables. The models based on the selected subsets of variables are compared with the experimental ranking and evaluated by Spearman's rank index. A case study is presented in which a TOR model is developed for polychlorinated biphenyl (PCB) compounds, analysed according to some of their physicochemical properties that play an important role in their environmental impact.
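The evaluation step translates directly into a short computation; the attribute values below are invented for illustration, and scipy's implementation of Spearman's index is assumed:

```python
# Compare a candidate ranking model against the experimental ranking
# with Spearman's rank correlation coefficient.
import numpy as np
from scipy.stats import spearmanr

experimental_rank = np.array([1, 2, 3, 4, 5, 6])          # known priority order
model_score = np.array([0.9, 0.8, 0.85, 0.4, 0.3, 0.1])   # from selected attributes
model_rank = (-model_score).argsort().argsort() + 1       # high score -> rank 1

rho, p = spearmanr(experimental_rank, model_rank)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

In a GA-driven search, rho (or a related index) would serve as the fitness of each candidate variable subset.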
Keywords: Multicriteria decision making; Priority setting; Total order ranking models; GA–VSS; PCB
Wavelet multiscale regression from the perspective of data fusion: new conceptual approaches
by Yang Liu; Steven D. Brown (pp. 445-452).
Wavelet regression is a very promising technique for modern multivariate calibration and calibration transfer. Multiscale analysis of wavelet scales provides a connection between wavelet regression and data fusion. In this paper, current wavelet regression methods are reviewed from the novel perspective of data fusion. Illustrated by analysis of a public domain near-infrared dataset, the advantages and drawbacks of these methods are examined. For wavelet regression, the non-uniformity of the wavelet components, the multiscale nature of the signal, and the prevention of information leakage are crucial issues that will be addressed.
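One simple reading of wavelet regression as data fusion can be sketched as follows; this is our own minimal interpretation using PyWavelets and simulated spectra, not one of the methods reviewed in the paper:

```python
# Decompose each spectrum into wavelet scales, fit one linear model per
# scale, and fuse the per-scale predictions by averaging.
import numpy as np
import pywt
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 256))                      # 40 spectra, 256 wavelengths
y = X[:, 50] - 0.5 * X[:, 180] + rng.normal(0, 0.05, 40)

decomp = [pywt.wavedec(x, "db4", level=4) for x in X]    # 1 approx + 4 detail scales
coeffs = [np.array([d[k] for d in decomp]) for k in range(5)]

models = [Ridge(alpha=1.0).fit(C, y) for C in coeffs]
y_fused = np.mean([m.predict(C) for m, C in zip(models, coeffs)], axis=0)
print("fused training residual SD:", round(float(np.std(y - y_fused)), 3))
```

Treating each scale as a separate "sensor" whose predictions are merged is what makes the data-fusion perspective natural here.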
Keywords: Wavelet; Wavelet regression; Data fusion; Multivariate calibration
Assigning environmental variables to observed biological changes
by Geir Rune Flåten; Helge Botnen; Bjørn Grung; Olav M. Kvalheim (pp. 453-466).
A method for assigning environmental variables to observed biological changes in benthic communities is proposed. The approach requires biological and environmental sampling at the same sites. Additionally, a biological gradient or trend, such as a change in the observed species or a significant change in their relative abundances, is necessary in order to connect the biological observations to the environmental measurements. Whether there is a statistically significant correspondence between the environmental measurements and the biological changes is tested after quantifying the biological changes with the community disturbance index (CDI). Finally, the environmental variables that are most strongly associated with the biological changes are identified, and it is proposed that these are strong candidates for the pollutants responsible for the observed biological changes; however, this cannot be confirmed from the monitored data alone. The approach is tested on data collected in monitoring surveys at the Ekofisk oil field in the North Sea. The results indicate that the method is feasible for assigning environmental variables to observed biological changes.
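A hedged sketch of the final step, with simulated data (the definition of the CDI itself is in the paper; "cdi" below is only a stand-in for a per-site disturbance index):

```python
# Relate a per-site disturbance index to environmental measurements with
# PLS regression (see the keyword list); the variables with the largest
# absolute PLS coefficients are candidate drivers of the change.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
env = rng.normal(size=(30, 8))                  # 30 sites x 8 env. variables
cdi = 1.5 * env[:, 2] + rng.normal(0, 0.3, 30)  # variable 2 drives disturbance

pls = PLSRegression(n_components=2).fit(env, cdi)
ranking = np.argsort(-np.abs(pls.coef_.ravel()))
print("variables ordered by association strength:", ranking)
```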
Keywords: Environmental monitoring; Environmental variables; Community disturbance index (CDI); Benthic ecology; Benthic environment; Oil and gas platforms; Pollution; Partial least-squares regression (PLSR)
Chemical databases evaluated by order theoretical tools
by Kristina Voigt; Rainer Brüggemann; Stefan Pudenz (pp. 467-474).
Data on environmental chemicals are urgently needed to comply with the future chemicals policy of the European Union. The availability of data on parameters and chemicals can be evaluated by chemometric and environmetric methods, and different mathematical and statistical methods are considered in this paper. The emphasis is placed on a new, discrete mathematical method called METEOR (method of evaluation by order theory). Application of the Hasse diagram technique (HDT) to the complete data matrix comprising 12 objects (databases) × 27 attributes (parameters + chemicals) reveals that ECOTOX (ECO), the environmental fate database (EFD) and extoxnet (EXT), also called multi-database databases, perform best. Most specialised single databases are found in minimal positions in the Hasse diagram; these are the biocatalysis/biodegradation database (BID), the pesticide database (PES) and UmweltInfo (UMW). The aggregation of environmental parameters and chemicals (with equal weights) leads to a slimmer data matrix on the attribute side; however, no significant differences are found among the “best” and “worst” objects. The whole approach indicates a rather poor situation with respect to the availability of data on existing chemicals, and hence an alarming signal concerning the new and existing chemicals policies of the EU.
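The partial order underlying the Hasse diagram technique reduces to a simple dominance test; this sketch uses invented availability scores for three of the databases named above (the paper's matrix is 12 × 27):

```python
# Database a dominates database b when a is at least as good on every
# attribute and strictly better on at least one.
import numpy as np

names = ["ECO", "EFD", "UMW"]
scores = np.array([[3, 2, 3],    # illustrative attribute scores only
                   [2, 2, 1],
                   [1, 1, 1]])

def dominates(a, b):
    return bool(np.all(a >= b) and np.any(a > b))

for i in range(len(names)):
    for j in range(len(names)):
        if dominates(scores[i], scores[j]):
            print(f"{names[i]} > {names[j]} in the partial order")
```

Objects that dominate many others sit at the top of the Hasse diagram; incomparable pairs, each better on some attribute, are the reason the order is only partial.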
Keywords: Chemometrics; Environmetrics; Hasse diagram technique (HDT); METEOR; Environmental chemicals; Environmental chemical databases
Information theory for evaluating environmental classification systems
by Jörg Kraft; Jürgen W. Einax; Corinna Kowalik (pp. 475-483).
Environmental pollution data are often ranked in rule-based classification systems: the measured data are sorted into the predetermined classes of a classification system for a clearer characterization of the state of pollution. Often the measured values are transformed, e.g. into pseudocolor classes, and can then be presented in maps. For some environmental compartments, several different classification systems for evaluating environmental loadings are in use, and because of their dissimilarity a direct visual comparison is difficult. However, information theory permits an objective comparison of these classification systems based on their information content, enabling a decision about which system is the most informative for an objective assessment of the state of pollution. By means of the new measure “multiple medium information content” (multiple entropy), an objective and simultaneous comparison of all channels (in an environmental classification system: pollutants) of each classification system is now possible. Furthermore, the development of the state of pollution over the whole investigation period can be followed by means of information theory. On the basis of the conditions of the established rule-based systems, information theory also enables the definition of new class ranges that maximize the information retained during conversion into the environmental classification system.
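The core quantity is the Shannon entropy of the class occupation frequencies; a hedged sketch with invented counts (the paper's "multiple medium information content" extends this to all pollutant channels simultaneously):

```python
# A classification system whose classes are occupied evenly carries more
# information than one where most samples fall into a single class.
import numpy as np

def entropy_bits(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log2(p)))

system_a = [25, 25, 25, 25]   # samples spread over four classes
system_b = [88, 8, 2, 2]      # same four classes, one dominates
print(round(entropy_bits(system_a), 2), "bits vs",
      round(entropy_bits(system_b), 2), "bits")
```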
Keywords: Classification system; Entropy; Heavy metals; Information theory; Medium information content; Sediment
Interpolation and approximation of water quality time series and process identification
by Albrecht Gnauck (pp. 484-492).
Data records with equidistant time intervals are fundamental prerequisites for the development of water quality simulation models. Usually long-term water quality data time series contain missing data or data with different sampling intervals. In such cases “artificial” data have to be added to obtain records based on a regular time grid. Generally, this can be done by interpolation, approximation or filtering of data sets. In contrast to approximation by an analytical function, interpolation methods estimate missing data by means of measured concentration values. In this paper, methods of interpolation and approximation are applied to long-term water quality data sets with daily sampling intervals. Using such data for the water temperature and phosphate phosphorus in some shallow lakes, it was possible to identify the process of phosphate remobilisation from sediment.
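A minimal sketch of the interpolation step onto an equidistant grid (scipy used for brevity; the irregular series is simulated, not one of the lake records):

```python
# Fill an irregularly sampled water-quality record onto a daily grid by
# linear and cubic-spline interpolation between measured values.
import numpy as np
from scipy.interpolate import interp1d

t_obs = np.array([0.0, 1.0, 2.5, 4.0, 7.0])    # irregular sampling days
c_obs = np.array([2.1, 2.4, 3.0, 2.8, 2.2])    # e.g. phosphate-P, mg/L

t_grid = np.arange(0.0, 7.1, 1.0)              # equidistant daily grid
c_linear = interp1d(t_obs, c_obs, kind="linear")(t_grid)
c_spline = interp1d(t_obs, c_obs, kind="cubic")(t_grid)
print(np.round(c_linear, 2))
print(np.round(c_spline, 2))
```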
Keywords: Chemometrics; Water quality time series; Interpolation; Approximation; Process identification
Time series analysis of long-term data sets of atmospheric mercury concentrations
by Christian Temme; Ralf Ebinghaus; Jürgen W. Einax; Alexandra Steffen; William H. Schroeder (pp. 493-501).
Different aspects and techniques of time series analysis were used to investigate long-term data sets of atmospheric mercury in the Northern Hemisphere. Two perennial time series from different latitudes with different seasonal behaviour were chosen: first, Mace Head on the west coast of Ireland (53°20′N, 9°54′W), representing Northern Hemispheric background conditions in Europe with no indications of so-called atmospheric mercury depletion events (AMDEs); and second, Alert, Canada (82°28′N, 62°30′W), showing strong AMDEs during Arctic springtime. Possible trends were extracted and forecasts were performed by using seasonal decomposition procedures, autoregressive integrated moving average (ARIMA) methods and exponential smoothing (ES) techniques. The application of time series analysis to environmental data is demonstrated with respect to atmospheric long-term data sets, and selected advantages are discussed. Neither time series has shown a statistically significant temporal trend in the gaseous elemental mercury (GEM) concentrations since 1995, representing low Northern Hemispheric background concentrations of 1.72±0.09 ng m−3 (Mace Head) and 1.55±0.18 ng m−3 (Alert), respectively. The annual forecasts of the GEM concentrations in 2001 at Alert by two different techniques were in good agreement with the concentrations measured in that year.
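A hedged sketch of the ARIMA-based forecasting step, on a simulated monthly series standing in for the GEM records (statsmodels assumed; the model order is chosen arbitrarily here, whereas the paper identifies it from the data):

```python
# Fit an ARIMA model to a seasonal, trend-free concentration series and
# forecast twelve months ahead.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
t = np.arange(72)                                         # six years, monthly
gem = 1.7 + 0.1 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.05, 72)

model = ARIMA(gem, order=(1, 0, 1)).fit()
forecast = model.forecast(steps=12)                       # the next year
print(np.round(forecast, 2))
```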
Keywords: Time series analysis; Atmosphere; Mercury; Long-term trend; Forecast; Atmospheric mercury depletion events
New advances in method validation and measurement uncertainty aimed at improving the quality of chemical data
by Max Feinberg; Bruno Boulanger; Walthère Dewé; Philippe Hubert (pp. 502-514).
The implementation of quality systems in analytical laboratories has now, in general, been achieved. This requirement has significantly modified the way laboratories are run and has also improved the quality of the results. The key idea is to use analytical procedures that produce results which fulfil the users’ needs and actually help when making decisions. This paper presents the implications of quality systems for the conception and development of an analytical procedure. It introduces the concept of the lifecycle of a method as a model that can be used to organize the selection, development, validation and routine application of a method. It underlines the importance of method validation, and presents a recent approach based on the accuracy profile to illustrate how validation must be fully integrated into the basic design of the method. Thanks to the β-expectation tolerance interval introduced by Mee (Technometrics (1984) 26(3):251–253), it is possible to unambiguously demonstrate the fitness for purpose of a new method. Recalling that accredited laboratories are also required to express measurement uncertainty, the authors show that uncertainty can easily be related to the trueness and precision of the data collected when building the method accuracy profile.
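For the simple one-sample normal case, the β-expectation tolerance interval has a closed form (mean ± t·s·√(1 + 1/n)); the sketch below uses that case with invented recovery data, whereas the accuracy profile in the paper applies such intervals per concentration level:

```python
# Beta-expectation tolerance interval for one series of measurements.
import numpy as np
from scipy import stats

def beta_expectation_interval(x, beta=0.95):
    x = np.asarray(x, dtype=float)
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    t = stats.t.ppf((1 + beta) / 2, n - 1)
    half = t * s * np.sqrt(1 + 1 / n)
    return m - half, m + half

recoveries = [98.2, 101.5, 99.7, 100.9, 98.8, 100.3]   # % recovery, one level
low, high = beta_expectation_interval(recoveries)
print(f"[{low:.1f}, {high:.1f}] % recovery")           # compare to acceptance limits
```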
Keywords: Data quality; Method validation; β-Expectation tolerance interval; Uncertainty
Performance of quantitative analyses by liquid chromatography–electrospray ionisation tandem mass spectrometry: from external calibration to isotopomer-based exact matching
by Thierry Delatour (pp. 515-523).
Liquid chromatography–tandem mass spectrometry (LC-MS/MS) is a versatile coupling technique that combines selectivity, sensitivity, and certainty. Hence, it is generally considered the most reliable technique for quantifying chemical compounds in complex matrices. In the present paper, we evaluate the performance of LC-MS/MS methods for the quantification of 3-nitrotyrosine in human urine in order to point out its dependence on the design of the quantification method, and we emphasize the role of matrix effects in that performance. We compare external and internal calibration, isotope dilution, and isotopomer-based exact matching. The roles of sample preparation and of monitoring multiple transitions are particularly addressed.
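The two simplest designs compared in the paper reduce to straight-line fits; all numbers below are invented for illustration:

```python
# External calibration fits signal vs. concentration; internal-standard
# calibration fits the analyte/IS signal ratio vs. concentration, which
# compensates for matrix effects affecting both compounds alike.
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])        # standard concentrations
sig = np.array([0.02, 0.98, 2.05, 4.90, 10.1])     # analyte peak areas
sig_is = np.array([1.00, 1.02, 0.97, 1.01, 0.99])  # internal-standard areas

b_ext, a_ext = np.polyfit(conc, sig, 1)            # slope, intercept
b_int, a_int = np.polyfit(conc, sig / sig_is, 1)

area, area_is = 3.10, 1.00                         # hypothetical unknown
print("external:", round((area - a_ext) / b_ext, 2))
print("internal:", round((area / area_is - a_int) / b_int, 2))
```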
Keywords: Mass spectrometry; External calibration; Internal calibration; Isotope dilution; Exact matching; Internal standard; Matrix effect
Optimizing the extraction and analysis of DHEA sulfate, corticosteroids and androgens in urine: application to a study of the influence of corticosteroid intake on urinary steroid profiles
by E. Pujos; M. M. Flament-Waton; P. Goetinck; M. F. Grenier-Loustalot (pp. 524-536).
A method for detecting and quantifying dehydroepiandrosterone (DHEA) sulfate, corticosteroids, and androgens has been developed. All of the compounds were first extracted from urine using solid-phase extraction (SPE), enzymatically hydrolyzed, and separated into three samples using a second SPE. The DHEA sulfate sample was acetylated and re-extracted using SPE for purification before analysis. Corticosteroid samples were oxidized and re-extracted using liquid/liquid extraction for analysis. Androgen samples were acetylated and re-extracted using SPE prior to analysis. The extraction and analysis methods were investigated and optimized. Analyses were performed with gas chromatography/mass spectrometry (GC/MS) and gas chromatography/flame ionization detection (GC/FID). The entire procedure was then applied to the study of the urinary profiles of healthy volunteers and of patients treated with corticosteroids. The results showed that the quantities of androgens found in the urine of patients were lower than those in the urine of healthy volunteers. In addition, other metabolites were detected in patient urine.
Keywords: DHEA; Corticosteroids; Steroids; Urine; Analysis
Determination of microcystins in natural blooms and cyanobacterial strain cultures by matrix solid-phase dispersion and liquid chromatography–mass spectrometry
by Ana Cameán; Isabel M. Moreno; María J. Ruiz; Yolanda Picó (pp. 537-544).
An analytical procedure based on matrix solid-phase dispersion (MSPD) and liquid chromatography–mass spectrometry (LC-MS) was developed for determining three microcystins (MCs) in natural water blooms and cyanobacterial strain cultures. The procedure involves sample homogenization with C18, washing with dichloromethane to eliminate interfering compounds, and elution with acidic methanol. Results were compared with those achieved using an organic-solvent standard method. Mean recoveries of MCs with MSPD were 85–92%, with intra-day relative standard deviations (RSDs) of 9–19%, whereas organic solvent extraction gave recoveries of 92–105% with intra-day RSDs ranging from 8 to 18%. Limits of quantification (LOQs) were 1 μg g−1 dry weight for the MCs by either MSPD or organic solvent extraction. Both methods were specific and sensitive for the extraction of MCs and were applied to the detection of MCs in water blooms and culture strains. The concentration of MCs varied from 7 to 3,330 μg g−1 of lyophilized cells, with MC-LR always showing the highest concentration. MC levels were higher in culture strains than in water blooms, except for MC-LR, whose concentration in blooms was slightly higher than that determined in culture strains.
Keywords: Microcystins; Matrix solid-phase dispersion; Liquid chromatography–mass spectrometry; Natural blooms; Cyanobacteria strains
The electrochemistry and determination of Ligustrazine hydrochloride
by Ziyi Sun; Xiaofeng Zheng; Tomonori Hoshi; Yoshitomo Kashiwagi; Jun-ichi Anzai; Genxi Li (pp. 545-550).
Ligustrazine is one of the active ingredients of Ligusticum chuanxiong Hort. (Umbelliferae), which is widely used in traditional Chinese medicine for the treatment of cardiovascular problems. In this work, the electrochemistry of Ligustrazine hydrochloride (LZC) and its determination are investigated. The detection limit is estimated to be 8.0×10−8 M, with three linear ranges: 1.0×10−6 to 1.0×10−4 M, 1.0×10−4 to 5.0×10−4 M, and 6.5×10−4 to 1.6×10−3 M. The method has proven to be highly sensitive, selective, and stable, and has been successfully applied to the determination of LZC in LZC injections.
Keywords: Ligustrazine hydrochloride; Determination; Cyclic voltammetry; Square wave voltammetry; Pyrolytic graphite
Rapid determination of mono and dinitrophenols by DPP, in the presence of lead and cadmium and using concentrated CaCl2 electrolyte
by Ihab Lubbad; Jean-Pierre Mayinda; Michelle Chatelut; Olivier Vittori (pp. 551-555).
The contamination of drinking water and industrial wastewaters is a critical environmental problem, and nitrophenols, dinitrophenols, cadmium, and lead are classified as hazardous contaminants. They can be determined rapidly using differential pulse polarography with a concentrated electrolyte. CaCl2, which is soluble to levels exceeding 5 mol l−1, allows separation of peaks that are coalescent at 0.1 mol l−1. A systematic study from 0.1 to 5 mol l−1 shows good separation of lead and cadmium from the organic compounds, and optimization of the electrolyte concentration according to the analytical objective is described. Preconcentration of real samples is necessary because pollution levels are usually very low.
Keywords: Nitrophenols; Lead; Cadmium; Calcium chloride; DPP
Measurement of water by oven evaporation using a novel oven design
by Sam A. Margolis; Kevin Vaishnav; John R. Sieber (pp. 556-562).
Using an automated oven evaporation technique combined with the coulometric Karl Fischer method, the mass fraction of water has been measured in cement, coal, and refined oil samples. The accuracy of this method was established using SRM 2890, water-saturated 1-octanol, added to white oil. The samples were analyzed for total material reactive with Karl Fischer reagent (KFR), for interfering materials, and for material that does not react with the aldehyde–ketone KFR. All of the samples yielded volatile material that reacted with the standard KFR. None of the samples contained significant masses of material that reacted with iodine. The cement and coal SRMs contained no material that reacted with methanol and very little material that did not volatilize at 107°C. The refined oils contained some material that was volatile at 107°C and some at 160°C. However, none of this material reacted with the aldehyde–ketone reagent. These results show that the volatile material in the solid samples is water, whereas that in the refined oils is not water but another substance that reacts with methanol to form water.
Keywords: Water; Karl Fischer; Oven evaporation; Coal; Portland cement; Solvent neutral oils
A test strip for chloride analysis in environmental water
by L. F. Capitán-Vallvey; E. A. Guerrero; C. B. Merelo; M. D. F. Ramos (pp. 563-569).
A disposable and reversible test strip for chloride is proposed. It is based on a polyester strip containing a circular sensing zone, 6 mm in diameter and 9.5 μm in thickness, with all the reagents necessary to produce a selective response to chloride. This sensing zone comprises a plasticized poly(vinyl chloride) (PVC) membrane that incorporates trioctyltin chloride as ionophore and 4′,5′-dibromofluorescein octadecyl ester as chromoionophore. The test strip works by co-extraction of chloride and hydrogen ions into the sensing zone. It can determine chloride simply by introducing the strip into a water sample containing a pH 2.0 buffer and measuring the absorbance at 534 nm as the analytical signal, because the colour changes from red to orange. Experimental variables that influence the sensor response have been studied, especially those related to selectivity and response time. The sensor responds linearly to activities in the range 0.15–24.7 mM. The detection limit is 0.15 mM, the intermembrane reproducibility at a mid-level of the range is 6.4% relative standard deviation (RSD) of $\log a_{\mathrm{Cl}^-}$, and the intramembrane reproducibility is 4.5%. The procedure was applied to the determination of chloride in different types of water (tap, well, stream and sea), and the results were validated against a reference procedure. The proposed test system for chloride determination in waters is inexpensive, selective and sensitive and uses only conventional instrumentation.
Keywords: Chloride determination; Optical test strip; Neutral ionophore; Water analysis
Slurry sampling of sediments and coals for the determination of Sn by HG-GF AAS with retention in the graphite tube treated with Th or W as permanent modifiers
by Mariana Antunes Vieira; Anderson Schwingel Ribeiro; Adilson José Curtius (pp. 570-577).
A method for the determination of Sn in slurry samples of sediment and coal by hydride generation graphite furnace electrothermal atomic absorption spectrometry (HG-GF AAS) is proposed. The slurries were prepared by mixing the ground sample (particle size ≤50 μm) with 2.0 mol L−1 HCl for the sediment samples or with 2.0 mol L−1 HCl+1.0% v/v HF in a saturated boric acid medium for the coal samples. The slurry was placed in an ultrasonic bath for 30 min, before and after standing for 24 h, with occasional manual stirring. The graphite tube was treated with 0.5 mg of Th or W as a permanent modifier. Sn determination was carried out by electrothermal atomic absorption spectrometry at the optimized retention temperatures of 450 and 300°C for Th and W treatment, respectively. With this coupling, kinetic interference in the formation of the hydrides is avoided, and excellent detection limits can be obtained by using peak height. For the chemical vapor generation device, an optimized volume of 2 mL of sample slurry and an optimized NaBH4 concentration of 5% m/v were employed. The vapor produced was transported and retained on the graphite tube surface, which was further heated for Sn atomization. The accuracy of the method was verified by analyzing five certified sediments and three coals. By using the external calibration against aqueous standard solutions, the results obtained were in agreement with the certified values only for the sediment samples. For the coal samples, an addition calibration curve, obtained for one certified coal, was necessary to achieve accurate results. The obtained limits of detection were 0.03 μg g−1 for sediment and 0.09 μg g−1 for coal with Th as permanent modifier. The relative standard deviations were lower than 15%, demonstrating an adequate precision for slurry analysis. Sediment and coal samples from Santa Catarina, Brazil, were also analyzed.
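The addition-calibration evaluation used for the coal samples amounts to extrapolating a spiked calibration line; a minimal sketch with invented numbers:

```python
# Standard additions: measure the signal at increasing spike levels, fit
# a line, and read the sample content from the extrapolation to zero
# signal (the negative x-intercept).
import numpy as np

added = np.array([0.0, 0.05, 0.10, 0.20])         # spiked Sn, ug/g
signal = np.array([0.110, 0.165, 0.221, 0.330])   # absorbance (peak height)

slope, intercept = np.polyfit(added, signal, 1)
c_sample = intercept / slope
print(f"Sn in sample: {c_sample:.3f} ug/g")
```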
Keywords: Tin; Slurry sampling; Sediment; Coal; Hydride generation electrothermal atomic absorption spectrometry; Permanent modifier
Extraction of U(VI), Th(IV), and La(III) from acidic streams and geological samples using AXAD-16–POPDE polymer
by D. Prabhakaran; M. S. Subramanian (pp. 578-585).
A new chromatographic extraction method has been developed using Amberlite XAD-16 (AXAD-16) resin chemically modified with (3-hydroxyphosphinoyl-2-oxo-propyl)phosphonic acid dibenzyl ester (POPDE). The chemically modified polymer was characterized by 13C CPMAS and 31P solid-state NMR, Fourier-transform NIR–FIR–Raman spectroscopy, CHNPS elemental analysis, and thermogravimetric analysis. Extraction studies performed for U(VI), Th(IV), and La(III) showed good distribution ratio (D) values of approximately 10³, even at high acidities (1–4 M). The various physicochemical parameters that influence quantitative metal ion extraction were optimized by static and dynamic methods. Kinetic studies revealed that ≤10 min was sufficient to achieve complete metal ion extraction. Maximum metal sorption capacities under optimum pH conditions were 1.38, 1.33, and 0.75 mmol g−1 for U(VI), Th(IV), and La(III), respectively. Interference studies performed in the presence of concentrated diverse ions and electrolyte species showed quantitative analyte recovery, with lower limits of detection of 10 ng cm−3 for U(VI) and 20 ng cm−3 for both Th(IV) and La(III). Sample breakthrough studies performed on the extraction column showed enrichment factors of 330 for U(VI) and 270 for both Th(IV) and La(III). Analyte desorption was effective using 15 cm3 of 1 M (NH4)2CO3, with >99.8% analyte recovery. The analytical applicability of the developed resin was tested with synthetic mixtures mimicking nuclear spent fuels and seawater compositions, and with real water and geological samples. The RSD values of the data obtained were within 5.2%, reflecting the reliability of the developed method.
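Two of the quoted figures follow from standard definitions; the sketch below assumes those definitions and invents the volumes and concentrations (the loaded volume is chosen only to reproduce the reported factor):

```python
# Distribution ratio D = (metal on resin per g) / (metal left in solution),
# and enrichment factor = loaded sample volume / eluent volume.
c0, cf = 100.0, 50.0        # metal conc. before/after sorption, ug/mL
V, m = 50.0, 0.05           # solution volume (mL), resin mass (g)
D = ((c0 - cf) / cf) * (V / m)
print(f"D = {D:.0f} mL/g")  # on the order of 10^3, as reported

enrichment = 4950.0 / 15.0  # hypothetical loaded volume over 15 cm3 of eluent
print(f"enrichment factor = {enrichment:.0f}")
```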
Keywords: AXAD-16; Actinides; Nuclear spent fuels; Extraction and preconcentration