Accreditation and Quality Assurance: Journal for Quality, Comparability and Reliability in Chemical Measurement (v.10, #9)

On (Certified) Reference Materials by Paul De Bièvre (pp. 459-460).

Performance indication improvement for a proficiency testing by D. Kisets (pp. 461-465).
The paper discusses peculiarities of Z scores and E_n numbers, which are the performance indicators most often used for the treatment of proficiency test data. Important conditions for the proper use of these indicators, together with suggestions for their improvement, are given on the basis of a systematic approach, the idea of accuracy classification, and some principles of optimality borrowed from information theory. The author believes that the paper may be of interest and practical value to all those engaged in applied metrology, specifically in developing and participating in proficiency testing programs and in the accreditation of testing and calibration laboratories.

Keywords: Interlaboratory comparisons; Z scores; E_n numbers; Informational optimality
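
As a minimal illustration of the two performance indicators discussed in the abstract above (not taken from the paper itself), the following Python sketch computes a z score and an E_n number for a single participant result; the variable names and example values are hypothetical.

    import math

    def z_score(x, assigned_value, sigma_p):
        # z score: deviation of a participant result x from the assigned value,
        # scaled by the standard deviation for proficiency assessment sigma_p.
        return (x - assigned_value) / sigma_p

    def e_n_number(x, assigned_value, U_lab, U_ref):
        # E_n number: deviation scaled by the combined expanded uncertainties
        # of the participant (U_lab) and the reference value (U_ref).
        return (x - assigned_value) / math.sqrt(U_lab**2 + U_ref**2)

    # Hypothetical example; |z| <= 2 and |E_n| <= 1 are usually taken as satisfactory.
    print(z_score(10.4, 10.0, 0.25))          # 1.6
    print(e_n_number(10.4, 10.0, 0.5, 0.2))   # about 0.74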


Comparability of analytical results obtained in proficiency testing based on a metrological approach by Ilya Kuselman (pp. 466-470).
A “yes–no” type of criterion is proposed for the assessment of the comparability of proficiency testing (PT) results when the PT scheme is based on a metrological approach, i.e. on the use of a reference material as the test sample, etc. The criterion tests a null hypothesis concerning the insignificance of the bias of the mean of the results from the traceable value certified in the reference material used for the PT. The reliability of such an assessment is determined by the probabilities of not rejecting the null hypothesis when it is true and of rejecting it when it is false (i.e. when the alternative hypothesis is true). It is shown that a number of chemical, metrological and statistical considerations should be taken into account for a careful formulation of the hypotheses, enabling an erroneous assessment of comparability to be avoided. The criterion can be helpful for PT providers and laboratory accreditation bodies in the analysis of PT results.

Keywords: Comparability; Traceability; Proficiency testing; Test of hypotheses; Reliability
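
A minimal sketch, not taken from the paper, of the kind of “yes–no” bias test described in the abstract above: the mean of the PT results is compared with the certified value of the reference material, and the null hypothesis of insignificant bias is retained if the bias lies within its expanded combined uncertainty. The data, coverage factor and helper names are assumptions for illustration only.

    import math, statistics

    def bias_is_insignificant(results, certified_value, u_certified, k=2.0):
        # Null hypothesis: the mean of the PT results is not biased with
        # respect to the certified (traceable) value of the reference material.
        n = len(results)
        mean = statistics.mean(results)
        u_mean = statistics.stdev(results) / math.sqrt(n)   # uncertainty of the mean
        u_bias = math.sqrt(u_mean**2 + u_certified**2)      # combined uncertainty of the bias
        return abs(mean - certified_value) <= k * u_bias

    # Hypothetical example data (mg/kg):
    print(bias_is_insignificant([9.8, 10.1, 10.3, 9.9, 10.2], 10.0, 0.05))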


Using uncertainty functions to predict and specify the performance of analytical methods by Michael Thompson; Roger Wood (pp. 471-478).
In both European legislation relating to the testing of food and the recommendations of the Codex Alimentarius Commission, there is a movement away from specifying particular analytical methods towards specifying performance criteria to which any methods used must adhere. This ‘criteria approach’ has hitherto been based on the features traditionally used to describe analytical performance. This paper proposes replacing the traditional features, namely accuracy, applicability, detection limit and limit of determination, linearity, precision, recovery, selectivity and sensitivity, with a single specification, the uncertainty function, which tells us how the uncertainty varies with concentration. The uncertainty function can be used in two ways, either as a ‘fitness function’, which describes the uncertainty that is fit for purpose, or as a ‘characteristic function’ that describes the performance of a defined method applied to a defined range of test materials. Analytical chemists reporting the outcome of method validations are encouraged to do so in future in terms of the uncertainty function. When no uncertainty function is available, existing traditional information can be used to define one that is suitable for ‘off-the-shelf’ method selection. Some illustrative examples of the use of these functions in methods selection are appended.
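
As a hedged illustration of how an uncertainty function might be written down and used for ‘off-the-shelf’ method selection (the functional form and parameter values below are assumptions for illustration, not taken from the paper), consider a two-parameter characteristic function in which the uncertainty rises from a constant floor at low concentration towards a roughly proportional regime at high concentration:

    import math

    def characteristic_u(c, alpha, beta):
        # Assumed form u(c) = sqrt(alpha^2 + (beta*c)^2): alpha is the uncertainty
        # at zero concentration, beta the asymptotic relative uncertainty.
        return math.sqrt(alpha**2 + (beta * c)**2)

    def fit_for_purpose(c, alpha_method, beta_method, alpha_fit, beta_fit):
        # A candidate method is acceptable if its characteristic uncertainty does
        # not exceed the fitness-for-purpose uncertainty at the concentration of interest.
        return characteristic_u(c, alpha_method, beta_method) <= characteristic_u(c, alpha_fit, beta_fit)

    print(fit_for_purpose(1.0, 0.01, 0.03, 0.02, 0.05))   # hypothetical values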

The determination of an accepted reference value from proficiency data with stated uncertainties by Kaj Heydorn (pp. 479-484).
Proficiency data with stated uncertainties represent a unique opportunity for testing whether the reported uncertainties are consistent with the Guide to the expression of uncertainty in measurement (GUM). In most proficiency tests, however, this opportunity is forfeited, because proficiency data are processed without regard to their uncertainties. In this paper we present alternative approaches for determining a reference value as the weighted mean of all mutually consistent results and their stated uncertainties. Using an accepted reference value, each reported uncertainty estimate can be expressed as an E_n number, but a value of $|E_n| < 1$ confirms its validity only if the uncertainty of the reference value is negligible in comparison. Reference values calculated for results from an International Measurement Evaluation Programme (IMEP-9) by “bottom-up” as well as “top-down” methods were practically identical, although the first strategy yielded the lowest uncertainty. A plot of individual coefficients of variation (CV) versus E_n numbers aids the interpretation of the proficiency data, which could be used to validate relative uncertainties down to <1%.

Keywords: Proficiency testing; E_n numbers; Synthesis of precision; Validation of uncertainty; Reference values
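
A minimal sketch (not the paper's own procedure) of the weighted-mean construction described in the abstract above, assuming each laboratory reports a result x_i with a standard uncertainty u_i; the data are invented for illustration, and the question of excluding mutually inconsistent results is left aside.

    import math

    def weighted_mean_reference(results, uncertainties):
        # Weighted mean with weights 1/u_i^2, and the standard uncertainty of that mean.
        weights = [1.0 / u**2 for u in uncertainties]
        x_ref = sum(w * x for w, x in zip(weights, results)) / sum(weights)
        u_ref = 1.0 / math.sqrt(sum(weights))
        return x_ref, u_ref

    def e_n(x, u, x_ref, u_ref, k=2.0):
        # E_n against the accepted reference value, using expanded uncertainties U = k*u;
        # |E_n| < 1 is only conclusive if u_ref is negligible compared with u.
        return (x - x_ref) / math.sqrt((k * u)**2 + (k * u_ref)**2)

    x_ref, u_ref = weighted_mean_reference([5.02, 4.97, 5.10], [0.02, 0.03, 0.05])
    print(x_ref, u_ref, e_n(5.10, 0.05, x_ref, u_ref))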


A simplified approach to the estimation of analytical measurement uncertainty by Sébastien Populaire; Esther Campos Giménez (pp. 485-493).
The present study summarizes the measurement uncertainty estimations carried out at the Nestlé Research Center since 2002. These estimations cover a wide range of analyses of commercial and regulatory interest. The first part of the study shows that method validation data (repeatability, trueness and intermediate reproducibility) can be used to provide a good estimation of measurement uncertainty. In the second part, measurement uncertainty is compared with data from collaborative trials. These data can be used for measurement uncertainty estimation provided that the in-house validation performance is comparable to the method performance obtained in the collaborative trial. Based on these two main observations, the aim of this study is to estimate measurement uncertainty in a simple way using validation data.

Keywords: Uncertainty; Reproducibility; Repeatability; Trueness; Statistics
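
A hedged sketch of the general idea of building an uncertainty estimate from validation data: a precision component and a trueness-related component are combined in quadrature and then expanded. The combination rule, component names and figures below are generic assumptions and are not taken from the paper.

    import math

    def expanded_uncertainty(u_intermediate_reproducibility, u_trueness, k=2.0):
        # Combine the intermediate reproducibility with an uncertainty component
        # associated with trueness/bias, then expand with coverage factor k
        # (k = 2 for approximately 95% coverage).
        u_combined = math.sqrt(u_intermediate_reproducibility**2 + u_trueness**2)
        return k * u_combined

    # Hypothetical validation figures (relative, %):
    print(expanded_uncertainty(1.8, 0.9))   # expanded uncertainty, %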


Approach to accuracy assessment of the glass-electrode potentiometric determination of acid-base properties by G. Meinrath; A. Kufelnicki; M. Świątek (pp. 494-500).
Thermodynamic data are a suitable subject for investigating strategies and concepts for the evaluation of complete measurement uncertainty budgets in situations where the measurand cannot be expressed by a mathematical formula. Suitable approaches include the various forms of Monte Carlo simulation in combination with computer-intensive statistical methods directed at the evaluation of empirical distribution curves for the uncertainty budget. The basis of the analysis is a cause-and-effect diagram. Some experience is available with cause-and-effect analysis of thermodynamic data derived from spectrophotometric data. Another important technique for the evaluation of thermodynamic data is glass-electrode potentiometry. On the basis of a newly derived cause-and-effect diagram, a complete measurement uncertainty budget for the determination of the acidity constants of phosphoric acid by glass-electrode potentiometry is derived. A combination of Monte Carlo and bootstrap methods is applied in conjunction with the commercially available code SUPERQUAD. The results suggest that glass-electrode potentiometry may achieve a high within-laboratory precision, whereas major uncertainty contributions become evident only via interlaboratory comparisons. This finding is further underscored by analysing available literature data.

Keywords: Potentiometric titration; Computer-intensive resampling methods; Empirical distribution curve; Metrology in complex situations; Speciation
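
As a generic illustration (not the authors' SUPERQUAD-based procedure) of how Monte Carlo propagation can produce an empirical distribution for a quantity with no closed-form expression for the measurand, the following sketch perturbs hypothetical input quantities and collects the resulting estimates; the model function, input values and distributions are placeholders.

    import random, statistics

    def evaluate_model(e0, slope, titrant_conc):
        # Stand-in for the real data-evaluation step (e.g. a call to a fitting
        # code such as SUPERQUAD); here just a placeholder combination.
        return e0 / slope + titrant_conc

    def monte_carlo_budget(n_trials=10000):
        estimates = []
        for _ in range(n_trials):
            # Draw each input from a distribution describing its uncertainty
            # (values and distributions below are purely hypothetical).
            e0 = random.gauss(400.0, 0.3)            # mV
            slope = random.gauss(59.16, 0.10)        # mV/decade
            titrant = random.gauss(0.1000, 0.0002)   # mol/L
            estimates.append(evaluate_model(e0, slope, titrant))
        # Mean and standard deviation of the empirical distribution of estimates.
        return statistics.mean(estimates), statistics.stdev(estimates)

    print(monte_carlo_budget())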


Using mixture models for bump-hunting in the results of proficiency tests by Michael Thompson (pp. 501-505).
The interpretation of the results of proficiency tests by the use of mixture models is described. The data are interpreted as a sample from a mixture of several normal populations. The calculation of the statistics (the means, variances and proportions of each component) is accomplished by means of the ‘EM’ algorithm. The method has several advantages over those previously advanced, principally that the algorithm is fast and easy to execute. Examples from proficiency testing are discussed.

Keywords: Proficiency testing; Bump-hunting; Consensus; Mixture models; EM algorithm
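
A minimal sketch of fitting a mixture of normal populations to proficiency-test results with the EM algorithm, here via scikit-learn's GaussianMixture rather than a hand-written EM loop; the data and the number of components are invented for illustration and are not from the paper.

    import numpy as np
    from sklearn.mixture import GaussianMixture   # EM-based fitting

    # Hypothetical PT results: a main consensus mode plus a smaller biased mode.
    rng = np.random.default_rng(0)
    results = np.concatenate([rng.normal(10.0, 0.2, 80), rng.normal(11.0, 0.3, 20)])

    gm = GaussianMixture(n_components=2, random_state=0).fit(results.reshape(-1, 1))
    print(gm.means_.ravel())         # component means
    print(gm.covariances_.ravel())   # component variances
    print(gm.weights_)               # proportions of each component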


Discrepancy in infrared measurement results of carbon monoxide in nitrogen mixtures due to variations of the 13C/12C isotope ratio by Gerard Nieuwenkamp; Adriaan M. H. van der Veen (pp. 506-509).
Significant errors in non-dispersive infrared (NDIR) analyses of carbon monoxide (CO) can be made when the 13C/12C isotope ratios in the sample and the calibrant differ significantly. This paper shows that variations in the 13C/12C isotope ratio of 5×10^−2 mol/mol CO-in-nitrogen mixtures, measured on three different NDIR CO analysers, may lead to serious deviations in the instrument response, whereas the response obtained with GC-TCD is unaffected. The observed deviations in the assigned amount-of-substance fraction of CO for a 13C-depleted mixture vary from +2 to −5% relative to the gravimetric amount-of-substance fraction, depending on the NDIR analyser. A GC-MS method has been developed to pre-screen the isotopic composition of CO in nitrogen mixtures; it proved to be an adequate tool for measuring differences in the 13C/12C ratio. Based on the GC-MS results, a suitable measurement technique can be selected, or information about a possible error in the NDIR analysis can be given to the producer or user of the calibration gas mixture.

Keywords: NDIR; 13C; Isotope ratio


Chemical and environmental sampling: quality through accreditation, certification and industrial standards by Mads Peter Schreiber; Veikko Komppa; Margareta Wahlström; Jutta Laine-Ylijoki (pp. 510-514).
In the CEN/STAR Trends Analysis workshop on Sampling, initiated at the request of the Nordic Innovation Centre, specially invited experts gave presentations on regulatory demands concerning sampling quality, developments in sampling standards, quality assurance systems, and practical experience from different sampling situations and cases. The workshop arrived at recommendations on the importance of proper sampling for environmental and product control purposes, especially in support of European regulations, trade agreements and environmental monitoring. Sampling is an integral part of the whole measurement process and should therefore be considered in particular from the viewpoint of the end-user of the results. There is a need to raise quality control issues in sampling and to establish a more uniformly co-ordinated European quality system for sampling. With the standard methods available, there are in principle two different ways of achieving third-party assessment of sampling protocols and procedures: accreditation of sampling organisations based upon international, national or in-house standards and methods, and certification of the competence of individual samplers. Several activities and efforts, as well as research and standardisation needs, for raising quality issues in sampling were identified by the workshop and are presented in this paper.

Keywords: Sampling; Chemical and environmental analysis; Standardisation; Accreditation; Trade agreement
