
Explaining the framework in detail

Purpose of the error framework

Any statistical output contains imperfections or uncertainties. These can arise from choices in methodology, limitations of input data sources, processing problems, or many other sources. The effect on customers will depend on how they use the output – a data issue may be irrelevant to one customer but make the output useless for another.

To fully understand how good a final output is for a given need we need a comprehensive list of its limitations. The error framework gives us a way to categorise and understand the sources of these limitations and how they affect the final output.

How the framework operates

Li-Chun Zhang (2012) developed the error framework. It breaks down the steps between the ideal concepts and population we would like to capture in our dataset and the final unit-record data that we obtain in practice.

Zhang’s framework builds on the Total Survey Error framework developed by Groves et al (2004, figure 2.5). This model examines all possible sources of error in survey data, from design right through to the data’s use in producing statistical outputs.

The framework has two phases – each has separate flows for 'measurement' (relating to target concepts and values obtained from population units) and 'representation' (relating to target sets of units and the objects measurements are obtained from). These are explained in more detail below.

Note: steps in the error framework are not arranged in order of production processing steps or data flows from data receipt to statistical output, as in the Generic Statistical Business Process Model (UNECE, 2013). The framework is trying to capture compromises needed to produce the output; for example, in translating an ideal concept into a question or variable we can measure in a well-defined way. Identifying these compromises and limitations helps to understand the differences between the final data and the perfect data we would wish for.

Using Li-Chun Zhang's framework, we can compile a comprehensive list of error sources for a given dataset, and then use the quality measures (see ‘Available files’) to quantify or monitor each error source. Together, the framework and quality measures support decision-making about cost/quality trade-offs when designing new outputs and improving existing ones.

Elements of the framework

The error framework separates the 'life cycle' of statistical data into two phases. This division makes it easy to categorise sources of error and understand their causes. The idea is to first evaluate datasets against their original purposes, and then consider how well the combination of datasets making up the final dataset fits the target concept and population of the intended statistical output. This is very important when combining several administrative or survey datasets to produce an output, but it is also useful for single-dataset outputs – it allows us to separate source data issues from the problems caused by trying to reuse the data for a purpose it wasn't designed for.

The framework is also split into two sides, 'measurement (variables)' and 'representation (objects or units)', which are explained below.

Phase 1

Phase 1 allows us to evaluate a single data source against the purpose for which the data was collected. For a survey dataset, this purpose is defined for a statistical target concept and target population. For an administrative dataset, the entries or 'objects' in the dataset might be people or businesses, but they could also be transaction records, or other events of relevance to the collecting agency. At this stage, evaluation is entirely with reference to the dataset itself, and does not depend on what we intend to do with the data.

Phase 2

Phase 2 categorises the difficulties arising from taking variables and objects from source datasets and using them to measure the statistical target concept and population we are interested in. In this phase, we consider what we want to do with the data, and determine how well the source datasets match what we would ideally be measuring.

Dividing assessment into two phases has two main benefits. Firstly, it separates out the information about the source dataset, which means we can reuse the phase 1 assessments for other possible outputs without repeating a lot of work. This also lets us explain why an administrative dataset can be fit for purpose for one output, but inadequate for another.

Secondly, it makes it easier to identify the real cause of a quality issue and to come up with a solution or mitigation strategy that addresses the error at its source. For example, undercoverage in our final output could have many causes, such as poor quality processing at the source agency, mismatches between how matching variables are defined on different datasets, or overly strict edits in our system. Being able to determine which of these is the true cause is far more valuable than simply knowing there is undercoverage.

Measurement

The measurement side of figures 1 and 2 sets out steps that connect the target concept (ideal information we want about each object) with the final edited values in the dataset. Sources of error on the measurement side include the degree to which the operational measure used captures the target concept, and how many and what kind of errors are introduced by respondent misunderstanding or mistakes.

Example of measurement evaluation: consider taxable income recorded in the Employer Monthly Schedule administrative dataset as a measure of personal income. In phase 1 we assess how well the figures in the administrative data meet their administrative purpose, whereas in phase 2 we evaluate how well the administrative variable captures our ideal statistical variable or concept.
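One way to start quantifying measurement error of this kind is to compare the administrative variable with a benchmark measure of the target concept on a set of linked records. The Python sketch below is illustrative only: it assumes a hypothetical linked file and column names, and the simple indicators shown merely stand in for the formal quality measures listed in ‘Available files’.

```python
import pandas as pd

# Hypothetical linked file: each row is a person with both the
# administrative value (EMS taxable income) and a benchmark value for
# the statistical target concept (e.g. survey-reported total income).
# Column names and values are illustrative only.
linked = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "ems_taxable_income": [42_000, 0, 55_000, 31_000],
    "target_income": [45_000, 12_000, 55_000, 30_000],
})

# Simple indicators of how far the administrative variable sits from
# the target concept for the linked records.
diff = linked["ems_taxable_income"] - linked["target_income"]
indicators = {
    "mean_difference": diff.mean(),
    "mean_absolute_difference": diff.abs().mean(),
    # Share of people whose administrative income falls short of the
    # target concept (e.g. income sources the EMS does not capture).
    "share_below_target": (diff < 0).mean(),
}
print(indicators)
```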

Representation

The representation side looks at the objects or units in the dataset and how well they match the desired target set (note: we use 'set' instead of 'population' because some administrative datasets are based on capturing events or transactions rather than a well-defined population of people or businesses). Ideally every object in the target set has a corresponding object recorded in the data. In phase 1, the focus is on objects, which could be events, transactions, or other entries in an administrative dataset, whereas phase 2 is concerned with units (the final statistical units in the dataset), which may be created artificially – based on a combination of objects from several linked datasets.

The representation side of figures 1 and 2 could be used to evaluate errors arising from combining administrative datasets to create a household register. Coverage problems, timing issues, data matching uncertainties, and problems in actually generating a list of household units are all included in the framework.
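As a rough illustration of how such coverage errors might be monitored, the sketch below compares the identifiers obtained by combining hypothetical administrative sources against a notional target set. The identifiers and rates are invented for illustration; in practice the relevant indicators come from the list in ‘Available files’.

```python
# Hypothetical identifier sets: the target set of units we want in the
# household register, and the units actually obtained after combining
# and linking the administrative sources.
target_set = {"P001", "P002", "P003", "P004", "P005"}
linked_units = {"P002", "P003", "P004", "P006", "P007"}

# Undercoverage: target units with no corresponding object in the data.
undercoverage = target_set - linked_units
# Overcoverage: objects in the data that fall outside the target set
# (e.g. out-of-scope records, duplicates, people who have left the country).
overcoverage = linked_units - target_set

print(f"Undercoverage rate: {len(undercoverage) / len(target_set):.0%}")
print(f"Overcoverage rate:  {len(overcoverage) / len(linked_units):.0%}")
```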

Steps for using the framework

The steps we recommend to assess the quality of an output or dataset using this framework are:

  1. Determine which datasets are relevant and collect basic information about each, such as the original purpose of the data collection, the set of objects or units in the target population, and definitions of the variables and how they are collected.
  2. Use phase 1 of the framework to collate detailed information about each dataset that relates to: processing the variables, rules used during collection, any specific restrictions on the units that make it into the final data, and any other known issues with the dataset. The aim is to define and explain each box in figure 1 (eg accessible set) in detail for each dataset. This includes categorising the known issues into the correct error types.
  3. Use the information gathered in step 2 to create a list of known or potential error sources, categorised according to the framework (one possible way to record this list is sketched at the end of this section).
  4. Use the list of measures and indicators (see ‘Available files’) to find ways to quantify or control each important source of error. For the most important error sources, also consider the effect of each type of error on the final output.

Once you’ve completed the phase 1 assessment for each source dataset, complete phase 2 using a similar process. It is important to define the statistical target population, concepts, and variables very clearly, so you can accurately compare each dataset assessed in phase 1 against the intended statistical use of the data.
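Recording each error source in a consistent, structured way makes the results of steps 2 to 4 easier to reuse when the same dataset feeds other outputs. The sketch below shows one possible structure; it assumes nothing about Stats NZ's internal tools, the dataset, descriptions, and indicator text are hypothetical, and the error-type labels would come from the framework itself.

```python
from dataclasses import dataclass

@dataclass
class ErrorSource:
    dataset: str      # source dataset the issue belongs to
    phase: int        # 1 = against the data's own purpose, 2 = against the statistical use
    side: str         # "measurement" or "representation"
    error_type: str   # category from the framework (e.g. coverage, measurement error)
    description: str  # what the issue is and where it arises
    indicator: str    # how we plan to quantify or monitor it

# Hypothetical phase 1 entries for an administrative income dataset.
assessment = [
    ErrorSource("EMS", 1, "measurement", "measurement error",
                "Employers occasionally misreport taxable income on the schedule.",
                "Rate of employer corrections filed in later periods"),
    ErrorSource("EMS", 1, "representation", "coverage",
                "Self-employed people without an employer do not appear.",
                "Undercoverage rate against the target set"),
]

for source in assessment:
    print(f"[phase {source.phase}, {source.side}] {source.dataset}: {source.description}")
```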
