- Research should focus on use of relational databases in LCA and on translating relational database
requirements into a data format and data sheet. Relational databases appear to be a very promising
tool for ensuring the integrity of the LCA database and, consequently, that of the LCA results.
Especially promising in this respect is the initiative, in the ISO context, to combine SPOLD and
SPINE: see ISO/TR 14048 (in prep.).
3.5 Data quality
For LCA models, like any other model, it holds that ‘garbage in = garbage out’. In other words, data
quality has a major influence on results and proper evaluation of data quality is therefore an important
step in every LCA. Even if the quality of individual data is high, however, such data can still yield
erroneous results if used to answer questions on which they have limited or no bearing. The data used in
a given case study should, for instance, be representative of that particular study. Quality requirements
thus refer to both the reliability and the validity of process data. As validity depends on the application in
question, it is not validity requirements as such that are specified here but the data needed to assess
that validity.
In ISO 14041 (1998E), clause 5.3.6, where ‘Data quality’ is part of the ‘Scope of the study’, the following
statements are made with regard to data quality issues (see textbox):
Heijungs et al. (1992)
The ’92 guide treats this topic in section 2.2.2, on the representativeness and quality of data, as part of the
Inventory analysis (pp. 32–33). The following sub-steps are distinguished:
- the representativeness of the processes;
- the quality of the process data;
- the overall assessment of the process data.
Part 3: Scientific background 100 May 2001
The aforementioned EDIP report (Wenzel et al., 1997) emphasises the importance of process
characterisation. Data quality is described at two levels: the level of individual inputs and outputs and
that of the whole process. According to EDIP, data characterisation should generally cover the following aspects:
- Definition of the scope of the process¹
· description of operations included and excluded and of inputs and outputs not linked to other
economic processes
· specification of co-products produced and method for allocating these
- Data characterisation
· description of known data gaps
· description of the data source
· description of how well the data describe the process and how representative they are of the average
· description of the representativeness of the process for the objective of the study
· description of the assessment/calculation of the coefficients of statistical variation for the
environmental inputs and outputs
· mass balance: calculation of the mass balance for the process
- Technological development
· description of technological developments and trends in the most important inputs and outputs
· description of the projection of the process²
1 In the EDIP report the format is described in both table 22.5 and figure 9.1, which unfortunately use different
titles and terminology. Here, the terminology from table 22.5 has been employed.
2 The term ‘projection’ in the EDIP book refers to the extrapolation of process data to the future year in which
the newly developed product is to be launched on the market.
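The mass-balance sub-step listed above can be illustrated with a minimal sketch. All process names, flow values and the function name below are hypothetical, not taken from the EDIP report:

```python
# Sketch of a mass-balance check for a unit process, as called for in the
# EDIP data characterisation. A large gap flags missing flows or data errors.

def mass_balance_gap(inputs_kg, outputs_kg):
    """Return the relative gap between total input mass and total output mass."""
    total_in = sum(inputs_kg.values())
    total_out = sum(outputs_kg.values())
    return abs(total_in - total_out) / total_in

# Hypothetical process: raw materials in, product plus wastes and emissions out.
inputs_kg = {"ore": 1.40, "process water": 0.30}
outputs_kg = {"product": 1.00, "slag": 0.55, "emissions to air": 0.12}

gap = mass_balance_gap(inputs_kg, outputs_kg)
print(f"mass balance gap: {gap:.1%}")
```

In practice the acceptable gap depends on the process and on how completely minor flows (e.g. water vapour) have been inventoried.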
Descriptions of data quality are important for understanding the reliability of the results of a study and
properly interpreting its outcome. Data quality requirements shall be defined to enable the goal and scope
of the study to be met. Data quality should be characterised in both quantitative and qualitative terms and
with due reference to the methods used to collect and integrate the data.
Data quality requirements should be set for the following parameters:
- temporal coverage: the desired age of data (e.g. within the last five years) and the minimum period of
time (e.g. annual) over which data should be collected;
- geographical coverage: geographical area from which data on unit processes should be collected to
satisfy the goal of the study (e.g. local, regional, national, continental, global);
- technology coverage: technology mix (e.g. weighted average of the actual process mix, best available
technology, or worst operating unit).
Consideration shall also be given to additional descriptors defining the sort of data required, e.g. collected
from specific sites versus data from published sources, and whether the data is to be measured,
calculated or estimated. Data from specific sites or representative averages should be used for those unit
processes contributing the majority of the mass and energy flows in the systems under study, as
determined in the sensitivity analysis performed in 5.3.5. Data from specific sites should also be used for
unit processes deemed to have environmentally relevant emissions.
In all studies, the following additional data quality requirements shall be considered at an appropriate level
of detail depending on the Goal and scope definition:
- precision: measure of the variability of the data values for each data category expressed (e.g. variance);
- completeness: percentage of locations reporting primary data relative to the potential number in
existence for each data category in a unit process;
- representativeness: qualitative assessment of the degree to which the data set reflects the true
population of interest (i.e. geographical, temporal and technology coverage);
- consistency: qualitative assessment of how uniformly the study methodology is applied to the various
components of the analysis;
- reproducibility: qualitative assessment of the extent to which information about the methodology and
data values allows an independent practitioner to reproduce the results reported in the study.
Where a study is used to support a comparative assertion that is disclosed to the public, all data quality
requirements described in this subclause shall be included in the study.
Source: ISO 14041, 1998E.
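The two quantitative parameters in the textbox above, precision and completeness, can be sketched numerically. The site-level emission values and site counts below are hypothetical:

```python
# Sketch of ISO 14041's quantitative data quality parameters:
# precision as the variance of the data values for a data category,
# completeness as the percentage of locations reporting primary data.
from statistics import pvariance

def precision(values):
    """Variance of the data values for one data category."""
    return pvariance(values)

def completeness(n_reporting, n_existing):
    """Percentage of locations reporting primary data."""
    return 100.0 * n_reporting / n_existing

co2_kg = [1.02, 0.98, 1.05, 1.00]   # hypothetical site-level CO2 data, kg per unit
print(f"precision (variance): {precision(co2_kg):.5f}")
print(f"completeness: {completeness(4, 10):.0f}%")  # 4 of 10 sites reported -> 40%
```

The remaining parameters (representativeness, consistency, reproducibility) are qualitative assessments and are usually reported as text or as scores, as in the pedigree matrix discussed below.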
Several remarks are in order. In the assessment of statistical variation, uncertainties in economic flows
may be more important than those in environmental inputs and outputs. Furthermore, it may be useful to
make a clear distinction between the overall validity and reliability of data (and other data quality
aspects) at the process level, independent of the application involved (as given by database
requirements, for example), and their validity and reliability in a specific application.
Nordic Technical Report No. 9 (Lindfors et al., 1995b) starts out by referring to data collection as the
most time-consuming and thus the most expensive element of LCA. In screening product LCAs the
amount of time required for this task can be reduced by using readily accessible data. However, this
data may be of inferior quality. The report mentions three aspects to which due attention should be paid
when selecting data for use:
- level of technology;
- age of data;
- site-specific vs. average data.
Lindfors et al. (1995b) consider all three aspects very important for the results of the study. A number of
parameters are suggested for assessing the quality of databases used in LCA:
- correctness;
- reliability;
- integrity;
- usability;
- portability;
- maintainability;
- flexibility;
- testability.
Building on Weidema (1994), Lindfors et al. (1995a) elaborate a detailed scheme of data quality
parameters, represented as a ‘data pedigree matrix’ (Table 3.5.1). They propose reporting data in the
following format:
2.3 MJ ± 0.1, 95% (2,2,3,1,1,1)
 A   B    C    D  (a,b,c,d,e,f)
A = the magnitude of the data as a numerical or descriptive expression (here: 2.3)
B = the ‘unit base