10 Step Guide


This short guide stresses three points. First, an internal coherence assessment should be conducted prior to the uncertainty and sensitivity analysis, so as to refine and, where necessary, correct the composite indicator structure; expert opinion is needed in this phase to assess the results of the statistical analysis. Second, there is a trade-off between multidimensionality and robustness in a composite indicator: one could have a very robust yet mono-dimensional index, or a very volatile yet multi-dimensional one. This does not imply that the first index is better than the second; rather, robustness analysis should be treated not as an attribute of a composite indicator itself, but of the inference which the composite indicator has been called upon to support. Third, the ten steps, although presented consecutively in the Handbook, are iterative in nature, and the developer benefits most from revisiting earlier steps in light of later results.


The theoretical framework provides the basis for the selection and combination of variables into a meaningful composite indicator which is fit for purpose. The involvement of experts and stakeholders is important. This step involves:

  • A clear understanding and definition of the multidimensional phenomenon to be measured.
  • Discussing the added-value of the composite indicator.
  • Building a nested structure of the various sub-groups of the phenomenon (if relevant).
  • Listing selection criteria for the underlying indicators, e.g., input/output/process, relevance, data requirements and so forth.

The selection of data and indicators should be based on the analytical soundness, measurability, country coverage, and relevance of the indicators to the phenomenon being measured and their relationship to each other. The use of proxy variables should be considered when data are scarce (again, here, the involvement of experts and stakeholders is important). This step may involve:

  • A quality assessment of the available indicators.
  • Discussing the strengths and weaknesses of each selected indicator.
  • A summary table on data characteristics, e.g., availability (across country, time), source, type (hard, soft or input, output, process), descriptive statistics (mean, median, skewness, kurtosis, min, max, variance, histogram).
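As an illustration, the summary table of data characteristics can be sketched with Python's standard library alone; the indicator names and country values below are hypothetical, and skewness/kurtosis are omitted for brevity.

```python
import statistics

def describe(name, values):
    """One row of the data-characteristics summary table for an indicator."""
    return {
        "indicator": name,
        "n": len(values),
        "mean": round(statistics.mean(values), 2),
        "median": statistics.median(values),
        "min": min(values),
        "max": max(values),
        "variance": round(statistics.variance(values), 2),
    }

# Hypothetical raw data: one list of country values per indicator
data = {
    "gdp_per_capita": [31.2, 44.5, 27.8, 52.1, 38.0],
    "life_expectancy": [78.1, 81.4, 76.5, 83.0, 79.9],
}

summary = [describe(name, values) for name, values in data.items()]
```

The same pattern extends to any further statistics (skewness, kurtosis, histograms) with a library such as SciPy.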

After assembling a set of indicators, missing data can be imputed, outliers treated and transformations can be applied to indicators where necessary and appropriate. Specifically, this may involve:

  • Building confidence intervals for each imputed value, allowing the impact of imputation on the composite indicator results to be assessed.
  • Discussing and treating outliers, to avoid them becoming unintended benchmarks (e.g., by Winsorisation, or by applying Box-Cox transformations such as square roots, logarithms, and others).
  • Making scale adjustments, if necessary (e.g., taking logarithms of highly skewed indicators, so that differences at the lower end of the scale matter more).
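Two of these treatments can be sketched in a few lines of Python. This is a minimal version: the Winsorisation here is simple empirical percentile capping, and the logarithm assumes strictly positive data.

```python
import math

def winsorise(values, lower=0.05, upper=0.95):
    """Cap extreme values at approximate empirical percentiles,
    so that outliers do not become unintended benchmarks."""
    s = sorted(values)
    lo = s[int(lower * (len(s) - 1))]
    hi = s[int(upper * (len(s) - 1))]
    return [min(max(v, lo), hi) for v in values]

def log_transform(values):
    """Compress a highly skewed, strictly positive indicator so that
    differences at the lower end of the scale matter more."""
    return [math.log(v) for v in values]

# The outlier 100 is capped at the next-largest observation
treated = winsorise([1, 2, 3, 4, 100])  # [1, 2, 3, 4, 4]
```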

Multivariate analysis can be used to study the overall structure of the dataset, assess its suitability, and guide subsequent methodological choices (e.g., weighting and aggregation). This can involve:

  • Assessing the statistical and conceptual coherence in the structure of the dataset (e.g. by principal component analysis, correlation analysis, and Cronbach’s alpha).
  • Identifying peer groups of countries based on the individual indicators and other auxiliary variables (e.g. by cluster analysis).
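One of the coherence checks mentioned above, Cronbach's alpha, needs no third-party libraries; the two example indicator columns below are hypothetical.

```python
import statistics

def cronbach_alpha(items):
    """Internal-consistency coefficient for a group of indicators.
    items: one list of (normalised) country scores per indicator."""
    k = len(items)
    item_variances = [statistics.variance(col) for col in items]
    totals = [sum(scores) for scores in zip(*items)]  # per-country sum
    return k / (k - 1) * (1 - sum(item_variances) / statistics.variance(totals))

# Two perfectly correlated indicators give a high alpha (8/9 here)
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
```

Principal component and cluster analysis would typically be done with a library such as scikit-learn rather than by hand.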

Normalisation brings indicators onto a common scale, which renders the variables comparable. Generally this involves:

  • Making directional adjustments, so that higher values correspond to better performance in all indicators (or vice versa).
  • Selecting a suitable normalisation method (e.g., min-max, z-scores, and distance to best performer) that respects the conceptual framework, the data properties, and can be easily interpreted by users.
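The three normalisation methods named above, plus the directional adjustment, can be sketched as follows; this minimal version assumes no missing values and non-constant indicators.

```python
import statistics

def flip_direction(values):
    """Directional adjustment for 'lower is better' indicators."""
    return [-v for v in values]

def min_max(values):
    """Rescale to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_score(values):
    """Standardise to mean 0, standard deviation 1."""
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def distance_to_best(values):
    """Express each value as a fraction of the best performer."""
    best = max(values)
    return [v / best for v in values]
```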

When indicators are aggregated into a composite measure, they can be assigned individual weights. This allows the effect or importance of each indicator to be adjusted according to the concept being measured. Weighting methods can be statistical, based on public/expert opinion, or both. This step can involve:

  • Expert/public consultation to understand the relative importance of indicators or components of the index to stakeholders.
  • Selecting the appropriate weighting method. Different methods can be trialled, but keep in mind that the ability to communicate the final weighting scheme is important; simpler methods can be more effective in this respect.
  • Exploring the sensitivity of the scores and ranks to different weighting approaches. This forms part of Step 8.
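Applying a weighting scheme to normalised indicator values reduces to a few lines. In this sketch the weights might come, for example, from a budget-allocation exercise with experts; they need not sum to one, since they are normalised internally.

```python
def weighted_score(indicators, weights):
    """Weighted arithmetic average of one country's normalised indicators."""
    total = sum(weights)
    return sum(w / total * x for w, x in zip(weights, indicators))

# Hypothetical: the second indicator is judged three times as important
score = weighted_score([0.5, 1.0], [1, 3])  # 0.875
```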

Aggregation combines the values of a set of indicators into a single summary ‘composite’ or ‘aggregate’ measure. The most common approach is to simply take the average of the normalised scores, but other techniques can be used based on other types of averaging, or using ranks. This step can involve:

  • Selecting the appropriate aggregation method, based on the concept being measured, particularly considering whether high values of one indicator should be allowed to compensate for low values of another.
  • Investigating alternative aggregation methods as part of an uncertainty/sensitivity analysis.
  • Checking, after aggregation, the relationship of the aggregate measure with the underlying indicators, to reveal the drivers of good/bad performance.
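The compensability point above can be made concrete by comparing the two most common aggregation rules; a brief sketch:

```python
import math

def arithmetic_mean(scores):
    """Fully compensatory: a high score fully offsets a low one."""
    return sum(scores) / len(scores)

def geometric_mean(scores):
    """Partially compensatory: penalises unbalanced profiles (scores > 0)."""
    return math.prod(scores) ** (1 / len(scores))

# Both profiles average 0.5 arithmetically, but the unbalanced
# profile is penalised by the geometric mean (0.4 vs 0.5)
balanced, unbalanced = [0.5, 0.5], [0.2, 0.8]
```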

Uncertainty analysis quantifies the uncertainty in the scores and ranks of the composite indicator that results from uncertainty in the underlying assumptions. Sensitivity analysis quantifies the uncertainty caused by each individual assumption, identifying particularly sensitive assumptions that might merit closer consideration. This step involves:

  • Identifying which are the main uncertainties underlying the composite indicator (e.g. methodological choices, indicator selection, alternative frameworks, etc.)
  • Using tools such as Monte Carlo simulation to investigate the effects of perturbing these assumptions (e.g. alternative weighting schemes, aggregation methods, etc.) on the scores and ranks.
  • Assigning and plotting confidence intervals on ranks, and identifying key assumptions using sensitivity analysis.
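A bare-bones Monte Carlo sketch of the weight-perturbation idea, using only the standard library; the perturbation range and number of runs below are illustrative choices, not prescriptions.

```python
import random

def rank(scores):
    """Rank countries by score, 1 = best."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def monte_carlo_rank_intervals(data, n_runs=1000, seed=42):
    """data: one row of normalised indicator values per country.
    Each run draws a perturbed weight vector, recomputes the composite
    scores, and records the resulting rank of every country."""
    rng = random.Random(seed)
    k = len(data[0])
    collected = [[] for _ in data]
    for _ in range(n_runs):
        w = [rng.uniform(0.5, 1.5) for _ in range(k)]  # illustrative range
        total = sum(w)
        scores = [sum(wi / total * x for wi, x in zip(w, row)) for row in data]
        for country, r in enumerate(rank(scores)):
            collected[country].append(r)
    return [(min(rs), max(rs)) for rs in collected]  # rank interval per country
```

Plotting these intervals per country gives the confidence-interval chart mentioned above; replacing the weight draw with a switch between aggregation methods covers the other assumptions in the same loop.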

The scores of the composite indicator (or its dimensions) should be correlated with other existing composite indicators and related indicators/data, and linkages should be identified through regressions. This can involve:

  • Correlating the composite indicator with relevant measurable phenomena (similar composite indicators, but also relevant quantities such as GDP, GDP per capita, etc.) and explaining similarities or differences.
  • Developing data-driven narratives based on the results, keeping in mind the significance level of the correlations and the implications of multiple testing.
  • Performing causality tests (if time series data are available).
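The correlation check itself is a small helper; a sketch, with GDP per capita as a hypothetical external variable and invented values for five countries.

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation between two equally long series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical composite scores and GDP per capita for five countries
composite = [0.42, 0.55, 0.61, 0.70, 0.78]
gdp_per_capita = [27.8, 31.2, 38.0, 44.5, 52.1]
r = pearson(composite, gdp_per_capita)
```

With only five observations even a large r can be statistically insignificant, which is why the significance and multiple-testing caveats above matter.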

Composite indicators are ultimately a communication tool, and their impact can be greatly enhanced by proper visualisation, both static and interactive (online). Good visualisation helps to communicate the message effectively and gives a sense of professionalism, while online data-exploration tools give full transparency to the dataset and allow users to drill down to the underlying data. This step can involve:

  • Identifying the target audience and the best means of visualisation (e.g. simple vs technical).
  • Communicating key messages/conclusions through carefully selected charts and infographics which are clear and do not over-complicate or obscure the information.
  • Constructing a web platform for visualising the data, reporting methodology, making data available for download, etc.