Analysis instead of summation: why ICT indices don’t work

Global ICT indices are compelling because they allow multilateral organisations, such as the ITU or the World Bank, to produce indicators that member states can use to assess their progress against proposed policies or so-called 'best practice' by competing against one another in the rankings. They are equally compelling for lobby groups and industry forums, such as the GSMA and the World Economic Forum (WEF), and for multinational organisations and global platforms such as Facebook, because they provide seemingly simple, generalised answers across countries to complex, often context-specific problems. But the answers provided by composite ICT indices, as currently compiled, are at best indicative of countries' movement towards generalised goals, without offering any causal explanation. Worse, given the absence of data to measure key outcomes and the superficiality of single 'expert reviews' of country assessments, they send the wrong signals, perpetuating poor policy or prescribing incorrect policy fixes.
Powerful epistemic communities have developed around these indices, leaving them largely unchallenged intellectually and, in the absence of alternatives, making them the default reference points for governments, regional economic communities, the UN and other multilateral agencies, and aid and donor organisations.
We demonstrate that analysing ICT indicators through a dynamic diagnostic approach, benchmarked with context-relevant indicators, provides a far more valid evidence base for policy and regulatory interventions and practice.