Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization


Evaluation Function

From 2012 to 2014, the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) undertook a series of exercises to build and then test its operational capability to launch, conduct and recover an on-site nuclear test inspection (OSI) anywhere in the world within stringent Treaty timelines. Three build-up exercises first took place in Austria and Hungary during 2012-13. The final test exercise, the largest weapons of mass destruction (WMD) non-proliferation verification exercise ever staged, was then conducted in Austria and Jordan at the end of 2014, involving over 200 international experts drawn from member states.

The ‘formative’ evaluation of the three build-up exercises first required the planners to formulate, design and implement a bespoke and primarily qualitative method and associated toolset to assess OSI operational capability in what was essentially a fluid, fast-changing and potentially hostile field environment. An external evaluation team (ET) of 10 scientific and operational experts, with contextual experience in the Treaty or in other arms control and verification regimes, was then selected, trained, deployed and supported throughout the evaluation of each exercise.

The ‘summative’ evaluation of the final test exercise, known as the 2014 Integrated Field Exercise (IFE14), required the evaluation to assess the level of operational preparedness of the OSI regime at that point in time, using only the equipment and techniques it had in place rather than future aspirations. Moreover, it needed to assess the level of improvement since the previous IFE in 2008 and identify areas for further improvement. The evolving evaluation method successfully codified OSI operational capability by drilling down into key target areas to identify over 4,500 performance indicators. The ET would then gather data on all of these indicators to use when making its evidence-based assessment. The indicators were assigned to individual evaluators according to their roles and scientific expertise, and allocated across their tools to ensure data triangulation. A newly conceptualised ‘Information Acquisition Plan’ then directed the ET so that evaluators were in the right place at the right time, knowing what they were looking for and which tool was to be used.
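The allocation logic described above, assigning each indicator to an evaluator and spreading it across more than one data-collection tool so that findings can be triangulated, can be sketched in broad terms. This is an illustrative sketch only; the evaluator names, tool names and round-robin scheme are hypothetical, not the actual CTBTO assignment method.

```python
# Hypothetical sketch of indicator allocation for triangulation:
# each indicator gets one responsible evaluator and at least two
# distinct data-collection tools, assigned round-robin.
from itertools import cycle

TOOLS = ["observation_log", "structured_interview", "document_review"]

def allocate(indicators, evaluators, tools=TOOLS, tools_per_indicator=2):
    """Assign each indicator an evaluator and >=2 distinct tools."""
    evaluator_cycle = cycle(evaluators)
    tool_cycle = cycle(tools)
    plan = []
    for indicator in indicators:
        assigned_tools = []
        while len(assigned_tools) < tools_per_indicator:
            tool = next(tool_cycle)
            if tool not in assigned_tools:  # keep tools distinct per indicator
                assigned_tools.append(tool)
        plan.append({"indicator": indicator,
                     "evaluator": next(evaluator_cycle),
                     "tools": assigned_tools})
    return plan

plan = allocate(["IND-0001", "IND-0002", "IND-0003"], ["eval_A", "eval_B"])
```

Because every indicator is observed through at least two independent tools, any single assessment can be checked against a second data source before it feeds into a conclusion.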

In parallel with the development of, and training on, the method and tools, the planners also conceptualised and developed from scratch an Evaluation Information Management System (EIMS), based on PostgreSQL, that automated the handling and processing of the data, guided its acquisition, and facilitated its collation to inform conclusions from which recommendations were derived. Throughout this process the responses to the 4,500 indicators were automatically cross-referenced to the conclusions and recommendations that followed, thus providing a chain of evidence to support, indeed guarantee, evidence-based reporting. Following the very successful IFE14 exercise, the evaluation used EIMS in 2015 to generate reports for the evaluation’s users, which included the operational division and member states.
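The chain-of-evidence idea behind EIMS, every recommendation traceable back through a conclusion to the indicator responses that support it, amounts to a small relational model with link tables. The sketch below illustrates that pattern only; it uses SQLite in place of PostgreSQL for portability, and all table, column and sample names are hypothetical rather than the actual EIMS schema.

```python
# Illustrative chain-of-evidence model (hypothetical schema, SQLite
# standing in for PostgreSQL): indicator responses -> conclusions ->
# recommendations, with a link table preserving traceability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE indicator_response (
    id INTEGER PRIMARY KEY,
    indicator_code TEXT NOT NULL,
    response TEXT NOT NULL
);
CREATE TABLE conclusion (
    id INTEGER PRIMARY KEY,
    text TEXT NOT NULL
);
-- Link table: each conclusion cites the responses that support it.
CREATE TABLE conclusion_evidence (
    conclusion_id INTEGER REFERENCES conclusion(id),
    response_id INTEGER REFERENCES indicator_response(id)
);
CREATE TABLE recommendation (
    id INTEGER PRIMARY KEY,
    conclusion_id INTEGER REFERENCES conclusion(id),
    text TEXT NOT NULL
);
""")
conn.execute("INSERT INTO indicator_response VALUES (1, 'IND-0001', 'met')")
conn.execute("INSERT INTO conclusion VALUES (1, 'Capability demonstrated')")
conn.execute("INSERT INTO conclusion_evidence VALUES (1, 1)")
conn.execute("INSERT INTO recommendation VALUES (1, 1, 'Retain procedure')")

# Trace a recommendation back to its supporting indicator responses.
rows = conn.execute("""
    SELECT r.text, ir.indicator_code
    FROM recommendation r
    JOIN conclusion c ON c.id = r.conclusion_id
    JOIN conclusion_evidence ce ON ce.conclusion_id = c.id
    JOIN indicator_response ir ON ir.id = ce.response_id
""").fetchall()
```

Because the links are stored as foreign-key relations rather than free text, report generation can mechanically verify that no conclusion or recommendation exists without underlying evidence.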




Ian Oliver


Josep Vila

Evaluation Officer, CTBTO
