Designing and delivering Theory of Change evaluations
Theory of Change evaluation explores the underlying assumptions, or small ‘t’ theories, about why certain activities are expected to produce a set of outcomes or behavioural changes. This ‘hypothesis’ is then tested to find out what works and for which groups of people. Theory of Change is a preferred methodology for evaluating complex social change programmes because it can demonstrate the attribution and contribution a project or service makes to change.

At the beginning of the assignment, we work closely with our commissioners and their partners to develop their draft theory of change. We then test this ‘theory’, or hypothesis, and at the end of the assignment produce a revised and refined theory of change.

Understanding why and how the changes occurred allows projects to be sustained and adapted for replication elsewhere.

Paying attention to context
All services and projects take place in different contexts and different communities: in other words, “the way we do things round here”. We pay particular attention to the local factors that influence how changes occur and whether the local context is receptive to change. In multi-site initiatives we favour a ‘place study’ method to explore in depth how the local context influences outcomes.

Engaging stakeholders
We work closely with our clients to ensure we understand the local culture and how the environment affects change. We do this by interviewing a range of people who have a stake in the service or project, including service users and volunteers, senior managers in partner organisations, executive officers in local authorities or health organisations, and local politicians.

Measuring outcomes
We always use an outcome-driven approach, by which we mean we measure what changes during the life of the project. We do this by developing a range of indicators of success and then designing the right quantitative and qualitative tools to measure the change.

Working with peer evaluators
We are committed to using a peer evaluator methodology when it is appropriate to the assignment. We have trained and supported both younger and older people to be part of the evaluation team: developing the evaluation tools, collecting data directly from their peers and contributing to the overall analysis. Drawing on this experience, we have developed a training pack to support the approach.

Using cost-benefit analysis
When appropriate we carry out a cost-benefit analysis to demonstrate the cost-effectiveness of an initiative using evidence-based calculations.
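At its simplest, a cost-benefit analysis of this kind compares an initiative's total costs with the monetised value of its outcomes. The sketch below illustrates the core calculation only; the figures and function name are hypothetical and do not come from any specific evaluation.

```python
def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Return monetised benefits per unit of cost.

    A ratio above 1 suggests the initiative returns more value
    than it costs; below 1 suggests the reverse.
    """
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return total_benefits / total_costs


# Illustrative figures only: a project costing 50,000 that is
# estimated to avert 120,000 in downstream service costs.
costs = 50_000.0
benefits = 120_000.0
print(f"Benefit-cost ratio: {benefit_cost_ratio(benefits, costs):.2f}")
```

In practice the hard part is not the arithmetic but the evidence base: deciding which outcomes can credibly be monetised and which unit costs to draw on.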

Promoting reflective practice and self-evaluation
We support reflective practice and the development of self-evaluation in people who deliver projects, through using simple tools and creative facilitation activities. We find this works at all levels of organisations from small community groups to senior management teams. It has the potential to embed an evaluation culture in organisations and to make evaluation integral to practice rather than an ‘add-on’.