
Practical Handbook for Ongoing Evaluation

7. Which type of evaluation: strategic, thematic, cross-programme, operational and/or management?

7.1. European requirements & European guidance

The European Commission does not specify any requirements regarding the type of evaluation. Working Document No. 5 mentions the flexible character of evaluation during the programming period:

"The Regulation 1083/2006 provides for flexible arrangements for the thematic scope, design and timing of ongoing evaluation.

Within this flexible framework, Member States are not limited to evaluations at the level of the Operational Programme. In fact, they are encouraged to undertake evaluations by themes/priority axes/groups of actions/major projects or by policy fields (e.g. for ESF interventions) across Operational Programmes, or within a specific Operational Programme, as well as of their National Strategic Reference Framework (NSRFs), as appropriate."

7.2. Possible approach to types of evaluation

An important question when the Evaluation Plan and/or the Terms of Reference are being developed is what type of evaluation the programme would like to perform. Evaluation can be of a more strategic character, thematic, cross-programme or operational. An evaluation could also be a combination of these. Several possibilities are mentioned here:

A: Strategic evaluation: A strategic evaluation generally focuses on the longer term and takes the broader policy context into account in order to inform current or future strategic decisions. An example would be to assess how the programme is contributing to the (revised) Lisbon Agenda. A strategic evaluation generally investigates the programme's relevance as well as its effectiveness.

B: Thematic evaluation: A thematic evaluation focuses on a specific theme, such as innovation or equal opportunities. Thematic evaluations mostly look at the effectiveness and relevance of the programme.

C: Cross-programme evaluation: A cross-programme evaluation focuses on several programmes, e.g. all transnational programmes, all territorial programmes in the Danube area, or several programmes that would all like to evaluate, for example, their monitoring procedures. A cross-programme evaluation can be focussed on one or more of the key evaluation issues: relevance, effectiveness and efficiency.

D: Operational evaluation: An operational evaluation deals with operational issues such as application procedures or the performance of the programme. In an operational evaluation, the efficiency and effectiveness of the programme will be the central focus.

These types of evaluation are often performed in combination. For example:

  • A thematic cross-programme evaluation could focus on innovation in several programmes;
  • An operational cross-programme evaluation would look at operational aspects of several programmes, such as the performance of the indicators in several programmes.

The diagram below shows which type of evaluation can be used and when. For example, a strategic evaluation will look more at content-related issues (relevance, effectiveness and/or consistency). A thematic evaluation is also very much content-related and will therefore deal with the relevance, effectiveness and/or consistency of the theme. A cross-programme evaluation can be more strategic and content-related in character, theme-oriented and/or operational. Finally, an operational evaluation will deal more with efficiency, effectiveness and/or consistency issues.

Diagram: which types of evaluation can be used and when

Source: INTERACT

Each type of evaluation is described below.


Experience of Adrian Constandache, Ministry of Regional Development and Housing, Romania:
Focus on the topics that are really important. There is a trade-off between the number of evaluation questions and the depth of the analysis carried out.



7.3. Type A: Strategic evaluation

Strategic evaluation is a part of the evaluation process that assesses whether proposals are relevant to the priorities and measures of the specific INTERREG programme and specific call. It investigates whether the programme fits properly into the political, geographical, socio-economic and cultural environment of the programme area.

7.3.1. Why conduct a strategic evaluation?

In a strategic evaluation the focus will be on the programme's relevance. The main question will be: is the programme still relevant, considering current developments?

7.3.2. Focus and methodologies

A strategic evaluation will generally include a policy analysis and interviews or workshops with policy-makers and/or members of the Monitoring Committee. Fundamental research, for example on the future of innovation in relation to Territorial Cooperation programmes, could also be part of a strategic evaluation. For instance, in 2004 the North Sea programme carried out several studies to investigate which elements needed attention in the short and long term.

Important elements of a strategic evaluation are:

  • The continuing relevance and consistency of the Operational Programme;
  • The contribution of the programme to broader European objectives such as the (revised) Lisbon Agenda.

The European Commission advises that evaluations should, where appropriate, include questions that place them in the broader policy context, for example:

  • To what extent have individual interventions contributed to the strategic objectives?
  • How coherent and complementary have these interventions been?
  • Is there scope for simplifying existing interventions or legislation?
  • What progress has the programme made towards reaching its strategic-level objectives?

Evaluations should also include questions concerning cost-effectiveness and the adequacy of resources, since the corresponding evaluation results should be used to justify budgetary amendments and to arbitrate between competing demands for activities.

7.4. Type B: Thematic evaluation

A thematic evaluation is focussed on specific topics in a specific thematic area, or on an aspect of an activity. This can be done within a single programme or across different programmes. It tends to be more exploratory or compliance-oriented than other types of evaluation. Very often, thematic evaluations will also be strategic evaluations.

7.4.1. Why conduct a thematic evaluation?

Thematic evaluations are the best opportunity for programmes to investigate whether a specific topic or thematic area will need more or less attention in the current or next programme. The aim of a thematic evaluation could be one of the following:

  • Offering a comprehensive picture for further analysis of combined effects of other policy tools outside the cohesion policy;
  • Supporting coherence and relevance of strategies;
  • Checking the effectiveness of the programmes. Are the objectives of specific current priorities and operations being met?
  • Checking the relevance of the programme. Is the programme still relevant? (This would be a strategic thematic evaluation.)
  • Checking the consistency of the programme.

7.4.2. Topics of thematic evaluations

The European Commission recommends undertaking and designing the evaluation in line with the specific needs of the Operational Programme.

A thematic evaluation could focus, for example, on:

  • Added value of cooperation;
  • European integration;
  • Territorial cohesion;
  • Long-term (in)tangible results;
  • A priority: e.g. innovation, accessibility, sustainable communities;
  • Horizontal priorities: e.g. equal opportunities, environment;
  • A specific theme: e.g. SME development, information society;
  • The adjustment of the Operational Programme to changes in the socio-economic environment or in European, national and regional priorities;
  • Territorial Cooperation-specific aspects such as network building, new tools & approaches, use of results and cross-border added value.

7.4.3. Methodologies used

The methodologies used for thematic evaluations can be very different, depending on the evaluation topic and many other aspects. For this reason, no general approach can be identified.

One example could be the evaluation of the environmental impact of a programme, for which the methodology could be as follows:

  • Identification of the environmental issues (water, soil, air, biodiversity, etc.) that the programme is likely to have an impact on (they should be in the SEA report);
  • Analysing the impact of each project (or a sample of projects) on the environmental issues (data should be available in the project environmental report);
  • Reaching conclusions on the overall impact of the programme on the environmental issues;
  • Identifying the environmental objectives established by the authorities at national, regional or local level;
  • Analysing the contribution or impact of the programme (analysed projects) in terms of achieving those environmental objectives;
  • Proposing mitigation measures in the event that significant negative environmental impacts are identified.

Experience of Marie-Jose Zondag, evaluation expert:
Make a clear distinction between need and interest. A clear need for projects might have been identified in the SWOT analysis of a programme, yet only a small number of projects may have been submitted on this topic. On the other hand, there could be a large number of projects on a topic for which there is no real need. An evaluation can be useful to identify this and to show how the programme could be geared more towards the kind of projects needed and how these projects could be facilitated.


Experience of Kai Böhme, evaluation expert:
Standard evaluation and monitoring approaches often have difficulties capturing the real achievements of Territorial Cooperation. Thematic evaluations can be focussed differently and thus highlight what has been achieved. They can therefore be useful in the presentation of single programmes and of Territorial Cooperation as such.



7.5. Type C: Cross-programme evaluation

A cross-programme evaluation is an evaluation covering several programmes. This could include evaluations of:

  • Adjacent programmes, for example all programmes in the Dutch-German border area;
  • Several programmes in the same area, for example all programmes in the Danube area;
  • Several programmes that want to evaluate the same issue, such as their indicator system. In this case the location of the programme does not matter, so the programmes could be spread all over Europe.

7.5.1. Why conduct a cross-programme evaluation?

There are several reasons why a cross-programme evaluation could be useful:

  • Increased objectivity, transparency and independence of the evaluation. Higher credibility.
  • Better understanding of similarities/differences between programmes. Better understanding of programme-specific strengths.
  • Fostering common understanding of thematic priorities to support future strategic programming decisions (e.g. strategic projects).
  • Avoiding duplication of effort (enabling a single evaluation of an aspect which occurs in several Operational Programmes).
  • Capturing interaction between Operational Programmes.
  • Reduction of the overall number of evaluations. Combining evaluation resources to achieve more and reduce costs and resources for individual programmes.
  • Possibility to address broader evaluation questions in geographic or thematic terms (i.e. larger regions and/or wider scope). Facilitating a perspective on multi-programme impacts beyond the results of individual programmes.
  • Learning between partners: sharing evaluation techniques.
  • Objectivity and increased legitimacy of findings: joint working increases the objectivity, transparency and independence of the evaluation and strengthens its impact.
  • Broad participation increases ownership of findings and makes follow-up on recommendations more likely.
  • Harmonisation and reduced costs: limiting the number of evaluation messages, fostering consensus on upcoming priorities.
  • Broader scope: joint evaluation can address broader evaluation questions and facilitate a perspective on multi-agency impacts beyond the results of one individual programme.

7.5.2. Challenges for cross-programme evaluation

  • Building consensus between the partners and maintaining effective coordination processes can be costly and time-consuming.
  • Subjects are more difficult to evaluate than in a single-programme evaluation.
  • There may be little enthusiasm for joint evaluations, as they are time-consuming and their added value is not always obvious.

7.5.3. Scope and methodologies

The scope of a cross-programme evaluation can be very broad or very specific. It can be thematic, operational, strategic or management-oriented, so the methodologies to be used will vary. A cross-programme evaluation can focus on all kinds of evaluation issues, such as relevance, effectiveness, efficiency and/or consistency.


Experience of Adrian Constandache, Ministry of Regional Development and Housing, Romania:
These evaluations allow the programmes to analyse certain aspects taking into account "the wider picture" (by comparing themselves with other programmes and seeing which is the best approach). Benchmarking should be the key evaluation instrument here.

Experience of Kai Böhme, evaluation expert:
Benchmarking & learning from each other in an open atmosphere can be important aspects of honest cross-programme evaluations.


7.6. Type D: Operational evaluation

An operational evaluation will evaluate operational aspects of the programme, such as the monitoring system, the management system, the control system, the indicator system, etc. An operational evaluation will mainly focus on efficiency but could also look at (quantitative) effectiveness and consistency.

7.6.1. Why conduct an operational evaluation?

It is useful to start an operational evaluation when a programme would like to investigate its efficiency, effectiveness and consistency.

7.6.2. Scope and methodologies

An operational evaluation can, for example, investigate the programme management systems (e.g. the project selection system, monitoring system, etc.). Topics to deal with include, for example, the institutional system, the audit trail and the appraisal system. These evaluations can serve as a tool for the transfer of good practice in order to improve the efficiency and effectiveness of these systems. An operational evaluation can be carried out even in the early stages of programme implementation and could be seen as a small, separate evaluation or as part of a wider evaluation. The methods used are mostly quantitative approaches to measure effectiveness and efficiency, such as progress reports, questionnaires and interviews with beneficiaries.

7.7. Tool: Examples of cross-programme and thematic evaluations

7.7.1. Examples for inspiration

  • ECORYS, Mid-term evaluation of the North Sea Region Programme 2000-2006, 2003.

Examples of thematic evaluations:

  • ECORYS, Evaluation of the regional innovation strategies in 2007-2013 ERDF programmes, 2008-2009.
  • Böhme, K. & F. Josserand (2004) 'Transnational Cooperation: An instrument for organisational learning and social capital building', Journal of Nordregio 1/2004, pp. 18-20.
  • Böhme, K., F. Josserand, P.I. Haraldsson, J. Bachtler & L. Polverari (2003) Transnational Nordic-Scottish Cooperation: Lessons for Policy and Practice. Stockholm: Nordregio.
  • Böhme, K. (2005) 'The ability to learn in transnational projects', Informationen zur Raumentwicklung 2005 (11/12), pp. 691-700.

Examples of cross-programme evaluations:

  • INTERACT Cross-programme Evaluation, 2010.
  • R. Hummelbrunner, Ongoing Evaluation INTERREG IIIA Programmes A-CZ, A-HU, A-SI, A-SK: Synthesis of cross-programme topics (1), Graz, February 2004.
