Program Evaluation

By: Hal  09-12-2011

HAL has more than two decades of experience in program evaluation in the public sector. Our activities include: examining the rationale for existing and proposed programs; measuring the achievement of objectives; assessing the impacts and effects of programs; determining more cost-effective methods of achieving objectives; and recommending future directions.

In recent years, our firm has undertaken numerous evaluation assignments, including evaluation frameworks, reviews and full evaluations, all in accordance with the Treasury Board of Canada (TBC) Guidelines on Program Evaluation.

The Treasury Board policy is that departments are responsible for strategically evaluating their own policies and programs and for using the findings in decision-making and accountability reporting. Evaluations consider how well policies and programs are performing in terms of (i) their continued relevance in light of changing circumstances, (ii) the results they are producing, and (iii) the opportunities for using alternative, more cost-effective policy instruments or program delivery mechanisms. To provide the Treasury Board Secretariat (TBS) with a basis for monitoring the implementation of evaluations, the Board has established evaluation standards.

The Treasury Board Evaluation Policy is based on three fundamental principles:

  • That achieving and accurately reporting on results is a primary responsibility of public service managers.
  • That rigorous and objective evaluation is an important tool in helping managers to manage for results.
  • That departments, with the support of the Treasury Board Secretariat, are responsible for ensuring that the rigour and discipline of evaluation are sufficiently deployed within their jurisdictions.

Evaluation has two main purposes:

  • To help managers design, or improve the design of, policies, programs and initiatives.
  • To provide, where appropriate, periodic assessments of policy or program effectiveness, of impacts both intended and unintended, and of alternative ways of achieving expected results.

Departments should embed the discipline of evaluation into the lifecycle management of policies, programs and initiatives to:

  • Develop results-based management and accountability frameworks for new or renewed policies, programs and initiatives.
  • Establish ongoing performance monitoring and performance measurement practices.
  • Evaluate issues related to the early implementation and administration of the policy, program or initiative, including those that are delivered through partnership arrangements (formative or mid-term evaluation).
  • Evaluate issues related to relevance, success, and cost-effectiveness.

The standards cover the following:

Planning the Evaluation: Based on a departmental strategic plan, the evaluation takes account of the objectives and priorities of both the department and the government. The full range of evaluation issues is considered at the planning stage, as represented in a program framework for the evaluation.

Accountability: The evaluation addresses issues that are needed for accountability reporting including those involving key performance expectations.

Measurement and Analysis: The evaluation will produce timely, pertinent and credible findings and conclusions that management and other stakeholders can use with confidence. The evaluation findings are relevant to the evaluation issues addressed and follow from the evidence. Conclusions are consistent with and follow from the findings.

Evaluation Reports: Evaluation reports present the findings and conclusions in a clear and balanced manner and indicate their degree of reliability. Reports are consistent with Cabinet and Treasury Board submission procedures and external reporting requirements.

Results Based Management and Accountability Frameworks

Public sector managers are expected to define anticipated results, continually focus attention on results achievement, measure performance regularly and objectively, learn from this information, and adjust to improve efficiency and effectiveness.

The Results-based Management and Accountability Framework (RMAF) is intended to serve as a blueprint for managers to help them focus on measuring and reporting on results throughout the lifecycle of a policy, program or initiative.

The RMAF is intended to help managers put in place a sound governance structure, a results-based logic model, a sound performance measurement strategy, and adequate reporting.

Different results-measurement activities occur at different points in time as part of the ongoing management of a policy, program or initiative. This continuum, from the initial consideration of performance measurement, through performance monitoring, to formative and summative evaluation, is presented in the following table. While it is shown as a linear process, performance measurement is in fact iterative, and review and feedback are important parts of the process. Evaluators are key to steps 7 and 8, and are often involved in steps 0, 1 and 2.

RMAF Activities

Step  Description                                         Stage             Evaluator Involvement
0     Performance measurement understanding               RMAF Development  Often involved
1     Program profile                                     RMAF Development  Often involved
2     Articulating performance measures                   RMAF Development  Often involved
3     Establish appropriate data gathering strategy       RMAF Development
4     Information system development and data gathering   Implementation
5     Measuring / reporting of results information        Implementation
6     Review / assessment / modification                  Implementation
7     Formative Evaluation (management issues)            Evaluation        Key
8     Summative Evaluation (fundamental program issues)   Evaluation        Key

There are three key parties involved in the development and implementation of a Results-based Management and Accountability Framework: managers, evaluation specialists, and in the case of a Treasury Board commitment, analysts of the Treasury Board Secretariat.

The RMAF contains several components:

1. Profile - a concise description of the policy, program or initiative, including a discussion of the background, need for the program, target population, delivery approach, resources, governance structure, and intended results.

2. Logic Model - an illustration of how activities are expected to lead to outputs, immediate outcomes, intermediate outcomes and, eventually, ultimate outcomes (a brief illustrative sketch follows this list).

3. Ongoing Performance Measurement Strategy - a plan for the ongoing measurement of performance, including the identification of performance indicators and a measurement strategy describing how these indicators will be collected, how often, and at what cost.

4. Evaluation strategy - a plan for the evaluation of the policy, program, or initiative, including the identification of formative and summative evaluation issues and questions, the identification of associated data requirements, and a data collection strategy.

5. Reporting strategy - a plan to ensure the systematic reporting on the results of ongoing performance measurement as well as evaluation, to ensure that all reporting requirements are met.
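
To make the logic model and performance measurement components more concrete, the sketch below (in Python) lays out a highly simplified logic model and a matching measurement strategy for a hypothetical skills-training program. The program, activities, indicators and data sources are invented purely for illustration and are not drawn from any actual RMAF.

    # Minimal sketch of a logic model with attached performance indicators,
    # using a hypothetical skills-training program. Names, indicators and
    # sources are invented for illustration only.

    logic_model = {
        "activities": ["Deliver training workshops", "Provide one-on-one coaching"],
        "outputs": ["Workshops held", "Participants coached"],
        "immediate_outcomes": ["Participants gain job-search skills"],
        "intermediate_outcomes": ["Participants find employment"],
        "ultimate_outcomes": ["Reduced regional unemployment"],
    }

    # An ongoing performance measurement strategy pairs results from the
    # logic model with indicators, a data source and a collection frequency.
    measurement_strategy = [
        {"result": "Workshops held",
         "indicator": "Number of workshops delivered per quarter",
         "source": "Program administrative data", "frequency": "Quarterly"},
        {"result": "Participants find employment",
         "indicator": "Share of participants employed 12 months after completion",
         "source": "Follow-up participant survey", "frequency": "Annually"},
    ]

    for level, results in logic_model.items():
        print(f"{level}: {', '.join(results)}")
    for row in measurement_strategy:
        print(f"{row['result']} -> {row['indicator']} ({row['frequency']}, {row['source']})")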

Evaluations typically occur at two points in the lifecycle of a policy, program or initiative: formative or mid-term evaluations (normally within the first two years) and summative evaluations (normally within five years of start-up).

There are three primary issue areas for evaluation that need to be considered:

  • Relevance - does the policy, program or initiative continue to be consistent with departmental and government-wide priorities, and does it realistically address an actual need?
  • Success - is the policy, program, or initiative effective in meeting its objectives, within budget and without unwanted negative outcomes? Is it making progress toward the achievement of the intended outcomes?
  • Cost-effectiveness - are the most appropriate and efficient means being used to achieve objectives, relative to alternative design and delivery approaches?
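
As a simple illustration of the cost-effectiveness question, the sketch below compares the cost per outcome of three alternative delivery approaches. The approaches and all figures are hypothetical and are included only to show the form such a comparison can take; a lower cost per outcome is not conclusive on its own, since the relevance and success criteria still apply.

    # Illustrative cost-effectiveness comparison of alternative delivery
    # approaches for the same objective. All figures are hypothetical.

    alternatives = {
        "Direct departmental delivery": {"cost": 2_400_000, "clients_served": 800},
        "Delivery through partners": {"cost": 1_800_000, "clients_served": 720},
        "Online self-service": {"cost": 900_000, "clients_served": 450},
    }

    for name, a in alternatives.items():
        cost_per_client = a["cost"] / a["clients_served"]
        print(f"{name}: ${cost_per_client:,.0f} per client served")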

Users of performance information might include program/policy/initiative management, central agencies, and stakeholders (internal and external). How the information is used will depend on the type of user and could include management decision-making, accountability reporting, and communication/information sharing.

Evaluation Implementation Issues 

We highlight below what we have found to be the most critical or difficult components of the evaluation studies we have carried out for Industry Canada and other departments.

Incrementality and Attribution

The impacts and effects that we are concerned with in an evaluation are those which are directly due to the program under review. These impacts and effects are called incremental, a term which is usually defined as the difference between what did happen with the program in place, and what would have happened if the program had not been in place. As difficult as it might be to identify and measure the actual and relevant impacts and effects that did happen, it is usually much more difficult to estimate what impacts and effects would have happened without the program in place. This difficulty involves not only estimating how social and economic actors would have reacted in a world different from that which prevailed, but also attempting to specify what the different government programs or responses would have been to this different world.
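
A minimal worked example of incrementality, with hypothetical figures, is sketched below: the incremental impact is simply the observed outcome less the estimated counterfactual, however that counterfactual has been derived (comparison group, baseline trend, expert judgement or otherwise).

    # Incrementality as defined above: the difference between what did happen
    # with the program in place and what is estimated would have happened
    # without it. The figures are hypothetical.

    observed_outcome = 1_250        # e.g. firms adopting a new technology with the program in place
    estimated_counterfactual = 900  # estimated adoptions had the program not existed

    incremental_impact = observed_outcome - estimated_counterfactual
    print(f"Incremental impact of the program: {incremental_impact} additional adoptions")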

A concept related to incrementality is that of attribution. It is often possible to determine that certain activities would not have taken place had it not been for the program under review (incremental activities), but also that these same activities benefited from more than one government program (or from influencing tax policies, or from programs of another level of government). Such incremental activities may give rise to impacts and effects that are not wholly attributable to the program under review. In these cases, if each of the contributing programs is to be credited with some of the incremental impacts, these impacts must be apportioned among the contributing programs in some way.
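
There is no single correct way to apportion shared impacts among programs. The sketch below shows one simple convention, attribution in proportion to funding contributions, applied to the hypothetical incremental impact from the previous sketch; both the convention and the figures are illustrative only.

    # One simple attribution convention: share the incremental impact among
    # contributing programs in proportion to their funding contributions.
    # Convention and figures are illustrative only.

    incremental_impact = 350  # from the incrementality sketch above

    funding = {
        "Program under review": 600_000,
        "Provincial co-funding program": 300_000,
        "Tax incentive (imputed value)": 100_000,
    }

    total_funding = sum(funding.values())
    for program, amount in funding.items():
        attributed = incremental_impact * amount / total_funding
        print(f"{program}: {attributed:.0f} of the {incremental_impact} incremental adoptions")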

Evaluation Issues and Scope of the Evaluation

Evaluation studies must be undertaken with limited resources and, perhaps as importantly in the current rapidly changing environment, within a given timeframe if they are to be relevant to decision makers. At the same time, there is almost always strong pressure for the evaluation to be comprehensive, to cover all the issues, and to report on all aspects of a program. Evaluation methodology calls for the issues to be identified and assessed for importance or priority by the deputy head or other client of the evaluation. Once issues are ranked, evaluation options can be prepared that address the important issues and contribute partial information on the less important ones. Problems often arise at this point in the evaluation assessment if clear direction from the deputy head (or other senior managers or a steering committee) is not forthcoming. Too often, what comes in its place is a request to provide the broadest possible coverage within a very tight budget and/or timeframe.

The Problems of Data Collection and Analysis

Most evaluations require that some additional data be collected and analyzed together with existing data from a variety of sources. The problems encountered are common to any primary data collection exercise, especially one that is severely time constrained: obtaining approvals to collect the data; preparing satisfactory data collection instruments and procedures; identifying appropriate populations, samples and contact information; securing adequate response rates within time and budget; and coding and analyzing the resulting database together with existing data. The challenge of data analysis is to extract unbiased, objective and supportable findings from a collection of databases that, at best, provide quantitative information on a subset of relevant variables rather than complete information on the exact variables desired.
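
One recurring planning calculation is how large a survey sample must be, and how many invitations must go out once a realistic response rate is factored in. The sketch below uses the standard sample-size formula for estimating a proportion; the confidence level, margin of error and response rate are illustrative assumptions, not recommendations.

    import math

    # Sample-size planning for estimating a proportion (e.g. the share of
    # satisfied recipients), allowing for non-response. Parameter values
    # are illustrative assumptions only.

    z = 1.96                        # 95% confidence level
    p = 0.5                         # most conservative assumption about the proportion
    margin_of_error = 0.05
    expected_response_rate = 0.30

    completed_needed = math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)
    invitations_needed = math.ceil(completed_needed / expected_response_rate)

    print(f"Completed responses needed: {completed_needed}")            # 385
    print(f"Invitations at a 30% response rate: {invitations_needed}")  # 1284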

