What is evaluation?

Evaluation is the systematic and objective assessment of the design, implementation or results of a government program or activity for the purposes of continuous improvement, accountability and decision‑making.

It provides a structured and disciplined analysis of the value of policies, programs and activities at all stages of the policy cycle.

Continuous improvement

Evaluation uses robust analytical methods to provide evidence and input to decision‑makers about performance and good practices across the policy cycle.

It contributes to continuous improvement in policy design and organisational effectiveness.

Approach

Evaluation is an important part of a broader set of assurance mechanisms designed to support accountability, understand risk, and determine how well results are being delivered.

Evaluation in the Commonwealth is underpinned by a principles‑based approach. This recognises the range and diversity of programs and activities undertaken across the Commonwealth and the need for evaluative approaches to be flexible and fit for purpose.

Risk-based approaches and cost effectiveness

It is not feasible, cost‑effective or appropriate to fully evaluate all government programs and activities.

The cost and resourcing of an evaluation must be balanced against the risk of not doing an evaluation.

In some cases, well‑designed data collection and performance monitoring established during the early design phase of a new or amended program or activity can help to refine it over time.

In other cases, an evaluation will be necessary to assess whether a program or activity is appropriate, effective and/or efficient.

Accountability and decision-making

For evaluations to support continuous improvement, accountability and decision‑making across government, a principles‑based approach and consistency in planning are required.

Commonwealth evaluation principles

The key principles which, taken together, should be used to guide evaluation activity across the Commonwealth are detailed below.

Fit for purpose

The choice to evaluate and the scale of effort and resources allocated to an evaluation should be proportional to the value, impact, strategic importance and risk profile of the program or activity.

Methods should differentiate between evaluations to inform program administration and evaluations to inform policy decisions.

Useful

Evaluations to inform program delivery should be designed for the purposes of continuous improvement and accountability against objectives.

Evaluations for decision‑making should be designed for the purpose of defining achievable outcomes, taking account of any pilots, prototyping or experience from other jurisdictions.

A strong understanding of Government policy intent is required, both when evaluation is used as a monitoring tool and when it is an input to new program design.

Robust, ethical and culturally appropriate

Evaluations should be well‑designed, identify potential evaluator bias and take account of the impact of programs and evaluations on stakeholders.

Robust data and evidence should provide performance insights and drive continuous improvement for programs in the delivery stage.

Ethical and culturally appropriate approaches should be considered in all evaluation activities, including for the collection, assessment and use of information.

Credible

Evaluations should be conducted by people who are technically and culturally capable.

The collection and analysis of evidence should be undertaken in an impartial and systematic way, having regard to the perspectives of all relevant stakeholders.

Evaluations should adhere to appropriate standards of integrity and independence.

Transparent where appropriate

To be useful, evaluation findings should be transparent by default unless there are appropriate reasons for not releasing information publicly.

To support continuous improvement, accountability and decision‑making, evaluation findings should be provided to appropriate stakeholders.

Evaluation across the policy cycle

Evaluation activities are important at all stages of the policy cycle.

Approach

Evaluative approaches can:

  • help understand the need for a government program or activity
  • identify best practice in responding to a particular issue
  • improve the design and performance of a government program or activity
  • define expected benefits of a program or activity in the early planning stage
  • establish baseline information to measure and assess changes over time
  • determine effects/impacts by assessing whether expected benefits are being delivered
  • report on, and be accountable for, results achieved by a particular program or activity
  • inform decisions about future policy development.

Findings

Evaluation findings support informed decisions about whether a government program or activity should be changed, continued, replicated or terminated.

Fit for purpose approaches

Commonwealth entities and companies are encouraged to use fit for purpose evaluation approaches that support and complement the quality and robustness of their performance information.

It is important to plan how a program or activity will be evaluated from the start, so that the necessary performance information and data are collected during implementation.

This supports effective performance management and allows evaluation at the most appropriate time in the policy cycle (that is, to assist with the policy design, or to assess implementation and/or the results of a program or activity).

Evaluative thinking

Evaluation is not limited to technical skills and knowledge. It needs to be integrated into the way people across the public sector work and think.

Evaluative thinking is a form of critical thinking where evaluative questions are asked as part of everyday business[1].

It involves using robust analytical methods to continually assess the performance of a government program or activity against its objectives.

Ask the right questions

Asking the right questions, at the right time, and to the right people, will help you understand what does and doesn’t work, for whom, and why.

This knowledge can help improve the design, delivery and results of government programs and activities.

Theory of change and program logic tools

Using theory of change and program logic tools can help provide a balanced assessment of performance and describe how and why a desired change is expected to happen in a particular context.

If evaluative thinking is used at the start to establish appropriate metrics, data collection methods, and baseline information that allow delivery and impacts to be assessed over time, the evaluation experience is likely to be more positive and its results are likely to be more useful.

Evaluative culture

Organisational culture significantly influences the value placed on evaluative thinking and evaluation activities.

Commonwealth entities and companies are expected to deliver support and services for Australians by setting clear goals for major policies, programs and activities, and consistently measuring their progress towards achieving these goals.

Evaluative thinking supports this by generating meaningful performance information that is robust and credible, taking into account relevant ethical, cultural and privacy considerations. It can apply in evaluations and routine performance measurement and monitoring activities.

Promotes reflective thinking

An evaluative culture helps people working in, or for, a Commonwealth entity or company to do their jobs well by:

  • encouraging self‑reflection
  • looking for better ways of doing things, valuing results and innovation
  • sharing knowledge and learning from good practice and mistakes.

Leadership, empowerment, and learning

An organisational culture that values evaluative thinking requires:

  • strong leadership and a clear vision for achieving continuous improvement
  • clear responsibilities and expectations to empower staff, along with appropriate support to build evaluation capability and practices
  • building on existing evidence when designing new policies and programs
  • planning early for evaluation
  • knowledge‑sharing and tolerance for mistakes to encourage learning
  • a culture of reward to showcase effective evaluative approaches
  • support for the outcomes of robust evaluation to build trust
  • learning from successes as well as failures to improve performance.

Leadership that is positive about learning from performance monitoring and evaluation activities is a necessary condition for effective learning and delivering outcomes.

Good governance

Key governance actions to support an evaluative culture include:

  • plan to conduct fit for purpose monitoring and evaluation activities before beginning any program or activity. This includes identifying timeframes, resources, baseline data and performance information
  • use strategic, risk‑based approaches to identify, prioritise and schedule evaluation activities
  • align internal review and evaluation activities with external requirements
  • treat and prepare for external scrutiny as part of standard practice
  • assign responsibility for implementing the recommendations of review, evaluation and performance monitoring activities, and establish timeframes for actions
  • ensure appropriate monitoring of implementation and the impact on performance and outcomes.

Evaluation in context

Evaluation sits within a broader set of related activities, including research, performance measurement and monitoring, quality improvement and audit.


[1] A Guide to Evaluation under the Indigenous Evaluation Strategy (Productivity Commission)
