In the US, the late fifties and early sixties witnessed calls for the federal government to undertake curriculum development projects on a wide scale. The National Defense Education Act of 1958 created some curriculum development projects at the national level, allotting funds for their evaluation. With the launching of the “War on Poverty” in 1964, huge subsidies were allotted to social services, as the program aimed to improve them. Concerns were subsequently raised about the effective use of state subsidies, and hence the need to evaluate the curriculum development projects. After the Elementary and Secondary Education Act of 1965 was enacted, it was amended to include evaluation requirements.
The response of evaluators was to use “traditional techniques and methodologies” such as “survey techniques, standardized tests, and the type of criterion-referenced tests developed on the objectives model.” Critiques were raised about whether such an approach was appropriate because: (1) students at various schools have different levels and needs, and no single set of objectives could be responsive to these; (2) the language and functional levels of disadvantaged students were not sufficiently considered in the crafting of the standardized tests; and (3) the Tylerian Evaluation Rationale, which was being used at the time, only showed results at the end of every year and was therefore not useful as an evaluative feedback mechanism (Burrows 2012).
Having raised critiques of the traditional techniques and methodologies himself, Daniel Stufflebeam proceeded to build a new model on the basis of their weaknesses and limitations. Through a trial-and-error approach, he was able to develop the Context, Input, Process, Product or CIPP approach in the late sixties. The development of the CIPP model was part of the expansion of models of evaluation, the professionalization of evaluation, and the recognition of evaluation as a field of study in its own right. At the time, Stufflebeam’s approach stood out for its emphasis on decision-making. He defined evaluation as “the process of providing useful information for decision-making,” giving authorities in the education system a means to improve the curriculum (Burrows 2012).
Through Stufflebeam’s untiring and continuous updating of the CIPP model, it has evolved through the years. In the CIPP’s 2003 version, Stufflebeam makes the following major points:
(1) There are five basics in curriculum evaluation:
(a) Evaluators should work with a sound definition of evaluation. In general, evaluation “is a systematic investigation of the value of a program or other evaluand.” In particular, it “is a process of delineating, obtaining, reporting, and applying descriptive and judgmental information about some object’s merit, worth, probity, and significance.”
(b) Evaluation serves four main roles: (1) Formative, which assists in development; (2) Summative, which judges past efforts; (3) Understanding, which provides insights into assessed phenomena; and (4) Dissemination, which shares lessons that were learned.
(c) Evaluation should be distinguished from research, because the two have different purposes. Evaluators should focus on the social mission of evaluation, which is to improve, while research produces new knowledge. Evaluation’s most important purpose is to improve, not to prove.
(d) Evaluation should meet four main standards: Utility (must inform intended users), Feasibility (must maintain procedural, political and financial viability), Propriety (must evaluate legally and ethically), and Accuracy (must produce technically adequate information).
(e) Evaluators and their clients can benefit from evaluation by using appropriate evaluation procedures.
(2) The objectives of CIPP’s four types of evaluation are as follows:
Context evaluation: to assess needs and opportunities and help define and assess goals.
Input evaluation: to assess alternative approaches and budgets and help guide and assess plans.
Process evaluation: to assess the implementation and help guide efforts and interpret outcomes.
Product evaluation: to assess outcomes and help promote and document success.
To make the CIPP model closer to practical application, Stufflebeam clarifies the questions raised by each type of evaluation at both the formative and summative levels:
Context – Formative: What needs to be done? Summative: Were important needs addressed?
Input – Formative: How should it be done? Summative: Was a defensible design employed?
Process – Formative: Is it being done? Summative: Was the design well-executed?
Product – Formative: Is it succeeding? Summative: Did the effort succeed?
Stufflebeam clarifies the objectives of the CIPP model that set it apart from evaluation models that existed before it: (1) it is an attempt to align evaluation procedures, data, and feedback with project timetables and local, state, and national information requirements; and (2) it serves decision-making and accountability needs (2003).
Stufflebeam (1986) also clarifies what the CIPP model incorporated from other evaluation models: (1) from Robert Stake’s “countenance of evaluation” approach, he incorporated into product evaluation the need to search not merely for intended effects but also for side effects; and (2) from Michael Scriven’s “formative-summative” approach, he incorporated the need for the CIPP to come up with a summative evaluation, not just a formative one.
In addressing the critique presented by Scriven’s model, Stufflebeam (1971) clarified that the CIPP model can also be used to determine “educational accountability” – in relation both to the ongoing operations of a system and to efforts to change that system. While he sees the CIPP model as primarily a proactive model, because it guides decision-making at the start of a program, he believes that it could also be a retroactive model that could determine responsibilities at the end of a program.
Stufflebeam adds two points on using the CIPP model to address accountability needs: (1) while the CIPP model was originally designed as an evaluation model internal to the workings of a system, “there is still need for outside, independent audits and checks on the system”; and (2) there should be a “cybernetic” relationship between an internal evaluation unit and all decision-making levels in a system, which means that evaluation should serve all levels of the decision-making process and not be filtered (1971, 18).
Stufflebeam (1986) narrates the story of the negative reactions that evaluations, including his CIPP model, faced from sections of the education sector: from avoidance to anxiety, from immobilization to skepticism. He states, however, that understanding the value, function and nature of evaluations is the key to making members of the education sector cooperate with evaluators. He also outlined ways of designing evaluations, in particular matching them with the demands and necessities of their clients. He emphasizes that evaluators “must view design as a process, not a product, and they need to get their clients to do likewise” (1986, 180).
In an early critique of the CIPP model, Randall clarified the difficulties and problems involved in using the CIPP model. He argued that the high costs of using the CIPP model – which in some cases can exceed the costs of the projects being evaluated – can be lessened if “a systematic approach in identifying major decisions and information on which those decisions can be based” (1969, 4) can be developed.
Randall further enumerates the following problems: (1) decisions are not always easily recognized, and criteria for making decisions also change; (2) decision-makers must be clearly identified, as well as people who are or were influential in the making of decisions; (3) the system must respond to critical moments when important decisions must be made; (4) cues from decision-makers as to what constitutes relevant information must be taken seriously; and (5) information must be presented to decision-makers in a form that they will immediately understand and grasp. Overall, Randall emphasizes the need for face-to-face communication between evaluators and decision-makers to deepen and widen the scope of an evaluation while at the same time ensuring that the information being gathered remains relevant.
Stufflebeam has further developed the CIPP model and concretized it into a checklist, which defines the activities of both the evaluator and the client/stakeholder in the following phases: (1) contractual agreements; (2) context evaluation; (3) input evaluation; (4) process evaluation; (5) impact evaluation; (6) effectiveness evaluation; (7) sustainability evaluation; (8) transportability evaluation; (9) metaevaluation, which is an assessment of the evaluation process itself; and (10) the final synthesis report (2007).