The Center for
Creative Community


Things the Evaluator Needs to Evaluate

For Planning *

* Permission granted for use with acknowledgment


By Ivan Scheier

Please note: These questions are all inter-related and should be considered together, especially the ones with the same alphabetic designation. Also, your decisions on options are usually matters of emphasis, rather than all-or-none.

A1 PURPOSE. Why do we want to do this evaluation? Possibilities include:

--Simple curiosity: just to see what's happening
--Raise awareness of our program or organization
--Justification: win respect, appreciation, priority, money, resources--or hold onto what we have
--Improve the program

A2 INTENDED AUDIENCE. Who do we want to reach, inform, impact? Possibilities include:

--People who influence the program at a policy level (bosses, boards, etc.)
--People who implement the program on a daily hands-on basis

--Both together--the team

--The people or groups we serve--clients, consumers, etc.
--People who can choose to provide more or less material support for the program--funders, donors, etc.
--Academic and/or professional people in our field
--The community-at-large, the public
--Special segments of the community, e.g.

--Schools and colleges
--Political Leaders
--Religious Groups
--Members (may or may not also be volunteers or donors)


B1 Assurance of Impact, 1: WHAT WILL THIS STUDY COST US? Possibilities include:

--Money or in-kind equivalent for:

--Paper, supplies
--Computer equipment and time
--Clerical, tabulation time
--Expertise, specialists (see below)

--People: Paid Staff or Volunteers (including expert volunteers)
--Time. Evaluation studies typically take twice as long as your most conservative estimate.

B2 Assurance of Impact, 2: HOW CAN WE PREVENT THIS STUDY FROM GATHERING DUST ON THE SHELF (like most evaluation studies)?

--Meaningful purpose (A1), well-focused on a relevant audience (A2)
--Concise, well-produced reporting through multiple channels (including graphics and video)
--Follow-up, FOLLOW-UP, FOLLOW-UP!!!!
--Participative buy-in of the intended audience, in the design of the study as well as in its sources of information

C1 Sources of Information, 1: WHOM DO WE ASK OR MEASURE?

Your choices are essentially the same as the choices of intended audience (A2): especially anyone who has reason to know something significant bearing on the purpose of the study (A1) and/or has an important stake in it.


C2 Sources of Information, 2: WHAT DO WE MEASURE, AND HOW? Balances to consider include:
--Balance between numerical and qualitative information (what people think about it, their impressions and feelings)
--Balance between Input and Output emphasis. Examples of input would be number of hours contributed by volunteers, and compliance with effective program procedures. Output measures concentrate on what was accomplished, the consequences or results.
--Balance between descriptive and experimental (difficult, tends to diminish meaningfulness in favor of laboratory-like conditions)
--Within descriptive, you can still be comparative (without being experimental). But at all costs avoid comparing volunteers VERSUS staff. The appropriate comparison, usually, is between staff alone and staff "amplified" by volunteers (the solo vs. the team).
--Other balances to be aware of?



This is somewhat sensitive, but the issue must be raised because it bears on the acceptance and usage of evaluation studies. How prepared are you, and your intended audience, to accept results that are ambiguous, surprising, or downright opposite to what you fondly expected? It happens. All the time. So try to be ready for it.

Finally, is your intended audience and/or study-participant group threatened by the word "evaluation"? Many are, so you might want to consider alternative terms such as "feedback", "assessment" (might be worse), or "operations analysis".

Good luck with it.


Ivan Scheier
607 Marr
Truth or Consequences, New Mexico, 87901
Tel (505) 894-1340

For comments and editing suggestions please contact Mary Lou McNatt