Evaluation
What do you wish you knew about your own work?
This is the question we ask at the start of every evaluation. Our role is to understand the work you do, the information needs you have, and the decisions you have to make. Together, we design an inquiry that meets those needs and identify reasonable steps forward.
When we facilitate evaluation work, we take on the roles of:
Researcher, who asks “what is happening?”
Methodologist, who asks “what methods make sense?”
Consultant, who says “based on what we learned, let’s try…”
Paisley has studied evaluation at the doctoral level and has published various resources based on her practice.
Bringing together the best of real experience and theory.
In evaluation, theoretical knowledge falls short without the practical, tacit insights held by those who work with the program on a daily basis. On the other hand, navigating an evaluation without deep knowledge of systems, approaches, and methods can be overwhelming.
Let’s collaborate to meet your evaluation needs. We’ll bring 10 years of experience and training in evaluation theory and practice; you bring the program.
What might evaluation look like?
Evaluation does not always have the best reputation, sometimes conjuring images of impersonal judgment, decontextualized conclusions, and harsh program cuts. Wild Way Consulting does not practice in this way. We see evaluation as a supportive inquiry designed to meet the needs of a program, not to cut it down.
Below are some of the ways our evaluations have been used by programs that are already doing great work.
Improvement-Oriented Feedback
The caretakers of the program have a challenge or goal they are not quite sure how to address. The evaluation is used to gather information about the situation and develop targeted recommendations. Topics may include:
How to increase participation
How to make the program more relevant for participants
How to streamline administrative load
The results of the evaluation can be used to refine internal processes and inform program alterations.
Demonstration of Impact
The caretakers of the program are required to demonstrate the impact of their program to some audience (e.g., funders). These insights can also be used to promote the program to potential partners or participants. Topics may include:
Changes that participants experience, if any
Reach of the program
Factors that contribute to or detract from participation
The results of the evaluation can be used in funding applications and to market the program.
Embedded Evaluation Activities
The caretakers of the program wish to have more frequent feedback in order to make real-time changes to the program. This evaluation service has an element of capacity building, because the goal is to develop tools and processes that can be used by the team after the evaluator leaves. Some examples include:
Feedback tools designed to be used at regular program activities
Suggested schedule of reflection activities to mobilize data
These embedded activities support the ongoing maintenance of the program. An external evaluator is still recommended for larger inquiries that go beyond the scope of regular check-ups.
Broader Program Planning
The caretakers of the program or organization see an opportunity to expand their offerings in ways they might not have explored before. This evaluation service has an element of program development because the goal is twofold: 1) to collect information about what participants actually want and need, and 2) to offer guiding structures that enable content experts to create the program. Some topics include:
The wants and needs of the target audience
Finding harmony between program worldview and program logistics
Constructing a plan for continued development after program launch
This form of evaluation usually results in a new program and a workplan to guide subsequent development activities.

