
Program Evaluation
in outdoor education, adventure education, & other experiential intervention programs

James Neill
Last updated: 21 Mar 2011

Starting Points for Program Evaluation in Outdoor & Experiential Education

Why Program Evaluation?

It has become increasingly important for outdoor and experiential education organizations to demonstrate clear evidence of effectiveness in achieving desired goals for two main reasons: necessity and morality.

Conducting program evaluation does not guarantee the quality of a program, but high-quality programs are more likely to be engaged in program evaluation.

Reason #1: Necessity - Because you have to!

State, federal, and even private funding for intervention programs increasingly requires that rigorous program evaluation be conducted. For many programs, evaluation is therefore becoming a necessity.

Reason #2: Morality - Because you want to!

There is also a moral argument: those who design and conduct programs have a responsibility to be rigorous in ensuring that participants are provided with the best possible experiences. Just as an outdoor organization is expected to provide best-practice management of risk for safety, so too it carries a psycho-educational responsibility to provide high-quality experiences for participants. In this day and age, this should probably include the conduct of peer-reviewed research and formal program evaluation.

There are several other motivations for research and evaluation (see the 7-stage hierarchy), but necessity (external motivation) and morality (internal motivation) represent the two major ends of the motivational spectrum underlying efforts to conduct program evaluation.

Common Practices in Program Evaluation

Richards, Neill and Butters (1997) surveyed 113 attendees at the 10th National Outdoor Education Conference in Australia about the nature of their outdoor education work and program evaluation practices.

More than half of the respondents reported working with programs that evaluated participants' and teachers' overall satisfaction with the program, as well as satisfaction with operational aspects of the program (such as food, accommodation, etc.) (see Table 1).

Table 1. Aspects of the program evaluated during the last 12 months (Richards et al., 1997). Figures are the percentage of programs engaged in each type of evaluation.

Participants' overall satisfaction with the program: 76%
"Operational" aspects of the program (such as food, accommodation, etc.): 61%
Teacher/client representative's satisfaction with the program: 58%
Skill of the instructor: 37%
Participants' attitudes towards others: 36%
Participants' attitudes towards the environment: 30%
Participants' attitudes towards the outdoors: 12%
Participants' attitudes towards school/workplace: 12%
Other evaluation focus: 12%
Participants' achievement in academic subjects: 8%
No evaluation of anything: 3%

The key sources of data were participants, instructors, and accompanying teachers or client representatives (see Table 2).

Table 2. Evaluation sources for outdoor education programs (Richards et al., 1997). Figures are the percentage of programs using each source.

Program participants: 90%
Instructors: 80%
Accompanying teachers / client representatives: 57%
School / client administrators: 34%
Parents / carers of participants: 28%
Outdoor administrative staff: 20%
Independent researchers from a university or agency: 8%
Other: 2%

The main types of evaluation were group discussions with participants, written survey questions, observations of the program and individual discussions with participants (see Table 3).

Table 3. Types of program evaluation (Richards et al., 1997). Figures are the percentage of programs using each type of evaluation.

Group discussions with participants: 81%
Written survey questions: 78%
Observations of the program: 77%
Individual discussions with participants: 62%
Logs or journals: 31%
Videotape, film, or audio recording for evaluation purposes: 6%
Standardised tests: 6%
Self-designed tests: 5%
Other: 5%

Recommended Resources

References

Greenaway, R. Evaluation of training/learning books & reviews. Active Reviewing.

Hendricks, B. (1994). Improving evaluation in experiential education. ERIC Clearinghouse on Rural Education and Small Schools.

Priest, S. (2001). A program evaluation primer. Journal of Experiential Education, 24(1), 34-40.

Richards, G. E., Neill, J. T., & Butters, C. (1997). Statistical summary report of attendees at the 10th National Outdoor Education Conference, Collaroy, NSW, 1997. Canberra, Australia: National Outdoor Education & Leadership Services.

Watters, R. (1988). Benefit cost analysis of non-commercial outdoor programs. In Proceedings of the 1986 Conference on Outdoor Recreation.

Watters, R. (1991). Cost benefit analysis of recreation programs for the disabled. Pocatello, ID: Idaho State University Outdoor Program.