Program Satisfaction Assessment Tool For Measuring Client Satisfaction with Challenge Course Programs

 

Jeffrey Heyliger

The Browne Center

University of New Hampshire

Kinesiology 900

Professor: James T. Neill

November, 2001
Total Words: 1477

IMPORTANT: If you use this instrument, modify it, or use parts of it, please contact the instrument's authors so that your feedback, and possibly your data, can be used for further instrument development.  By contacting the authors you can also find out about obtaining comparison data, more recent information about the instrument, and so on.

Introduction

The Browne Center is an experiential training and development organization that uses experiential activities, initiatives, and challenge course elements to help clients reach stated goals and/or objectives.  The purpose of developing the Program Satisfaction Assessment Tool (PSAT) is to provide us, and our clients, with a definitive way of measuring the degree to which our programs meet client goals and objectives.

It is difficult to define “meets client goals and objectives.”  Consequently, it is difficult to define a problem or state a hypothesis.  However, I can define a problem the Browne Center is struggling with: we do not currently have an assessment tool that gives us data to determine whether we meet clients’ goals and objectives.

To date the Browne Center has used one program evaluation questionnaire (Appendix A) for the purpose of receiving feedback about its programs, facilities, food, and facilitators.  A quick glance at the questionnaire reveals that it attempts to cover varied aspects of the programming in a very brief and unspecific way.  Roughly half of the nine questions are open-ended, requesting narrative-type information.  The remaining questions use a scale of one through four, with one equating to poor and four equating to excellent.

The problem with the questionnaire is that it returns data that is very brief, largely unrelated, lacking in detail, and unusable in any “decent” quantitative manner.  Yes, we are concerned with what participants think of our food and facility, but we are primarily concerned with whether or not we meet their training and development needs, and (in the simplest of terms) whether or not we do what we say we will.  This assessment tool is our attempt to understand how we are doing in meeting client needs, and where to focus our attention for improvement.  It will be used to improve our programming, track our performance, and track client feedback about facilitators.

Method

The criteria used to develop the instrument were that it be succinct, quantitative, and relevant to the purposes of both the clients and the Browne Center.  “Relevant to the client” means that it should provide the client with an opportunity to reflect on the program and perhaps glean some further value from it, and to feel that their feedback will affect the delivery and effectiveness of future training programs.

Seventeen graduate students and one professor (N = 18) from a graduate-level applied statistics course were given the pilot survey (Appendix B), and everyone returned it.  Item 9 was left blank on one survey; all other items on all surveys were completed.  The pilot instrument used a five-point scale and asked participants to respond to twelve items by circling the one number that most closely matched their level of agreement or disagreement with a particular statement.  The scale ranged from one through five and was presented as follows:

1 = Strongly Agree     2 = Agree     3 = Neither     4 = Disagree     5 = Strongly Disagree

For example, respondents completed the following opening statement by circling one of the numbers following the statement: “1.  Overall, today’s program met our group’s stated goals/objectives.   1  2  3  4  5.”

The pilot surveys were delivered online via UNH Blackboard to the respondents, who printed them out and completed them.  The printed, completed pilot surveys were then collected in person at the beginning of class.

Results

Seventeen of the eighteen pilot surveys were returned complete.  As mentioned previously, one was returned incomplete, missing only the data for item nine.  Consequently, for the purposes of reviewing the proposed assessment tool, N = 17 for item nine and N = 18 for each remaining item.

Means for the items ranged from a low of M = 1.9412 for item 8 (“Facilitators were knowledgeable about the subject area”) to a high of M = 3.1176 for item 10 (“Program duration was appropriate for achieving our goals”).  This indicates that respondents for the most part strongly agreed or agreed with item 8, and were generally unsure about or disagreed with item 10.  The range of responses was generally wide, with eight items returning a range of 3.00 and four items a range of 4.00, indicating varied responses to the items.
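As an illustration only (this was not part of the original analysis), the descriptives above could be reproduced with a short Python sketch.  The sketch assumes the eighteen responses were entered into a hypothetical CSV file, psat_pilot.csv, with one column per item:

    import pandas as pd

    # Hypothetical file: one row per respondent, columns item_1 ... item_12,
    # values 1-5; the single missing response to item 9 is left blank.
    data = pd.read_csv("psat_pilot.csv")

    descriptives = pd.DataFrame({
        "n": data.count(),      # 17 for item_9, 18 for all other items
        "mean": data.mean(),    # e.g. 1.9412 for item_8, 3.1176 for item_10
        "min": data.min(),
        "max": data.max(),
    })
    descriptives["range"] = descriptives["max"] - descriptives["min"]
    print(descriptives.round(4))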

Table 1 at the end of this document reports the descriptive statistics for each item.

Analysis of the intercorrelations proved valuable for determining which items were most appropriate for the final survey.  In general, the items related to program duration correlated least with the other items on the survey.  For example, Pearson correlations for item 11 ranged from –.256 (p = .306) to .339 (p = .169) for all but two cases.  The two borderline cases were with item 10 (Pearson correlation = –.469) and item 12 (Pearson correlation = .499).  Also of note was the particularly strong correlation between items 1 and 2: the Pearson correlation is .897 (p < .001), indicating the two items are strongly related.
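Continuing the hypothetical sketch above, the intercorrelation matrix could be computed as follows.  The pandas corr() method handles the one missing item 9 response pairwise; obtaining the p-values would require scipy.stats.pearsonr on each pair:

    # Pairwise Pearson correlations between all twelve items.
    corr = data.corr(method="pearson")

    print(corr.loc["item_1", "item_2"])      # reported above as .897
    print(corr["item_11"].drop("item_11"))   # item 11 against the other items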

Table 2 provides corrected item-total correlation data, and Table 3 lists Cronbach’s alpha for all items as well as the alpha with each item deleted.  Note that Cronbach’s alpha for all items is .8850, indicating a fairly high level of internal consistency among the items in the pilot survey.  Also note that the alpha with item 10 or item 11 deleted would rise noticeably, to .8929 or .9166 respectively.  With the exception of items 10 and 11, the corrected item-total correlations were strong, ranging from a low of .4718 for item 9 to a high of .8673 for item 12.  Further information on each item can be found in Tables 2 and 3.
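Cronbach’s alpha and the alpha-if-item-deleted figures in Table 3 follow the standard formula: alpha = k/(k – 1) × (1 – sum of item variances / variance of the total score), where k is the number of items.  Continuing the hypothetical sketch above (a reconstruction, not the original analysis), these statistics could be computed as follows:

    # Listwise deletion: drop the one survey missing item 9.
    complete = data.dropna()

    def cronbach_alpha(items):
        """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))"""
        k = items.shape[1]
        item_vars = items.var(ddof=1).sum()        # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of row totals
        return (k / (k - 1)) * (1 - item_vars / total_var)

    print(cronbach_alpha(complete))                # reported above as .8850

    # Alpha with each item deleted, plus the corrected item-total correlation
    # (each item correlated with the sum of the remaining items).
    for col in complete.columns:
        rest = complete.drop(columns=col)
        print(col,
              round(cronbach_alpha(rest), 4),                   # alpha if deleted
              round(complete[col].corr(rest.sum(axis=1)), 4))   # corrected r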

Discussion

At this point the Program Satisfaction Assessment Tool has been administered to one group of eighteen people, and the recommendations below are based on that fact.  That said, overall the pilot survey proved effective at returning the data we are looking for, with items being for the most part appropriately related.  The obvious weakness in the survey is that items 10 and 11 don’t relate well to the other items.  The final survey (Appendix C) will have these two items, as well as items 6 and 7, removed.  Doing so increases our alpha from .8850 to .9192 (see Table 4).  Items 6 and 7 were also seen as weak: item 6 was lengthy, worded somewhat unclearly, and worded differently from the other items, while item 7 was removed because of concern over redundancy with items 1 and 12.
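Using the hypothetical sketch above, the alpha for the reduced item set could be verified by dropping the four items and recomputing:

    # Final survey: remove items 6, 7, 10, and 11, then recompute alpha.
    final_items = complete.drop(columns=["item_6", "item_7", "item_10", "item_11"])
    print(cronbach_alpha(final_items))   # reported above (Table 4) as .9192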

Another weakness is the wording of the scale point “3 = Neither.”  For the final survey we decided to change it to “3 = Unsure,” to be circled when a respondent neither agrees nor disagrees with an item.  “Neither” really didn’t fit because there were four other options, not two.

On the positive side, the pilot survey has a number of strengths we want to keep.  First, it is brief.  Respondents needn’t take a lot of time to complete it, and are therefore more likely to complete and return the survey.  That said, a small change made to the final draft was the addition of a section for written comments by the respondents.  It was felt that an opportunity to solicit comments outside the realm of the survey items would be helpful both for the respondents (to have their input heard) and for the Browne Center (so we don’t miss an opportunity for critical feedback).

Another strength is the quantitative nature of the survey.  We can use the data to analyze our effectiveness and measure our results, as well as to track respondents’ feedback about facilitators.  In this way it is a useful tool for identifying which aspects of our program are strongest and weakest, and it can help us improve where we need to.

The Program Satisfaction Assessment Tool will be targeted at the client representative for school and youth groups, and at participants of adult and corporate training programs.  It is felt that having school children respond to this survey is inappropriate because they often are not fully aware of the goals and/or objectives, and they may not understand the wording of the survey.  The survey will be used immediately following the program to capture immediate feedback.  A follow-up will also be administered one month later, with minor rewording of items 1 and 8 (we will change “today’s” in these items to “the Browne Center’s”).

The Program Satisfaction Assessment Tool is really a measure of how well respondents feel the program met their stated goals and objectives.  It does not measure actual changes in participant behavior and should not be considered if that is the evaluation goal.  I recommend a participant- and observer-based measurement tool for that application.

Program Satisfaction Assessment Tool

This assessment tool is designed to help you, the participant, determine if the program designed and delivered by The Browne Center met your stated goals and/or objectives. 

For each statement, please circle the ONE number listed that most closely matches your feelings about that statement.  Please do not add decimal places.

1 = Strongly Agree     2 = Agree     3 = Neither     4 = Disagree     5 = Strongly Disagree

1.  Overall, today’s program met our group’s stated goals/objectives.

2.  The facilitators presented material and/or activities in a clear, concise manner.

3.  Activities chosen by facilitators were appropriate for our organization.

4.  Discussions and/or presentations were applicable to our goals/objectives.   

5.  Facilitators actively engaged all participants in activity and discussion.   

6.  Browne Center staff facilitated connections between actions and behaviors occurring today and our stated goals/objectives.    

7.  Program content accurately reflected our group’s goals/objectives.

8.  Facilitators were knowledgeable about the subject area.

9.  Facilitators covered everything that needed to be discussed/presented.

10. Program duration was appropriate for achieving our goals.

11. The program was too short.

12.  As a result of today’s program we achieved/will achieve our goals/objectives.