Future Directions in Adventure-Based Therapy Research: Methodological Considerations and Design Suggestions
Sandra L. Newes, MA
Department of Psychology
Word Count: 5633 words
Correspondence regarding this article should be addressed to: Sandra L. Newes, Department of Psychology, The
The need for more methodologically sound research in adventure therapy (AT) is commonly discussed among AT professionals. Clearly, as the culture of the psychological community is increasingly influenced by both managed care and the movement toward empirically-validated treatments, the need for solid research in AT is becoming progressively more urgent. Understandably, while persons affiliated with the AT field recognize the necessity of such efforts, this recognition is not always followed by action. One reason for this may be related to the difficulty in finding resources delineating specific considerations and methodologies to use in such research.
This article will attempt to contribute a piece of this needed information, focusing on summarizing some of the necessary considerations for conducting outcome studies in AT from the perspective of psychotherapy outcome research. Specific suggestions for future design strategies that can be implemented in AT settings will also be put forth.
(Keywords: adventure therapy, adventure-based therapy, wilderness therapy, research, outcome)
The need for more methodologically sound research in adventure therapy (AT) is a commonly broached topic, both when discussing AT and when examining the literature (Bandoroff, 1989; Davis-Berman & Berman, 1994; Gass, 1993; Gillis, 1992; Gillis & Thomsen, 1996; Hattie, Marsh, Neill, & Richards, 1997; Herbert, 1998). Gillis and Thomsen (1996) vehemently advocate for change in AT research, stating “Do we have any other choice? Do we risk becoming inert? Do we deprive those who will benefit most from our services just because we have not done the work needed to make our case known?” (p. 12). Clearly, as the culture of the psychological community is increasingly influenced by both managed care and the movement toward empirically-validated treatments, the need for solid research documenting outcome and process in AT is becoming progressively more urgent.
Understandably, while persons affiliated with the AT field have come to recognize the necessity of such efforts, this recognition is not always followed by action. One reason for this may be related to the difficulty in finding resources delineating specific considerations and methodologies to use in such research (Note-a notable exception to this is Davis-Berman and Berman’s (1994) excellent chapter on program evaluation).
This article will attempt to contribute a piece of this needed information, focusing on summarizing some of the necessary considerations for conducting outcome studies in AT from the perspective of psychotherapy outcome research. Borrowing from the psychotherapy literature, specific suggestions for future design strategies that can be implemented in AT settings will also be put forth.
Borkovec (1994) and Kazdin (1992), noted experts in the field of psychotherapy research, provide excellent summaries of the important considerations to be recognized when conducting psychotherapy research. I will discuss several of their central points as they relate to research in AT, including some of the difficulties involved in meeting basic controls.
The first consideration is standardization of the independent variable. In more traditional research settings, standardization can take the form of protocol or manualized therapy. Gillis (1992) advocates for such approaches in AT research, suggesting that what is most essential in the field of adventure-based therapy research is “one clearly defined and researched method of conducting psychotherapy in outdoor learning experiences, in wilderness-adventure settings, or in adventure-based activities that can be assessed for effectiveness.” (p. 18).
The consideration of standardization is of supreme importance, for as Borkovec (1994) notes “Standardization of procedures across all subjects and the matching of treatment and control conditions on all procedures except the crucial manipulation are ways of holding constant aspects of the environment and the experience of the subjects in that environment.” (p. 249). Borkovec goes on to say that “Each of the known and unknown ways in which conditions do, in fact, differ represents a rival hypothesis that could just as likely explain any observed difference.” (p. 249). Thus, standardization of the independent variable provides the foundation from which to consider all other aspects of research design.
As is true in all psychotherapy research, there are significant difficulties involved in considering standardization of the independent variable in AT. Recognizably, it is difficult to standardize a treatment while allowing the flexibility to account for factors such as inclement weather or accidents. As currently we have no assessment of the impact of such factors, it is unknown to what degree they impact outcome. Therefore, manualized approaches may be difficult to implement successfully. In addition, it is uncertain exactly what techniques are regularly being utilized in AT and differences also exist across programs, further contributing to difficulties with standardization. At the risk of stating the obvious, without being clear on what the independent variable actually is, standardization is impossible to achieve.
One relatively straightforward way to begin approaching the issue of standardization may be to create descriptions of commonly utilized interventions. This can be done either at the individual program level or across programs. These descriptions could then be compiled into a checklist. In order to document what intervention techniques are being employed, staff in the field could complete the checklist at the end of each day, adding in any additional techniques/interventions that were not included on the list. Over time, these checklists could be expanded and utilized to create a more accurate and comprehensive picture of what occurs during the course of an AT program. Ultimately, such an approach could lead to the creation of flexible protocols for AT, protocols that encompass the many varieties of techniques available to adventure therapists and also allow for therapist judgment in choice of technique.
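The daily-checklist procedure described above lends itself to very simple tooling. As a minimal sketch (all intervention names below are hypothetical placeholders, not drawn from any actual program), daily checklists could be tallied across a program to build the kind of usage picture the text describes:

```python
from collections import Counter

# Hypothetical daily intervention checklists, as field staff might
# record them at the end of each day (all names are illustrative).
daily_checklists = [
    ["solo reflection", "group debrief", "initiative game"],
    ["group debrief", "low-ropes element", "journaling"],
    ["solo reflection", "group debrief", "journaling"],
]

# Tally how often each intervention was used over the program.
usage = Counter()
for day in daily_checklists:
    usage.update(day)

# Sort by frequency to see which techniques dominate the program.
for technique, count in usage.most_common():
    print(f"{technique}: {count} day(s)")
```

Aggregated over many groups and programs, such tallies could feed directly into the flexible protocols the text envisions.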
Related to the issue of standardization, attempts must be made to control for historical differences across groups. Given the dynamic nature of the outdoor environment, this presents a difficult challenge. While we could hope that large enough sample sizes and the inclusion of multiple groups in studies would randomize such differences, it may be difficult to identify the impacts of such factors. Thus, any differences in history must be considered carefully when determining group equivalence and also when consolidating data across groups. In fact, one early study commonly considered a seminal work in the AT field reports that there were significant differences between outcomes for groups receiving the same treatment in different settings (Kelley & Baer, 1971). These results were consolidated in the final analyses. While it is commendable that such differences were examined and reported, the unfortunate possibility exists that such a consolidation may have skewed the final results.
Other considerations are related to assessment of the dependent variable. Basic factors such as maturation effects and regression towards the mean are often difficult to control for, particularly when conducting research on more long-term AT programs. In addition, repeated exposure to the same measures over time (repeated testing effects) may have an effect on the way clients respond to these measures (thus impacting the dependent variable). This is a highly relevant consideration, as designs where participants are tested on multiple occasions over the length of the program are commonly seen in the AT literature. Any postulated impact of repeated testing must be highlighted in reporting conclusions from such studies.
Dependent measures must also have proven reliability and validity, as well as demonstrated clinical utility and relevance. In addition, measures must be chosen to answer specific theoretically-based questions. Given the paucity of studies in the AT literature utilizing such measures, numerous others have stressed the importance of this as well (Bandoroff, 1989; Davis-Berman & Berman, 1994; Gillis & Thomsen, 1996; Hattie, Marsh, Neill, & Richards, 1997). Importantly, the use of reliable, valid, and also recognizable measures provides the means by which to link research in AT with the traditional psychotherapy literature, thus contributing to the legitimization of AT as a viable treatment modality. On a pragmatic level, this process of legitimization has the potential to meet such desirable goals as increases in third party payment, referrals, and external funding.
Multiple domain or multi-level assessment (Bandoroff, 1989) of the dependent variable is also necessary. Borkovec (1994) discusses this issue, noting that “multiple measures from different domains of assessment (e.g., cognitive, affective, physiological, and behavioral) and from different methods of assessment (e.g., pre-and post-assessment questionnaires, daily diaries, assessor ratings from interviews, observational measures, significant-other reports, and physiological laboratory assessments) provide more compelling outcome assessment than single domain measurement by a single instrument for the sake of providing converging and valid improvement indices.” (p. 278). The utilization of such an assessment allows for a more comprehensive evaluation, providing the opportunity to capture aspects of AT and the client's experience that may be overlooked using methods such as self-report alone.
Dependent measures must also be analyzed using appropriate statistical techniques, and these analyses must be accurately interpreted. It is imperative that researchers are both informed of such techniques and able to make informed choices of particular statistical procedures in order to answer specific questions of interest. Relatedly, sample sizes must be large enough to provide sufficient power to detect group differences. (For an excellent review of multivariate statistical techniques and their applicability to research in the adventure field, see Ewert & Sibthorp, 2000). Strikingly, regression techniques, an important means of evaluating predictor models, are all but overlooked in the AT literature. This is indicative of the scarcity of studies exploring this important area.
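The sample-size point can be made concrete with a standard a priori power calculation. As a minimal sketch using the usual normal approximation for a two-tailed, two-sample comparison (n per group = 2[(z<sub>α/2</sub> + z<sub>β</sub>)/d]²):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison (two-tailed),
    via the normal approximation: n = 2 * ((z_alpha/2 + z_beta) / d)**2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) requires roughly 63 participants per group,
# while a "small" effect (d = 0.2) requires nearly 400 per group.
print(n_per_group(0.5))  # 63
print(n_per_group(0.2))  # 393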
Another dependent variable consideration is the time of testing. Often measures are taken just prior to beginning an AT program and immediately upon completion. As noted by Bandoroff (1989), such scores may be impacted by pre-trip emotionality or post-trip euphoria and may not be representative of an individual’s normal state. A better design would be to follow the example of Fry and Heubeck (1998) and administer pre-test measures sometime prior to beginning the treatment. Davis-Berman & Berman (1994) also note difficulties in conducting post-test evaluations immediately upon completion of an AT program, referring primarily to issues around setting and client motivation. As they discuss, participants must recognizably be warm, dry, willing, and unhurried when they complete a post-test battery. However, such a situation may be difficult to create, particularly in a wilderness setting or when clients are anxious to return home at the completion of a program. Such issues must be encompassed in study design, and an adequate setting for post-testing must be structured.
Post-test measures must also be extended to include follow-up data in order to ascertain the long-term effects of the treatment (Davis-Berman & Berman, 1994; Herbert, 1998). So vital is follow-up data to making valid conclusions about the impact of any treatment, that Borkovec (1994) notes “a thorough follow-up with its relatively small cost is more than just recommendable; it should be required.” (p. 278). Importantly, follow-up data must also include information about additional treatment or social services received, as well as the environment an individual enters upon completing the AT program. Clearly, the long-term effects of an AT program may be greatly impacted by such factors, and valid conclusions about the effects of any AT treatment cannot be made without information about these potential confounds. Regrettably, follow-up of any kind is rare in the AT literature.
Other dependent variable considerations involve the use of participant-observers as assessors. In the AT literature, field staff or therapists who have been actively involved in conducting the treatment are often involved in the measurement of client change. Unfortunately, such measurement is widely subject to bias based on existing relationships. In addition, any information that the therapist/staff member has about the client’s background may also potentially introduce bias into the assessment. Future designs must consider employing methodologies which utilize independent assessors of participant change or make use of other, more objective, indices in combination with staff ratings.
Client considerations are another methodological issue to be considered (Borkovec, 1994). Client characteristics are commonly used as selection criteria in controlled psychotherapy research, and reporting them meets the additional goal of allowing for an evaluation of generalizability. Relevant and important information about clients should include the severity and duration of the problems they are experiencing and diagnostic information. Client attrition rates must also be provided, and available characteristics of those who drop out of treatment must be analyzed separately in order to determine if there are differences.
It is also important to consider the referral source of clients. Initially, separate analyses of participants from individual referral sources may be useful in determining if there are differences based on referral setting. Any dissimilarity may contribute to variation in outcome and thus be impacting results (Borkovec, 1994). This is particularly relevant for AT participants, who commonly come from such diverse referral sources as mental health providers, schools, justice system referrals, self-referrals, or parents. Different sources of referral may suggest widely discrepant client backgrounds.
There is also a need for discussion of the effect of concurrent or past treatment on AT participants (Borkovec, 1994). The range of potential referral sources suggests the possibility of some clients having been involved in earlier treatment, as well as ongoing treatment to which they will return. Effects of concurrent or earlier treatment must be evaluated in order to determine the impact of such treatment on any evaluation of the results of AT. This information is also necessary in order to begin the exploration of any effects of aftercare systems on AT outcome.
Overall, tighter control and reporting of client variables is necessary. In order to achieve this, once again it is necessary that AT researchers use solid diagnostic tools, recognizable and valid outcome assessments, and clinically relevant measures that assess specific dimensions in identified problem areas. According to Borkovec (1994), “The important requirements are that all criteria be specified, that they are reliably determined, and that they are based on a valid rationale related to the nature of the disorder and the nature of the questions being asked in the therapy investigation.” (p.271).
Notably, the provision of information such as this also allows us to begin the evaluation of the effects of AT for specific diagnostic groups. Ideally, were each program to begin gathering the above types of data for each client and tracking this information as part of program evaluation, eventually each program could begin to answer this question without initially relying on the utilization of homogeneous diagnostic groups (one commonly noted obstacle to implementing controlled studies). If such an approach were to be implemented and the findings reported, we would ultimately begin to have a better idea of what works better for which types of clients. This information could be used to inform future treatment planning, as well as future studies focusing on increasingly specific research questions.
As an aside, it is important to point out that attending to considerations such as these may well have a circular effect. If AT researchers were to closely consider the theoretical basis of the questions that they are asking for the client populations that are specified, while utilizing appropriate methodologies, the results of such inquiry could be used to expand AT theory as well. To begin this process, AT researchers must ask themselves such basic questions as “Do we have an identified problem?”, “Are our measures linked specifically to that problem?” (Chambless & Hollon, 1998), and “What is the theoretical rationale for my hypotheses?”
Therapist considerations are another variable all but overlooked in the AT literature. Borkovec (1994) notes that therapists who participate in treatment outcome studies must be described in terms of background, training, and experience. Any such differences can lead to a diffusion of the treatment (i.e., the treatment is not delivered in the same fashion to all clients). Given this, at the present time differences in training, experience, and techniques utilized by different staff members make valid comparisons across AT programs impossible. Future attempts must be made to hold these issues constant across groups and, as Borkovec suggests, such information must be reported in AT outcome studies (Gillis & Thomsen, 1996). Notably, information regarding the impact of therapist variables on outcome is vital to eventually understanding and delineating the therapeutic process of AT.
It is clear from the above discussion that there are many methodological issues to consider when designing an appropriately rigorous outcome study in AT. Understandably, it may appear to be an arduous task to conduct a study taking all such considerations into account. Davis-Berman and Berman (1994) address this issue quite eloquently, noting that:
“These potential problems are raised not to discourage the pursuit of methodologically sound research studies, but rather, to build the argument that effective research on wilderness programs must be innovative and creative, yet methodologically sound. We must develop ways to meet scientific criteria, yet not compromise the uniqueness of our programs. This, we believe, is one of the major challenges for wilderness programs in the 1990’s.” (p. 178).
One way to approach outcome studies in the innovative and creative way suggested by Davis-Berman and Berman (1994) is to utilize alternative designs.
The next section of this article will first discuss the limitations associated with traditional no-treatment comparisons and comparative designs. The use of two types of alternative designs, specifically dismantling (component control) and additive designs, will then be explored. It is my hope that this discussion can provide some new starting points for the creation of methodologically sound outcome studies, studies that can be successfully implemented in AT settings while allowing for the consideration of the above methodological factors.
No-Treatment Comparison Design
One of the roadblocks that AT researchers come to repeatedly is the lack of a comparison/control group. Control groups can be difficult to find, yet notably this problem is not specific to AT. Innovative methods of creating adequate control groups have been put forth in the psychotherapy literature (Kazdin, 1992) and Davis-Berman and Berman (1994) discuss these methods as they pertain to AT research. The reader is urged to consult their excellent chapter on program evaluation for ideas on procuring an appropriate control group to use in initial investigations.
Importantly, however, while preliminary studies of a therapeutic technique not yet researched do necessitate a no-treatment control group, studies such as these provide us with no information beyond the fact that the treatment may have some impact beyond those of “history, maturation, repeated testing, instrumentation, statistical regression, selection, attrition, and the interactions of these variables” (Borkovec, 1994; p. 254), in that particular setting for those particular clients. Although this is a seemingly long list of factors, it is noteworthy that through such a study we learn nothing about the treatment itself. We have only learned that there is something about the provision of therapy (at that particular time for those particular clients) that caused some effect (Borkovec, 1994). Fortunately, if some impact of the treatment is in fact documented under such a condition, researchers need not focus overly on such designs (Borkovec, personal communication, 2000), as they provide very little information beyond this.
A common misconception when considering research into AT (or any type of treatment, for that matter) is that it is also a matter of necessity that the treatment be compared to another type of treatment. Increased positive effects of one treatment are assumed to indicate the relative efficacy of this treatment over another. Obviously, such comparative designs are difficult to implement in AT, as differences in treatment modality provide seemingly insurmountable confounds. Thankfully, such designs are so inherently methodologically flawed that in actuality they must be considered invalid (Borkovec, 1994). In fact, Borkovec (1994) states definitively that “Most comparative studies are of such limited value for both applied and theoretical purposes that they should not be conducted” (p. 260).
To explain this contention, there are a multitude of ways in which therapies differ from each other. Given this, differences in treatment procedure and quality of treatment delivery result in unlimited differences in client experience. As such, there is no way to achieve the required methodological control of holding the client’s experience constant across conditions (Borkovec, 1994). According to Borkovec, “differential outcome may be due to many, many differences in procedure. Thus, the investigator’s ability to rule out rival hypotheses is extremely limited.” (p. 260).
In discussing additional problems with comparative designs, Borkovec (1994) notes the dynamic nature of therapeutic techniques, and suggests that the continued evolution of treatment techniques makes comparisons at any point in time meaningless (i.e., given the amount of time required to conduct such studies, the treatment is likely to have changed by the time such results are reported), contributing further to the invalidity of such designs. He also highlights the impossibility of controlling for therapist factors across treatment modalities. Differences in therapist training, experience, and personal bias towards specific therapy techniques, even when employing noted experts in each condition lead to additional insurmountable confounds. In summary, Borkovec says that he believes such comparative research to be incapable of providing answers to questions of treatment efficacy due to the lack of validity associated with such designs.
For purposes of illustration, consider the difficulties inherent in comparing AT to any other type of therapy that utilizes a typical weekly therapy format. On the most basic level, glaring confounds are found in equivalency of client time spent in therapy (client contact hours). Given that in AT the therapist and client are together on a round-the-clock basis, the number of client contact hours adds up rapidly. Consider this: if client and therapist are together for twenty-one days on a twenty-four hour basis, this constitutes 504 hours of therapeutic contact. This is an approximate time equivalent of ten years of weekly psychotherapy. Even with the exclusion of the time that is assumed to be spent sleeping, there is still roughly 330 hours of contact, comparable to approximately six and one-half years of weekly psychotherapy. Even were one to conduct a comparison of AT and inpatient therapy (one postulated approximation), there are still similar confounds, as inpatient therapists do not develop the same round-the-clock relationships with clients nor engage in the same types of activities.
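The arithmetic above can be checked directly. A minimal sketch, assuming one-hour sessions at roughly fifty sessions per year and eight hours of nightly sleep (the article's figure of roughly 330 waking hours implies a slightly larger sleep allowance):

```python
DAYS = 21
HOURS_PER_DAY = 24
SLEEP_HOURS = 8          # assumed nightly sleep
SESSIONS_PER_YEAR = 50   # roughly weekly one-hour sessions

total_hours = DAYS * HOURS_PER_DAY                   # 504 contact hours
waking_hours = DAYS * (HOURS_PER_DAY - SLEEP_HOURS)  # 336 waking hours

# Express each total as years of weekly outpatient psychotherapy.
print(total_hours, round(total_hours / SESSIONS_PER_YEAR, 1))    # 504 10.1
print(waking_hours, round(waking_hours / SESSIONS_PER_YEAR, 1))  # 336 6.7
```

However the sleep allowance is set, the order-of-magnitude gap between a three-week AT program and weekly outpatient therapy remains, which is precisely the confound the text identifies.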
There are obviously qualitative differences precluding direct comparisons on client contact hours across treatments. Clearly, when put in the above terms such a comparison seems ludicrous. However, this example does effectively highlight one of the many fundamental problems in conducting comparative studies. Any such basic differences lead to an unlimited amount of uncontrolled factors, and outcomes cannot be adequately compared. Thus, as has been noted, such comparisons should be avoided.
Fortunately, Borkovec (1994) recommends excellent alternatives to comparative designs, alternatives that can be easily implemented in AT settings. In their place, Borkovec recommends an overall approach that goes “deeply into one therapy technique, using increasingly sophisticated designs, methods and measures that explicitly provide basic knowledge about the pathology and the therapeutic change mechanisms of that therapy from which to devise hypotheses about increasingly effective modifications of that therapy” (p. 262). Therefore, instead of simply comparing the effects of AT to another type of treatment, Borkovec’s approach facilitates the close and scientific examination of both the therapeutic process and the mechanisms of change in AT. Such an examination also allows for ongoing modification of the treatment in order to incorporate the most effective elements. Ultimately, this process can result in the most effective treatment possible, a goal well deserving of the effort involved.
One design suggested by Borkovec (1994) is a dismantling (component control) approach. In this approach, the various elements of a treatment within a particular setting are identified, or “dismantled”. Treatment conditions are then created in which a particular element is present in one condition but not others. An additional condition can also be created containing all elements, thus allowing for the evaluation of the possible synergistic effects of the overall treatment package. As all groups are being presented with the same treatment plus or minus this particular element, potential confounding factors are held constant across groups and the effects of the particular factor of interest are isolated. The independent variable is held constant to a greater degree than is true with other types of designs (Borkovec, 1994), thus superior validity is intrinsic to such designs.
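The condition structure of a dismantling design can be sketched mechanically. In the illustration below (component names are hypothetical placeholders, not an actual AT protocol), a participant pool is balanced across the full package and conditions that each withhold one component:

```python
import random

# Illustrative dismantling layout: the full AT "package" plus one
# condition per withheld component (all names are placeholders).
components = ["adventure activities", "group processing", "solo experience"]
conditions = ["full package"] + [f"package minus {c}" for c in components]

# Randomly assign a participant pool across conditions, keeping group
# sizes balanced so conditions remain comparable.
random.seed(42)  # reproducible assignment for illustration only
participants = [f"P{i:02d}" for i in range(1, 25)]
random.shuffle(participants)

assignment = {cond: [] for cond in conditions}
for i, person in enumerate(participants):
    assignment[conditions[i % len(conditions)]].append(person)

for cond, group in assignment.items():
    print(cond, len(group))
```

Because every group receives the same treatment plus or minus one element, outcome differences between a "minus" condition and the full package can be attributed to the withheld component, which is the logic the text describes.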
Given these increased levels of validity, Borkovec (1994) notes that “demonstration of differential outcome across components and the total package provide extremely useful information regarding both the nature of the pathology itself and the nature of the therapeutic processes and mechanisms that lead to change.” (p. 257). Such findings can be used to inform future research, and studies can be designed to test progressively more specific cause and effect relationships (Borkovec, 1994; Castonguay & Borkovec, 1998). Through this process, the mechanisms of change can be increasingly isolated and evaluated.
The dismantling approach contains additional benefits as well, including the fact that ethical concerns about the withholding of treatment that can be encountered when using no-treatment comparison designs are circumvented. Related to this, concerns about the withholding of one component of the “package” (assuming that the package has been shown to be more effective than some control condition) are also less critical, as it has not yet been determined which components of the treatment are contributing to client change. As it could be any of them, alone or in combination, the ethical concerns are less severe (Borkovec, 1994). Finally, this design eliminates the difficulties encountered in finding an adequate control group, as the groups are compared to each other.
A very similar alternative approach is the additive design. Borkovec (1994) recommends the use of such a design when “two or more therapy techniques are combined into one package”. Through the use of such an approach, a new therapy is actually created (as opposed to the dismantling approach which takes apart elements of an already existing therapy). For example, an additive design would be warranted in the instance that the addition of adventure activities to a pre-existing therapeutic program for adjudicated youth was to be evaluated for differential effects.
Additive designs have the same methodological and ethical benefits as discussed above for the dismantling approach. Moreover, both the dismantling and additive designs meet Borkovec’s suggestion for a comprehensive analysis of one treatment as opposed to direct treatment comparisons. As both designs allow researchers to evaluate individual elements of a therapy at increasing levels of specificity within the same setting, they lend themselves well to the exploration of such central questions as the therapeutic aspects of certain techniques or the effectiveness of particular program components. These have been suggested as important avenues for future exploration in AT (Davis-Berman & Berman, 1994; Gillis & Thomsen, 1996). Importantly, without the utilization of such designs, such questions can pose potentially insurmountable obstacles to AT researchers.
In execution, the two approaches are quite similar and answer virtually identical types of questions. To illustrate the potential uses of such designs, I will provide two brief examples. At the risk of overstatement, it is important to note at the outset of this discussion that the use of such designs does not preclude the consideration of the aforementioned methodological issues. Such factors must be incorporated regardless of design strategy.
The first example is focused on an important issue commonly brought forth in discussions with AT professionals: Does the addition of a therapist or of more traditional individual psychotherapy sessions differentially impact outcome? This question is particularly well-suited to this kind of design. In this type of comparison, groups from the same program could be created, some of which included a trained therapist and some of which did not. Should there be ethical concerns with the complete removal of a therapist, the design could be altered to compare groups where a therapist was included the entire time to groups where the therapist was present for only a portion of the trip. Relatedly, the same design could be employed in comparing groups that received individual psychotherapy sessions as part of the overall treatment package to those that did not. In fact, as part of a programmatic investigation devoted to isolating and testing increasingly specific therapeutic elements, the findings from the first suggested evaluation (therapist vs. no therapist vs. “part-time” therapist) could also be used to inform the second suggested evaluation (individual psychotherapy vs. none). As findings from this second investigation can also be used to test more specific cause and effect relationships, the process continues.
Regardless of the research question under consideration, as both groups come from the same overall treatment program, many other potential confounding factors are held constant. The particular treatment element under consideration can therefore be isolated, and effects of the experimental manipulation can be determined. Thus, the implementation of such designs will allow us to begin answering questions such as whether or not the addition of a therapist or traditional therapeutic techniques provides additional benefits for AT clients. As of yet, these important questions remain unanswered in the literature.
These designs are also easily adapted to situations in which AT is part of a larger treatment plan. This has created significant difficulties for AT researchers (Bandoroff, 1989), as such a situation is not uncommon. Certainly, when using traditional designs it can be impossible to draw conclusions about the impact of the AT portion of any such treatment. However, this type of multimodal treatment is ideally suited to the use of these alternative designs.
In this second example, one group of participants could receive the treatment package with the adventure component and one group could receive the package without adventure activities (keeping in mind that equivalency of time in treatment must be accounted for). To add a further level of complexity, a third group could receive the adventure activities alone.
The beauty of this design is that each individual component of the treatment is isolated. With all other factors held constant, the use of this approach allows for the determination of differential outcomes across components, or between specific components and the overall treatment package. Provided adequate methodological control has otherwise been established, it can then be determined whether the AT portion of the program had an impact beyond that of the other components.
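The additive comparison described above can be sketched in the same hypothetical fashion. Here the planned contrast of primary interest is whether the full package outperforms the package without the adventure component; again, all scores, means, and sample sizes are simulated assumptions, not real findings.

```python
# Hypothetical sketch: additive design comparing the full treatment
# package, the package without the adventure component, and the
# adventure component alone. Data are SIMULATED for illustration only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

full_package = rng.normal(loc=62, scale=10, size=40)    # package + AT
package_only = rng.normal(loc=55, scale=10, size=40)    # package without AT
adventure_only = rng.normal(loc=57, scale=10, size=40)  # AT alone

# Planned contrast: does adding the adventure component improve
# outcomes beyond the rest of the treatment package?
t_stat, p_value = ttest_ind(full_package, package_only)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A parallel contrast of the adventure-only group against the full package would address whether the non-adventure components add benefit beyond the AT activities themselves.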
As can be seen, through the use of dismantling and additive designs we begin to isolate the effective elements of AT, and ultimately the mechanisms of change. Given the invalidity of comparative designs, not to mention the difficulties involved in comparing other treatments to AT, it will perhaps be more efficacious to approach future AT research from this perspective. Importantly, the use of such designs can begin to answer important questions that have troubled the AT field for many years. [Note: for a more detailed discussion of these types of designs, the reader is referred to Borkovec (1994) or Kazdin (1992), or is encouraged to contact the author of this article.]
Ultimately, all of the methodological considerations that have been discussed involve questions of validity. Without the implementation of tighter research control, we have neither internal nor external validity, and the best we can say is that some AT programs may have some effect at a particular moment in time. Simply put, without tighter control we will never be able to definitively say that AT is a causal agent of change, nor will we be able to elucidate the mechanisms of change in an empirically-based fashion. Furthermore, it is only through the use of increasingly rigorous research methodologies that AT will ever be able to claim empirically-based efficacy as a treatment modality.
Perhaps most importantly, however, the use of such methodologies will also ultimately allow us to improve and expand the treatment. Therefore, such evaluation will not only facilitate the establishment of empirical support; it will also provide the means by which to eventually deliver the best treatment possible. This will result in increased growth and change for our clients. Ideally, this is the goal of all AT practitioners, and thus scientifically-based evaluation of outcomes in AT becomes important to us all. It is my hope that the methodological issues I have outlined may provide a useful resource for researchers who undertake these important future efforts.
Bandoroff, S. (1989). Wilderness adventure-based therapy for delinquent and pre-delinquent youth: A review of the literature. (ERIC Document Reproduction Service No. ED 377 428).
Borkovec, T.D. (1994). Between-group therapy outcome research: Design and methodology. In L.S. Onken & J.D. Blaine (Eds.), NIDA Research Monograph #137 (pp. 249-289). Rockville, MD: National Institute on Drug Abuse.
Chambless, D.L. & Hollon, S.D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66(1), 7-18.
Davis-Berman, J. & Berman, D. (1994). Wilderness therapy. Dubuque, IA: Kendall/Hunt.
Ewert, A. & Sibthorp, J. (2000). Multivariate analysis in experiential education: Exploring the possibilities. The Journal of Experiential Education, 23(2), 108-117.
Fry, S.K. & Heubeck, B.G. (1998). The effects of personality and situational variables on mood states during Outward Bound wilderness courses: An exploration. Personality and Individual Differences, 24(5), 649-659.
Gass, M.A. (1993). Adventure therapy: Therapeutic applications of adventure programming. Dubuque, IA: Kendall/Hunt.
Gillis, H.L. (1992). Therapeutic uses of adventure-challenge-outdoor-wilderness: Theory and research (pp. 35-47). Paper presented at the meeting of the Association for Experiential Education.
Gillis, H.L. & Thomsen, D. (1996). A research update (1992-1995) of adventure-based therapy: Challenge activities and ropes courses, wilderness expeditions and residential camping programs. [On-line] Available: http://fdsa.gcsu.edu:6060/lgillis/
Hattie, J., Marsh, H.W., Neill, J.T., & Richards, G.E. (1997). Adventure education and Outward Bound: Out of class experiences that make a lasting difference. Review of Educational Research, 67(1), 48-87.
Herbert, J.T. (1998). Therapeutic effects of participating in an adventure-based therapy program. Rehabilitation Counseling Bulletin, 41(3), 201-216.
Kazdin, A.E. (1992). Research design in clinical psychology. Boston: Allyn & Bacon.
Kelley, F.J. & Baer, D.J. (1971). Physical challenge as a treatment for delinquency. Crime and Delinquency, 17, 437-445.
Newes, S.L. (2000). Adventure-based therapy: Theory, characteristics, ethics, and research. Unpublished manuscript, The Pennsylvania State University.