

I know that most men, including those at ease with problems of the greatest complexity, can seldom accept the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have proudly taught to others, and which they have woven, thread by thread, into the fabrics of their life. (Tolstoy, 1894)

This article presents a case study in the misrepresentation of applied behavior analysis for autism based on Morton Ann Gernsbacher's presentation of a lecture titled “The Science of Autism: Beyond the Myths and Misconceptions.” Her misrepresentations involve the characterization of applied behavior analysis, descriptions of practice guidelines, reviews of the treatment literature, presentations of the clinical trials research, and conclusions about those trials (e.g., children's improvements are due to development, not applied behavior analysis). The article also reviews the professional endorsements of and research support for applied behavior analysis, and addresses issues in professional conduct. It ends by noting the deleterious effects that misrepresenting any research on autism (e.g., biological, developmental, behavioral) has on our understanding and treatment of it in a transdisciplinary context.

Keywords: autism, applied behavior analysis, misrepresentation, research methodology, ethics


At the invitation of KU's Department of Psychology, Morton Ann Gernsbacher (University of Wisconsin) gave its Fern Forman Lecture on September 27, 2007. It was titled “The Science of Autism: Beyond the Myths and Misconceptions.” Gernsbacher is an award-winning educator, a well-funded and well-published researcher, and the 2006–2007 president of the Association for Psychological Science (APS). Her research is on cognitive mechanisms hypothesized to underlie language comprehension (e.g., Traxler & Gernsbacher, 2006). When her son, Drew, was diagnosed with autism at the age of 2 years in the spring of 1998, she became “motivated by personal passion” to address autism, too, in particular, why children with autism do not speak (www.Gernsbacherlab.org). Since then, she has become an active researcher and professional speaker in this and related areas, as well as a public advocate for the rights of individuals with autism (e.g., Dawson, Mottron, & Gernsbacher, 2008; Gernsbacher, 2007a, 2007b; Gernsbacher, Sauer, Geye, Schweigert, & Goldsmith, 2008). At KU, her lecture (a paid public lecture) filled a 990-seat on-campus auditorium largely, it appeared, with students earning course credit. In addition, it was simulcast to 200 more students and community members at KU's Edwards Campus in Kansas City. For the record, Gernsbacher had given four previous invited lectures by the same title at (a) a September, 2005, colloquium at Washington University, (b) the August, 2006, conference on Brain Development and Learning: Making Sense of the Science (Vancouver, British Columbia, Canada), (c) the February, 2007, meeting of the Southeastern Psychological Association, as a William James Distinguished Lecturer (New Orleans), and (d) the April, 2007, John S. Kendall Lecture Series at Gustavus Adolphus College (St. Peter, Minnesota).

In her lecture, Gernsbacher addressed several assumptions about autism's diagnosis and etiology, for instance, that it is epidemic (Maugh, 1999); that it was once caused by emotionally cold “refrigerator mothers” (Bettelheim, 1967); and that it is today caused by childhood measles-mumps-rubella vaccinations (Kirby, 2005). Emphasizing the importance of rigorous research methods and experimental designs, she concluded from her review of the literature, some of it her own research, that these assumptions were myths and misconceptions (see, e.g., Gernsbacher, Dawson, & Goldsmith, 2005; Gernsbacher, Dissanayake, et al., 2005). In the final section of her lecture, she addressed autism intervention and therapy, specifically the assumption that applied behavior analysis is an effective treatment. Before addressing her review of this literature and her conclusions, though, I put applied behavior analysis in a broader disciplinary framework and then in a local and historical context. This material is intended, in part, as a scholarly resource, so it is a tad academic.

Applied Behavior Analysis

Applied behavior analysis is more than intervention and therapy. It is a subdiscipline of the field of behavior analysis (J. Moore & Cooper, 2003; see The Behavior Analyst; www.abainternational.org; www.behavior.org). The field comprises (a) a natural science of behavior (i.e., basic behavioral principles and processes; e.g., reinforcement, shaping; see Catania, 2007; Journal of the Experimental Analysis of Behavior), (b) related conceptual commitments (i.e., philosophy of science; e.g., naturalism, empiricism; see J. Moore, 2008; The Behavior Analyst), and (c) applied research on problems of societal importance and means for ameliorating them (Cooper, Heron, & Heward, 2007; Journal of Applied Behavior Analysis [JABA]; Behavior Analysis in Practice). For concise overviews, see Michael (1985) and Reese (1986).

Although applied behavior analysis arose at several U.S. and Canadian sites in the late 1950s and early 1960s (Kazdin, 1978), its first institutional base was KU's Department of Human Development and Family Life (established 1965), now the Department of Applied Behavioral Science (ABS; established 2004). This is where ABA's flagship journal (JABA) was founded (Wolf, 1993), the subdiscipline's basic dimensions were first articulated (Baer, Wolf, & Risley, 1968), and some of its earliest innovative programs of research were undertaken. These include the Juniper Gardens Children's Project for youth, school, and community development (Hall, Schiefelbusch, Greenwood, & Hoyt, 2006) and Achievement Place for juvenile offenders (i.e., the Teaching Family Model; Wolf, Kirigin, Fixsen, Blase, & Braukmann, 1995), both of them in collaboration with the Bureau of Child Research, now the Schiefelbusch Institute for Life Span Studies (Schiefelbusch & Schroeder, 2006; see Baer, 1993a; Goodall, 1972).2

Applied behavior analysis involves an integration of research and application, including use-inspired basic research (i.e., basic research in the interests of application; e.g., stimulus control of stereotyped behavior; Doughty, Anderson, Doughty, Williams, & Saunders, 2007), discovery research (i.e., research on unplanned findings; e.g., on the overjustification effect; Roane, Fisher, & McDonough, 2003), and translational research (i.e., the translation of basic research into practice; e.g., reinforcer magnitude and delay; Lerman, Addison, & Kodak, 2006). In the main, however, ABA addresses atypical behavior (e.g., stereotypy; Reeve, Reeve, Townsend, & Poulson, 2007), methods for its assessment and analysis (e.g., functional assessment and analysis; R. H. Thompson & Iwata, 2007), behavior-change procedures (e.g., desensitization for phobias; Ricciardi, Luiselli, & Camare, 2006), packages of behavior-change procedures (e.g., self-management; peer-mediated treatments; Stahmer & Schreibman, 1992), and comprehensive programs of treatment (e.g., early intensive behavioral interventions; T. Smith, Groen, & Wynn, 2000).

Applied behavior analysis also ranges across several domains (Luiselli, Russo, Christian, & Wilczynski, 2008), for instance, (a) from individual procedures for specific behavior to comprehensive programs for problems in daily living (e.g., Iwata, Zarcone, Vollmer, & Smith, 1994; McClannahan & Krantz, 1994), (b) from inpatient to on-site service delivery (e.g., Hagopian, Fisher, Sullivan, Acquisto, & LeBlanc, 1998; Nordquist & Wahler, 1973), and (c) from staff training to organizational behavioral management (e.g., McClannahan & Krantz, 1993; J. W. Moore & Fisher, 2007; Sturmey, 2008; see Cuvo & Vallelunga, 2007). Finally, the field's interventions are, ideally, research, too, in that clinical decisions are data based (e.g., when to alter or amend them). In fact, the ethical guidelines of the Behavior Analysis Certification Board® (BACB) require data-based decision making (see Bailey & Burch, 2005, pp. 104–106, 212–214).

Gernsbacher's Review and Conclusions

Gernsbacher did not review all the applied behavior-analytic research in autism. That would have been too great a task. Over 750 articles were published between 1960 and 1995 (DeMyer, Hingtgen, & Jackson, 1981; Matson, Benavidez, Compton, Paclawskyj, & Baglio, 1996) and hundreds more since then. They appear in JABA, other applied behavioral science journals (e.g., Behavioral Interventions), and journals in related fields (e.g., American Journal on Mental Retardation, Journal of Consulting and Clinical Psychology). What Gernsbacher reviewed was a subset of the comprehensive programs for early intensive behavioral interventions (ABA-EIBI) that she referred to as “the Lovaas-style of behavioral treatment.”3 Based on her review, she concluded that the effectiveness of applied behavior analysis for autism was another myth and misconception and that the gains made during treatment were due to the children's “development,” not to ABA-EIBI.

These conclusions upset some audience members. A parent of an adolescent with autism, for whom applied behavior analysis had dramatically improved their lives, asked me what he should use instead. An ABS major bemoaned that her course of study was apparently for naught. A faculty member criticized Gernsbacher for overlooking the extensive literature on which Lovaas-style ABA-EIBI is based. This criticism, though, was not fully justified. Gernsbacher had to be selective in her review, given the size of the literature, the breadth of her audience, and the interests of time.

As for my reaction to her conclusions, I was stunned. However, I was stunned not so much by her conclusions per se. I had heard them before in antiscience rhetoric about autism's etiology and treatment, as well as in sentiment against applied behavior analysis in general (e.g., Meyer & Evans, 1993; www.AutCom.org; www.autistics.org; see “Is ABA the Only Way?” at http://www.autismnz.org.nz/articlesDetail.php?id=23; contra Baer, 2005; Eikeseth, 2001; Green, 1999; J. W. Jacobson, Foxx, & Mulick, 2004; Leaf, McEachin, & Taubman, 2008; Lovaas, 2002, pp. 287–407; T. Thompson, 2007a, pp. 187–203; in general, see Offit, 2008).

Sentiment against applied behavior analysis is not, of course, necessarily antiscience. No matter what Gernsbacher's sentiments may be, her achievements are anything but antiscience. What stunned me, then, was how she reached her conclusions: She inaccurately represented research reviews, wrongly characterized applied behavior-analytic interventions, misleadingly appealed to history, inaccurately conveyed research designs, selectively omitted research results, and incorrectly interpreted intervention outcomes. Although misrepresentations are often only a minor nuisance in science, they can have harmful consequences, which I believe hers did (and do), both locally and more broadly.

The local consequences included misinforming KU's community members about ABA-EIBI; hundreds of KU students about a science of behavior and its application; current and prospective ABS majors about a course of study at KU (and careers); and KU staff, faculty, and administrators about scholarship in a department renowned for its research in applied behavior analysis. The broader consequences include Gernsbacher's probable influence on behavioral, social, and cognitive scientists who teach, conduct research, and provide services in autism; funding agencies and foundations who set priorities and allocate resources for autism research and applications; and state and federal agencies that set standards for autism services and funding. She has standing and stature in most, if not all, of these venues: in APS, of course, but also in the American Association for the Advancement of Science (AAAS), where she is a psychology section member at large, and in the National Science Foundation (NSF), where she is on the Advisory Committee for the Social, Behavioral, and Economic Sciences. Although Gernsbacher surely gained these highly respected positions by conducting first-rate science, the hallmarks of her science were largely absent in this section of her lecture.

In Response

In what follows, I respond to Gernsbacher's misrepresentations, but remain agnostic, yet curious, about their source or sources. No matter what, though, misrepresentations remain misrepresentations. In addressing them, I reproduce this section of her lecture below,4 inserting bracketed material to provide context and continuity. Then, where they occur, I address the misrepresentations. For the sake of brevity, such as it is, I restrict my comments to her lecture and note her ABA-EIBI-related publications only in passing (e.g., M. Dawson et al., 2008; Gernsbacher, 2003). As a result, I do not address important issues in autism research and application that she did not cover, for instance, the incomplete reporting of treatment variables in research (Lechago & Carr, in press; see Kazdin & Nock, 2003), among them, therapist competence (Shook & Favell, 1996), treatment intensity (Graff, Green, & Libby, 1998), and treatment fidelity or integrity (Wolery & Garfinkle, 2002). I also set aside the literatures on treatment effects on brain structure (G. Dawson, 2008; T. Thompson, 2007b), autism recovery and its mechanisms (Helt et al., 2008), and ABA-EIBI's long-term costs and benefits (Chasson, Harris, & Neely, 2007; J. W. Jacobson & Mulick, 2000).

My response may give offense to Gernsbacher, but none is intended. I am concerned about scientific communication and reasoning, not about a person or persons. Indeed, my comments are made in the spirit of the behavior-analytic maxim: “The organism is always right.” It is not always right, of course, in a moral or factual sense, but it is “right” in the sense that behavior is a lawful subject matter for a science in its own right. In that science, behavior is a function of the organism's biology, its environment, and the history of their transactions in which organisms become individuals.5 Unfortunately, English grammar is not neutral in this matter. Its agent-action syntax implicates organisms as the agents of their actions (Hineline, 1980, 2003). As a result, in acquiring English, we acquire a philosophy of mind woven thread-by-thread unconsciously into the fabric of our lives. This philosophy is both inimical to a science of behavior qua behavior (e.g., mind–body dualism; Koestler, 1967; C. R. Rogers & Skinner, 1956) and a basis for counter-Enlightenment, postmodern critiques of it (e.g., humanistic, revelatory; Krutch, 1954; Rand, 1982). Its press (that science's press) is worse than that for evolution in Kansas (Frank, 2004). This syntax may also make my comments appear ad hominem and bereft of compassion for Gernsbacher as a parent of a child with autism. Where this occurs, I apologize (see Skinner, 1972, 1975). ABA-EIBI's critics are always right, too.


I now turn to Gernsbacher's lecture. I begin where she began on autism intervention and therapy:

Finally, since I'm starting to talk about intervention and therapy, I am going to go to the last section of my talk and that is the empirical evidence for claims such as this: “There is little doubt that early intervention based on the principles and practices of applied behavior analysis can produce large, comprehensive, lasting, and meaningful improvements in many important domains for a large proportion of children with autism.” As you might know, the author is referring to what is known as the Lovaas-style of behavioral treatment for autistic children.

At this point, I offer a seemingly trivial observation, for which I beg the reader's indulgence. As I noted, I am curious about the sources of Gernsbacher's misrepresentations. One means of discerning them is to address them all, no matter how seemingly innocuous, to see if any patterns emerge. I begin with first instances.

Improvements in Children with Autism

The quotation above about “improvements … for a large proportion of children” was taken out of context. Its author, Gina Green (1996), qualified it in her next sentence: “For some, those improvements can amount to … completely normal intellectual, social, academic, communicative, and adaptive functioning” (p. 38). “Some” children is not “a large proportion of children.” Quoting material out of context is not inherently misleading, of course. Moreover, Gernsbacher could not quote ad infinitum; she had to be selective. In any event, the consequence was probably negligible because ABA-EIBI's effectiveness has been overstated by some of its advocates, too (Green, 1999; Herbert, Sharp, & Gaudiano, 2002). Many critics of these overstatements, however, also support ABA, as in, “ABA is one of the most—if not the most—promising interventions for childhood autism” (Herbert & Brandsma, 2001, p. 49). For an overview of applied behavior analysis in autism, see Harris and Weiss (2007).

Lovaas-Style ABA-EIBI Treatment for Autistic Children

The first ABA research on children with autism was published in 1964 by Wolf, Risley, and Mees.6 The first systematic report of Lovaas-style ABA-EIBI was published in 1973 by Lovaas, Koegel, Simmons, and Long. The first report of a comprehensive ABA-EIBI program was published in 1985 by Fenske, Zalenski, Krantz, and McClannahan. And, the first clinical trial of Lovaas-style ABA-EIBI was published in 1987 by Lovaas (see also Celiberti, Alessandri, Fong, & Weiss, 1993; Maurice, Green, & Luce, 1996).

In that trial, the experimental group (n = 19; chronological age = 2 years 11 months) received 2 years of 40 hr per week of one-on-one in-home ABA-EIBI from their parents and staff members from the UCLA Young Autism Project. The primary control group was a treatment comparison control group (n = 19; chronological age = 3 years 5 months) that received fewer than 10 hr per week of ABA-EIBI plus community treatment (e.g., special education). This controlled for maturational effects—or what Gernsbacher called “development”—over the course of the study; any such effects would presumably have been the same in both groups. A matched secondary control group (n = 21; chronological age = 3 years 6 months) was drawn largely from the same population and received community treatment. This controlled for selection bias and permitted a comparison between ABA-EIBI and treatment as usual (Freeman, Ritvo, Needleman, & Yokota, 1985).

Lovaas (1987) did not randomly assign his participants to the experimental and control groups, as he had planned, because of “parent protest and ethical considerations” (p. 4; Lovaas, 2002, pp. 388–389). Instead, he assigned them on the basis of staff availability for the experimental group. This is an accepted practice in clinical research, especially if the treatment and control groups can be matched a priori or are equivalent on pretreatment measures (Baer, 1993b; Eikeseth, 2001; Kazdin, 1992). In Lovaas's case, his groups were statistically equivalent on 19 of 20 pretreatment measures, among them, their IQs, which were 53 and 46, respectively (McEachin, Smith, & Lovaas, 1993). After treatment, the experimental group had significantly higher IQs than the control groups (83 vs. 52 and 58) and a significantly higher probability of passing first grade in regular education classrooms (9 of 19 vs. 1 of 40). The 9 participants who passed first grade had a mean IQ of 107 and were considered to be “recovered.” In a follow-up study, the experimental group was found to have maintained these and other gains (e.g., in adaptive behavior; McEachin et al.).
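As an aside on the strength of that first-grade outcome, the published counts alone (9 of 19 experimental vs. 1 of 40 control children passing) are enough to rule out chance. The sketch below is an illustrative check only, not a reproduction of Lovaas's own statistical analysis; it computes a one-sided Fisher's exact test from first principles, using just the hypergeometric tail and the standard library.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    the probability, with all margins fixed, of observing a or more
    successes in the first row (the upper hypergeometric tail)."""
    row1 = a + b          # size of the first group
    col1 = a + c          # total successes across both groups
    n = a + b + c + d     # total participants
    denom = comb(n, col1)
    k_max = min(row1, col1)
    return sum(
        comb(row1, k) * comb(n - row1, col1 - k)
        for k in range(a, k_max + 1)
    ) / denom

# Published counts: 9 of 19 experimental vs. 1 of 40 control
# children passed first grade in regular education classrooms.
p = fisher_exact_one_sided(9, 10, 1, 39)
print(f"one-sided p = {p:.6f}")  # well below .001
```

Run as written, the resulting p value is far below .001, consistent with the significant group difference Lovaas reported; the choice of test here is mine, offered only to make the size of the effect concrete.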

In describing Lovaas-style ABA-EIBI, Gernsbacher continued, “as illustrated in the intro to this 1980s film.” The film was Behavioral Treatment of Autistic Children (E. Anderson, Aller, & Lovaas, 1988), which reviewed and followed up on Lovaas et al. (1973) and Lovaas (1987). Its 15-s introduction showed a therapist and a child sitting at a table across from each other engaged in discrete-trial training (DTT). DTT is one of many technologies that have evolved from ABA research (T. Smith, 2001; Tarbox & Najdowski, 2008), but none of them is meant to be applied in a cookie-cutter fashion. Ideally, applications are individualized, taking into account developmental and individual differences (Schreibman, 2000), as well as differences in families and settings (on values, see e.g., Wolf, 1978).

DTT ranges along a continuum from more to less structured trials and from massed to distributed trials. Highly structured and massed DTT may consist of a therapist's request or instruction (e.g., to imitate a vocal or nonvocal model), a child's response (e.g., imitation), and a therapist's consequence (e.g., “yes,” “no,” hugs). The film's introduction shows the end of one such trial, in which the therapist says, “Oh, good boy; that's good” and leans in for a kiss. In the next trial, the therapist says “Sit up; get doll a drink,” the child gives the doll a drink, and the therapist says the child's name and “very nice.” In the next trial, the therapist says “Kiss doll,” but the child again gives the doll a drink, and the therapist says “No, kiss doll,” which ends that trial and begins another.

When possible, DTT moves from more to less structure and from massed to distributed trials, that is, to those that are more naturalistic (e.g., incidental teaching; see Allen & Cowan, 2008). Incidental teaching is also an applied behavior-analytic technology (Hart & Risley, 1975; see McGee, Krantz, & McClannahan, 1985), as well as DTT: Therapists set toys aside, children request them, and therapists provide them if requested correctly (or else are prompted). Structured and massed DTT is used to build the basic linguistic, social, and academic repertoires necessary for moving to less structured, more distributed DTT, which then builds repertoires necessary for functioning more fully in everyday life (e.g., functional communication, social reciprocity, and self-guidance; Leaf & McEachin, 1999; Lovaas, 1981, 2002; T. Smith, 2001). Where ABA-EIBI begins on this continuum and how quickly it moves toward more naturalistic procedures depend on children's developmental and individual differences and their rates of progress (see R. R. Anderson, Taras, & Cannon, 1996), not on developmental norms and theories, the latter of which remain largely unfounded.7

As for the film, Gernsbacher could not have played its full 43 min. She had to be selective again. However, the segment she played was not representative. It showed only structured, massed DTT, not the children later in social play and conversation as teenagers with peers without autism (and indistinguishable from them). In Gernsbacher's defense, no 15-s segment could have fairly represented the film. Thus, any such segment would merit a disclaimer, but none was provided. She continued,

I truly cannot underestimate how much attention this style of intervention has received. As just one metric, the Clinical Practice Guideline, distributed by the New York State Department of Public Health recommends that virtually no other intervention be conducted with young autistic children except for that one style of intervention [ABA-EIBI] because other interventions like speech therapy or physical therapy would take precious time away from the necessary treatment supposedly needed for that style of intervention. But what do the data show? Are there, as stated on the Surgeon General's Web site, “thirty years of research” demonstrating “the efficacy of applied behavioral methods in reducing inappropriate behavior and in increasing communication, learning, and appropriate social behavior”? [Gernsbacher, 2003, p. 20; the quoted material is from the U.S. Surgeon General's Web site: www.surgeongeneral.gov/library/mentalhealth/chapter3/sec6.html]

The New York State Department of Health Clinical Practice Guideline

In mentioning only the New York State Department of Health's (not Public Health's) (NYSDH, 1999a, 1999b, 1999c) Clinical Practice Guideline (Guideline)8 and the U.S. Surgeon General's Mental Health Report (1999), Gernsbacher omitted ABA-EIBI's endorsement by other academies, institutes, and councils at the time of her lecture, among them, the American Academy of Pediatrics (2001), the National Institute of Mental Health (2007), the National Research Council (2001), California's Collaborative Work Group on Autism Spectrum Disorders (1997), Maine's Task Force Report for Administrators of Services for Children with Disabilities (1999), and other state reports and guidelines (e.g., Alaska, Vermont). Again, time constraints may have kept her from mentioning these, which was fair, as long as her omissions were not systematically biased.

The Claims

Turning to her claim that the NYSDH recommended that “virtually no other intervention be conducted with young autistic children except for that one style of intervention [ABA-EIBI],” I could not find this in the Guideline. So, perhaps it was an interpretation. For instance, although applied behavior analysis was just one of seven “experiential approaches” the NYSDH reviewed, it was the only one that was recommended as a primary treatment. This was not, however, a recommendation for Lovaas-style ABA-EIBI. The NYSDH (1999b) recommended only that the “principles of applied behavior analysis and behavior intervention strategies be included as important elements in any intervention program for young children with autism” (p. 33).

As for the claim that the NYSDH recommended that no other interventions be conducted because they “would take precious time away from the necessary treatment supposedly needed for [ABA-EIBI],” this was similar to Gernsbacher's (2003) assertion that the Guideline recommended that “some interventions not even be included in a child's therapeutic program because those interventions might take time away from an intervention that had been scientifically proven” (p. 20). Not only did I fail to find this in the Guideline, but the Guideline contradicts it. It notes that applied behavior analysis “may also incorporate some elements of other approaches, such as developmental and cognitive approaches” (NYSDH, 1999a, chap. 4, p. 14) and cites this as an advantage (p. 24), although some advocates of ABA-EIBI and treatment efficacy would disagree because those approaches generally lack empirical support (e.g., Green, 1996; Lilienfeld, 2007).

The closest the NYSDH (1999a) comes to Gernsbacher's claim is in describing another experiential approach: the developmental, individual difference, relationship (DIR) model also known as floor time (chap. 4, pp. 55–70). DIR seeks to alleviate the symptoms of autism as a psychiatric disorder by enhancing affective parent–child relations through child-led play and interactive motor, sensory, and spatial activities, taking the children's developmental level into account. In particular, it recommends that therapists and parents spend six to ten 20- to 30-min sessions per day on the floor “working on the child's ability for affective-based interactions” (NYSDH, 1999c, p. 153). DIR, however, seems little more than a program of intensive free-operant differential reinforcement of desired behaviors through successive approximations (i.e., shaping), along with some incidental teaching. The NYSDH, however, found no empirical support for it in the only study published at the time (a chart-review study; Greenspan & Wieder, 1997) and thus did not recommend it as a primary treatment. Furthermore, the NYSDH (1999a) cautioned that DIR “may interfere with an intensive behavioral educational program unless steps are taken to coordinate the two” and that, being intensive itself, DIR “may take time away from interventions that have been shown to be effective” (chap. 4, p. 56). These cautions were not admonitions against using ABA-EIBI.

If the source of Gernsbacher's claim was not in the Guideline, then it presumably lay elsewhere. In her 2003 article, she attributed the following to Behavior Analysts, Inc.: “Diverting attention, even for a brief period of time, away from treatment methods that have been scientifically proven to be effective is a disservice and can have serious consequences” (p. 20; see www.behavioranalysis.org/level2/EvaluatingTreatmentEffectiveness.htm). Behavior Analysts, Inc., however, was silent about ABA-EIBI; it was only offering a general precaution. So, too, was Green (1996), in arguing for using the most effective treatments (ABA-EIBI or not) as opposed to less effective or ineffective ones. Lilienfeld (2007) refers to the harm caused by the latter as “opportunity costs.” These include “lost time and the energy and the effort expended in seeking out interventions that are not beneficial” (p. 57), to which the benefits lost by delaying treatment need to be added.

As for Gernsbacher's claim that the NYSDH recommended against “speech therapy or physical therapy,” I also could not find this in the Guideline. Moreover, Behavior Analysts, Inc. recommends otherwise. Its answer to a frequently asked question (“How does speech therapy fit into your approach?”) was this: “Our program supervisors determine when speech (or other) therapy would benefit the child and make the appropriate referral. In fact, we offer speech therapy at some of our centers and clinics.” The book in which Green's (1996) chapter appeared also contradicts the claim: It contains a chapter on how to incorporate speech-language therapy into applied behavior analysis (Parker, 1996). As T. Thompson (2007a) has noted,

An experienced speech therapist can be invaluable in developing effective treatment methods that should be used by all therapists and teachers as well as the child's parents. … Many children with ASD have subtle perceptual-motor coordination problems, which can be addressed by occupational therapists. (pp. 42–43; see also Koenig & Gerenser, 2006)

By this, Thompson meant therapists who provide evidence-based treatments that are integrated with ABA-EIBI, not empirically unsupported pull-out services.

This is all I could find about the source of Gernsbacher's claim that “virtually no other intervention [than ABA-EIBI] be conducted.” If a source does exist, she should have cited it and then distinguished between quoting from it and providing an interpretation of it, so that the audience could have responded effectively to her claim. She continued,

[What do the data show?] Well, to answer that question, we can go back to the New York State Guideline books because, in formulating their guidelines, they conducted a thorough literature review. They found 232 articles that reported using behavioral and educational approaches in children with autism and these articles were systematically screened and five articles reporting four studies were found that met established criteria. So, of the 232 articles, they found in their exhaustive literature review, only five articles met their own standards [see also Gernsbacher, 2003, p. 20]. And, these are the people who believe that this [ABA-EIBI] is a very scientifically supported intervention.

Gernsbacher's description of the NYSDH's literature review elided so many details that it misrepresented the ABA-EIBI research. The NYSDH's (1999a) goal was to “identify relevant scientific articles that might contain evidence about intervention methods for young children with autism” (Appendix B, p. 3; see Noyes-Grosser et al., 2005). To identify them, its reviewers searched the 1980–1998 MEDLINE, PsycINFO, and ERIC databases under autism, infantile autism, and autistic children and read the abstracts of all the articles for those “that might contain evidence about intervention” and then obtained those articles. These were the 232 articles the NYSDH screened in its search of reports of original data on intensive behavioral treatment (see below).

Several consequences arise from eliding these and other details. First, in asking, “What do the data show?” Gernsbacher was asking, rhetorically, what the 232 articles that reported “using behavioral and educational approaches” showed about “the efficacy of applied behavioral methods.” This implied that the 232 articles were applied behavior-analytic articles, but this misrepresented the Guideline on three counts: (a) The keywords in the NYSDH's (1999a) search were “behavior therapy, behavior modification, psychotherapy, psychoanalytic therapy, psychotherapeutic techniques, instructional programs, and special education” (Appendix B, pp. 4–5). Psychoanalytic therapy is not applied behavior analysis. (b) Not all of the 232 articles reported using behavioral and educational approaches. Many were descriptions of interventions, literature reviews, theoretical articles, and commentaries and critiques. (c) Of the behavior-analytic research reports, most used within-subject replication (single-subject) designs to evaluate the effects of individual interventions for discrete behaviors (e.g., MacDuff, Krantz, & McClannahan, 1993). These were not ABA-EIBI or the comprehensive programs of research the NYSDH was selecting for.

Second, the claim that only five of the 232 articles “met established criteria” for ABA-EIBI confused the criteria. Of the 232 articles the NYSDH screened, a subset of “articles meeting criteria” (NYSDH, 1999a, Appendix B, p. 4) “was selected for more in-depth review if [they] appeared to contain original data about [a] … treatment method for autism” (NYSDH, 1999a, chap. 1, p. 9). The articles also had to meet “general criteria” (e.g., include participant age; NYSDH, 1999a, chap. 1, p. 16) and “additional criteria” (e.g., evaluate functional outcomes; NYSDH, 1999a, chap. 1, p. 17). Among these articles, those that reported intensive behavioral and educational programs had to “involve [the] systematic use of behavioral teaching techniques and intervention procedures, intensive direct instruction by the therapist, and extensive parent training and support” (NYSDH, 1999c, p. 229).

Given these criteria, eight of the 232 articles were selected for in-depth review, all of them control-group studies. These were Birnbrauer and Leach (1993), Koegel, Bimbela, and Schreibman (1996), Layton (1988), Lovaas (1987), McEachin et al. (1993), Ozonoff and Cathcart (1998), Sheinkopf and Siegel (1998), and T. Smith, Eikeseth, Klevstrand, and Lovaas (1997) (see NYSDH, 1999c, p. 57). From these, the NYSDH selected the articles that provided evidence for efficacy on the basis of several methodological criteria (e.g., controlled trials; NYSDH, 1999a, Appendix B, p. 4; 1999c, p. 229). These were the five articles reporting four studies that met what Gernsbacher referred to as the “established criteria,” all of them Lovaas-style ABA-EIBI studies: Birnbrauer and Leach (1993), Lovaas (1987), McEachin et al. (1993), Sheinkopf and Siegel (1998), and T. Smith et al. (1997) (see NYSDH, 1999a, chap. 4, pp. 17–21; Appendix 7, pp. 7–11). Thus, in the end, four of the seven studies (57%) the NYSDH reviewed in depth and four of the four (100%) ABA-EIBI studies met its criteria for efficacy, not five out of the 232 (2.2%), as implied. In eliding the distinctions among what the NYSDH searched and screened and the “articles meeting criteria” for in-depth review and those that met the criteria for efficacy, Gernsbacher misrepresented the quantity and quality of the ABA-EIBI research and the efficacy of applied behavior-analytic treatment overall. She continued,
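The denominator problem here can be made concrete with a few lines of arithmetic. The sketch below uses only the counts reported in the Guideline as summarized above; the variable names are mine:

```python
# Counts from the NYSDH (1999a) review, as summarized above.
screened_articles = 232   # articles screened in the literature search
in_depth_studies = 7      # studies (reported in 8 articles) reviewed in depth
aba_eibi_studies = 4      # Lovaas-style ABA-EIBI studies among them
met_efficacy = 4          # studies that met the criteria for efficacy
efficacy_articles = 5     # articles reporting those four studies

# The denominator implied by the lecture: all screened articles.
print(f"{efficacy_articles / screened_articles:.1%}")  # 5 of 232 articles
# The denominators the review's own selection criteria define.
print(f"{met_efficacy / in_depth_studies:.0%}")        # 4 of 7 in-depth studies
print(f"{met_efficacy / aba_eibi_studies:.0%}")        # 4 of 4 ABA-EIBI studies
```

The same four studies thus appear as roughly a 2% or a 100% success rate depending entirely on which stage of the screening funnel supplies the denominator.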

However, as even the New York State Guideline notes [what follows is a quotation from the Guideline], “None of the four studies that met criteria for efficacy used random assignment of the children to the groups, such as to the group receiving intensive behavioral intervention versus the group receiving a comparison intervention” (see NYSDH, 1999a, chap. 4, p. 22). And, I believe everyone who has studied behavioral research realizes how absolutely critical it is to randomly assign participants to the treatment versus the control. For example, I could say, “Ah, I'm going to give out new iPhones tonight and I'm going to do it, you know, randomly. In fact, I'm going to give the first ten people sitting right over there my iPhones.” I think those of you up there [in the balcony] would get a little miffed, right? [She paused for the answer, “Yes.”] I would, too. Random assignment is absolutely critical. It is what enables you to draw scientifically supported conclusions.

Random assignment is indeed important, a point I address shortly, but first I note that Gernsbacher's claim that none of the four studies met what she called the NYSDH's “established criteria,” “own standards,” or “criterion for efficacy” was misleading. The four studies did meet the NYSDH's criteria for assigning participants to groups because the NYSDH had two criteria: The studies had to “assign subjects to groups either randomly or [italics added] using a method that did not appear to significantly bias the results” (NYSDH, 1999a, chap. 1, p. 17; 1999c, p. 199; e.g., Lovaas, 1987). The studies met this either–or criterion and thus the NYSDH's criteria overall.

Misrepresenting ABA-EIBI Research I

Gernsbacher continued,

But, of the four studies that were mentioned from this review, the first two weren't even experiments [Sheinkopf & Siegel, 1998; T. Smith et al., 1997]. In fact, they were just record reports, where we go back in time and we say, “This person has a 4.0. Let's see if she ate pasta every night her freshman year.”

The claim that Sheinkopf and Siegel (1998) and T. Smith et al. (1997) “weren't even experiments” and “were just record reports” misrepresented them, but then much depends on the meaning of “experiment.” It differs across the sciences. In the social sciences, control-group designs compare (a) the effects of a condition for one group of participants to (b) its absence (or another condition) for another group, after which the statistical significance of any differences in their correlated outcomes is inferred. In the natural sciences, within-subject and within-group replication designs are more the norm (T. Thompson, 1984). In these, experimental conditions are systematically applied, removed, and replicated within individuals or groups, with the differences between them displayed in graphs (on the greater use of graphs in “harder” vs. “softer” psychology, see L. D. Smith, Best, Stubbs, Archibald, & Roberson-Nay, 2002). This is also the applied behavior-analytic approach (Johnston & Pennypacker, 2009; Sidman, 1960), which is increasingly appreciated in clinical psychology (Barlow & Nock, 2009; Borckardt et al., 2008). For its use in autism research, see Wacker, Berg, and Harding (2008). I am not taking sides in this matter, just noting that experiment has a range of meanings.

In any event, although Sheinkopf and Siegel (1998) and T. Smith et al. (1997) were not planned experiments, they were not “just record reports” of a relation between treatment and its outcome. They were record reports that used treatment comparison control groups, another point Gernsbacher omitted. Sheinkopf and Siegel, for instance, found 11 children in a longitudinal study of autism whose parents had provided 19 hr per week of Lovaas-style ABA-EIBI. The authors then formed a matched treatment comparison control group from the same study; its participants had been provided 11 hr per week of treatment as usual (i.e., school-based interventions). Over the course of 18 to 20 months, the experimental group made a significant 25-point gain in IQ over the control group and had a significant reduction in symptom severity. See Lovaas (2002, pp. 399–400), however, for a critique of the study. As for T. Smith et al., they created an experimental group and a treatment comparison control group of preschool children with mental retardation and pervasive developmental disorder on the basis of records at the UCLA project and other sites. The experimental group (n  =  11) had received 30 hr of Lovaas-style ABA-EIBI per week, while the treatment comparison control group had received 10 or fewer hours per week. In the 2 to 3 years between intake and follow-up, the experimental group made a significant 12-point gain in IQ and a significant gain in expressive speech over the control group. Gernsbacher continued,

The other two studies were experiments [Birnbrauer & Leach, 1993; Lovaas, 1987; McEachin et al., 1993], but they didn't include the critical piece of random assignment. Instead, the participants were assigned to either the treatment or the control group by factors such as who lived closer, whose parents wanted them to be in the treatment group, who could pay for some of the treatment, et cetera, et cetera.

As for Gernsbacher's claims about participant assignment, first, her claim that children were assigned on the basis of “who lived closer” was presumably a rewording of who lived too far away, but this rarely occurred. Lovaas (1987) assigned only 2 of his 38 children to the control group “because they lived further away from UCLA than a 1-hr drive, which made sufficient staffing unavailable to those clients” (p. 4). And, although Birnbrauer and Leach (1993) excluded three families because they “lived too far away” (p. 64), the families were excluded from both the experimental and the control groups. Second, her claim that children were assigned on the basis of “whose parents wanted them to be in the treatment group” was presumably a rewording of “parent protest,” but it is not true. This would have yielded groups that likely differed in parental involvement in treatment (e.g., effort, motivation), which is why the children were instead assigned on the basis of therapist availability. Third, I found nothing to support the claim that children were assigned on the basis of “who could pay for some of the treatment.” Rewording, overstating, and misstating research methodology are bound to misrepresent it.

As for the findings of these studies, I have already reviewed Lovaas (1987) and McEachin et al. (1993) and so here describe only Birnbrauer and Leach (1993). They provided 19 hr per week of ABA-EIBI to 9 children with autism and pervasive developmental disorder; the control group comprised 5 children who received unknown treatment. Although the groups were similar at pretreatment, the experimental group made more gains after 2 years than the control group on standardized and descriptive measures of intelligence, language, personality, and adaptive functioning. However, no statistical analyses were conducted.

For pre-2000 applied behavior-analytic research Gernsbacher did not review, see S. R. Anderson, Avery, DiPietro, Edwards, and Christian (1987), Fenske et al. (1985), Handleman, Harris, Celiberti, Lilleleht, and Tomchek (1991), Harris, Handleman, Gordon, Kristoff, and Fuentes (1991), Harris, Handleman, Kristoff, Bass, and Gordon (1990), Hoyson, Jamieson, and Strain (1984), Perry, Cohen, and DeCarlo (1995), and Weiss (1999). For literature reviews, see S. J. Rogers (1998) and Matson et al. (1996).

Experimental Control

Gernsbacher continued, “Well, the New York State Guideline says it's been argued that the [nonrandom] method for group assignment probably did not bias the results [NYSDH, 1999a, chap. 4, p. 22; see Gernsbacher, 2003, p. 21].” The Guideline did not argue that nonrandom assignment will not bias results. It only described the outcome of nonrandom assignment in these studies: “In all cases the authors analyzed the pretreatment … data to see if the groups were equivalent in important variables. Most of the authors concluded that such analyses found no systematic bias in the assignment of subjects to the intervention or comparison group” (NYSDH, 1999a, chap. 4, p. 22). Furthermore, the NYSDH (1999a) noted that “all studies showed similar and consistent results” (chap. 4, p. 24). This does not mean that no biases existed, only that no (or few) biases were found among the important variables; that is, the variables were balanced across groups.

Other critics have also noted the possibility of bias on pretreatment measures, as well as the use of nonequivalent pretest–posttest measures and weak assessment measures (e.g., Foxx, 1993; Gresham & MacMillan, 1997; Kazdin, 1993; Mundy, 1993; Schopler, Short, & Mesibov, 1989). This is not perfect science. These criticisms, though, have been subject to counter-criticisms (e.g., changes in pretest–posttest language skills, for instance, may require different measures; Eikeseth, 2001; Lovaas, 1993; Lovaas, Smith, & McEachin, 1989; McEachin et al., 1993; T. Smith & Lovaas, 1997; T. Smith, McEachin, & Lovaas, 1993), the counter-criticisms to counter-counter-criticisms (e.g., Gresham & MacMillan, 1998), and the counter-counter-criticisms to counter-counter-counter-criticisms (e.g., Lovaas, 2002, pp. 387–407)—science red in tooth and claw.

Nevertheless, Gernsbacher's criticism of the foregoing studies for not using random assignment has obvious merit. However, it is not as straightforward as it seems (S. J. Rogers & Vismara, 2008, p. 30). First, although the American Psychological Association (APA, 2002a) states that “Randomized controlled experiments … are the most effective way to rule out threats to internal validity in a single experiment” (p. 1054), it notes that the experiments remain subject to threats of external and construct validity and need replication.

Second, random assignment is but one component of randomized controlled trials (RCTs). The gold standard requires double-blind (or triple-blind) placebo control groups in which the experimenters, participants, therapists, evaluators, and statisticians do not know which participants are assigned to which group. Even meeting this standard, though, does not guarantee that statistically significant treatment effects are clinically significant.

Third, even when random assignment is planned or used, practical problems ensue. (a) ABA-EIBI's intensity makes it readily discriminable from control conditions, so families can tell which treatment their children are receiving (J. W. Jacobson, 2000). (b) Parents will protest the random assignment of their children to experimental and control groups, and will withdraw them from research if assigned to the latter. (c) Given the empirical evidence for ABA-EIBI, institutional review boards and parents will balk at the ethics of assigning children to control groups. And (d), because of ABA-EIBI's intensity, experimental groups are often so small that random assignment can, by chance, create groups unbalanced on variables critical to the outcome (e.g., language, age, IQ; see Reichow & Wolery, in press).
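Point (d) can be illustrated with a simple simulation (a hypothetical sketch, not data from any of the studies above): at the sample sizes typical of the ABA-EIBI trials, a pure random split frequently leaves the two groups meaningfully unbalanced on a pretreatment variable such as intake IQ.

```python
import random
import statistics

def imbalance_rate(n_children=20, n_trials=10_000, threshold=10, seed=1):
    """Estimate how often random assignment of a small sample leaves two
    groups differing by more than `threshold` points in mean intake IQ.
    IQ scores are drawn from a hypothetical population (mean 60, SD 15);
    all parameters are illustrative, not taken from any actual trial."""
    random.seed(seed)
    imbalanced = 0
    for _ in range(n_trials):
        iqs = [random.gauss(60, 15) for _ in range(n_children)]
        treatment = iqs[: n_children // 2]   # random split into two groups
        control = iqs[n_children // 2 :]
        if abs(statistics.mean(treatment) - statistics.mean(control)) > threshold:
            imbalanced += 1
    return imbalanced / n_trials

# With 20 children (10 per group), a substantial minority of random splits
# differ by more than 10 IQ points at intake; the rate shrinks as n grows.
print(imbalance_rate())
```

This is one reason matched assignment and post hoc balance checks, as used in the studies above, remain part of the methodological conversation: randomization protects against bias only in the long run.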

Fourth, the claim that random assignment is absolutely critical may be overly conservative if, failing that standard, public health initiatives are delayed and treatments are withheld at irreversible risk to individuals and populations. For instance, given the standard of random assignment, no proof exists that smoking causes lung cancer in humans, yet a convergence of evidence was sufficient for the Surgeon General to take action regarding it.

In the end, scientific conclusions are supported by a range and convergence of methods, with logically permissible conclusions nested hierarchically within them (see T. Smith et al., 2006). Among the methods, randomization is a means for assigning participants to groups, not an end in itself. It does not guarantee unbiased assignment, except in the long run. Presumably, methods such as Lovaas's could assign participants in an unbiased manner. Bias is an empirical matter (Baer, 1993b).

Appeals to History

Referring to what the NYSDH (1999a) argued about group assignment, Gernsbacher continued, “My academic great-grandfather [Wilhelm Wundt] would be rolling over in his grave.” (In her introduction, she mentioned tracing her academic lineage back to Wundt who is “typically credited with establishing the first experimental psychology laboratory, and who therefore earned the status of father of experimental psychology”; see Boring, 1950, pp. 316–347.)

Appeals to history can be perilous. First, in this case, most doctorate-level psychologists can trace their lineage back to either Wundt (1832–1920) or William James (1842–1910), so Gernsbacher's appeal to Wundt was rhetorical, not scholarly. Second, using history to justify apparently winning traditions (e.g., cognitivism), as opposed to apparently losing traditions (e.g., behaviorism), is a breach of historiographical method called presentism (Samelson, 1974; Stocking, 1965; see Furumoto, 1989). Third, citing Wundt on participant assignment was misleading. Although he likely knew of John Stuart Mill's (1843) “method of difference,” he was not expert in group designs. His research was mainly case studies of individuals who reported their introspectively observed experiences (e.g., mental elements and processes; Wundt, 1874/1904), studies that do not meet the standards of within-individual replication designs (e.g., Kennedy, 2005; Sidman, 1960). Moreover, his participants were a highly trained, nonrandom sample of the adult population. Wundt's research program died for methodological reasons: often poor reliability within studies and, more often, poor replicability across laboratories (Boring, 1950).

Research Review: Separating Fact from Fiction

Gernsbacher continued,

And, in fact, [Wundt] would probably have drawn the same conclusions as those drawn in an article titled, “Separating Fact from Fiction in the Etiology and Treatment of Autism” [Herbert et al., 2002]. This article states that “Methodological weaknesses of the existing studies severely limit the conclusion that can be drawn about their efficacy.” [p. 35; see Gernsbacher, 2003, p. 21]

This quotation from Herbert et al. (2002) inaccurately portrayed their conclusions about ABA-EIBI. First, they addressed ABA-EIBI in a section titled “Promising Treatments for Autism” (pp. 33–38), in which ABA-EIBI was a “fact,” not “fiction.” Second, although ABA-EIBI research has mainly used nonrandom assignment, Herbert et al. concluded that “the intervention programs … are based on sound theories, are supported by at least some controlled research, and clearly warrant further investigation” (p. 33). Third, after reviewing the ABA-EIBI research, Herbert et al. wrote, “Taken together, the literature on ABA programs clearly suggest that such interventions are promising” (p. 35). Gernsbacher, however, quoted the next sentence as their conclusion: “Methodological weaknesses of the existing studies [however] severely limit the conclusions that can be drawn [about] their efficacy.” Fourth, although Herbert et al. admonished the proponents of ABA-EIBI for their uncritical advocacy, they concluded, “Clearly, ABA does not possess most of the features of pseudoscience that typify many of the highly dubious treatments for autism. ABA programs are based on well-established theories of learning and emphasize the value of scientific methods in evaluating treatment effects” (p. 35; for critiques of pseudoscience in autism, see J. W. Jacobson et al., 2004; Offit, 2008).

Evidence for the Other Experiential Approaches

Although the NYSDH (1999a, 1999b, 1999c) and Herbert et al. (2002) noted limitations in the ABA-EIBI research, they also pointed out that the treatment was evidence based, which was more than they said of the other approaches, none of which they recommended as primary interventions. Among those the Guideline reviewed were DIR, sensory integration therapy, touch therapy, auditory integration therapy, facilitated communication (FC), and medical and diet therapies. Herbert et al. addressed these and other approaches under “Questionable Treatments for Autism”: sensory motor therapies (e.g., FC, sensory integration training); psychotherapies (e.g., psychoanalysis, holding therapy), and biological treatments (e.g., secretin, gluten- and casein-free diets, Vitamin B6). Of these, the NYSDH (1999a) and Herbert et al. were most critical of FC (see Biklen, 1990, 1993). As Herbert et al. described it,

Facilitated communication (FC) is a method designed to assist individuals with autism and related disabilities to communicate through the use of a typewriter, keyboard, or similar device. The technique involves a trained “facilitator” holding the disabled person's hand, arm, or shoulder while the latter apparently types messages on the keyboard device. The basic rationale behind FC is that persons with autism suffer from a neurological impairment called apraxia, which interferes with purposeful motoric functioning. (p. 28; see also NYSDH, 1999a, chap. 4, p. 64; 1999b, p. 43)

In its literature search, the NYSDH (1999a) screened 11 FC articles, none of which met its criteria for an in-depth review (NYSDH, 1999c, p. 245; see also Herbert et al., pp. 27–28). Of FC, the NYSDH (1999c) commented,

In studies of facilitated communication used in older children with autism, the messages typed by the children are often far beyond their capabilities as evidenced by their behavior or language. Studies of facilitated communication suggest that communication that exceeds baseline levels for a subject originates from the facilitator rather than the child. Use of facilitated communication has brought up a number of ethical and legal issues. There have been cases where messages produced with facilitated communication have caused emotional distress to parents or have led to accusations of abuse that resulted in legal proceedings [see also Herbert et al., pp. 28, 38; and the Public Broadcasting Service's Frontline report at video.google.com/videoplay?docid=3439467496200920717]. Recommendations: Because of the lack of evidence for efficacy and possible harms of using facilitated communication, it is strongly recommended that facilitated communication not be used as an intervention method in young children with autism. (p. 160; see also the American Academy of Pediatrics, 2001; APA's 1994 resolution on FC at http://www.apa.org/divisions/div33/fcpolicy.html; J. W. Jacobson, Mulick, & Schwartz, 1995; Lilienfeld, 2007; Offit, 2008, pp. 6–13)

In critiquing FC, Herbert et al. properly distinguished it from augmentative and alternative forms of communication (e.g., keyboards and picture exchange systems; see Bondy & Frost, 1994; Reichle, York, & Sigafoos, 1991). Children with autism often benefit from such technologies and may need hands-on help in mastering them, but the content of their communication is their own, not the facilitators'.

Misrepresenting the ABA-EIBI Research II

Gernsbacher continued,

However, skip ahead to 2007 and there are now two studies of Lovaas-style ABA intervention that did employ the ever so important random assignment [Sallows & Graupner, 2005; T. Smith, Groen, & Wynn, 2000]. And, you're probably curious: What do those studies show? In one study [T. Smith, Groen, & Wynn], there was a slight but nonsignificant advantage for the autistic children. [Gernsbacher presented two figures of treatment gains graphed from intake to follow-up for expressive and receptive language. The lines in the figures were labeled the “ABA” and “Control.”]

T. Smith, Groen, and Wynn (2000)


