Stephen Parker

A Brief History of Reading Instruction

Updated: Dec 23, 2021


Introduction


For more than a century, reading instruction has been involved in a high-stakes battle between supporters of two opposing methods for teaching a child to read. While this battle is often characterized as “Phonics” versus “Whole Word,” I think it's more accurate to describe the two methodologies as “Bottom-Up” versus “Top-Down.” (Alternatively, “Parts-to-Whole” versus “Whole-to-Parts”).


Top-down methods start instruction with whole words – typically called “sight words.” The child rote-memorizes a cache of such words based on their visual characteristics rather than on the sound value of their individual letters. The reason I don’t like calling this battle “Phonics” versus “Whole Word” is that there are several types of phonics that are top-down in their orientation. Analogy phonics, analytic phonics, and onset-rime phonics all require that the child memorize a large collection of sight words before such phonics can commence. (For a more complete description of all these types of phonics, see this blog.)


The only bottom-up method that exists is a specific type of phonics called Synthetic Phonics. Here instruction starts, not with whole words, but with the most basic sounds in English, called phonemes. These phonemes are connected to the letters (graphemes) that symbolize them in our alphabetic system. Whole words are then built (bottom-up) by blending these individual sounds: a child who knows the sounds for S, U, and N, for example, can blend /s/ /u/ /n/ into the spoken word “sun.” (For more on Synthetic Phonics, see here.) (For a more thorough definition of phoneme and grapheme, see here.)


The two methods are not compatible. One starts with the final product (a whole word) and then may (or may not) eventually get down to the phoneme-grapheme level; the other starts with phonemes and graphemes and then builds up to a whole word. Top-down methods stress “meaning” from the start; the bottom-up method stresses knowledge of the code as a necessary condition for reading comprehension. (See the Simple View of Reading here.)


It may surprise some readers to find out that the battle between various top-down methods and synthetic phonics has been going on since the mid-nineteenth century. Here, then, is a short history of reading instruction from the viewpoint of one (somewhat biased) observer. (Spoiler: synthetic phonics – the bottom-up method – has been losing this battle for a long time.)

 

The History

(dates are approximate)


1800 – 1900: Most children who learn to read during the 19th century are taught from either Noah Webster’s Blue-Backed Speller or from the famous McGuffey Readers. Both sold over 100 million copies, placing them in the same league as the Bible. McGuffey was explicit in his directions to teachers: they could use his primer with what he called the “word” method (top-down), the “phonic” method (bottom-up), or a combination of the two methods. Here’s how McGuffey described [1] these methods:


“The Word Method teaches a child to recognize words as wholes. This method pays no attention to elementary sounds and diacritical marks. After a number of words are taught as wholes, the children are told the names of the letters, and learn to spell.”


“By the Phonic Method, the child is first taught the elementary sounds of letters; he is then taught to combine these elementary sounds into words. The sound is first taught, and then the character which represents it; the spoken word is learned, and then its written and printed form. This method pays no attention to words as wholes until the elementary sounds composing them are learned.”


“The Combined Word and Phonic Method first presents the word as a whole, and after a number of words are learned in this way, the elementary sounds composing them are taught, with the characters which represent them.”


McGuffey then reveals his bias against a strict Word approach:


“While McGuffey's Readers are prepared to meet the demands of each of these recognized methods, they are especially adapted to the Phonic Method and to the Combined Word and Phonic Method, which are the two methods most extensively used by successful teachers of primary reading.”


1900 – 1930: A transitional period. The Beacon Readers, an improved phonics series, gradually supplants the McGuffey Readers. In the Beacon Readers, the sound of individual letters (phonics) is taught from the start, as well as memorization of whole words:

“It should be clearly understood that at first there must be two distinct lines of teaching carried on side by side: (1) the drill upon phonetic lists for the purpose of developing phonetic power in the child; (2) the reading of simple lessons [stories], mainly by the Word method, until the child’s power in phonetics is far enough advanced to enable him to apply it in his reading.” [2]

In 1908, Edmund Huey publishes his book, The Psychology and Pedagogy of Reading, which quickly becomes the manifesto of a growing Whole Word (anti-phonics) movement. Here’s an excerpt:

“Even if the child substitutes words of his own for some that are on the page, provided that those express the meaning, it is an encouraging sign that the reading has been real… The shock that such a statement will give to many a practical teacher of reading is but an accurate measure of the hold that a false ideal has taken of us, viz., that to read is to say just what is upon the page, instead of to think, each in his own way, the meaning that the page suggests…It may even be necessary, if the reader is to really tell what the page suggests, to tell it in words that are somewhat variant; for reading is always in the nature of translation and, to be truthful, it must be free.” [3] (emphasis mine)

In 1927, a nationally known educator, Dr. Arthur Gates, from Teachers College at Columbia University, joins the Whole Word movement. Writing in The Journal of Educational Psychology, he sums up his position this way:

“That it will be the part of wisdom to curtail phonetic instruction in the first grade very greatly is strongly implied; indeed, it is not improbable that it should be eliminated entirely.” [4]

By 1930, phonics – meaning explicit teaching of the code – has been abandoned in most of the nation’s classrooms.


1930 – 1965: Whole Word becomes the dominant top-down method for teaching reading in the United States. Words viewed as a single unit (or picture) are drilled individually and rote-memorized based on their visual characteristics. Holding up a large flash card with the target word printed on it, the teacher says the word: “horse.” The children look at the word printed on the card (which includes a picture if possible) and then they repeat the word each time the teacher says it. The goal is to have the children memorize the word as having a particular shape or contour, rather than to decode the word based on individual letter sounds.

Logically enough, this Whole Word method becomes known as Look/Say. Once children learn 30-50 sight words in this manner, they are given repetitive readers consisting largely of these exact words. An unknown word in these readers is accompanied by a picture to allow its identification. The most famous basal reader of this period is the beautifully illustrated Dick and Jane series. See above for a sampling from a “story” in this series, whose main purpose is to drill sight words.


1955: The Rudolf Flesch book, Why Johnny Can’t Read, becomes a runaway best seller in the US. It’s a passionate (and polemical) plea for the elimination of Whole Word memorization and guessing, and for a return to phonics. Flesch sums up his book this way:

“Memorizing or guessing the meaning of whole words is not reading; on the contrary, it is an acquired bad habit that stands in the way of your child’s ever learning to read properly… My advice is, teach your child yourself how to read.” [5]

Unfortunately, the phonics program offered by Flesch, taking up the final third of his book, is inadequate. There are only two pages of instructions, followed by page after page of nothing but word lists. The publication of this book ignites the “Reading Wars,” a battle over how to teach beginning reading that, to this day, remains unresolved.


1955: In a defensive response to the Flesch book, the education establishment and textbook publishers create the International Reading Association (later rebranded as the International Literacy Association). William S. Gray of the University of Chicago is named the first president. It was Gray who developed the popular Look/Say reading series, Dick and Jane, so heavily criticized by Flesch in Why Johnny Can’t Read. [Future IRA presidents will include such noteworthy names as Kenneth Goodman (1981), co-founder of Whole Language (see below), and Marie Clay (1992), founder of Reading Recovery.]


1967: Jeanne Chall’s book, Learning to Read: The Great Debate, is published. In it, Chall, a leading member of the education establishment from Harvard, surveys the scientific studies done on reading from 1912 through 1965. She concludes that "code emphasis," her term for synthetic phonics, produces better results than the Look/Say method in the teaching of beginning reading. She calls for “a correction in beginning reading instructional methods” and then, speaking of decoding (or "sounding out" words), adds:

“The results are better, not only in terms of the mechanical aspect of literacy alone, but also in terms of the ultimate goals of reading instruction – comprehension and possibly even speed of reading. The long-existing fear that an initial code emphasis produces readers who do not read for meaning, or with enjoyment, is unfounded. On the contrary, the evidence indicates that better results in terms of reading for meaning are achieved with the programs that emphasize code right at the start…” [6]

1965 – 1975: Another period of transition. Because of the Flesch and Chall books, and mounting pressure from parents, some schools return to phonics. Most schools, however, stick with the Look/Say method, but they now include teaching a part of the code using “analytic phonics.” This is a top-down form of phonics that can be employed after the child has memorized enough sight words to make it work. So, for example, once the child visually memorizes (as sight words) BAT, BOY, and BOAT, the teacher can call attention to these words and, with some prompting, the child will “discover” that all 3 words start with the same sound (“buh”) and the same letter (B). Therefore, B must symbolize the sound “buh.” Once the child knows a sound for all the consonants, her guessing can become more accurate. She can use the unknown word's first letter to "get her mouth ready" to say the word.


1973: Various researchers propose a Dual Route model for turning print into speech. Here is an early expression of the theory by researchers Ken Forster and Susan Chambers:

“The pronunciation of a visually presented word involves assigning to a sequence of letters some kind of acoustic or articulatory coding. There are presumably two alternative ways in which this coding can be assigned. First, the pronunciation could be computed by application of a set of grapheme–phoneme rules, or letter-sound correspondence rules. This coding can be carried out independently of any consideration of the meaning or familiarity of the letter sequence, as in the pronunciation of previously unencountered sequences, such as flitch, mantiness and streep. Alternatively, the pronunciation may be determined by searching long-term memory for stored information about how to pronounce familiar letter sequences, obtaining the necessary information by a direct dictionary look-up, instead of rule application. Obviously, this procedure would work only for familiar words.”

Note: In the 1980s, 1990s, and 2000s, this theory will be further refined by reading researchers Max Coltheart, Mark Seidenberg, and James McClelland, leading (by 2005) to general acceptance of Coltheart’s Dual-Route Cascaded (DRC) model of word recognition.
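To make the two routes concrete, here is a minimal toy sketch (my own illustration, written in Python; it is not code from Forster, Chambers, Coltheart, or any other researcher named here). The tiny lexicon and the letter-sound rules are hypothetical samples, not real data:

```python
# Toy illustration of the Dual-Route idea: a direct look-up route for
# familiar words, and a rule route that assembles a pronunciation from
# letter-sound correspondences. Both LEXICON and RULES are hypothetical
# samples for demonstration only.

LEXICON = {"yacht": "/jot/", "one": "/wun/"}   # stored pronunciations of familiar words
RULES = {"ee": "/ee/", "s": "/s/", "t": "/t/", "r": "/r/", "p": "/p/"}  # letter-sound rules

def pronounce(word):
    # Direct (lexical) route: retrieve a stored pronunciation if the word is familiar.
    if word in LEXICON:
        return LEXICON[word]
    # Phonological (rule) route: build a pronunciation from letter-sound rules,
    # which works even for strings never seen before, such as "streep".
    phonemes, i = [], 0
    while i < len(word):
        if word[i:i + 2] in RULES:          # try a two-letter grapheme (digraph) first
            phonemes.append(RULES[word[i:i + 2]])
            i += 2
        elif word[i] in RULES:
            phonemes.append(RULES[word[i]])
            i += 1
        else:
            return None                     # no rule known for this grapheme
    return "".join(phonemes)

print(pronounce("yacht"))    # direct route -> /jot/
print(pronounce("streep"))   # rule route   -> /s//t//r//ee//p/
```

The only point of the sketch is the one made in the quote above: a stored-word look-up can handle only familiar words, while the rule route can assign a pronunciation to a letter string the reader has never encountered.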


1981: Rudolf Flesch publishes Why Johnny Still Can’t Read, again condemning the Whole Word method, as well as the analytic phonics that it now includes. Looking back over the 26 years since he published his first book, he criticizes analytic phonics as being “a minimum of phonics, served up in a look-and-say sauce of context clues and guesswork.” [7]


1981: Theodor Geisel (also known as Dr. Seuss), in an interview for Arizona Magazine, discusses how he was limited by his publisher to using 220 specific words from the Dolch List of sight words when he created The Cat in the Hat. Here is what he says about phonics and about having children memorize sight words:

“That was due to the Dewey revolt in the Twenties in which they threw out phonics and went to word recognition, as if you’re reading Chinese pictographs instead of blending sounds of different letters. I think killing phonics was one of the greatest causes of illiteracy in the country.”

1983: Jeanne Chall updates her book as well, examining the scientific research done on reading from 1966 through 1981. She again concludes that Synthetic Phonics, not Whole Word, leads not only to better word recognition but also to better comprehension. She adds that the scientific support for synthetic phonics and decoding “seems to be even stronger than it was in 1967.” [8] Regarding the use of analytic phonics, a practice that has become popular since her first book, Chall states:

“It would seem that many of the characteristics of direct phonics, such as teaching letter sounds directly, separating the letter sounds from the words, giving practice in blending the sounds, and so forth, are more effective than the less direct procedures used in current analytic phonics programs.” [9]

1975 – 2000: Under growing pressure from parents, and the weight of the scientific evidence in Jeanne Chall’s books, Look/Say is fully abandoned in the 1970s. However, what takes its place is not the Synthetic Phonics championed by both Flesch and Chall. Instead, the Whole Word (top-down) method is reaffirmed as a new model for teaching reading appears. It’s called Whole Language.


Developed by Kenneth Goodman and Frank Smith in the 1970s, Whole Language differs from Look/Say in some fundamental ways. First, it rejects the boring, artificial, and repetitive readers of the Look/Say era, claiming to replace those readers with real children’s stories. (Those stories, however, are read to the children. What the children read initially are repetitive “little books” whose main function is to drill sight words.) Second, phonics, understood as explicit, systematic teaching of the full code, is outright rejected. According to Goodman, “matching letters with sounds (decoding) is a flat-earth view of the world, since it rejects modern science about reading and writing and how they develop.” [10] Frank Smith is just as adamant: “Reliance on phonics – or spelling-to-sound correspondence – is dysfunctional in fluent reading and interferes with learning to read.” [11]


Third, writing is emphasized at the earliest stages of learning to read, even if it requires, as it must, “invented spelling.” Fourth, learning to read is to be as easy and natural as learning to speak. Children will discover the necessary letter/sound relationships as they read books and express themselves in writing, using their invented spellings. Finally, an unfamiliar word is to be identified, not by sounding it out, but by “asking somebody what the word is” – or by guessing what the word might be using context or “similarity to words that are already known.” [12]


Despite its differences with Look/Say, Whole Language is another top-down approach for teaching reading. In the first couple years of instruction, reading consists largely of memorizing sight words and guessing – with some incidental phonics taught on an as-needed basis. Ken Goodman is famous for his characterization of reading as “a psycholinguistic guessing game.” [13] For Frank Smith, guessing – in the sense of making predictions and thus eliminating unlikely alternatives – “is the most efficient manner in which to read and learn to read.” [14] Whole Language spreads throughout the US at an unprecedented pace, even though there is no research whatsoever supporting its effectiveness.


1983: Reading researchers David Share and Anthony Jorm propose their Self-Teaching Hypothesis (further elaborated in 1995 by Share). Recognizing that skilled, educated readers have a sight word vocabulary of 60,000 or more words, and that such a feat would be impossible via rote-memorization or via guessing based on context, Jorm and Share propose that only the independent decoding of unknown words could explain the ability of skilled readers. Such decoding depends on only two factors: knowledge of letter/sound (phoneme/grapheme) relationships and the ability to blend an unknown word’s individual sounds (phonemes) into a recognizable pronunciation. Share calls these twin co-requisites the sine qua non of reading acquisition [15]. This places Share and Jorm in direct opposition to Whole Language methodology.


1986: Reading researchers Philip Gough and William Tunmer propose their Simple View of Reading. Under the Simple View, reading comprehension (RC) is the product of two independent factors: word recognition (WR) and language comprehension (LC). The model states succinctly: RC = WR x LC. Word recognition, made possible by decoding, thus takes its rightful place as a necessary (though by itself, insufficient) condition for reading comprehension to occur. This places Gough and Tunmer in direct opposition to Whole Language methodology. [Note: my separate blog on the Simple View can be found here.]
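A quick worked example (the numbers are hypothetical, chosen only to illustrate the formula) shows why the relationship is multiplicative rather than additive. A child with age-appropriate spoken-language comprehension (LC = 1.0) who can recognize only half the words on the page (WR = 0.5) comprehends, at best, RC = 0.5 x 1.0 = 0.5 of the text. And if either factor is zero, RC is zero no matter how strong the other factor is: strength in one component cannot compensate for the absence of the other.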


1987: Educational leaders in California, through the state’s English/Language Arts Framework, institute a large-scale, statewide adoption of Whole Language as the method for teaching beginning reading in the state’s grade schools. Many states tragically follow California’s lead.


1993: The National Assessment of Educational Progress [16], a federal study doing a state-by-state comparison of reading proficiency, ranks California fourth-graders fifth from the bottom among the fifty states. Three years later, gobsmacked Californians find they are ranked at the very bottom (just behind Mississippi). An astounding 77% of fourth graders are ranked “below grade level.” [17]


1998: Reading researcher Linnea Ehri proposes four phases of sight word learning [18]. Her studies reveal that it is only when beginning readers can form “complete connections” between all the letters (graphemes) seen in a word’s written form and all the sounds (phonemes) heard in its spoken form, that sight word learning becomes unconscious and automatic – a process she calls orthographic mapping. This re-emphasizes the importance of knowing grapheme/phoneme correspondences and being able to blend (decode) unknown words by sounding them out. Share’s Self-Teaching Hypothesis and Ehri’s Orthographic Mapping complement each other. Both theories are in direct opposition to Whole Language.


1997 – 2000: The US Congress convenes a National Reading Panel with the mandate to examine all reputable scientific research available on how to teach children to read, and then to determine the most effective method. The Panel’s members examine several hundred studies conducted in the previous 3 decades. After three years of effort, in 2000, the Panel completes its 480-page report, delivering a strong rebuke to Whole Language proponents. It concludes that “systematic” phonics, not Whole Language, is the best method for teaching beginning readers – and that such phonics must be taught explicitly, rather than on a “discovery” or “as-needed” basis. It also concludes that the best time to teach phonics is in kindergarten or first grade (the traditional start of formal reading instruction), before a child starts to read by other means.


Because this report is such a thorough rejection of 25 years of Whole Language methodology, I think it best to allow the Panel members to speak for themselves. The numbers after each quote use the pagination found in the Final Report, available online here.

“Systematic phonics instruction makes a bigger contribution to children’s growth in reading than alternative programs providing unsystematic or no phonics instruction.” (2-92)
“The hallmark of systematic phonics programs is that they delineate a planned, sequential set of phonic elements [letter-sound correspondences] and that they teach these elements explicitly… A key feature that distinguishes systematic phonics instruction from nonsystematic phonics is in the identification of a full array of letter-sound correspondences to be taught. The array includes not only the major correspondences between consonant letters and sounds but also short and long-vowel letters and sounds, and vowel and consonant digraphs (e.g., oi, ea, ou, sh, ch, th)… Learning vowel and digraph spelling patterns is harder for children; therefore, special attention is devoted to learning these relations.” (2-99)

The Panel states clearly that systematic phonics instruction is not to commence after children are reading by other means, but rather, at the very start of instruction:

“Phonics instruction, taught early, proved much more effective than phonics instruction introduced after first grade… Phonics instruction produces the biggest impact on growth in reading when it begins in kindergarten or 1st grade, before children have learned to read independently.” (2-93)

Stating that it is “not sufficient” simply to teach the code, the Panel offers three specific ways to apply code knowledge to reading and writing:

“Programs provide practice in various ways. Phonics programs may teach children decoding strategies that involve sounding out and blending individual letters and digraphs... Programs may provide children with text whose words can be decoded using the letter-sound relations already taught. Programs may have children write their own text using the letter-sounds relations already taught and then have children read their own and others’ stories.” (2-99) [emphasis added]

Stressing the importance of phonemic awareness (PA), the Panel strongly endorses two types of PA exercises: blending and segmenting. Both are to be done “with letters.” The Panel does not endorse any type of “advanced” oral-only PA exercises such as phoneme deletion or phoneme substitution.

"PA training is more effective when it is taught by having children manipulate letters than when manipulation is limited to speech." (2-26)
"Teaching children to manipulate phonemes using letters produced bigger effects than teaching without letters. Blending and segmenting instruction showed a much larger effect size on reading than multiple-skill instruction did." (2-29)
"In the rush to teach phonemic awareness, it is important not to overlook the need to teach letters as well. The NRP analysis showed that PA instruction was more effective when it was taught with letters. Using letters to manipulate phonemes helps children make the transfer to reading and writing." (2-33)

The Panel goes out of its way to counter the notion that phonemic awareness training is something new, or something that should be done apart from phonics:

"It is important to note that when PA is taught with letters, it qualifies as phonics instruction. When PA training involves teaching students to pronounce the sounds associated with letters and to blend the sounds to form words, it qualifies as synthetic phonics. When PA training involves teaching students to segment words into phonemes and to select letters for those phonemes, it is the equivalent of teaching students to spell words phonemically, which is another form of phonics instruction. These methods of teaching phonics existed long before they became identified as forms of phonemic awareness training. Although teaching children to manipulate sounds in spoken words may be new, phonemic awareness training that involves segmenting and blending with letters is not. Only the label is new." (2-34)

Contrasted with the above approval of systematic phonics, blending (with letters), and segmenting (with letters), the Panel presents a strong critique of Whole Language:

“Beginning reading programs that do not teach phonics explicitly and systematically may be of several types. In Whole Language programs, the emphasis is upon meaning-based reading and writing activities. Phonics instruction is integrated into these activities but taught incidentally as teachers decide it is needed.” (2-90)
“Whole Language teachers typically provide some instruction in phonics, usually as part of invented spelling activities or through the use of grapheme-to-phoneme prompts during reading. However, their approach is to teach it unsystematically and incidentally in context as the need arises… Whole language teachers believe that phonics instruction should be integrated into meaningful reading, writing, listening, and speaking activities and taught incidentally when they perceive it is needed. As children attempt to use written Language for communication, they will discover naturally what they need to know about letter-sound relationships and how letters function in reading and writing.” (2-102)

According to its advocates, one of the main advantages for using Whole Language to teach beginning readers is that it results in better comprehension than does a phonics approach. The Panel addresses this fallacy directly. Not only does the evidence show that explicit, systematic, and early phonics results in enhanced reading skill, but also in enhanced comprehension:

“The conclusion drawn is that growth in word-reading skills is strongly enhanced by systematic phonics instruction when compared to non-phonics instruction for kindergartners and 1st graders as well as for older struggling readers. Growth in reading comprehension is also boosted by systematic phonics instruction for younger students and reading disabled students. These findings should dispel any belief that teaching phonics systematically to young children interferes with their ability to read and comprehend text. Quite the opposite is the case.” (2-94)

Unfortunately, the Panel’s 14 experts are themselves divided on many matters of importance in reading instruction. One Panel member goes so far as to publish her own Minority Review, calling the work of her fellow Panel members “unbalanced and, to some extent, irrelevant.” The disagreements result in a final report with a soothing call for “systematic” phonics (a planned, sequential set of letter-sound correspondences taught explicitly) and with an explicit approval of all types of phonics: the bottom-up method that is synthetic phonics, or top-down methods like analytic phonics, analogy phonics, and onset-rime phonics. (2-99) This makes what happens next both easy and utterly predictable: supporters of Whole Language agree to embrace some ineffective top-down phonics, and they change the name of their method to Balanced Literacy.


2000 – present: Many members of the education establishment (the ILA, the NCTE, professors in teaching colleges, many school administrators) do not react favorably to the National Reading Panel’s final report. However, the Panel’s multiple recommendations in support of systematic phonics can’t simply be ignored – many parents and legislators are clamoring for a “return to phonics.” What happens is that the name, “Whole Language,” vanishes from the education scene and from education journals. What takes its place is called “Balanced Literacy” or “The Balanced Approach.” In the UK, the functional equivalent of Balanced Literacy is called Searchlights.


Note: Balanced Literacy is described in detail in another blog on this site. Balanced Literacy is Whole Language, but now with a veneer of top-down phonics: analytic phonics and/or analogy phonics.


2005: The Clackmannanshire (Scotland) Report. The results of a seven-year study on the effectiveness of bottom-up synthetic phonics in teaching reading and spelling are published by researchers Rhona Johnston and Joyce Watson. Three training programs had been conducted with 300 children for 16 weeks, beginning soon after entry to the first year of formal schooling. For 20 minutes per day, children were taught either: (a) by a synthetic phonics program, or (b) by an analytic phonics program, or (c) by an analytic phonics plus phonological-awareness training program.


At the end of the 16-week program, the group taught by synthetic phonics were:


(a) reading words seven months ahead of the other two groups

(b) reading seven months ahead of their chronological age

(c) spelling eight to nine months ahead of the other groups

(d) spelling seven months ahead of their chronological age.


The synthetic-phonics-taught group also read irregular words better than the other groups and was the only group that could read unfamiliar words by analogy.


These 300 children were then followed through the end of their primary schooling to see if these gains persisted. They not only persisted, they accelerated. By the end of the children’s seventh year of primary schooling, the gains made in reading achievement by the children who had been taught synthetic phonics during their first year had increased six-fold, from seven months to three years and six months ahead of chronological age. The gain in spelling was three-fold, improving from seven months to one year and nine months ahead of chronological age. (The full study can be found here.)


2005: Australia publishes its own national inquiry into the teaching of reading, available online here. The study closely follows the lead of the US National Reading Panel in that it rejects Whole Language and, in its place, recommends systematic phonics. Like the NRP, it also recommends an “integrated” approach to reading instruction that includes the Big Five: phonemic awareness, phonics, fluency, vocabulary, and comprehension.


Unlike the NRP however, the Australian study lays blame for the Whole Language disaster, pointing its finger directly at a philosophy of knowledge called Constructivism:

“Essentially, the whole-language approach to teaching and learning reflects a constructivist philosophy of learning in which children are viewed as inherently active, self-regulating learners who construct knowledge for themselves, with little or no explicit decoding instruction.” (p28)
“Constructivism is a mixture of Piagetian stage theory with postmodernist ideology that is devoid of evidence-based justification for its adoption as an effective method of teaching… Too many faculties and schools of education in Australian higher education institutions currently providing pre-service teacher education base their programs on constructivist views of teaching.” (pp29-30)
“At the same time as constructivist approaches have been promoted, direct teaching methods have been overtly or covertly criticized and dismissed as inappropriate, with the suggestion that they simply don’t work and are dull and boring for learners. The message that most teachers appear to have absorbed is that all direct teaching is old-fashioned and should be abandoned in favor of student-centered enquiry and activity-based learning.” (p37)

My favorite quote from the Report, however, is this:

“In sum, the incontrovertible finding from the extensive body of local and international evidence-based literacy research is that for children during the early years of schooling to be able to link their knowledge of spoken language to their knowledge of written language, they must first master the alphabetic code – the system of grapheme-phoneme correspondences that link written words to their pronunciations. Because these are both foundational and essential skills for the development of competence in reading, writing and spelling, they must be taught explicitly, systematically, early and well.”

2006: Yet another national inquiry, the Rose Report, is published in England, available online here. Unlike the national reports published in the US and Australia, the Rose Report singles out and fully embraces only bottom-up synthetic phonics rather than the more inclusive umbrella term, "systematic" phonics:

“Synthetic phonics is the form of systematic phonics that offers the vast majority of beginners the best route to becoming skilled readers. Among other strengths, this is because it teaches children directly what they need to know, i.e. the four principles set out below, whereas other approaches, such as 'analytic' phonics, expect children to deduce them.” (section 47)

The Rose Report is quite specific about what these four principles of synthetic phonics are: “Having considered a wide range of evidence, the review has concluded that the case for systematic phonic work is overwhelming and much strengthened by a synthetic approach, the key features of which are to teach beginner readers:

1) grapheme/phoneme (letter/sound) correspondences (the alphabetic principle) in a clearly defined, incremental sequence

2) to apply the highly important skill of blending (synthesizing) phonemes in order, all through a word to read it

3) to apply the skills of segmenting words into their constituent phonemes to spell

4) that blending and segmenting are reversible processes.” (section 51)


“The sum of these represent 'high quality phonic work'… High quality phonic work is not a ‘strategy’ so much as a body of knowledge, skills and understanding that has to be learned. From work considered by this review, the balance of advantage favors teaching it discretely as the prime approach to establishing word recognition. This is because successful phonic work for word recognition is a time-limited activity that is eventually overtaken by work that develops comprehension.” (sections 52-53)


The Rose Report also devotes an entire section (Appendix 1) to a discussion of the Simple View of Reading, a topic inexplicably missing from the national reports of the US and Australia.


The Rose Report is a game-changer for England. From this point on, reading instruction in England starts to diverge from that of the rest of the English-speaking world. The main reason is that the Rose Report avoids two pitfalls:


1) It opts specifically for the only bottom-up approach to reading instruction that exists: synthetic phonics. The reports from the US and Australia, in recommending only systematic phonics, left the door wide open for top-down forms of phonics (analytic and analogy phonics) to be grafted onto Whole Language. The result: Balanced Literacy.


2) It avoids making the anodyne call for the Big Five (phonemic awareness, phonics, fluency, vocabulary, and comprehension), focusing attention instead on the Simple View of Reading. Those who understand the Simple View understand reading comprehension correctly: it’s the product of both Word Recognition and Language Comprehension. Recognizing that, for most children, Language Comprehension is already age-appropriate, Synthetic Phonics teachers place an early emphasis on decoding - the skill that makes Word Recognition possible.


2009: Modern brain imaging methods and recent advances in neuroscience are brought into the mainstream with the publication of Reading in the Brain: The New Science of How We Read by Stanislas Dehaene. While mapping out precisely what happens in the reading brain is still in its early stages, Dehaene’s book affirms three important points:

First, neuroscience verifies the Dual-Route model for converting print into sound and/or meaning.

“Two information processing pathways coexist and supplement each other when we read. When words are regular, rare, or novel, we preferentially process them using a ‘phonological route,’ in which we first convert the letter string into a pronunciation, and then attempt to access the meaning of the sound pattern (if any). Conversely, when we are confronted with words that are frequent, or whose pronunciation is exceptional, our reading takes a direct route that first recovers the meaning of the word and then uses the lexical information to recover its pronunciation… Both routes are in constant collaboration and each contributes to the specification of word pronunciation.” (p38)

Second, Dehaene’s research makes him an unequivocal proponent of using bottom-up, synthetic phonics to teach a child to read. Here’s what he says:

“The goal of reading instruction is clear. It must aim to lay down an efficient neuronal hierarchy, so that the child can recognize letters and graphemes and easily turn them into speech sounds. All other aspects of the literate mind depend on this crucial step. There is no point in describing the delights of reading to children if they are not provided with the means to get there… Considerable research converges on the fact that grapheme-phoneme conversion radically transforms the child’s brain. This [decoding] process must be taught explicitly. It does not develop spontaneously; it must be acquired. Reading via the direct route, which leads straight from letter strings to their meaning, only works after many years of practice using the phonological decoding route.” (p219)
“Only the teaching of letter-to-sound conversion allows children to blossom, because only this method gives them the freedom to read novel words in any domain they choose. It is therefore misguided to pit the intellectual freedom of a child against rigorous drill. If a child is to learn quickly and well, he must be given well-structured grapheme-phoneme instruction. The effort is real, but the payoff in independence is immediate when children discover, often with awe, that they can decode words they never learned in class.” (p227)
“Performance is best when children are, from the beginning, directly taught the mapping of letters onto speech sounds. Regardless of their social background, children who do not learn this suffer from reading delays.” (p227)

The bottom line for Dehaene is this:

“The punch line is quite simple: we know that conversion of letters into sounds is the key stage in reading acquisition. All teaching efforts should be initially focused on a single goal: the grasp of the alphabetic principle whereby each letter or grapheme represents a phoneme… Children need to understand that only the analysis of letters one by one will allow them to discover a word’s identity.” (p228)

Last, Dehaene is adamant about using only decodable text in the early stages:

“At each step, the words and sentences introduced in class must only include graphemes and phonemes that have already been explicitly taught. Reading lessons provide little room for improvisation… The words given to beginning readers must be analyzed letter by letter in order to ensure that they do not contain spelling problems that are beyond the child’s current knowledge.” If teachers do not follow this advice, “it can make children think that reading is arbitrary and not worth studying.” (p230)
 

Some Final Thoughts


With the single exception of England, the landscape for beginning reading instruction worldwide is disheartening: Balanced Literacy, lists of sight words to be memorized, three-cueing, guessing from pictures, guessing from a word’s first letter, guessing what might make sense, invented spellings, reading levels, learning styles, “predictable” books, and, of course, “discovery” learning. Many of the teachers using these techniques know little of reading science because they were not exposed to it during their teacher training.


Reading teachers are far more likely to be familiar with Constructivist authors like Ken Goodman, Frank Smith, Lucy Calkins, Irene Fountas, Gay Su Pinnell, and Jennifer Serravallo than they are with serious researchers like Jeanne Chall, David Share, Philip Gough, Linnea Ehri, Max Coltheart, and Stanislas Dehaene. Few reading teachers are conversant with such essential topics as the Simple View of Reading, Dual-Route Theory, the Self-Teaching Hypothesis, and Orthographic Mapping.


Reading researcher Mark Seidenberg, in Language at the Speed of Sight, states “there is a profound disconnect between the science of reading and educational practice. Very little of what we’ve learned about reading as scientists has had any impact on what happens in schools because the cultures of science and education are so different.” (p11) And, of course, big money is involved. “Education is a multi-billion-dollar industry involving multiple stakeholders – governments, business, educators, parents, children, taxpayers, unions, interest groups – whose perspectives and interests often conflict.” (p11).


Take a look at the cost in the picture below (copied from an internet advertisement) for a school to outfit a single grade 1 classroom with Balanced Literacy reading materials from Heinemann Publishing. Similar materials are marketed by Heinemann for each of grades K through 6. [Taxpayers: this is your money.]



In an interview with NPR, Seidenberg adds: “The Reading Wars are over, and science lost.” I think, however, this statement may be a bit of hyperbole. Science has not lost in England where, with synthetic phonics and decodable books mandated, reading outcomes continue to improve. Many reformers, myself included, are not ready to concede defeat in this century-old battle between top-down and bottom-up instruction. Given the resources available online, any reading teacher or parent wishing to educate themselves can, with some effort, do so.


According to the US Nation's Report Card, nearly two out of every three students in grades 4 and 8 do not read at a proficient level. Instead, they read at a "basic" level or they're functionally illiterate. Needless, easily avoidable suffering should be an affront to everyone. In primary schools around the English-speaking world, this type of suffering is endemic, due in large part to the top-down manner in which so many children are forced to learn the vital skill of reading. The Reading Wars cannot, and should not, be over. Reading Science, with more clarity now than it has ever had regarding reading instruction, can’t afford to lose this battle; far too much is at stake.


© Copyright, Stephen Parker (2019)

 

Reading Teachers and Parents – My bottom-up Synthetic Phonics books are available, absolutely FREE (with no strings attached) right here. I invite you to download an appropriate copy for yourself and to help me spread the word. You can also find me on Twitter @ParkerPhonics. Thank you.

 

Notes


[1] William McGuffey, The Eclectic Manual of Methods (New York: Van Antwerp, Bragg, & Co, 1885), 24-26.


[2] James Fassett, The New Beacon Primer (New York: Ginn and Company, 1921), 121.


[3] Edmund Huey, The Psychology and Pedagogy of Reading (1908; reprint, Cambridge: M.I.T. Press, 1968), 135.


[4] Arthur Gates, “Studies of Phonetic Training in Beginning Reading,” Journal of Educational Psychology, 18 (1927): 226.


[5] Rudolf Flesch, Why Johnny Can’t Read (New York: Harper & Row, 1955), 110.


[6] Jeanne Chall, Learning to Read: The Great Debate, Updated ed. (New York: McGraw-Hill, 1967), 307.


[7] Rudolf Flesch, Why Johnny Still Can’t Read (New York: Harper & Row, 1981), 5.


[8] Chall, Learning to Read, 43.


[9] Ibid.


[10] Ken Goodman, What's Whole in Whole Language (Portsmouth: Heinemann Educational Books, 1986), 37.


[11] Frank Smith, Reading Without Nonsense, 3 ed. (New York: Teachers College Press, 1997), 57.


[12] Ibid., 54-5.


[13] Ken Goodman, “Reading: A Psycholinguistic Guessing Game,” Journal of the Reading Specialist, May (1967): 127.


[14] Smith, Reading Without Nonsense, 55.


[15] David Share, “Phonological Recoding and Self-Teaching: Sine Qua Non of Reading Acquisition,” Cognition 55 (1995): 151-218.


[16] U.S. Department of Education. Institute of Education Sciences, National Center for Education Statistics. https://nces.ed.gov/nationsreportcard/reading


[17] Nicholas Lemann, “The Reading Wars,” The Atlantic, November 1997, 84.


[18] Linnea Ehri, “Grapheme-Phoneme Knowledge is Essential for Learning to Read Words in English” in Word Recognition in Beginning Literacy (Mahwah: Lawrence Erlbaum Associates, Publishers, 1998).
