How Dumb Do We Want Prospective Teachers To Be?


Not much smarter than the dumbest students they will teach, it seems. But the exact answer will depend on the “research” read by those few who still read. If this National Center on Education and the Economy (NCEE) publication is accorded the status of research, they may accept one of its many internal (and misleading) conclusions:  “…there is mixed evidence on the number or type of courses a teacher takes and his or her performance in the classroom” (p. 75).

Its authors are suggesting that we really don’t know if teaching ability depends on the kinds of courses and how many of them a teacher takes, so they have no clear advice to give on whether we would have better elementary school teachers if they took any or more academic coursework in the subjects they taught.

Actually, the NCEE assertion was poorly stated, and the evidence isn’t mixed. What is mixed are the kinds of studies that were combined for an analysis addressing the wrong question. The evidence will be unclear, if not misleading, when all studies of K-12 teachers’ academic background are lumped into one analysis regardless of whether the teachers taught elementary or high school, and when the analysis relates background courses to teaching skills rather than to student achievement.

It has long been obvious that one can’t teach what one doesn’t know. That is why teacher licensing began many decades ago as an effort to ensure that prospective teachers understood the subjects they were going to be legally licensed to teach. Education schools quickly objected that licensure test scores weren’t related to teaching ability. Quite right. They didn’t correlate because licensure tests of subject knowledge weren’t designed to predict teaching skill. They came into being to assess whether the test-taker had the subject area knowledge needed for teaching the range of students at the grade levels specified by the license (see Ann Jarvella Wilson’s thesis-based paper, ED 262 049, on the history of teacher licensure tests).

That didn’t stop education school faculty from criticizing teacher licensure tests of subject area knowledge on spurious grounds. Unfortunately, the irrelevant criticism did change the tests; they were watered down in content demand and came to feature pedagogical items, especially at the elementary level. However, the public was simply told that licensure tests didn’t predict teachers’ teaching skills and were thus useless. The public wasn’t told that these tests had a different purpose, and that one does not use a knowledge test to predict pedagogical skill (even pedagogical subject knowledge), especially since there was and is no consensus on what good teaching skills look like. The public wasn’t told that real tests of subject knowledge could be useful for the purpose for which they were constructed, and that the more prospective teachers knew about a subject, the more students would learn in that subject.

But if one looks only at studies of the relationship between students’ academic performance (not teachers’ skills) and the college math and science courses that high school mathematics and science teachers have taken, there is a correlation. See the text on p. 13 here.

And it turns out that when one looks only at studies of those who teach math in K-8, there is little or no relationship. Why? We don’t know, because education researchers haven’t tried to find out the math content of the math or methods courses future K-8 math teachers were required to take, or what qualifications their education professors had for teaching the math or science methods courses those future teachers were required to take. We don’t know what high school math or science courses K-8 teachers took when they were in high school themselves, or what grades they got. We don’t know whether they took few or no math or science courses in college. And if they took special college math and science courses designed for future teachers (like “Science for Poets”), those courses may have had so little content that future nurses and engineers were not allowed to take them for credit.

But Chad Aldeman and Ashley Mitchel, authors of “No Guarantees: Is it Possible to Ensure Teachers Are Ready on Day One?” weren’t interested in the fact that students learn more from a subject-knowledgeable teacher (e.g., if one looks at studies of the students of teachers who had to take math or science courses in order to get licensed as math or science teachers in grades 9-12). It seems Aldeman and Mitchel had a different agenda for their report, issued in February 2016 by Bellwether Education Partners. This organization was funded by the Bill and Melinda Gates Foundation to develop Common Core-aligned test items, to be used in a variety of Common Core-aligned tests for, it seems, teachers as well as students.

The agenda is explicit in the final section:  “…If the Common Core-aligned assessments uncover consistent variations among preparation programs, it will be easier to know how to improve teacher preparation pathways… (p. 27).”

Their agenda, seemingly, was to promote use of Common Core-aligned test items (to tell us that coursework in “pedagogical content knowledge” is needed to prepare future teachers). It was not to note that, since research already indicates the benefit of mathematics or science coursework for those who teach mathematics or science in grades 9-12, requiring mathematics coursework of those who will be licensed to teach it in a self-contained elementary classroom is probably a good idea. However, readers must ask whether there is any need to take a Common Core-aligned test to find out what common sense alone has told intelligent educators for centuries. Nothing replaces actual coursework in “content knowledge,” whether in high school or college.

Otherwise, why bother going to college? Indeed, why should we require prospective teachers for K-6 to get a college degree? Many countries don’t. (But they do expect prospective primary grade teachers to have taken strong academic courses in high school.) Based on the studies they have looked at, Aldeman and Mitchel also recommend (p. 8) that, since (as they misleadingly conclude after posing the wrong question) licensure has no relationship to teaching ability, we should let unlicensed people teach and be evaluated by their local school district on whether they should get a license. In other words, there would be no licensure tests at all to determine whether prospective teachers know enough mathematics to teach it. And the criteria for awarding a teaching license would include success in teaching to the same Common Core-aligned math tests the candidates themselves had passed. This is a circular system, and it is now being promoted for evaluating reading comprehension itself.

As an Education Week reporter has already noted, “Deep reading comprehension refers to the process required to succeed at tasks defined by the Common Core State Literacy Standards, as well as to achieve proficiency on the more challenging reading tasks in the Program for International Student Assessment (PISA) framework.” The reporter took this circular definition directly from the “research” study she looked at.

It in turn was promoting something called Global Integrated Scenario-based Assessments (GISA), computer-based assessments developed by Educational Testing Service (ETS) that use scenarios, technology, and reading strategies to motivate students. GISA is described by the reporter as “a theoretically based measure designed to reflect an updated understanding of the construct of reading comprehension.” Apparently, the definition of reading comprehension has been “updated” to mean the results of a test claiming to assess it. Neat!

That may be why the NCEE report on the training of elementary school teachers can say towards the end, on p. 74: “…content courses should be aligned to the level of the curriculum being taught.” So, how much will prospective primary grade school teachers ever learn, if teacher education policy makers read Common Core-aligned research?
