Over the last several years, a major point of contention between parents and the education establishment (both federal and state) has been the issue of testing. Especially as states have responded to federal mandates by administering unvalidated assessments aligned to the Common Core national standards, parents across the country have begun, with varying degrees of success, to opt their children out of those assessments. The new Every Student Succeeds Act (ESSA) perpetuates the federal testing mandates, so the opt-out movement will continue.
But the education establishment is now colluding with Big Data to obliterate opting out. How? By promoting “embedded assessment” within the digital-learning platforms that are gradually replacing teacher-led instruction. As students interact with these sophisticated platforms, the software collects millions of data points on each child and can assess exactly what “skills” he has mastered and where he needs further training. (Modern progressive education is about skills rather than knowledge, and training rather than education.) Embedded assessment means each student’s performance will be assessed every moment, in real time, through analysis of keystrokes and perhaps even physiological reactions. And when the periodic “summative assessment” – the end-of-course or end-of-year test – ultimately disappears, so will parents’ ability to protect their children from the testing.
The concept of embedded assessment has a certain appeal. If students are being assessed continually, the argument goes, the adaptive software can adjust to feed them whatever they need to address any problems they’re experiencing. Moreover, less class time will be wasted in preparing for and administering the summative assessment. But even some players in this new world of Big Data are acknowledging serious concerns with the concept – and parents and policymakers must understand what embedded assessment really means for children and their futures.
In a recent presentation at Princeton’s Center for Information Technology Policy, Yale University researcher and legal scholar Elana Zeide discussed the troubling implications of, as she put it, “moving from human decision-making to machine decision-making” in education. The potential problems involve threats to both student privacy and individual freedom and autonomy.
Zeide explained that adopting “personalized learning” through technology will enable creation of student portfolios at a granular level. For example, the software will record not only whether the student can calculate the correct answer on an algebra problem, but exactly how his brain is working on each step of that problem. As he progresses through school, platforms such as the creepy, mind-mapping Knewton will create “knowledge maps” to show precisely what the student knows and can do, based on every keystroke he executes and (with some programs) even on his heart rate and facial expression as he does so. Zeide predicts those knowledge maps will eventually replace degrees and diplomas as credentials that open doors to higher education and employment.
The existence of such portfolios raises major concerns about student privacy. For one thing, as Zeide pointed out, this data generally isn’t covered by the federal Family Educational Rights and Privacy Act (FERPA). And Zeide acknowledged that all this “portable, interoperable, instantly transferable, and durable” data constitutes an enormous temptation for companies and researchers – “you can repurpose it for all sorts of cool aggregation and mining . . . and you can discover things you never knew were there!” Do parents want corporations and others sifting through their children’s most intimate data to discover things that should remain private?
But Zeide focused especially on the algorithms that the software will create using these millions of data points on each student – algorithms that will predict a student’s future behavior and performance. She explained that with digital training, all steps in the traditional educational process – observation, formative assessment (“quizzes” that measure how well the student is learning), summative assessment, and credentialing (awarding of diplomas or certificates) – are collapsed into one moment. And every keystroke, every action, no matter how tiny, will be memorialized in this algorithm, forever.
Under such a system, Zeide said, every student will be subject to constant monitoring and will earn an “algorithmic credential” based on literally every interaction he has ever had with the educational software. That credential could dictate what kind of higher education he qualifies for and what kind of job he gets.
The likely ramifications of this are sobering. What will be the psychological effect when a child knows he can’t erase anything – that everything he does, every mistake he makes, will be fed into his algorithm? And because all data is, as Zeide said, “decontextualized,” the computer won’t make adjustments for days when the child is sick or struggling for some other reason (things a teacher would know and take into account).
And consider the intimidating effect of this permanent portfolio that every student will now have. Will the student feel pressured to conform to the consensus on a particular topic, knowing that any dissent may come back to haunt him? Or what happens if the algorithm gets it wrong? If it mislabels him in some way? Will there be an appeal process? Appeal to whom? The erroneous or misleading data is already fixed and recorded. Is human agency therefore to be eliminated?
And what if the algorithmic data was neither wrong nor misleading at the time it was collected and analyzed, but the individual experienced a fundamental conversion from, say, unengaged slacker to motivated go-getter? Will automated systems immediately discard his application or resume on the basis of the now-outdated algorithm?
The problems of “predictive analytics” (decision-making based on algorithms) are being explored in many contexts – credit ratings, employment decisions, law-enforcement issues. Individuals who find themselves disadvantaged by an algorithm because of mistaken or misleading information can spend years – or forever – trying to escape the hall of mirrors they were thrust into. And when education is increasingly concerned with “equity,” the possibility that individuals will be labeled based on stereotypes is too real to ignore. Indeed, a stereotype perpetuated by a supposedly unbiased algorithm rather than a human being is even more difficult to overcome.
How do we prevent these problems with education algorithms? Zeide acknowledged that she has more questions than answers.
Perhaps the law could impose consent requirements – no collection or use of such data without parental consent. Zeide considers this idealistic, since it’s unlikely parents will fully understand the nature of the problem or what they’re really consenting to. Or the law could confine use of the data to “educational purposes.” But of course, that phrase can be expanded to allow almost anything. Or could the law ban collecting biometric data? Zeide expressed concern that this would interfere with services for special-needs students (although the law could be drafted to allow narrowly tailored uses for such students). But even then, the intrusive data that would be included in the contemplated algorithms goes well beyond purely biometric data. Drafting an effective law is possible, but difficult in light of opposition from the powerful educational-technology companies.
Zeide also discussed the frequently recommended possibility of giving each student control over his own portfolio, allowing him to remove it from the “silo” and convert it into a “data backpack” that he can use for his own goals. But as Zeide pointed out, that solution creates its own problems. Would universities or potential employers demand to see the portfolio? Even if the law prohibited them from asking, would the individual’s decision not to offer it voluntarily suggest to them that he’s hiding something?
Finally, Zeide raised the issue of algorithms’ effect on the nature and definition of education itself. The Big Data mindset may suggest that only what can be measured and recorded is worth knowing. If a student’s grasp of the messages of Macbeth can’t be recorded as a “skill” that should be included in the portfolio, does that mean this understanding is less important to his education? Indeed, progressive education schemes such as Common Core already minimize intangible understanding in favor of concrete skills that can be measured. Zeide pointed out that in Big Data World, questions such as this would be worked out invisibly, without input from parents or teachers.
Zeide wandered into territory that borders on the heretical for data mavens. She raised the question whether, perhaps, “less is more” – whether we should limit use of these digital tools, or preserve the “silos” so that not all data on a student is linked and easily accessible. Or (gulp) maybe we want to eliminate the digital tools altogether.
This conclusion would be anathema to foundations such as Jeb Bush’s ExcelinEd (on whose board Secretary of Education nominee Betsy DeVos served) and tech-industry-funded groups such as the Data Quality Campaign. These groups trumpet the supposed benefits of digital training and claim it can transform education (have we not learned to look askance at promised “transformations” of anything?). And for now, their argument is winning.
Encouraged and incentivized by the federal government, and overrun by education-technology snake-oil salesmen, er, lobbyists, public schools are adopting digital training at a breakneck pace. Most decision-makers for those schools have probably never considered the serious implications of this transformation. Nor have the implications been explained to parents, who are instead assured only that digital training will create unparalleled personalized learning opportunities for their children.
But it is critical that as a society we take a hard look at what the Big Data revolution means for the children in our schools – for their privacy, and their very humanity. As Big Data advances in education, parents will discover that opting out of a test isn’t enough. To protect their children, they may have to opt out of an entire system.