This is the fourth part in a four-part NaughtyBots series about the #commoncore Project: How Social Media is Changing the Politics of Education.
Researchbots, Bought, or Naught
I tend to be linear, not millennial-ear, and had to read the PDF version of the #commoncore Project report. I found the online format, lauded as an innovative interactive presentation, difficult to follow. This is not a typical format for presenting research findings, and it adds to the notion, as I have heard many put it, of this being fake research. The interactive online presentation lost me until after I read much of the PDF version and gained an understanding of how the information is organized. Is this the way future research reports will be presented? What’s next? Is this a precursor to research reports being presented as interactive online video games? Research report gamification, kind of like the gamification of education?
There are actually two #commoncore Project reports. The citation for the first report says 2015 and the citation for the second says 2017. I used the 2017 report. Having heard a comment that the first report was more objective and interesting, I thought I ought to take at least a quick look. I found that the first report does not include the theories brought up in the second report and appears to be more straightforward, presenting information not laden with so much opinion and ideology. Does the difference make you wonder why?
The first report shows a statement that says, “This project received no external funding from any source.” The second report says, “This project received funding support from the Milken Family Foundation, the Gates Foundation, and the Consortium for Policy Research in Education. The analyses, findings, and conclusions are the authors’ alone.”
For many advocates, any sense of credibility is lost upon seeing all too familiar funding sources for the project’s second report. It can be enough to make one wonder. Was this research bot bought? Did funding bring on the opinions and ideology in the second report? Would researchers compromise standards of professional ethics that guide most academic research as a result of funding? Has that ever happened? Of course that would not be the case here, because we are told, “The analyses, findings, and conclusions are the authors’ alone.”
One very involved mom observed that the whole tone of the project was different after the Gates Foundation became involved. It is hard to tell if this is a correlation or a cause and effect situation. It has made some wonder if part of the mission of the funded project was to marginalize Common Core opponents.
Research or naught? Or naughty research? You decide. The project did produce a report, but that doesn’t necessarily make it research. Advocacy reports, research for pay, anecdotal compilations, or ideological reports about education are all too often pushed and accepted in lieu of solid empirical research.
Research or project? The Consortium for Policy Research in Education published the report. Sounds credible. While the authors do refer to this as a project, they also say “our research” and once in the report say “our cutting-edge research.” So that settles it. It must be research or at least a research project.
On page 69 of the report it says:
“Using this avenue, individuals and organizations can disseminate information unvetted by formal sources. This loosening of the hold of the ‘professional’ media has led to broader reporting of activity and events, but also has the effect of increasing unsubstantiated, exaggerated, and even outright fake news stories.”
“…and identified a number of alternative online ‘news’ organizations that used the legitimacy of news to overtly push a particular ideological slant.”
and on page 70 it says:
“These strategies show that invested parties are making a concerted effort to disseminate information in intentional ways with specific goals.”
When it says “individuals and organizations can disseminate information unvetted by formal sources”, does anyone think of the NGA, CCSSO, CCSSI, wealthy individuals, major corporations and foundations $upporting and promoting the Common Core? I didn’t think so. Why would you?
These statements raise some other questions. Does the mainstream media ever publish information from proponent press releases without substantiation? From opponents? We do have an unbiased, objective mainstream media… don’t we? We would like to think so, but all too often we see the so-called “professional” media publish unverified information in an unfettered manner. I wonder if that is okay as long as information and opinions are stated as fact, false or naught. The way Twitter seems to be portrayed as manipulating and spreading misinformation, you would think the mainstream media is pure, never manipulating or spreading misinformation.
Here’s a mix of taco filling comments that may, or naught, be related to research methods. The taco didn’t come out of nowhere—it came from page 37 and I really wanted to fit it in somewhere and this seemed as good a place as any.
The project undertakers appear to assume the adoption of the Common Core has been successful, yet successful adoption has not been defined so we don’t really know what constitutes success in their eyes.
While it is understood the project focused on central actors, transmitters and transceivers, using Twitter, it does not take into account the possibility that many of those actors are also part of non-Twitter networks and may very well influence and be influenced by actors in those other networks. It is hard to imagine Twitter as the only source of information for these central actors or anyone else.
I have heard people question why Instagram, Facebook, or other social media were not included in this project. I can’t speak for the project undertakers, but it makes sense to me. Many social media platforms are not public. Tweets are publicly accessible and can be searched, stored, and mined to gather quantifiable data that can be confirmed and verified.
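To make that concrete, here is a minimal sketch of the kind of quantifiable mining public tweets allow: counting hashtag frequencies across a collection of stored tweet texts. This is my own toy illustration, not the project’s actual method, and the sample tweets are hypothetical.

```python
from collections import Counter
import re

def hashtag_counts(tweets):
    """Count hashtag frequencies (case-insensitive) across tweet texts."""
    counts = Counter()
    for text in tweets:
        counts.update(tag.lower() for tag in re.findall(r"#\w+", text))
    return counts

# Hypothetical sample tweets, not data from the project.
sample = [
    "Parents push back on #commoncore testing",
    "#CommonCore rollout continues in more states",
    "New standards debate heats up #edreform #commoncore",
]

top_tags = hashtag_counts(sample).most_common(2)
```

Because tweets are public, counts like these can be re-collected and verified by anyone, which is part of what makes Twitter a workable data source for a project like this.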
It says in the project report that the work was peer-reviewed. I wonder what that peer review process looked like. Would this “research” report survive the peer review process to be published in a reputable research journal? We may never know.
Miscategorization or mischaracterization? While I realize people, or tweeters, may need to be placed in categories for any number of reasons, in this case research, questionable or naught, I question the loose, broad position-type categories used, like education professional. Initially, I thought those conducting this activity did not provide clear definitions of the position-type categories they used or how tweeters were assigned to them. Then, on page 26, I found where it does define, loosely: “Education professionals were individuals who worked in, commentated on, or were otherwise part of the education profession.” Wow! I know lots of people who commentate on education and the profession, but they wouldn’t consider themselves education professionals. Anthony Cody as an education professional? No problem. He had a career as an educator, mostly as a classroom teacher. I don’t see Michael Petrilli or Neal McCluskey as education professionals even though they commentate. Both are foundation executives or officers and don’t seem to be working in, or part of, the education system itself. Would they even be considered education bureaucrats? Are any classroom teachers across the country offended by this categorization? I know of at least one group outside education that was surprised to be categorized as being a group inside education.
To start this section, let’s consider two definitions of theory.
- a coherent group of tested general propositions, commonly regarded as correct, that can be used as principles of explanation and prediction for a class of phenomena
- a proposed explanation whose status is still conjectural and subject to experimentation, in contrast to well-established propositions that are regarded as reporting matters of actual facts
While it is not clear, it appears that the project undertakers may be trusting unproven theories, or at least one. At any rate, they do bring up and rely on some theories in their work. Would using unproven theories lead people to believe they are indisputable fact? It may be questionable which definition applies to the theories used in this project. Let’s explore.
On page 46, the report says, “David McClelland’s seminal research in needs theory—a motivational model of human behavior created in conjunction with the Thematic Apperception Test.” The project relies heavily on this theory. The statement says it is a motivational model. That doesn’t mean it is a proven theory. As I read more, I thought it sounded like Maslow’s Hierarchy. Maslow’s theory has made sense to a lot of people over the years, but it lacks scientific support. Definition 2 would apply to Maslow’s theory. What about McClelland’s Need Theory, focusing on achievement, power, and affiliation? Definition 1 or 2? One statement indicates there is more empirical evidence to support McClelland’s Need Theory; another notes, “McClelland’s theory is criticized for its lack of predictive power as it relates to entrepreneurship.” While there may be some empirical evidence to support this theory with regard to entrepreneurs and possibly others, is there evidence or reason to believe it applies to the study population considered in this project? Definition 1.5 plus or minus .1 or .2, maybe or naught?
The Theory of Social Capital comes into play in the project big time on page 13. Definition 2 definitely applies, and I wouldn’t rule out definition 1. There are lots of studies and information available about this theory, and there seems to be evidence to support it beyond conjecture, but it is hard to tell. The linked video says there is no single social capital theory. It also says, “there’s a lot of contradicting and confusing theory out there trying to explain what social capital actually is.” It’s uncertain whether there is agreement on how social capital is defined, as shown by the 20 definitions provided on a Definitions of Social Capital web page. With or without empirical evidence to support this theory, it does seem to provide a good perspective to use for this project.
The project makes use of James Pennebaker’s lexical tendencies work. There is some indication this work is “more accurate than a polygraph.” It seems like fascinating work, especially with the now available technology. Interesting names of a book and a paper found in the references and online: The Secret Life of Pronouns: What Our Words Say About Us and Counting Little Words in Big Data: The Psychology of Communities, Culture, and History.
We or I (or you choose the pronoun) can have some fun with some of those little words and pronouns, especially if the little words are pronouns. Choose the pronoun depending on how you want to be analyzed. On page 42 of the report, it says, “…whereas the happy poet, using we, finds distance between themselves and their feelings and they share their experiences with others.” We are told happy people use “we” words rather than “I” words. On page 49 it says, “Specifically, dishonest people employ fewer I words, more 3rd person pronouns, fewer number words…” So what should we infer from this? Should we infer that dishonest people are happy, or that happy people are dishonest?
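For fun, the little-word counting behind this kind of analysis can be sketched in a few lines. This is a toy illustration of the general idea, not Pennebaker’s actual tool (the real word dictionaries are far larger than the hypothetical handfuls used here).

```python
import re

# Tiny, hypothetical word lists; real lexical dictionaries are far larger.
I_WORDS = {"i", "me", "my", "mine", "myself"}
WE_WORDS = {"we", "us", "our", "ours", "ourselves"}

def pronoun_rates(text):
    """Return the share of 'I' words and 'we' words among all words."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid dividing by zero on empty input
    i_rate = sum(w in I_WORDS for w in words) / total
    we_rate = sum(w in WE_WORDS for w in words) / total
    return i_rate, we_rate

i_rate, we_rate = pronoun_rates(
    "We share our experiences, and I keep my doubts to myself."
)
```

Run it on your own tweets and choose your pronouns accordingly, depending on how you want to be analyzed.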
The four words that make up my concluding thoughts will fit within Twitter’s 140-character limit, but being a twittertard, I will not tweet them. My concluding thoughts seem to apply to life in general and may, or naught, apply to this article or the #commoncore Project. My favorite quote from Douglas Adams presents my concluding thoughts better than I could: “Reality is frequently inaccurate.” Have a diurnal anomaly.
NaughtyBots Part 4: Researchbots, Bought, or Naught, Theoretically Speaking?