Imitative AI is likely to have terrible effects on education, not because it will be used as a replacement for student writing (it is unlikely to be consistently good enough for that to be a concern, and there are things that can be done to discourage its use) but because it is a direct attack on years of received wisdom about how to research online.
A thread on Bluesky shows a teacher running headlong into this problem. Her student confidently told her something that was completely untrue. When she pointed this out to him, he told her that he had researched the issue and that she was incorrect. She asked to be shown this research, and he went right to a ChatGPT prompt. She was completely unable to convince him that ChatGPT was wrong, and she was unsurprised by this. For decades, people have been taught that online research is reliable if done correctly from reliable places. Why should he believe one person when, his entire life, he has been taught that the wisdom of the search engine crowd was more reliable?
Imitative AI companies take advantage of this societal belief. They hype their tools as being better than humans at this or that, even when a closer look reveals that the results are not all they seem. They make their imitation machines look like traditional search engines, and Bing and Google force AI results into their search pages. They minimize sources, when they aren't inventing them, and their responses treat Reddit jokes as being as reliable as encyclopedias. They know that search engines have years of goodwill built up with the general public, and they are using that goodwill in an irresponsible, but potentially profitable, manner.
It may seem odd today, but when the Internet was new, using it as a source of research was frowned upon or outright forbidden. It took years for a consensus about how to effectively use online sources and research tools to coalesce. And that is the danger: now that those procedures and processes have been made useless, it will take time to unlearn them and to learn new methods. Two or three years to shake all this out may not seem like a lot, but it means that students will have spent almost all of their college, or worse, high school, years learning to trust bullshit.
Because imitative AI is, at its core, a bullshit machine. It has no independent understanding of the world. Where it isn't just copying things, it is calculating what comes next based on its training data. Because the math requires an enormous amount of training data, it cannot be trained only on reliable material. Taken together, these facts mean that imitative AI cannot ever solve its bullshit problem: these systems will always produce some unacceptable amount of bullshit. And because they produce said bullshit in a manner that looks authoritative, based on how we have conducted research online for years, they poison our collective knowledge.
Anyone arguing for the use of imitative AI in education must deal with this bullshit problem. It is not just a matter of wrong answers, although that in and of itself is a serious problem. It is the fact that these systems are warping and twisting students' ability to conduct meaningful, accurate research. The use of imitative AI is teaching students to accept bullshit in the place of expertise. That's not education in my book, and anyone who presses for imitative AI adoption in education needs to explain why it is in theirs.