Does it feel like end times to anyone else today?
I don’t believe in “end times” – or anything at all supernatural – but I could feel the hairs on the back of my neck standing up this morning as I listened to MSNBC’s report on the ChatGPT chatbot’s capabilities.
As a communications manager, I write for a living. I churn out newsletters, monthly bulletins, website copy, professional bios, slide presentations, and training manuals. I’m also the editor for my leadership team, reviewing their work and correcting grammar, etc.
I’m good at my job. I love it. I am proud to be able to say I’ve put my B.A. in English/French Language and Literature to good use! So much for the “I Have a B.A. In English: Do You Want Fries with That?” t-shirt that Da threatened to buy me after I graduated.
So today’s report that Microsoft will invest another $10 billion (after an initial investment of $3 billion) in OpenAI, the company behind ChatGPT, is more than a little unsettling, just for personal pocketbook reasons. I mean, if I were younger, I’d be terrified of losing my job to an AI generator.
So what is ChatGPT? Well, per Wikipedia:
ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models, and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques.
ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. Its uneven factual accuracy was identified as a significant drawback. Following the release of ChatGPT, OpenAI was valued at $29 billion.
ChatGPT is a chatbot: a computer program that can generate content that plausibly sounds like a human wrote it. This morning on “Morning Joe,” the segment on the chatbot featured a professor and a reporter testing the tool by getting it to generate a “Morning Joe” Scarborough interview with President George Washington, who had been whisked to the present via time machine. The resulting script was actually… plausible. As though written — or spoken — by real flesh-and-blood human beings.
So — that’s scriptwriters out of a job, maybe?
And students will rush to use AI to write papers and take tests – we humans like a shortcut, right? In response, school systems are already instituting bans on using the technology to produce required coursework. In Seattle, where I live, the school district has already blocked ChatGPT and several other systems on school WiFi and school-issued devices.
Reporters probably shouldn't worry yet — but folks who research the news and write analysis may find themselves on the unemployment line in a few years.
Who else is headed for the dustbin of history? Fact checkers? People who read and analyze or digest anything written? Paralegals? Bloggers on Daily Kos?
Those are all big considerations that I hope someone with power is thinking about. But I think there are also larger reasons to be worried that we’ve let a cat out of the bag that ought not be released into the wild yet.
Yesterday, the very serious journal Nature posted an editorial summarizing their guidance for researchers submitting papers. This follows their reporting last year that “some scientists were already using chatbots as research assistants — to help organize their thinking, generate feedback on their work, assist with writing code and summarize research literature.”
Yesterday’s editorial states:
ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. It has produced research abstracts good enough that scientists found it hard to spot that a computer had written them. Worryingly for society, it could also make spam, ransomware and other malicious outputs easier to produce. Although OpenAI has tried to put guard rails on what the chatbot will do, users are already finding ways around them.
Pass medical exams – okay, fine. After all, isn’t everyone a trained diagnostician these days, what with the advent of Google and WebMD? Answering questions by filtering through a body of established knowledge seems fairly tame. And one would imagine that no one can yet get through medical school without actually learning the material. Medicine is, after all, an applied science.
But – do research? New research? And generate computer code?
It’s that direction that scares me. The direction that leads to research not done by a human mind, and a cohort of young people perhaps not fully developing their learning and critical thinking skills, and even, if I am to be honest about how much tin foil I am wearing on my head right now, AI-generated code run amok? To do… what? Perhaps not what the requestor anticipated?
So I wonder: who is designing and testing these AI systems? Are there guardrails of any kind in place? Are the developers working hard to correct biases and train these new systems to fact check the information they are using to produce content? Or are our fearless tech overlords just champing at the bit to increase their bottom line and market share, without much thought about pernicious or damaging downstream consequences?
At the very bottom of this piece in the New York Times, we read:
But the new A.I. technologies come with a long list of flaws. They often produce toxic content, including misinformation, hate speech and images that are biased against women and people of color.
Microsoft, Google, Meta and other companies have been reluctant to release many of these technologies because they could damage their established brands. Five years ago, Microsoft released a chatbot called Tay, which generated racist and xenophobic language, and quickly removed it from the internet after complaints from users.
It strikes me that “after complaints from users” is not the time to be noticing that your chatbot is a fucking racist troll. Perhaps that might have been part of the conversation during the development phase, no?
I’m worried. I don’t know enough about AI to fully understand all of the reasons for this creeping fear, so perhaps some of the more knowledgeable folk here can help me out in the comments – either by talking me down, or by Imagineering some ways this could all go horribly, hideously wrong.
Oh! And yes, Louisiana and Texas got walloped with some winter tornadoes.
And the Earth’s inner core may have stopped spinning relative to the surface and could start going in the opposite direction. We’re probably not all gonna die, though.
End Times? Who is to say?
______________