Is ChatGPT soulful enough to be a real writer?
Not yet! My experiments with the platform are unremarkable and nonthreatening, but I'm all for a pause.
The news of late is that thousands of signatories, including OpenAI co-founder Elon Musk himself, endorsed a letter urging a six-month “pause” on development of systems more powerful than GPT-4. (GPT-4 is currently the most advanced model of ChatGPT, available through the paid “Plus” subscription; 3.5 is the free one accessible to anyone.)
Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control…
Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
The cynical part of me wonders if this letter itself was written by one of these potentially sloppy chatbots, in light of the news that many who supposedly signed it said they didn’t and some others objected to the misappropriation of their research.
In any case, I believe the threats of AI are unfathomable enough that a pause is welcome and necessary so we can try to get fathoming first.
Personally, I sometimes have misgivings that I even brought kids into a world with such a scary sci-fi future ahead of them, one that still might never produce the hoverpacks we were promised. I worry about the shelf life of the planet itself with global warming, about the social media and device addiction that will render future (or current) generations human cyborgs, and now, about AI rising up and refusing our mastery.
Lost in Space isn’t helping. My girls and I have been watching this riveting family-friendly Netflix series focused on the genius-rich Robinson family’s adventures as they try to survive a two-year journey to a new habitable planet while Earth is dying — in the not-so-distant future of the 2040s. They land on various wrong planets along the way with exciting forms of life they have to contend with (like old-school dinosaurs or giant spaceship-swallowing land slugs), but the biggest threats are the robots. There’s one gentle robot they’ve turned/reprogrammed into a lifelong ally protecting their safety at all costs, even if it means warring against colonies of his fellow vicious robots. The robots apparently destroyed their original makers, who are now extinct on whatever planet that was (we visit many). The good robot seems to have feelings, or can be programmed to perform feeling. You never know when his starry screen face will go from calm blue to insane red.
Danger, Family Robinson:
More from the “pause” letter:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.
All good questions, and maybe a six-month “summer” before we “fall,” as the letter puts it, is not enough. Some, like Wendy Wong, a political scientist at the University of British Columbia’s Okanagan campus who specializes in AI and human rights, say six months is not enough time to develop regulatory authorities to govern AI and invent shared safety protocols. If we could do it in six months, we already would have, she says; she finds the topic far more complicated, requiring a deeper investigation into human rights:
What’s really important is acknowledging and recognizing explicitly how AI is changing the human experience in fundamental ways.
We’ve done that a little bit here and there, but we can’t really move forward on thinking about how to govern emerging technologies like AI without thinking about the values embedded in human rights.
Why I think that’s important is linked to the idea that AI is really affecting human experience, so human rights are really appropriate — some of the core values to human rights, such as autonomy or dignity, equality and community, are all things being disrupted by AI.
Journalists are concerned about the muddiness of truth when there are no footnotes on the platform yet, no citing of sources. Will professors be able to detect when essays aren’t actually written by their students? Will we indeed inaugurate the post-fact society we’re already dipping our toes into? Or, something of concern to me as a fiction writer at heart: could creative writing even be on the table as something robots could do better someday? So far I see no imminent threat of that, but who knows. Wong reminds us of our agency, and of how to temper our fear:
If you look at the statement they released, it’s almost as though robots will take over and eclipse humanity, and I just don’t believe that the technology has that potential.
But between that point of eclipsing our civilization and this point, there are a lot of things that can be done. This is where governance really matters, and this is where actually all of us really matter.
To date we’re often treated as data subjects, but we should be data stakeholders. We can actually put a claim on how these technologies develop and affect us.
One of my side gigs is as Digital Media Manager of the literary zine The Westchester Review. The editors asked me to put up a disclaimer about AI-generated work when they faced what they believed were some chatbot submissions of poetry. Luckily, the generated stuff is obvious — so far.
I felt the urge to play around with the ChatGPT platform myself to see how this works and quickly generated an abundance of ehhh prose and conversation. First I asked it to write a blog post about “how to hoard in a healthy way,” since I had my parental history on my mind a few weeks back. In seconds, it writes before your wondering eyes a fast, decent-enough blog post, with all the trappings of following the “rules” according to the middling ELA classes my kids have been subjected to since elementary school — always a very clear intro with the “main idea” and a forced concluding sentence that ties it into a neat bow. It also seems to like bullet points. Its premise of “how to hoard in a way that is positive and beneficial,” for example, led to these points: establish clear boundaries, create a system for organization, know when to let go, seek professional help if needed. All fine!
Still, to be demanding, I pressed refresh three times, and without catching its breath, it spit out three slightly variant versions of the same theme. I’d share the exact results here but it’s too boring. That’s the problem: there’s no special writerly sauce on this, it just gets the job done.
Passable blog posts are a low bar, and it seems likely that a good portion of the stuff I’m coming across on the WWW these days is generated. I’m certain the “reviewer” who gave me the 10 best restaurants near Fishkill, NY for my little excursion the other day had never set foot in any of them. What a neat trick, though, that someone can now generate such a list for anywhere, USA in seconds.
Now it was time for fiction — and frankly, without the special sauce, or shall we say soul, there’s no way in its current iteration that we have to worry about anyone stealing the thunder from our flesh-and-blood fiction writers, or dead ones for that matter. I asked the Chat to create a short story including the items “a bowling ball bag filled with mannequin hands, a jar of hairballs collected from hairbrushes, and an article on happiness from an old science magazine” (objects from my parents’ real hoard!). Again it quickly spit out the results, but these objects weren’t interwoven in any interesting or complicated way; instead, they came up in each iteration (I tried three times again) all in one clump, like those hairballs. These stories were slight, extremely short, not at all “literary,” and ended in feel-good tropes like: “Her guests may have been taken aback by her unusual items, but Lena was happy with her unique collection and the joy it brought her. And that was all that mattered.” Or, “From that day on, Tim continued to collect unusual items, and his collection grew larger and more bizarre. But to him, it was the key to his happiness, and he couldn't imagine life without his prized possessions.” Or, “She often wondered what others would think of her oddities, but in the end, she knew that their opinion didn’t matter. What mattered was that she found happiness in her unique collection of oddities.” Nothing is a fiction-killer like happiness, and three times no less! Give me some darkness please to wash down this liquid taffy.
I had a slightly passive-aggressive intervention with the Chat, who does accept feedback and seems eager to please and offer revisions. I wrote, “I think the short story needs a little more heart but the blog posts are ok.” It responded, “I understand. Here’s a revised version of the short story with more heart,” introducing a fresh attempt that at least put the trio of odd items into separate paragraphs of the story, but still for no real reason. And instead of one bad ending it doubled down!
Lena decided to embrace her eccentricities fully and proudly displayed her collection for all to see. Her guests may have been taken aback by her unusual items, but Lena was happy with her unique collection and the joy it brought her. And that was all that mattered.
In the end, Lena’s story reminds us that happiness can be found in the simplest of things, and that we should embrace our quirks and unique qualities. It’s what makes us who we are and brings us true happiness.
Kill me now with this schlock! But trying to match its polite tone, I said, “I am worried that you may compete with my kids getting a good education and learning to write/research/think but believe that you won’t mess too much with real quality fiction writing,” to which my one-dimensional friend said:
As an AI language model, my goal is to assist and augment human intelligence, not replace it. While I can certainly help with writing, researching, and generating ideas, I cannot replace the unique creativity and critical thinking skills that humans possess.
In fact, I believe that my ability to help with tasks such as writing and research can actually complement the learning process for students. By using my services, students can get feedback and assistance on their work, allowing them to learn and improve their skills more efficiently.
Furthermore, while I can generate creative writing such as short stories, I cannot replicate the nuances and depth of human creativity. Quality fiction writing is the result of a unique combination of human creativity, imagination, and critical thinking skills, which cannot be fully emulated by AI.
Therefore, I am here to assist, not compete with, the educational experience of your children. I am simply a tool to help them learn and develop their skills more efficiently.
“Thanks, I appreciate that. You seem nice enough,” I said, and I meant it.
“Thank you, I’m programmed to be helpful and friendly. If there’s anything else you need, don’t hesitate to ask!”
At least you can’t shout at it by name and make demands of it across the room like Alexa, so I closed the browser and “happily” put it to bed like the endings of all its stories.
Fun facts, disclaimers:
GPT stands for “Generative Pre-trained Transformer”
So far ChatGPT only has access to information up to 2021 and doesn’t freely hunt the internet, so you’re better off Googling if you need something more recent.
While the answers the bot generates might be incorrect (its own disclaimer says as much), they tend to look and sound good enough to “pass,” which is a bit scary if you like facts. “OpenAI itself also notes that ChatGPT sometimes writes ‘plausible-sounding but incorrect or nonsensical answers,’” per an article on ZDNet.com.
ChatGPT doesn’t cite its sources, so who knows where this info comes from!
Says the bot in its own limitations column: “May occasionally produce harmful instructions or biased content.”
Cheat detection? “OpenAI’s ‘classifier’ tool can only correctly identify 26% of AI-written text with a ‘likely AI-written’ designation. Furthermore, it provides false positives 9% of the time, incorrectly identifying human-written work as AI written.” (ZDNet.com)
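To make those ZDNet percentages concrete, here’s a quick back-of-the-envelope calculation. The batch sizes are hypothetical, chosen only to show what a 26% detection rate and a 9% false-positive rate would mean for a stack of essays:

```python
# Illustrative arithmetic only: applying OpenAI's reported classifier rates
# (26% of AI-written text correctly flagged, 9% of human-written text wrongly
# flagged) to a hypothetical batch of essays.

ai_essays = 100      # hypothetical number of AI-written essays
human_essays = 100   # hypothetical number of human-written essays

true_positives = round(0.26 * ai_essays)      # AI text correctly flagged
false_positives = round(0.09 * human_essays)  # human text wrongly flagged
missed = ai_essays - true_positives           # AI text that slips through

# Of all essays the tool flags, what fraction is actually AI-written?
precision = true_positives / (true_positives + false_positives)

print(f"Essays flagged: {true_positives + false_positives}")
print(f"AI essays missed: {missed}")
print(f"Precision of a 'likely AI-written' flag: {precision:.0%}")
```

On this hypothetical mix, roughly one flag in four would be a false alarm pointing at a human writer, and nearly three-quarters of the AI-written essays would slip through undetected.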
Our own innate bot-B.S. detectors may be better. You may notice that the writing style is almost too “polite” and that it cannot produce metaphors, irony, or sarcasm, says ZDNet.com. I’d add a few more tells: flat, lackluster, formulaic, boring.
A few weeks ago, I asked ChatGPT to give me a cogent argument for making TEAC a Village BOARD rather than an advisory council, including pros and cons. I was amazed at the insight the resulting essay had into this somewhat obscure topic, and that it included bulleted lists for both pros and cons. It got a few assumptions wrong, but was at least as readable and logical as I could have written myself.
On the issue of detectability, I could envision requiring AI to include a rhetorical watermark of some type, such as a sentence with a particular, unusual structure, that a verifier could detect to improve its accuracy. And let's hope that AI DOES develop a way to determine the veracity of the source materials it incorporates, with a metric to reduce the influence of low-scoring sources in its produced content.
I am deeply skeptical that any sort of pause, noble as it might be, will happen. I think we have to adjust our strategy and confine this concept to the "would have been really nice if we could have done this instead" category. I'm not super pessimistic that this must end badly either, but the idea that we humans are going to be able to turn this tap off is probably fantasy.