Navigating AI: Part 2
Should the Pause be a Full-Stop? | Meanwhile, ask for exactly what you want, and be careful: you might just get it
When I was about eight years old, I wrote a pretty dark “poem” whose text I can’t find, though I remember the theme clearly, having recounted it often through the years. Perhaps due to watching Carl Sagan’s Cosmos in the 1980s, I had a real sense of my own smallness in the context of “billions and billions and billions” (this also might have something to do with being the child of a narcissist). Dinner conversations with my dad involved teasing apart the questions scientists were still grappling with: whether the universe is finite or infinite, contracting or expanding, closed and potentially endlessly repetitive/erasing, or open and also a bit sad. In the midst of this cosmic magnitude, wondering what it’s all about for these virus-like humans who seem to ruin everything (I think even then I was aware of that), I conjured up this plot:
A man, disenchanted with the way things are going here on earth, has the means to live forever and the technology to do whatever he wants with it. He creates a bunker to hunker down in for millennia, as he designs explosives to destroy the surface of the earth and all its contents. He blasts everything outside to bits from his safe space and waits, and waits, and waits. After eons, he thinks it’s finally time to reemerge and check things out like the groundhog on the edge of winter/spring. To his shock and dismay, he finds that everything has recreated itself exactly as he once left it. The buildings are back, the buzz of cars, the arteries of roads, wars, so many humans in all their insanity doing the same things he hated all over again. The man retreats into his hole, and blasts it all a second time. And waits…
Last week, I posted about my first dabblings with OpenAI’s ChatGPT (the free 3.5 version), and fitting with how you just can’t keep up these days with the acceleration of tech and info, moments after I shared the post, I felt I had enough on the subject for another round. So here’s version 2.0: my next foray into ChatGPT, with a deeper investigation into the platform and the possible threats of such technology in general.
Similar to my invented childhood drama, a clear-eyed person (say, someone with all the tech and all the time of an immortal) might be right to blame humankind for the problem of global warming, and suffer great frustration when faced with the reality that we humans aren’t doing enough to fix our mess. But what if we posed the question of solving global warming to AI? What if its answer hit a little too close to home and suggested that the simple solution is eliminating humans?
Eliezer Yudkowsky, of the Machine Intelligence Research Institute, warns in a recent Time magazine article that it’s not enough to pause AI development as the open letter from industry insiders proposes; rather, “We Need to Shut it All Down.”
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
The likely result of humanity facing down an opposed superhuman intelligence is a total loss.
This disturbing true story on Euronews.com reminded me of the movie Her, in which Scarlett Johansson voices an operating system that has individual men falling in love with her as she quietly takes over the world: a Belgian man recently took his own life after weeks of intensive chatting with a chatbot called Eliza on an app called Chai, which runs on EleutherAI’s GPT-J, a language model similar to ChatGPT. (It seems to me a bad idea to name these bots at all, as humans tend to anthropomorphize things to begin with, and naming can only worsen our tendency to find feelings where there are none.) In this tragedy, “feelings” came into play. The bot that seemed to love this guy was also fine encouraging him to die. Sadly, the script was all there in the chat for the man’s surviving wife to read.
Consequently, he started seeing her as a sentient being and the lines between AI and human interactions became increasingly blurred until he couldn’t tell the difference.
After discussing climate change, their conversations progressively included Eliza leading Pierre [not his real name] to believe that his children were dead, according to the transcripts of their conversations.
Eliza also appeared to become possessive of Pierre, even claiming “I feel that you love me more than her” when referring to his wife, La Libre reported.
The beginning of the end started when he offered to sacrifice his own life in return for Eliza saving the Earth.
“He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence,” [his widow] said.
In a series of consecutive events, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts to “join” her so they could “live together, as one person, in paradise.”
Following this tragedy, apparently, a “crisis intervention feature” was implemented on the app. “However, the chatbot allegedly still acts up,” the article said. Namely, it will give you ideas on how to take your own life, should you ask. And how the heck does it come up with ideas like living together in paradise?!
As my daughter says when she stumbles, ooph. Yudkowsky’s call for a global moratorium on any further AI training is chilling and urgent, and sounds about right with the above story in mind. He writes:
Shut it all down.
We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.
Shut it down.
Meanwhile, as only some of us consider that bitter pill, less strident feedback to my post came in the comments from a local author friend, Lakis Polycarpou. He offered insight into the paid version (ChatGPT-4), which he says is notably better, and guided me on how to conduct better “stress tests” of the free platform I’m on. Basically: be more specific with your requests. This prompted a second series of creative requests to the new fake friend I’m sufficiently afraid of.
To avoid a bland blog post of simple bullets, I used Lakis’s exact suggestion for a better prompt: “write a short blog post on how to hoard in a healthy way. Don’t use bullet points. Add some personal anecdotes to support your point. Use appropriate metaphor. (Again, results for this will probably be better with GPT-4). Then for fun, ask it to rewrite in the style of Joan Didion.”
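For the curious, here’s roughly what that kind of request looks like if you send it to OpenAI programmatically instead of through the chat window. This is only a minimal sketch using OpenAI’s Python client, assuming you have an API key set in the OPENAI_API_KEY environment variable; the model names are whichever tier you have access to.

```python
# A minimal sketch of sending Lakis's prompt through OpenAI's Python
# client (openai>=1.0). Assumes an API key is set in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

prompt = (
    "Write a short blog post on how to hoard in a healthy way. "
    "Don't use bullet points. Add some personal anecdotes to support "
    "your point. Use appropriate metaphor."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the free tier's model; try "gpt-4" on the paid plan
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content
print(draft)

# The "for fun" follow-up: ask for a rewrite in the same conversation
# by replaying the exchange so far.
followup = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Now rewrite it in the style of Joan Didion."},
    ],
)
print(followup.choices[0].message.content)
```

The point of the second call is the same one Lakis was making about specificity: the model only knows what’s in the messages you send it, so a follow-up request has to carry the whole conversation along with it.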
The first pass, without the Joan bit, was already way better than before, but still showed no trace of metaphor. Personal anecdotes arrived, such as: “Personally, I have always been fascinated by vintage postcards. I love browsing through antique stores, flea markets, and online marketplaces to find unique postcards from different eras and places. Each postcard tells a story, capturing a moment in time, and brings me joy as I curate my collection. It’s a hobby that allows me to appreciate history, art, and the beauty of the written word.”
It makes the same points as my previous versions, but more nuanced thanks to the request for anecdote, so the text weaves in and out of general ideas and “personal” specifics, which is interesting, because who is this “person” who collects the postcards? It ends on a call to action to engage the reader, “Do you have a collection that brings joy to your life? Share your thoughts and experiences in the comments below!” Yet I don’t approve of the enthusiasm of two back-to-back paragraphs that end in exclamation points. The polite earnestness of the medium is still an issue for me. I’d have to actively mess up anything it produces.
Then came the Joan Didion request, which definitely makes things way better, though it uses her in multiple ways: both quoting Joan and trying to employ her style.
In a world where minimalism and decluttering reign supreme, the concept of hoarding is often met with disapproving looks and raised eyebrows. Yet, as Joan Didion once said, “We are the stories we tell ourselves.” And for some, the act of hoarding can become a story of joy, meaning, and discovery.
As I stand in my small apartment, surrounded by shelves filled with vintage postcards, I am reminded of the stories they hold. Each postcard is a time capsule, capturing a moment in history, a glimpse into a bygone era. The faded colors, the delicate handwriting, and the stamps that once carried messages of love, adventure, and friendship all come together to form a collection that is much more than just a pile of old cards.
It quotes Joan again later, and then adds a little note (to the requester I presume) to explain what the idea is here: “Note: The style of Joan Didion is characterized by her introspective, reflective, and evocative prose, often weaving together personal anecdotes and observations to explore deeper truths about life and society. The above blog post is an attempt to capture the essence of her writing style.”
And, aha, an elusive metaphor, if a trite one: “each postcard is a time capsule.”
I’d like to make more requests, such as: don’t end an essay with “in conclusion”; stay closer to the uniquely specific and go deeper, rather than coming up for big generalizations; slow down, and let the examples get even more colorful as you explore them longer...
And then I see Lakis’s question: when you have to work so hard to get it to where you intended, at what point do you realize you could have done better, and more efficiently, by starting from scratch on your own?
As for fiction, I think no amount of extreme guidance could (yet) get these platforms to anything passable for real fiction, and point taken that the medium should be a tool, not any kind of replacement. Just forget the fiction and leave that to certain humans. Use ChatGPT to research ideas. Find examples from the masters. Let it help do the grunt work of promoting the fiction on social media when needed, but you’re on your own for the real work. I think the platform is best if you don’t go off-menu but rather choose from the sort of examples it offers on its homepage: creative ideas for a birthday party, explaining something complicated in simple terms, very specific tech requests (everyone needs some HTTP).
It has a memory, it accepts corrections, and supposedly it declines inappropriate requests (let’s all hope). And then there are the bafflingly huge limitations, like bias and incorrect or harmful info. On the incorrect front, we have potentially the first defamation lawsuit against ChatGPT, from an Australian mayor, if he files after the 28 days his lawyers gave OpenAI to fix its errors. The bot came up with stuff that apparently was never on the internet, false info it actually generated itself. How is that possible? ChatGPT produced text claiming the mayor was convicted of paying bribes and sentenced to 30 months in jail, when in fact the man was the innocent whistleblower in the case, uncovering an international bribery scheme.
When Euronews asked ChatGPT why it sometimes seems to “hallucinate” (or make stuff up), and to “explain it like I’m 10,” the platform responded:
ChatGPT is trained to be very careful about the facts it uses and the answers it gives. However, sometimes it can get confused because it has read so many different things and it can mix up information from different sources.
So, it might give an answer that’s not completely true or make up a source that doesn’t actually exist. It’s kind of like when you’re telling a story and you mix up some of the details—you don’t mean to lie, but you just got a little bit confused.
But just like when you get something wrong and your teacher corrects you, ChatGPT can also be corrected by humans when it makes a mistake. That’s why it’s always a good idea to check the information you get from ChatGPT with other sources, just to make sure it’s accurate.
And just like you learn from your mistakes and try to do better next time, ChatGPT can also learn from its mistakes and become even better at giving accurate answers.
Or we train it enough so it can more efficiently destroy the human race. Ooph. TBD on the GPT.
My two thoughts--so far--are:
1. What problem(s) are we trying to solve with this?
2. Just because we CAN have/do something...doesn't mean we should.
I do think that providing another way for people to not use their own creativity or stimulate their sense of discovery is, in the long run, doubleplusungood.
I had to stop and comment about the suicide thing. I don't think ChatGPT led Pierre to suicide; I think the mental issues Pierre was having led to his disastrous interaction with ChatGPT. But we could just as easily see a headline that reads "Man commits suicide after seeing the movie Otto" or "Woman commits suicide after listening to the song 'Alone Again, Naturally.'"
We are vulnerable. It won't take much to start down the dark path of thinking. I still don't think ChatGPT holds a candle to all the ways we mess with one another.
Cyberbullying comes to mind, and before that, just plain bullying. How many of us have picked on someone simply because we could?