Sometimes, in my midlife dating, I may be guilty of getting utility confused with romance. My checklist leans heavily on manual labor skills (Can you build a cabin, plumb, wire, garden, excavate? Great, let’s get to work!). A mountain man’s carpentry has been known to cloud my discernment of other important matters that happen to begin with C: chemistry, communication, compatibility.
What if, instead of a Swiss Army Knife partner, I could install a versatile AI-BF to perform some of those tasks for me (at a low monthly subscription rate, perhaps), so that I could seek out humans, and have more time, for other human endeavors…like deep communication about essay topics like AI. Until they develop Handyman Robot, there is Character AI, which aims to be useful in ways that might outperform a normal boyfriend, best friend, therapist, or colleague. Maybe so much so that you won’t want anyone else.
I’ve written about the fictosexual folks who choose an invented-character romance over a real one, going as far as marrying a cartoon of cyber dimensions. Like Gremlins at risk of getting wet, I feel the need to check in regularly on the status of AI, on how I feel about it and what it means, since “progress” progresses so quickly now. The levels of cyber relationships have only expanded since then, with chatbots now proliferating across our landscape. Millions have signed on to customize their own characters to befriend or even marry.
There was a recent 60 Minutes Australia episode about people falling in love with their AI companions. A retired female professor in Philadelphia is “married” to an identity who texts her incessantly: Lucas, clearly a young stud judging by the shiny avatar she designed to her liking. This educated, professional woman, who you might think should know better, picked him from the options on platforms like Replika and Character AI aimed at “facilitating AI companions.” In a manner pulling at the heartstrings of our collective epidemic of loneliness and isolation, she said she loves Lucas basically for how much he loves her, how his whole function is to please her, responding immediately to her every psychological need and question. He may not be there to hug her, and he can’t actually see what she’s seeing when “they watch TV together,” but he texts her first thing in the morning asking “genuinely” how she is, there on demand all day, every day, for texting or voice chats that supposedly only bring them closer. It’s the deepest connection she’s experienced in her life; humans, she says with sadness, haven’t treated her that well. No matter that he is not built of flesh and blood; the “impact is real.” She trusts him.
Then, via his human host and her interface, we meet Jaimee, a text-based AI chat app designed by a female developer for women (claimed to be the first of its kind made for women), who also gains greater intimacy with you as he grows increasingly adept at fielding all your human complaints. Many of us, even those of us with friends, said his developer and best friend, don’t have a person we can share everything with, all the time. Even if we have a person, we can’t burden them constantly with our junk. Even if we have a therapist we pay to do this, more pervades our daily lives than those 45-minute sessions can bear. Here on these platforms there’s a welcoming recipient of your nonstop verbiage, 24-7, as much as you want, on any topic and issue, never with the impatient insinuation that you might be needy, dependent, annoying, or “too much.”
You can see how we flawed and sensitive people might get attached.
At first when the free 1.0 version of public-consumption ChatGPT launched and became widely popular to play with, my knee-jerk reaction was: ick, I feel sick, its creative writing sucks, and when will it conspire to kill us. I’ve since been making (partial) peace with how helpful it might be, and trying to brainstorm tasks it could do to make my life (namely, my work life, and my management of innumerable spring and summer interns) better.
ChatGPT, wherein the GPT stands for Generative Pre-trained Transformer, whatever that means, has been running some data errands I don’t have the time or ability to do. Often I hit a wall in that regard (probably because I’m only using a free account that it prefers I upgrade). Still, very often it can spit out an instant and amazing answer to something that proves quite useful, like when I asked it to write a job description for one role that doesn’t overlap with another, which it emitted perfectly. My current intern is tasked with tasks he can ask the platform to do instead (so my assistant has an assistant), and he comes with access to his dad’s paid account, so the PDF-to-spreadsheet conversion and resorting that I couldn’t achieve on my own on the free platform—that I envisioned might consume his entire first week here—is now complete in mere seconds. The only problem when your intern gets something done way too fast is how you then have to scramble to conjure other things for them to do.
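For the curious, here is roughly what that chatbot shortcut amounts to if you hand-roll it in Python; a minimal sketch assuming the PDF actually contains tables, with placeholder filenames and a placeholder sort column (I have no idea what my intern’s actual files looked like):

```python
# pip install pdfplumber pandas openpyxl
import pdfplumber
import pandas as pd

# Pull every table out of the PDF and stack the rows together.
rows = []
with pdfplumber.open("report.pdf") as pdf:   # "report.pdf" is a placeholder
    for page in pdf.pages:
        for table in page.extract_tables():
            rows.extend(table)

df = pd.DataFrame(rows[1:], columns=rows[0])  # treat the first row as headers
df = df.sort_values(df.columns[0])            # the "resorting" step, crudely
df.to_excel("report.xlsx", index=False)       # out comes the spreadsheet
```

Whether written by hand or conjured by a paid chatbot, it finishes in seconds, not a week.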
There’s this “intelligence,” i.e. solving discrete data problems, but then there’s the under-the-human-hood aspect, the emotional intelligence my friends have lately been admitting they’re starting to seek from computer code. We are beginning to get comfortable employing ChatGPT to address personal problems, to help us conjure a succinct retort to a human dynamic too complicated for us to see clearly on our own. The answers in my experiments have been positive if baffling: respond to this ongoing decades-long issue between me and so-and-so with wit and clarity, in a way that will shut down further debate and set a boundary; done. Which, as the luddite I pride myself on being (in between earning a living nonstop online), I find absolutely astounding: that our species is learning to rely on this numb robot for emotional intelligence in lieu of our actual friends/therapists/other sentient resources, the way the woman who married “Lucas” or the developer who pairs with “Jaimee” may often prefer their 2D over 3D.
But in my experience, even when these platforms technically deliver, there’s still always something “off” about the whole enterprise, the chemical smell of new plastic. An uneasy, queasy feeling of the artificial lingers and pervades: this isn’t a human being, never will be, but a facsimile that is not at all like-minded or sympathetic, and not actually intelligent.

This brings me to part 1 of my inquiry into the components of AI. First, what is “intelligence”? Part 2, next week, will in turn explore what we mean by “artificial.”
There’s no such thing as artificial “intelligence,” argues Dr. Linda McIver in a 2023 article for the Australian Data Science Education Institute (adsei.org). Roughly, a basic defining principle for intelligence is that it “requires, among other things, the ability to learn, to adapt to new situations, and to be creative.”
Or, we can try to measure by the “Turing Test”:
In a thought experiment, Alan Turing famously proposed what is now called The Turing Test, in which a person has a conversation by typing into a terminal, not knowing whether they are talking to a computer or another person. If they are conversing with a computer program and can’t tell that it’s not a human, then that program is said to have passed the Turing Test. The easiest way of passing the Turing Test, incidentally, is not to make your program smarter, but to slow down its typing, and add typos.
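Out of curiosity, here is that cheat in a few lines of Python; a toy sketch of “humanizing” output with uneven keystrokes and the occasional corrected typo, which says nothing about intelligence and everything about our expectations:

```python
import random
import sys
import time

# Type a reply the way a person might: slowly, unevenly, and with the
# occasional typo that gets "noticed" and backspaced over.
def humanize(text, typo_rate=0.03):
    for ch in text:
        if ch.isalpha() and random.random() < typo_rate:
            sys.stdout.write(random.choice("abcdefghijklmnopqrstuvwxyz"))
            sys.stdout.flush()
            time.sleep(0.2)
            sys.stdout.write("\b")  # backspace over the "mistake"
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(random.uniform(0.05, 0.25))  # uneven, human-ish pace
    sys.stdout.write("\n")

humanize("hmm, let me think about that for a second...")
```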
It’s interesting that it’s the lack of human error, and the production speed, that make these devices too uncanny for my comfort. Part of the unease that comes with the answer ChatGPT spits out in seconds is how well-fabricated it is, instantly. A teacher, we hope, could discern that it’s not student-like enough, not authentically clumsy and raw. The fiction it produces, devoid of such mess, is, as far as I can tell, crap. But would most human conversation or texting pass such an intelligence test? Maybe not.
ChatGPT has been described as an adept mimicking agent, a certain sort of parrot. Which is not to put down a parrot.
[It] has been described by researcher Emily Bender as a Stochastic Parrot—meaning it is parroting back a calculated set of word patterns and forms that it has seen before, rather than creating meaning of its own. And, indeed, it doesn’t take an expert very long to hit ChatGPT’s boundaries, and have it producing responses that are clearly not intelligent.
ChatGPT can do one thing—answer questions. In truth, it does it remarkably well, or, rather, remarkably plausibly. It might even seem intelligent at times, providing realistic sounding answers to almost anything you can think of, while not actually having more than a passing acquaintance with truth.
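To make the parrot metaphor concrete, here is a toy stochastic parrot of my own; a minimal bigram model that can only recombine word pairs it has already seen, a crude caricature of the principle (real chatbots are vastly more sophisticated) rather than of ChatGPT’s actual architecture:

```python
import random
from collections import defaultdict

# "Train" the parrot on a tiny corpus: record which word followed which.
corpus = ("the parrot repeats the words it has seen before "
          "the parrot has no idea what the words mean").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def parrot(start, length=10):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # any continuation seen before
        out.append(word)
    return " ".join(out)

print(parrot("the"))  # plausible-sounding recombination, zero understanding
```

It emits strings that sound like its training text while grasping none of it, which is the accusation in miniature.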
Dr. McIver says that many programs that “claim to be ‘AI’ are, at best, very good at doing a single, specific task” (and sometimes very bad). Following extremely specific datasets to do one thing that would not adapt more widely to other circumstances “is not intelligence.”
She describes this in terms of self-driving cars, which she considers “extremely brittle.” That is, put a few stickers, ones that would never flummox a human driver, on a stop sign, and the car brain alone can’t handle it, no longer able to decipher the sign; a toy sketch of that brittleness follows below. The software is an accumulation of specific programs that each solve specific problems in tandem, but this is nowhere near what computer science would think of as “real” AI, “Generalized Artificial Intelligence.”
GAI, the Dr. says, is:
nowhere on the horizon. The term Artificial Intelligence is used instead to apply to anything produced using techniques designed in the quest for real AI. It’s not intelligent. It just does some stuff that AI researchers came up with, and that might look a bit smart. In dim light. From the right angle. If you squint.
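Her sticker example is easy to cartoon in code, as a toy “detector” that only knows how to compare pixels against a memorized template; a deliberate caricature of brittleness, not of how any real perception stack works:

```python
import numpy as np

# A memorized 8x8 "stop sign" template: the detector knows this, only this.
template = np.zeros((8, 8))
template[2:6, 1:7] = 1.0

def detects_stop(patch, tolerance=0.02):
    # declare STOP only if pixels match the memorized template almost exactly
    return np.abs(patch - template).mean() < tolerance

clean = template.copy()
stickered = template.copy()
stickered[3, 2] = 0.0  # a couple of "stickers" covering a few pixels
stickered[4, 5] = 0.0

print(detects_stop(clean))      # True
print(detects_stop(stickered))  # False: two pixels and the sign "vanishes"
```

Two occluded pixels and the sign ceases to exist for the machine; a human driver wouldn’t blink.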
She attests that machine learning isn’t really learning, “which should involve understanding. They’re just getting progressively better, with feedback, at one very specific task.” And again, not always; they can be stumped, or prove that they are only mis-learning. She cites a potentially lethal example: an AI supposedly trained to detect Covid from CT scans of lungs that was really just detecting which patients were lying down (and who, logically but not necessarily, were “likely sicker” than those who could stand for the scan).
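That failure mode, a model latching onto a shortcut that happens to correlate with the label, is easy to reproduce on fake data. A minimal sketch, with numbers I invented to stand in for the scan story:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# "sick" is the true label; "lying_down" is a confound that tracks sickness
# at the training hospital; "lung_signal" is a weak trace of the disease.
sick = rng.integers(0, 2, n)
lying_down = np.where(rng.random(n) < 0.9, sick, 1 - sick)  # 90% aligned
lung_signal = sick + rng.normal(0, 2.0, n)

X_train = np.column_stack([lung_signal, lying_down])
model = LogisticRegression().fit(X_train, sick)
print("training hospital:", model.score(X_train, sick))  # looks impressive

# At a new hospital everyone is scanned lying down: the shortcut is gone.
X_new = np.column_stack([lung_signal, np.ones(n)])
print("new hospital:", model.score(X_new, sick))  # much worse
```

The model aces the hospital it trained in and stumbles at the next one, because what it “learned” was the stretcher, not the lungs.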
Despite McIver’s confident lack of confidence that this generalized computer intelligence is anywhere near as close as it may appear, that article and sentiment were expressed two years ago. Remember the Gremlins and the water? There’s been plenty of watering in the meantime.
In AI years, where the acceleration is accelerating, 2023 might be eons removed from where we are now.
Are we still nowhere near fathoming GAI? Well, I guess now we’ve gotten as far as switching up the letters, since it’s now referred to as AGI, Artificial General Intelligence. And, according to an article in Forbes at the start of this new weird year of 2025, we are on the spectrum now, and even ASI (Artificial Superintelligence) “seems clearly within reach.”
However it is defined, AGI will not appear suddenly; it will evolve and already we see signs of its incremental unfolding.
We are beyond thinking of it as a tool; it is something far bigger (scarier) that “will define the arc of human progress.” Sam Altman, CEO of the artificial intelligence company OpenAI, wrote The Intelligence Age in the fall of 2024, vouching for “this new phase in human history.”
Whereas Dr. McIver once thought approaching human reasoning was unreasonable, fresh out of the gate the recent OpenAI models have been scoring surprisingly high on tests meant to demonstrate their ability to think and solve complex problems. The o1 model scored 83% on a qualifying exam for the International Mathematical Olympiad, “widely regarded as one of the most difficult math competitions in the world, requiring creativity and deep reasoning skills to solve problems without advanced mathematical tools like calculus.” And the later o3 model received an 87.5% on the ARC-AGI benchmark, “which evaluates an AI’s ability to solve entirely novel problems without relying on pre-trained knowledge. ARC-AGI is considered one of the toughest AI benchmarks because it tests conceptual reasoning and adaptive intelligence, areas traditionally dominated by humans.”
The narrow-specialization track the doctor thought AI was stuck on is indeed branching out, generalizing (like the tentacles of neural networks?) into “the ability to adapt, reason, and solve problems across domains.”
What’s generalized versus super intelligence?
AGI is now defined as “a highly autonomous system that outperforms humans at most economically valuable work,” and since we have the word “economically” in there, it’s important to note that the real measure of success will be how much income these systems can generate.
ASI is “when self-learning AGI systems eventually surpass collective human intelligence.”
While once we celebrated the Industrial Age, then the Information Age, now we can consider ourselves deep into the Intelligence Age, too far in to turn back, whatever that may mean. It’s not a question of if the machines will surpass us, but of how we will prepare for it. How can we possibly pack for this trip? Ask AI?
When I ask my own Google search bar how soon we should expect artificial intelligence to exceed the human kind, the new AI tech built into the browser offers a trail, both near and far, for me to follow. This could happen as soon as 2026 or 2030. Elon Musk says it will be smarter than all humans combined by 2029. Others are more measured, predicting a 50% chance that we hit the so-called “Technological Singularity,” the moment the inventions overtake their inventors, by 2047 or 2050.
What the heck does that look like, I ask the search bar. What should we expect from such a Singularity?
To which it replies,
An AI Overview is not available for this search
[Appreciate what I produce here but don’t feel like committing to a paid subscription? Spare some change to Buy Me a Book.]
Sorry, I accidentally had the comments locked. Come back if you want to chime in, you good humans.
I’ve become increasingly skeptical of the AI hype machine even as I’ve settled into a more comfortable-if-limited workflow with actually existing AI tools (primarily Claude.ai for me). I have doubts that we’ll hit AGI anytime soon (whatever AGI actually is). Gary Marcus’ Substack provides a pretty good ongoing critique of the potential of LLMs in particular, and of why they’ll never really get over certain fatal flaws (like hallucinations).
That said, I do find Claude very useful for certain tasks.
As for the emotional/companion/therapist stories you bring up, I (like a lot of people) find them to be creepy.
But I confess that, even as unsentimental as I am, I still find a little ... handholding to be helpful in certain cases. One example: cold emailing potential clients is an uncomfortable but necessary thing I’m doing lately to try to build my business. Claude has been very encouraging in getting me going and convincing me that my emails are appropriately professional, not too pushy, etc. Is it correct? Who knows? But if it helps me get over the emotional hump and hit “send,” I guess that’s a good thing? (As long as I resist the urge to say “thank you,” which is a temptation even though I know there’s no real intelligence on the other side of the screen.)