There used to be a computer room

Culture Sep 14, 2025
Humanity is at risk. Sort of. David and Eric spend a month unpacking their feelings about the limits of LLMs when it comes to creativity, capital, and their jobs as people who sell things for one company to another company.

D.C. McNeill: Eric, I’ve been thinking a lot about my writing degree - it’s been just shy of a decade since I finished the last of my coursework. I did a dual degree in creative writing and business (you can guess which degree has actually been useful - it’s not the business one). A lot of recent journalism has focused on how students are using LLMs to effectively cheat on every piece of university coursework, demolishing the academic project. I was doing some looking around and found a post on Bluesky from a writer I follow, Michael Lutz: “probably the most insidious thing about LLM cheating is that it allows students to cheat without developing any sort of meta-awareness or cleverness about how to cheat when a computer isn't there to do the bullshitting for you. This produces gaggles of hopelessly dependent con artists” - which got me thinking about my own university experience, and cheating. Because Lutz is right. I didn’t cheat on my writing degree; it would, at the time, have been incredibly difficult to cheat on a creative writing degree. I suppose I could have paid someone to write my stories or my essays, but my professors would have known: it was a small cohort and they’d likely clock a different prose voice.

Where cheating was a real factor was my business degree. I never needed to cheat on my business degree, but I knew a few people who did, and their approaches required no small amount of cunning and planning. Well, okay, I tell a lie. I sort of nearly cheated on my accounting course. The first time I took the course I failed, along with sixty percent of the class. The next time I took it, I had my friend (a real accountant) check my assignments before I submitted them. I went into the final exam knowing I would pass, and that was that.

Now I’m no martinet, but I’m beginning to feel a divide forming between those of us who value the process and those who care how much money the outcome can produce. I have “cut corners” my entire writing career. I use fantasynamegenerator.com more often than I care to admit, but always as a foundation, right? I don’t just choose a category and use a name. I take part of one name, hit generate twelve times, and eventually compose a name I like. I am an active participant in the process. I would argue this is not con-artistry: this is having a belt of tools, and knowing when to use the appropriate implement for a given moment of the craft.

What concerns me about LLMs (which we are now doomed to know as AI) is that, it seems to me at least, young people use them for everything. Purely discursive and agnostic replacements for learning how to do a thing. And to Lutz’s point: a complete lack of substance, and failing that, a lack of guile. If every life is a series of deeply personal compromises, these LLMs provide a side-ramp. Why learn how to write a poem if I can ask the bot to make one? Why come up with a joke for the work Slack when I can ask a bot to make one? While I understand concerns about the dead internet, my worries are for the future generation. There’s this section in Gene Wolfe’s Book of the New Sun where Severian explains that humanity was once so advanced they built sentient AI that ran everything. Every task, every need, completed by machines. But generations later the machines started to panic. Humans could use a shower, but they didn’t understand how it worked. And so the machines began to teach humans how certain technology worked. But the machines could not agree on what to teach and fell into civil war. Eventually, those machines were sealed in a tower, their brilliance lost forever, and the remaining machines in the world ticked over and over, each cycle corroding and fracturing until all of the machines fell silent. All they had built was lost forever.

And, to be honest, I am concerned about our future. If all of these young people use these probability machines for everything, they’ll never really learn how to do the underlying task, right? If you can just have an LLM bash out a piece of code, you never have to actually learn that coding language. How far away are we from Wolfe’s future of living atop technology we can’t engage with ourselves? But I don’t have kids, and as my thirties start to move past me, it seems unlikely I will. And so I wonder, is this something you think about, and are your concerns for academia and the underlying principles of our industry the same on your side of the pond, or is the looming threat of fascism taking the wheel, and are those even separable?


E.S. Anderson: David,

One of my coworkers attends a weekly “closed networking” meeting. For a fee, organizations like BNI, PowerCore, and Master Networks allow members to reserve a seat as the resident architect, insurance agent, mortgage banker, etc. Then the members spend an hour or so every week talking about their business and trading referrals to help build business for the group.

This particular week, members had an assignment to come prepared with a business “toolkit,” to show what kinds of tools they use while helping their clients. It was meant to be a conversation starter and a way to deepen understanding of each other’s businesses. However, it soon became obvious that several members had used an LLM to put together their toolkit, as the business banker, accountant, and mortgage lender all showed up with the exact same presentation.

This is a harmless example of business-folk phoning in a presentation to a voluntary networking organization. But it led me to a conversation with my coworker about the amount of time he spends “workshopping” his ideas with ChatGPT. Without exaggeration, I do not believe he ever closes the app on his phone. Every time I see him type an email, he filters it through Chat. Every time he makes a phone call, he double-checks his calling script on Chat. If he needs a 20-second introduction for a conference, he starts by looking it up on Chat, then tweaks it afterwards.

I believe that ready access to an LLM has formed addictive behavior, to the point that he has little confidence in his own abilities. In situations like the above, when he has time to prepare, perhaps the ramifications are few. But what happens when he needs to think on his feet? When a situation arises that must be dealt with immediately, he won’t be able to pause and type out a script without seeming odd, if not incompetent.

I love Lutz’s idea that cheating without guile robs us of cleverness. That’s brilliant. I know so many clever people who navigate the world by cutting every corner possible. They find every shortcut, rewire every system, and take advantage of every loophole. But it takes them considerable physical and mental effort. From the perspective of brain development and teaching kids how to think, I very much enjoy watching my children try to bend the rules or overcome an obstacle. I would rather they get a chair and figure out how to sneak that cookie themselves than cry for an hour begging me for that same cookie.

The try/fail cycle innate to many video games can easily be bypassed by looking up a guide to the game. It will explain exactly how to reach that ledge, find the key, open the door. Cheating actually robs players of the dopamine hit they should have earned; they find themselves a level further in with no further knowledge of how the game sets up its puzzles or how they should approach the next level. To cheat once almost necessitates cheating repeatedly, and with much less satisfaction than playing the game through as intended.

In undergrad, my English Composition II class was taught by Dr. Glen Gill. He had us read The Educated Imagination by literary critic Northrop Frye. The book is largely a defense of the study of literature for personal and social development, and Dr. Gill used it to drive conversations on reading literature through the lens of Phenomenology - concentrating on a subjective view that avoids societal biases and preconceived notions. Dr. Gill discouraged us from reading anything outside of the primary text, so that our personal interpretations would not be colored by the thoughts of the critics who came before. I was once scolded in class because I looked up some scholarly articles on Charlotte Perkins Gilman’s “The Yellow Wallpaper” so I could feel more confident in class discussion. It was immediately obvious to him that the ideas I was spouting sounded nothing like my ideas in our other sessions, and I was (kindly) banned from speaking again that class period so that I wouldn’t infect my classmates.

Now, is there significance in the fact that America’s current political environment rose at the same time as the LLM explosion? I may be catastrophizing, but there seems to be a cultural insecurity in determining proper Truth. The double-speak employed by all political parties ensures that every event can only be seen through the lens of the argument of the hour: if our man is arrested, the justice system is broken; if their man gets arrested, the justice system has finally been fixed. No event can be viewed outside of context because the average citizen cannot experience the events being reported. Even for domestic affairs, our world is far too complex for events to be experienced by direct observation, making a Phenomenological approach to understanding impossible.

Therefore, when direct understanding of anything is impossible, it is tempting to rely upon a higher power for our Truth. Before I wrote that sentence, I hadn’t planned on comparing LLM reliance with religious fervor, but here we are. So the question now is: has the “opium of the masses” changed? In LLMs We Trust.


D.C. McNeill: I’m as distrustful of phenomenology as the next socialist academic (the irony of The ZeroIndent Review using it as a method is not lost on me), but what I would put forward is “in LLMs we rely” rather than “trust.” I say this because the younger people I have worked with seem vaguely self-aware that their use of LLMs is bad in some nebulous way they cannot articulate. I would hazard a guess that if you sat down with these people at the pub and asked them, thoughtfully, whether they believed their constant usage of these tools to be positive, they’d say no. In the same way people know their relationship to TikTok and/or Instagram is maladaptive, but they are unable to stop. This is addiction, after a fashion; I almost agree.

To pick up on your video game analogy, I am a Dark Souls freak. I love nothing more than locking in and just learning a fight, move by move, until that moment of pure execution when strategy and learning and performance pay off and you finally do the dang thing and kill the boss. I love these very challenging games for the same reason I love making art. Friction has this unique tendency to force improvisation. Not just that, but friction is, well, human. I am currently living in a prison of my own making trying to finish Maynard Trigg books four and five, as I (stupidly) decided to make them twin novels, so I kind of have to finish them both at the same time. As you can imagine, I have been procrastinating, hard. I’ve written a complete horror anthology, a standalone novella, and four short stories set in the same universe in an effort to find the limits and contours of the world of Maynard Trigg (but really, it’s because I’m avoiding writing the dang books I have to write). This is the story of my creative career. I always have five or six irons in the fire that I bounce between, gradually moving the needle on all of them until I lock in on one and crack the code, as it were.

This ill-advised process of forever context-switching when I get stuck or bored is how I produce and do so many creative things, and it’s all driven by that underlying friction. If, at sixteen, while deciding how to finish my first novel series, I could have just asked a machine for ideas or to write it for me, I’d never have learned a clutch of lessons about narratology and how to stick the landing (in theory; let me get back to you when Maynard Trigg 5 is out). This is not an original thought. But as with the Dark Souls idea, you can either beat the boss or you can’t. You can either write a novel or you can’t. No machine can change that. You can learn, of course. Practice. Research. Build a skillset. Watching other people kill a boss can be instructive. Watching back your own performance (like game tape) is invaluable. But at the end of the day, at a certain point, you can either write a novel or you can’t. You can kill the boss or you can’t. This, I believe, is where the insecurity of “AI Bros” is centrally located. I won’t strawman these LLM users as being insecure by definition, but I’ve certainly experienced it first-hand: this coy insistence on “upsetting the establishment” when they show off their slop and craftsmen aren’t impressed.

An ex-colleague of mine who used chatbots for everything once bit back during a meeting with “well, we can’t all write essays about video games” in response to feedback on the quality of his email writing. I found this particularly enlightening, as this had clearly been on his mind in the background, nagging at him in some small way, for our entire working relationship. I feel a lot of these LLM power-users are secretly ashamed. They know it’s super lame. They know it’s infantile and servile, and so they project these ideas onto the rest of the world. I’m not the lame sheep, it’s the woke mob, and so on and so forth.

Your example of the close reading hits a particular nerve for me in this context. With my dual degree, I messed up the scheduling of my coursework because I was juggling university and two jobs, so I found myself taking a beginner short story class in my fourth year. The unit involved a peer review at the end of the course, and I found myself marking up a story that, generously, strongly resembled a Witcher short story I’d read. Most generously: fan fiction with the serial numbers filed off - the dude even had two swords. But I’m no nark, so I did what I always do and mercilessly critiqued the piece within an inch of its life, careful to avoid implying the story was lifted (he changed the ending too, which really undercut the original story - odd choices across the board). Now, did he cheat or just accidentally recreate a story he once read? We could talk about this for hours and not land on an actual answer, and even if we did, that answer would be subjective and separate from the question of whether cheating is morally acceptable. These questions require engagement with our own biases, our own politics, our own context. Frequently on The ZeroIndent Review I have to remind your brother that an assumption he holds is entirely predicated on the American experience - and a conservative experience at that, being in the Bible Belt. My point is that the work to decide on Truth is hard and requires sustained, ongoing effort. Incidentally, part of what I enjoy about the Dark Souls/FromSoftware games is that the stories are impenetrable, frequently requiring extensive close readings and theorising to piece together basic narrative elements. That ongoing friction must be endured to figure it out, and I love doing that kind of work.

We might then say that social media and LLMs act as tools to numb and mediate. These philosophical opiates provide an immediate truth - curated to your existing biases, and calling you a genius for asking a question, respectively. Comparison threatens this act of avoidance, I think. This is why it’s a short bow to draw from these chatbots to fascism. Wouldn’t it be so very nice and comfortable if, like in Oz, you could draw a curtain back to reveal the villains of reality? Wouldn’t it be so nice and comfortable if there were simple right and wrong, like in Star Wars? We must shut out and suppress anyone who says or demonstrates this simplicity to be untrue; they must be lying. They must be paid actors. They must be infected by the woke mind virus, and so on.

Your point about cheating once almost necessitating repeated cheating rings true here. Once you buy into one obvious mistruth, you sort of have to buy into the next, and the next, and the next, because any crack threatens the whole. Friends of mine met in church at a very young age (I think they were eighteen) and gradually deradicalised each other as they went through university and realised that all of these beliefs were false: a line of dominoes waiting to fall. I had a similar experience with my view of capitalism. In my adolescence I became obsessed with jet-fueling my career. I wanted to be the best at everything I touched and slingshot into senior roles. Then one day I started to see the people around me with clarity. All of these suits pretending to be adults. All of them sleepwalking through life. After that I started reading Marx and Deleuze, and the rest is history.

I’m no philosopher, but I have read Derrida, and I’m increasingly concerned about BookTok’s and the wider internet’s willingness to abandon craft - and, in the book industry, to ignore the words on the page themselves in favour of ideas, gestures, and imagery. It’s a symptom of the same rot that produces these “AI Bros,” who were crypto bros ten years ago. An unwillingness to do the work because they are scared of being unable to do the work and/or sustain the effort. They defend these chatbots like kids defending fan fiction on Tumblr, like the student copying The Witcher. Corporations exploit the willingness of people to emotionally invest in texts, and those people, bent to the wheel of capitalism so radically, trick themselves into believing this impressive machine exists at all.

I’ll leave you with a scene from Rick & Morty, an excellent Dan Harmon joint, where a therapist explains to Rick (the smartest man in the multiverse) that therapy is like brushing your teeth. It’s boring, annoying, and you have to keep doing it. And the reality is that some people are okay doing the work, and they improve, and some people just aren’t. You have to accept that it is your mind, that you are in control of your own intelligence, and you have to do the work or accept that you will be frozen, unable to grow and change.


E.S. Anderson: I love when seemingly nonsensical shows like Rick & Morty, Community, South Park, etc. make valid points and reveal themselves to be thoughtful works of art merely masquerading as fancy. I did some work on the purpose of fables in my MA. Unfortunately for me, I found a book called Fables of Power, by Annabel Patterson, that said all I wanted to say, and more. Fables, as Aesop used them, allow the politically powerless to communicate change and critique society. “I'm not denouncing the king - this is an innocent children's tale about a lion and a mouse.” With the form comes deniability and safety for the author.

When you mentioned that AI bros evolved from Crypto bros (we should get JM to edit together some Pokémon-esque trading cards), it led me to a bit of gender study: the movement of financial/technological fads marketed predominantly to men mirrors the fads of beauty/health care that I see marketed predominantly to women. In 2015, every white woman I knew was putting coconut oil everywhere they could reach. They kept it in giant vats on their kitchen counters. It was added to coffee, replaced cooking oil, and was rubbed into skin and hair, used as toothpaste, diaper cream, and antiseptic. It was the miracle solution that we had foolishly been drying out, filling with sugar, and stuffing into Mounds bars for decades. Then, after a while of coconut oil not solving all our problems, we moved on.

(Apparently, today's fad in American beauty care is anything branded as “Korean.” Do with this information what you will.)

I am left with questions I have no ability to answer: am I wrong in my observation that men are more likely to find their “solution” in a process or scheme like AI, while women are likely to seek it in a product or service? Are those gendered roles still so prevalent? Do these fads always split along gendered lines? Are the lines wavering as the boundaries between culturally male/female also waver?

So anyway, back to AI. I agree with you that consistent usage probably does come with a guilt complex for the average AI user. What they are doing “feels” wrong after the initial thrill wears off. But I'm going to try to play devil's advocate for a moment. AI is the flashy new toy of the day, as significant as the invention of social media and the internet itself. But each of these previous two also had its own ramifications. I think the presence of social media is the largest-scale experiment on humanity since the Flood. While we can all list and argue the awful effects of social media on any person of any age, there are enough upsides to make it feel like a necessary tool. For the infirm or homebound, social media helps them connect with the world. I wouldn't have reconnected with several of my closest friends if Facebook hadn't shown us that we moved to the same city. You and I would never even have connected without the presence of YouTube and now we're conspiring in real time from opposite sides of the globe.

But I'm absolutely not letting my kids get onto social media until their brains are fully developed. Or until they go rogue and open profiles behind my back. That shit is scary.

Our teachers were wrong. We do have a calculator in our pocket at all times. We can just look up the answer to any question any time we want. There's really no need to memorize anything because any information can be accessed. Is the world worse for it? The distance between the “educated” and the “uneducated” has leveled out. 50 years ago, those who pulled themselves out of poverty did so in the library and the classroom (or the football field, but that hasn't changed and never will). Now, people pull themselves up online. Those who are desperate for a better life will take the most accessible option to get there. And it is hard to see fault in that.

I heard an NPR interview with a musician in his 50s. He was complaining about how bands “make it” today versus when he started. Now, bands are discovered on TikTok and YouTube. They may never even play a live show before they start making money. The musician in the interview had been playing bars and parties and festivals for 30 years, because for all of history, that's how musicians made it big. Now that era is gone.

But has that democratized music? Doesn't it mean that a 15-year-old with a plastic ukulele who can't drive has just as much chance of getting discovered as the six-man band with day jobs and tons of equipment they bought with cash?

Of course, now I've argued myself into a corner, because each of these examples assumes that the creators being discussed have actual talent, or at least skill at creating entertainment. With AI, the talentless can appear brilliant and the truly virtuosic can get buried among the pretenders. But I have to wonder about that. Right now, those with means can have music written for them, videos produced, and marketing teams hired for promotion. No talent required. “Friday” singer Rebecca Black immediately comes to mind as an example of a very privileged kid whose parents could afford to buy her the type of experience other people spend 30 years chasing. But her story stands out because it is not the norm. Yes, you can argue all day that more talented artists are overshadowed by those who have connections and money, but that's true of everything, everywhere. More often than not, artists who are successful actually care about making art. They want it to be real because they love their medium. And that authenticity shows through. I think those trying to make fake art tire of the experiment long before those who actually care about the art itself.

A friend once said that he thought the rise of “superfake videos” would kill the conference call. If AI gets good enough to completely mimic a human consistently, then all meetings will become face-to-face. I'll take his idea one step further and claim that AI has the potential to kill social media and social media marketing.

If there are too many fake profiles on dating apps, people will go back to meeting at bars/church. If too many fake artists make AI slop, we'll start buying paintings from artists painting live in the park. We'll stream music from the band we heard at the concert last week. We'll visit that local artist market to get our crocheted bathrobes instead of yet another knockoff online. We won't conduct business through email or phone anymore, because AI can fake anything coming through a device, screen or audio. I will only be able to close deals and have meaningful conversations face-to-face. Any digital communication will have to include nauseating levels of security questions just to verify the speaker is human, as well as the person they claim to be. Maybe this will kill 1-800 numbers, robo-callers who prey on the elderly, Facebook love scams, and cyber-bullying. If AI can be anything, we stop trusting that anything online or on the phone is reliable.

Maybe AI destroying everything could slow us down and shrink our worlds back down to a manageable size. Maybe this can be one of the steps towards humanity healing from the great experiment.

I need to develop a sign-off. Whenever I'm finished, I'm driven to write, “David, I'll see you on Tuesday.”


D.C. McNeill: You blew the dust off the back of my head with the coconut oil example; I had entirely erased that moment from my mind, apparently in favour of remembering every episode of Rick & Morty. Funny that you landed in the same place I did when considering the long-term ramifications of these tools, which is, hopefully, incredible value being placed on authentic, human-made works. To your point about craftsmen doing it for the love of the game, as it were, I am already seeing a rift form between people like us and people willing to embrace these tools. My Engineering Manager, a much smarter man than I, once said that what people fail to understand is that LLMs are only good at one thing: computation where the answer to a problem does not require perfection. Digital binary computers, like the one I am typing on right now, excel at solving classical mathematical problems. LLMs, on the other hand, use probability to produce something that affects cognition - a kind of statistical magic trick that samples enormous datasets to produce an answer that is “close enough”. They are tools to be applied where a problem doesn’t have a right answer, just a good enough answer.

An example to demonstrate my meaning. On an episode of the now defunct Advisory podcast, we did an experiment where I wrote a section of prose describing an encounter I had that day. James then asked a chatbot to write the same piece of prose in the style of D.C. McNeill. I’ve written enough articles and books that if you asked a university student to do this task, they would have sufficient materials to achieve it as an exercise. The chatbot did its magic trick and produced a few paragraphs that read a bit like me. It even managed to reproduce this flourish I use a lot, where characters half-hear or mishear dialogue, so you don’t always get a complete piece of dialogue on the page because the subjectivity we’re reading from doesn’t have access to that information. But I also noticed the chatbot included some phrases that read a lot like Gene Wolfe and Stephen King. After the episode I had James ask the chatbot a few questions about how it gathered the data: it had used the books, plus a combination of reviews and marketing material - and, it turns out, transcripts of a few podcasts, which threw it off even further, as I frequently read out passages or citations or dialogue on the podcast.

After the podcast I combed through the passage in great detail and drew two conclusions. First, the chatbot did a decent enough job to likely convince someone who hasn’t studied the craft. Second, the listeners had to trust me that I’d written my passage myself. Short of filming myself writing the passage the night before (which still doesn’t really prove anything), how could I prove I wrote it? Your point about in-person and “real” interactions potentially supplanting and subverting these chatbots prompted me to recall this exercise. Which then reminded me of the movie Her - a pretty-looking film that I don’t especially like - as it contains a bizarre example of manufactured authenticity.

The protagonist works as a letter writer. Clients send in requests and context, and the protagonist composes “hand-written letters.” The Spike Jonze “oh look, aren’t I clever” spin is that the protagonist dictates the letters to a computer, which then digitally hand-writes them. Aside from the nauseating smugness of the concept (which annoys me to this day), this desire for things made by people is well represented here, if a little farcically.

Your two very American examples of heterotopias (bars and churches) replacing the internet and apps as the primary locations of human connection and socialisation suggest one likely outcome. I think it’s far more likely that people will, in the short term, use these new tools as supplements to the real thing. We’re already seeing this with the advent of people claiming to date chatbots. I use claiming here because this, to me, is like saying you’re “dating” a cam performer you can interact with. If the interaction is, at an elemental level, one-way and there to serve you, it is no relationship at all, in my opinion. That aside, this is a small example of this supplementation.

I just don’t think there is anything that can ever replace striking up a conversation with a stranger at the bar. A few months ago I ended up meeting a man working to literally cure cancer. I would’ve stayed for a few more beers, but he started defending Trump in the most bizarre way possible (he was the definition of a moralist, my sworn enemy), so I high-tailed it out of there.

My personal desires for the future aside, I think the popularity of skeuomorphic technology in recent years is encouraging. I now write all of my prose on a paper tablet - something that literally changed my life. With no exaggeration, since acquiring this device my writing output has tripled and I’ve read forty-two books this year already. I am hopeful for a future that includes more attempts to recreate the organic sense memory of our world. To borrow from Cameron Kunzleman: “We live within a context. That context is defined primarily by the movement of materials in specific locations. In the current moment, we are algorithmically drawn out of that context into a vast web of relations that are mediated by sounds and images.”

If nothing else, the proliferation of these tools has encouraged people to conduct work that is not indexed against their own context, but is instead statistically rendered to supplement the structures the prompter lives within. That, when you follow the turtles all the way down, is the difference between something made by a person and something produced by an AI.

This divide I referenced earlier concerns me more and more, as people buying into these tools are missing an obvious trick, aren’t they? As we’ve seen with recent examples, the person running your tool could just turn the tool into a Nazi. These tools are, essentially, infrastructure to their power-users, and infrastructure exerts and reproduces the ideology, intended and otherwise, of those who design it. So to those who say that AI is inevitable, I say to you: if we can demolish a building, we can sure as hell demolish a shiny new toy.


E.S. Anderson: The “manufactured authenticity” of Her really struck a chord with me. Back in 2016, a coworker very excitedly showed me this new service he had discovered that would allow us to upload our marketing letters so the service could print out pseudo-handwritten postcards. The technology wasn’t quite there and I think a discerning eye could probably tell that they weren’t written by hand, but it was one step towards making it easier to generate leads. After a while, he stopped using the service because the novelty wore off. I now get 1-2 “handwritten” advertisements for new windows or home insurance in my mailbox every day.

In 2018, there was a large push in my market towards using the “Mail Merge” tools built into Microsoft Office so we could print 100 letters or send 100 emails very quickly. We would write the generic text in Word, keep a list of prospect names and addresses in Excel, then merge them together so the software neatly placed the individual names into each email or letter. Then we all blanketed the greater Atlanta area in letters and emails for the next year, because it was so easy to send them. I could even automate the process and send them out on a regular schedule. The unexpected consequence was that suddenly our “target customers” were receiving dozens of letters and emails from all of our associates every week. I had a desperate phone call with a veterinarian who was politely trying to figure out how to make us stop. While we were talking, he got three different emails from us. Our market executive had to send out a specific email because his wife, an attorney, was being hounded by us, and we had no idea. She was just a name buried somewhere on our lists. Of course, at first these efforts got results. But then, 12 months later, everyone had blocked us or knew to throw our letters away, so we all stopped using Mail Merge.

I’m musing about this because I’ve been struggling to develop an ethical system for marketing under late-stage capitalism. While I naturally prefer “organic” business development (referrals from happy clients, networking, trusted advisors, etc.) sometimes I just have to pick up the phone and bother people in order to bring in business. While this makes me feel gross, every now and then I find someone who could really use my help and who, after working with me, has a measurably better life. This push and pull between ethical sales tactics and the goals we have to hit to keep our jobs is a constant that I do not believe will ever go away. And I argue here that there is no way to ethically market by using AI.

Perhaps I believe this because I see it as an intrusion into a sacred human space. It is sending a robot to spend quality time with my grandmother while I work. It is taking communion and receiving the blessing through a conference call. It is having software write love poems to my spouse with my name signed at the bottom.

Behind every interaction I have with a prospect, there is the innate knowledge that we both have to work to feed ourselves and our families. With that knowledge comes at least a modicum of empathy for the person on the other side of the interaction. When that humanity is emptied by the insertion of an artificial being, we break the social agreements of civilization: we do not remove the tools of existence from one another, so those tools are not removed from us.

This may be my own way of dealing with the realities of my corporate existence: if I am going to bother you and ask for your time and money, you are going to hear my own, human words. I am not asking AI for its business, and I will not allow AI to ask you for yours.

So far, every experiment I have witnessed that abandons the “real,” that organic sense memory, has failed. The world eventually seeks authentic interaction. While immediate results can be gained by cheap imitations, those who depend upon them have eventually found themselves outpaced by those with a more ethical approach.

So far.


D.C. McNeill: You’ve landed on the exact concept I’ve been struggling to articulate in this entire letter series: “The world eventually seeks authentic interaction.” At one of my very first software jobs my biggest client was my key responsibility - keeping this dude happy was paramount. I emailed this client multiple times a week. The last month before I quit, he called me out of the blue. To this day I’m unsure how he found my mobile number.

He proceeded to, in a friendly tone, explain all of the problems with our software, and why it added no value. I was so confused that I came out and asked the question: why do you keep using it and paying for it? He chuckled and said, well, you know how your boss can be. He’d never leave me alone otherwise. He described what sounded like a grueling sales cycle in which my boss just hounded and hounded until he signed. Nothing ethical about the sale whatsoever. But it was that human interaction, the actual phone call, that left him free to be honest with me. It may sound silly, but I don’t think anything can replace that electric sense of revelation that comes with human interaction.

Increasingly I find the only ethical sales technique is as you describe: recognising that the person on the other side of the deal is also bent to the wheel of work. Their job is to spend the least amount of money while extracting the greatest value, and my job is to produce a fair compromise. As much as fair is even possible under end-stage capitalism, that is: we watch genocide powered by Microsoft tech through our phones, American fascism becomes background radiation, and all the while the capitalist worries about shareholder value and brand equity while promising that AI is not about making roles redundant. They promise really hard, and maybe some of them believe it. But in a year, when the board demands the line go up more, and more, I wonder how strong that conviction will remain.

And so I ask you one final question. I struggle to see any future where this technology is not weaponised against the labourer. Where do you see this technology going, and what salve might we turn to when AI is used to break union movements and worse?


E.S. Anderson: As I try to imagine the future my children will inhabit, I do find it difficult not to inherently picture it as worse than today. I do not think that this is useful and I force myself to approach the future with optimism. After all, my parents did not have a personal home computer when I was born. They did not use credit or bank cards. All bills were paid by a written check in the mail. When I was born, there was no gender reveal party or Facebook announcement. Change is inevitable and the future is a foreign land.

As with any new development, I think it is difficult to predict the final application of a tool. This happens in the medical field all the time. Viagra was discovered by accident during research on blood pressure. Viper venom is used as an anti-coagulant. An ingredient is only banal until it can be applied in a new way.

My favorite quote on AI is by X user Joanna Maciejewska: “You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes” (Mar 29, 2024).

My best friend from childhood is a family doctor. The medical practice that employs him wants him to see twenty patients a day. But when it takes him 15-20 minutes to write up the interaction after each appointment, and natural human interaction often pushes his appointments past the allotted time-slot, he finds himself completing paperwork into the evening many nights. If we could apply AI to assist with the paperwork without increasing the doctor’s workload, it would increase the quality of care and the work/life balance of the physician. The tool can be used to increase human interaction, rather than detract from it. But in order for that to be accomplished, the executive, or organization, or society in general has to assert that the increase in quality is more important than the increase in revenue.

Now, will the labourer suffer at the digital hands of AI development? The Science-Fiction author in me wants to suggest that the very Executives who are ordering the implementation of AI will soon find that their positions are actually the easiest to outsource to software. Let the AI push out that corporate babble-speak memo on sustainable partnership. Let the AI develop the bold new sales plan with a catchy branded slogan to motivate the workers every January. The Executives are already asking ChatGPT for ideas today; let’s just cut them out of the picture.

Middle Management is the most at risk because they have risen above the lever-pullers and button-pushers who actually move a business, but they are not yet high enough to be considered “innovative thinkers.” They have not personally taken on the risk required of upper management and don’t have the correct last names to move into Ownership. Perhaps they will be the first to go.

Our power grid will need to look completely different. First, there will be a lot of new jobs as we develop and build this physical technology to power our shiny new digital tool. This process will be painful and slow and will cause a lot of xenophobia about East Asia, because they’ll beat us to a more efficient system that can handle the extra workload. But eventually, like the internet and the cellphone, we will have global systems that work for everyone.

The importance placed on the arms race for a self-driving vehicle actually makes me hopeful. I do not think that these mega-corporations understand what is being built, or how much of the American economy is dependent on fast-fashion car ownership. The ramifications are endless. A self-driving car will cost about the same as a normal one. Its introduction will do nothing but hurt our economy by reducing accidents, body work, new vehicle purchases, medical claims, insurance policies, and frivolous lawsuits. Fewer manslaughter cases will come to court and fewer tax dollars will be spent housing prisoners. No more driving schools, DUIs, or car washes. After enough years, I predict that driving your own car will limit your ability to litigate accidents and make insurance rates completely unaffordable. The quality of driving will go up, while profits across industries will go way down. This will cause a general economic collapse, as we have spent 150 years building society around a personal vehicle, driven poorly.

I suspect that the answer to the AI question will not be developed by Big Tech. They are trying to reduce headcount and streamline operations. The world will be changed by disrupters in their garages, who are finally able to apply this tool to the needs of the average person. They are the New Billionaires and they are building an affordable, efficient, AI-driven dishwasher, so I can spend my time writing bad poetry.

E.S. Anderson is the co-host of Diamonds in the Rough Draft podcast and author of Science-Fiction/Fantasy titles for children and young adults.

ZeroIndent is an independent, reader-funded publication. Consider supporting us on Patreon to unlock exclusive content and behind-the-scenes info.

David McNeill

David McNeill is the author of Maynard Trigg and editor-in-chief of ZeroIndent. He's a dedicated storyteller with a background in literary analysis and comms.
