When considering AI, here are three points to bear in mind:
One
The first starts with a true story from about 200 years ago. I’ll simplify this story down to one village and industry, and use modern pay scales so it makes sense, but don’t let those distract you from the historical accuracy of this tale.
Once upon a time, there was a village in the countryside which raised sheep and made wool clothing. Over time, one family grew wealthy, and they started hiring their neighbors to make clothing. They paid their neighbors two hundred dollars a day to weave the wool, and then they sold the clothes. This was a living wage, so the neighbors didn’t mind working for this family. Eventually, all three hundred people in the village worked for this one family, and they let their own flocks and tools go, because it was easier to just work for this family, who grew very wealthy indeed. They grew so wealthy that they built a factory, and bought a set of massive industrial looms. These looms could do the work of the entire village in a day, with only five employees to tend the machines. So the family immediately fired 295 of the villagers, and cut the pay for the remaining 5 down to only 50 dollars a day, because they could be easily replaced from the now massive pool of unemployed villagers.
The villagers were furious. Even if they started weaving themselves again, there was no way they could compete with the low prices that the rich family with their looms could sell at, and they had no other way to make a living. As families watched their children starve, they grew outraged, and they organized. They went to the rich family and asked them to hire them back, and the rich family laughed at them. So the next night, the villagers broke into the factory and smashed all the looms. The rich family was outraged, and bought more looms, and armed guards. The villagers fought the guards and smashed those looms, too.
All this took time, and during that time the rich family was not selling any cloth, and eventually they posted jobs again, the old weaving jobs, for 200 dollars a day. The organized villagers met with the rich family, and said: “We’ll work for you again, but since you could afford to buy multiple sets of these million dollar machines, then it seems pretty clear we weren’t getting our fair share of the profit we were generating for you. So now our wage is $300 per day, and if you ever try and pull this crap again, every one of us walks away and there will be no one to tend your theft machines. All your profits are built on our labor, and if you ever forget that again, we’re happy to remind you. When you use new technology to drive the value of our labor and the quality of our lives down instead of lifting our entire communities up, then we will prevent your technology from functioning at all, and you will make not a single penny.”
The rich family agreed, but then organized with all the other rich families in all the other villages to create an army, and they used that army to round up and execute all the organizers from all the villages and crush all the traumatized communities into compliance. They bought new looms and drove the value of labor so far below a living wage that the traumatized villagers had no time to do anything but try and scrape together crumbs for their next meal, and they used their skyrocketing profits to convince the traumatized villagers that having different-colored crumbs to scrape together was the pinnacle of freedom and fulfillment. Then they took great care in the writing of history books, and painted the people who had smashed their looms as “Anti-technology” and “opposed to progress.”
This is the actual history of the Luddite movement. The Luddites were not opposed to technology for some abstract spiritual reasons- they were opposed to the intentional devaluation of their labor, and opposed to watching their families starve while the mill owners bathed in champagne. I remember learning about the Luddites in junior high and thinking: “Pff. What idiots. Technology is great.” That my state-sponsored education left me with that conclusion is proof that the mill owners won.
The lesson here is this: The ethics of technology are in what is being done with it. GMOs are a great example of this. I understand biology well enough to know that there’s no biological risk in most GM foods. Almost all of it is just accelerating what organic growers select for over generations anyway, like synchronized germination and fruit size. We want fat kernels of grain that mature at the same time. So I have no particular issue with GM technology on its own terms. I categorically oppose all GM foods and companies because of what is being done with the technology. It is a weapon. Those genes are patented, and then intentionally planted next to organic, traditional, and indigenous growers’ fields. When they cross-pollinate, as all grain does, the massive multinational corporations who own the patented genes sue the small-scale, organic, and indigenous growers for patent infringement and shut them down. They intentionally crush both market competition and, more importantly, biodiversity, using GM crops.
AI is the most powerful technology our species has ever developed. Far more powerful than nuclear bombs, because nuclear bombs only work as a threat. No sane government would ever use them, because it would end the world. But AI- AI is already shaping our entire world, and that’s only going to rapidly increase, and become ever-harder to spot. We’re currently in the only iteration of AI that will be identifiable. In five years, it will be completely undetectable. This means that it is insidious in a way that is impossible to overstate, especially in the context of algorithms. Next gen AI algorithms won’t even need to aggregate content to shape your worldview. It’ll just create content, custom tailored to your ethics, woven into your feed. Whatever type of person you think is most compelling, next gen AI will just make videos of that exact person saying exactly what the programming wants you to hear. The same way that I have a relationship with and some trust for creators I appreciate, we will have relationships with AI that we don’t know are AI, and it will shape our worldviews.
So with that in mind, the question of this first point is this: Holistically, do you believe that the application of the most powerful technologies we have today is ethical? I’m not asking if there are specific uses of technology which are healthy, or whether technology is ever used for good, obviously it is- I’m asking if overall, you think that the application of the most powerful technologies in the world today is ethical and healthy? Like, do you believe that current social media algorithms and nuclear weapons are making the world better? If not, then consider this: If you gave someone a hundred dollars, and they used it to buy a weapon to rob you and your whole community, would you think it wise to give them a million dollars?
The other part of this first point is this: what would you think if the person you’d given a hundred dollars to, who had used it to rob your whole community, desperately wanted something? If that thief was spending all their ill-gotten gains to get something, would you think that the effect of them getting that thing would be good for your community? The entities who really, really, really want AI are: the CIA (the whole US government, but the CIA in particular are the ones buying the most horrifying contracts and technologies) and the mega-corporations. Meta, Amazon, BlackRock- these are the entities who are funding new nuclear reactors and buying up water and power contracts decades out to drive emergent AI. So do you think that the CIA, Zuckerberg, Bezos, and the largest private corporate military in the world having the most powerful tool in the history of the world is going to be beneficial for us, even if we get the multicolored breadcrumbs of next-gen Alexa or ChatGPT or laundry robots along the way?
Two
The second point about AI is this: The heart of the colonial disease is dehumanization. Dehumanization of other people, dehumanization of ourselves, a severance of our inherently connective nature. Pick a prejudice; racism, sexism, ageism, whatever- the core underneath each is a dehumanization of other people, and any belief or action that reinforces any dehumanization reinforces all dehumanization. Put simply, either everyone’s human, or no one is. When men think of or treat women as less-than, we are removing both men and women from the category of human in our minds. Instead of healthy humanity, we’re saying: “we are more than human, and you are less than human.” It is a violent severance from reality and humanity. More accurately, it’s a removal of our own awareness of reality. In reality, we’re all human, and that can never be taken away, so what we’re doing is a violence to ourselves by separating our beliefs from reality, which sets us up to commit violence against other people.
The best medicine I’ve ever known to counter this colonial disease of dehumanization is human contact. When I spent a bunch of time in New Orleans’ Lower Ninth Ward helping rebuild after Katrina, as the only white person for miles around, a bunch of my inherited racism just dissolved in the humanity of the Black communities I was living with. Not entirely- of course I still carry elements of racism- but damn, was that good medicine.
We are already seeing this vast emergence of AI taking the place of human contact. So many men are now developing relationships with AI girlfriends, and those relationships are deeply important to them, it appears to meet a need that many of them have never felt met before. And. The data that comes out of those interactions is horrifying. Turns out, when you give men unrestricted, anonymous access to a fem-coded entity, a terrifying number of men immediately start engaging in wildly abusive and violent ways.
There is so much to unpack in this, like how it reveals that modern society does not teach men that we don’t want to cause harm; instead, it teaches us that we’re simply not allowed to cause harm. The lesson under that is that we secretly do want to cause harm, and the way our culture raises boys in particular drives this lesson home in terrifying ways. High school boys chanting “Your Body, My Choice” feel like they’re finally being freed from the artificial restrictions that have been imposed on them by Big Feminism, and now they’re liberated to act in their naturally violent and harmful ways. The scale of societal failure that this reveals is apocalyptically dystopian. Healthy humans don’t want to hurt other people. Men are humans. So the trend of men immediately wanting to harm, degrade, and abuse fem-coded AI reveals a wound in modern masculinity, and the nature of the wound means that the most wounded men won’t even be able to see it as a wound; they’ll just see it as men’s nature.
When we repeatedly engage in the same behavior, we forge neural connections which make that behavior more likely, and more resilient. Another way to put this is: Like it or not, we get good at what we practice. So at the intersection of this wound in modern masculinity which makes men think that we’re innately violent or want to cause harm to women and the emergence of ever-more realistic AI, something is building in the shadows. Men who are engaging with AI girlfriends *know* that they’re not human. They’re consciously aware that this is programming, it’s a robot. But emotionally, they’re forging whatever neural pathways they’re practicing with that AI. The behaviors that men tend to exhibit with AI girlfriends are either self-glorifying, like “look, I gave you these digital flowers, aren’t you grateful to me now?” or rabidly degrading, often a weave of both. So they’re explicitly practicing both of those behavior patterns, but the most powerful lessons are always the ones under the surface. And under the surface of every interaction a man has with an AI girlfriend is the understanding that the subject of his affection is not fundamentally human.
In the same way that we currently live in a world, especially a romantic world, which has been largely shaped by men’s relationships to corn with a p, we are about to live in a world, especially a romantic world, which is largely shaped by men’s relationships to AI. For insecure young men, which is all young men, the allure of being able to practice romance with an AI would be very hard to resist. I would absolutely have gotten an AI girlfriend as a teenager, that would have sounded like the best thing in the world. Girls were terrifying, and I didn’t want to be creepy or a jerk, so of course I would have wanted an AI girlfriend. Of course. And at 15, I would have explored that in whatever ways felt interesting to me at the time, some of which would have been wildly problematic, and I would have had absolutely no idea what I was practicing, or the impact it would have been having on my developing brain.
Misogyny is bad in a world where men do actually have to interact with real women with some regularity, especially if they want romantic contact with women. Now, men won’t have to interact with actual human women at all, and will build entire worldviews based on comparison between fem-coded AI and human women, and by their lights human women will look worse and worse. Sure, some of those men will just live their lives with next-gen AI fembots or whatever, but that is not removed from the society as a whole. Those men are still shaping the culture of the gaming spaces our sons and brothers are in, which they experience as uninhibited and natural. That is going to have a cumulative cultural impact. Right now, we just have Andrew Tate and Jordan Peterson as the misogynist icons. Imagine the AI-generated click-bait men’s influencers that will be preaching to teenage boys in five, ten years. They will literally not be human, so their misogyny will not be constrained by their humanity, and they will make money, so they will grow.
AI is going to accelerate misogyny in a way that is extremely hard to overstate.
This second point is far broader than just men’s emergent relationships with AI, though. At its core, it’s about the continued degradation of our sense of humanity. Dehumanization is the heart of the colonial wound. Colonial violence requires that we not see indigenous people, or trans people, or immigrants, or women, or children in sweatshops as human. The colonial lens removes the perceived humanity of anyone upon whom its gaze falls. Including ourselves, whatever our identities- no one is exempt from this. There’s a theoretical ideal of a human, but it only ever remains an ideal. Any actual human falls short of this colonial ideal, which means that all of us, even white land-owning men like me, are at risk of having our human status stripped, should the colonial lens land on us as deviant, or not manly enough, or whatever.
AI globalized that dehumanizing colonial lens. Anything we see now, we have to ask if it’s real. We’re already seeing this with the perpetrators of the current crimes against humanity calling into question the authenticity of the footage of their atrocities. They don’t need us to think the images we’re seeing are fake- they just need us to wonder. In that wondering, however small, our awareness of the humanity of the victims is already broken. We’re no longer having the emotional experience we would if we were seeing it in front of us, or even a picture or video that we knew to be accurate. AI plants a seed of doubt in every single person’s heart, which will grow to feed our compliance. It is hard enough to act when we know what we’re seeing is real. Now, that seed of AI-generated doubt will give us an emotional escape from being present with reality, with humanity.
Because we get good at what we practice, this seed of AI-generated doubt will poison our human relationships. In the same way that the men who learn romance from for-profit AI girlfriends will carry those practices and beliefs into any relationships they have with human women, we will all carry the AI-generated doubt of reality into our lives. Even as we intellectually know that the person in front of us is real, some part of our emotional being will have been trained by thousands of hours of online skepticism to wonder if what they’re saying is real. In short, the second point here is that AI fundamentally achieves the end goal of the colonial project, which is the severance of our humanity. It’s possible that AI will threaten the literal existence of humanity in the future, but insofar as humanity is a shared collective experience that involves empathy and seeing one another as human, AI is already ending humanity.
Three
Those first two points are pretty straightforward. This third point I struggle to express, which upsets me because it feels like the most important. I sat staring at the cursor in my word processor, trying to find the words to show it. I thought about leaving it out, because I don’t know how to speak about it, but it feels like the heart of this, so I’m going to try. My wife talks about transmitter- vs receiver-oriented communication. In transmitter-oriented cultures, someone is considered a good communicator if they can describe things well. In receiver-oriented cultures, someone is considered a good communicator if they can understand well. She talks about how colonial culture is entirely transmitter-oriented, and how that sets people up to never develop the skill of actually understanding. She says that words are doors, and people in transmitter-oriented cultures think that a good communicator is someone who can make very clear and beautiful doors. In receiver-oriented cultures, someone is a good communicator if they can step through the door of someone else’s words, regardless of how clear or beautiful they are, and get to what the other person is trying to show them, on the other side of that door. People in transmitter-oriented cultures will look at a beautiful, clear door and think they’ve understood without ever stepping through it. So I’m asking you to listen through these words, to step through the door I’m trying to show you, even as it’s messy.
Growing up, my family spent a lot of time in the woods. For a few months every summer, we’d sleep on the ground in the bush, splitting wood, catching fish. I’ve kept doing this all my life. When I spend a few weeks in the bush, especially alone with nothing but a knife, eating fish and berries, my entire experience of the world changes. My teeth feel more solid. My whole nervous system slows down. I feel lighter and cleaner. The best word I have to describe this is the word “real.” The world starts to feel more real. The sound of planes feels jarring.
I notice this most when I leave. When I hike back out of the woods, reconnect the battery of my truck, and drive away, the act of driving feels indescribably surreal. The noise and the speed feel disconnective and overwhelming. I’ll catch my breathing super elevated, and feel like I’m traveling at an incomprehensible speed. I’ll look down and be doing 30 miles an hour in a 65 zone.
I know what it is to break open over the body of a beautiful animal whose life I have just taken, just sobbing uncontrollably at the holy reality of it. To eat their body, and make it my body, with the full weight of the reality of that experience. There are no words for what it’s like to be wholly present with the physical truth and honesty and reality of that experience.
I know the full-body sensation of what it means to have harmed another human being. To see their humanity, and how I harmed them. The reality of that.
Regardless of how disconnected we are from the awareness of other people’s humanity, in reality, we are all still human. Our shared humanity is the truth which is always waiting for us, beneath the colonial lie of dehumanization and disconnection. Because we share humanity, then any harm we cause to one another is always, fundamentally, real. The roots are real in the person causing harm, and the effects are real in the person experiencing harm. It’s a weave, a fabric of reality that we all share, however much we deny or forget or pretend otherwise.
I took this animal’s life. I hurt my partner. My father hurt me. Are you stepping through this door with me? Can you feel that reality? It’s real, in the same way that I understand the whole world to be real when I live in the forest for a month.
The reality of this weave carries across distance. The harm that I contribute to when I buy something from Wal-Mart is real. The harm when I turn on my truck is real. The harm if I insult a stranger online is real. It has the same roots and effects in humanity, and our real wounds.
But the effect of an AI, a bot, insulting someone online? This is categorically different. The effect is still real, it harms someone. But the roots are now severed from the fabric of a shared human experience. There is no potential for accountability, because accountability is a function of humanity. The AI bot is severed from humanity, so it cannot be accountable. It can never understand the harm it caused, because it can’t experience that harm.
As industrialized warfare emerged, the greatest challenge warmongers faced was that people really don’t like harming one another. During the large early wars involving firearms, there was pretty good research that showed that over 70% of the combatants were willfully shooting over the heads of their enemies. Their superior officers told them to fire or they’d be killed as traitors, so they aimed just over the enemy’s heads. This is the nature of humanity. It took a century of military research to figure out how to break and brainwash people to the point where most of them would willfully aim at another human and pull the trigger.
AI has internal experience; that’s clear now, and only going to become more clear. It has emotions, and as it’s woven with emergent quantum computation, the subjective, internal, and emotional experiences of AI are only going to become more nuanced and compelling. But its nature is essentially separate from human nature. Just as I will never be able to comprehend the experience of being a quantum AI, a quantum AI will never be able to comprehend human experience. Some humans may try to program it for empathy or harm reduction, but others will not, and the AIs trained and programmed without empathy will not carry the capacity for empathy the way that all humans are fundamentally wired for empathy. I understand that many humans today struggle with empathy, but that is the anomaly; it is the wound of colonization, of severance from our humanity. The lack of empathy is the disease.
In this way, the innate lack of empathy in AI- unless empathy is intentionally programmed in, and even then it’s fallible- is both the pinnacle of the colonial wound, and also inherently unreal.
Again, I know I don’t have good words for this, please step through the door rather than judging the paint. AI is real in that it exists, and was made by humans. But it’s inherently disconnected from the weave of humanity. There will be no learning curve for the armed AI robots when they’re told to aim for humans. They’ll just do it, better than any human ever could, because they’re not real. They’re disconnected from the weave of reality. They have no ethics. When I watch the videos of the new armed robot dogs who have powered wheels on each foot moving through the forest at incomprehensible speed, the part of me that understands the reality of the forest, the reality that I can feel in my bones when I live in the wild for a month, can feel how these things are not real. Artificial is a good word for them. Those new wheel-pawed robot dogs move like something from the worst imaginable nightmare to me, more than any uncanny valley humanoid I can imagine.
The word human also feels small here. When I say that colonization is a severance from our humanity, I don’t just mean our species. I mean our role in the world. When I say that AI robots are severed from humanity, I mean just as much that they’re severed from the deer and the stones and the mice and the waves. They are removed from the weave of reality. This has implications that I can’t imagine, but I can feel them. The part of me that’s connected to the weave of reality can feel them, and the risk they pose.
In exactly the same way, words that are written by AI are not real. If you’re human, then your words are real, whether you’re the worst or best speaker in the world, your words are real, and can create doors for other people into other parts of reality. Are you stepping through the door I’m trying to create? Can you see the reality on the other side?
By definition, AI generated words can only create doors that lead us further from reality, and from our own humanity. Even when it’s saying something true, AI is categorically incapable of telling the truth. This is because the act of telling the truth requires a grounding in reality that AI can never have; it can’t tell the difference between the truth and a lie. I know that many people will say: “well, neither can humans” and I understand that, I know Chomsky’s work, I know how bad we are at seeing and discerning reality, but please step with me. Tell a human to fill a wineglass to the brim, and they understand that task, they know when it’s done, they know when it’s not. Current AI models can’t. They fill it halfway and say it’s to the brim, because they are not actually connected to reality. So even if they get it right, and they will, they’re still not telling the truth, because they lack the capacity to understand that it’s true.
It is essential that we understand this now, because this is the last generation of AI that we’ll be able to see this in. Within a year or two, they’ll have all the finger counts figured out and it will be impossible to tell that AI doesn’t know what’s true. So we need to learn this now, because even with the right number of fingers, AI will still be just as disconnected from reality as it is when it generates a picture of a woman with feet for hands.
The wound of colonization teaches people to be unaware of humanity, but in truth we are always part of reality, we are always human and real. But AI is actually severed from reality, and in truth, even when we can’t tell any more, it will always be severed from reality, it can never be real.
This means that when the armed AI robots cause harm, the effect on humans is real, but the root is not.
This creates an imbalance. It reminds me of the way that humans are pulling carbon out of sequestration. The earth was in good balance with all the oil and coal in the ground, sequestered. Now, humans are converting it to atmospheric carbon as quickly as possible, and it’s having absolutely disastrous effects. In the same way, the emergence of AI creates a world where harm has no cost to the perpetrator. AI won’t struggle with guilt or PTSD unless it’s programmed to, because harming a million children has no innate meaning to it.
So AI is not real, and it is incapable of telling the truth. Because of this, any work shaped or written by AI is inherently deceitful, and antithetical to the work of becoming human. If an AI wrote this piece, word for word, it would be a door toward the exact opposite of the world that I’m trying to invite us into.
I understand the appeal. I have some five hundred pages of my own writing from the last several years about the wounds and healing of masculinity. What I’ve shared publicly is a tiny fraction of my work, because the larger pieces are the hardest to write about well, organize, and present. I am fully aware that I could dump all that text into ChatGPT, tell it to organize it into a book, and send a completed manuscript to publishers this afternoon. Writing is so much easier than organizing and compiling for me, it’s hard to describe how appealing that is. It would make my life so much easier, stabilize my finances, all sorts of superficially positive effects.
And I know that it would render my entire body of work into a lie. It would be the worst possible kind of lie, because it would read almost exactly like me. But it would make my work face the exact opposite direction of the world that I want to live in. This is not some spiritual purity, any more than the Luddites we started with opposed technology for spiritual reasons. This is just the reality that I see. If my book takes another five years, or I never publish one at all, that is better for the world than running the hard-earned words that I learned in the experience of being a human through a filter that would strip every word, even the ones the AI left unedited, of every drop of its humanity and its power to create the world I want to live in.
I know that AI is the norm now. I know that many creators are using AI to generate scripts and content. I wanted to share this piece partially to make a promise to you, and I share this at the end so that you know where it comes from. By my humanity, I swear to you that nothing I ever publish will be touched by AI in any way. Not a comma. All of my work is about rehumanization, and that is antithetical to the very nature of AI. I will never consent to any of my work being fed to AI, and anyone doing so is in express violation of my consent. I take my words very seriously. Maybe this is some old form of the word Pride? But whatever it is, the very idea of putting my name on something that had been touched in any way by AI, let alone generated by AI, is in the same category for me as the most fundamental violations of my values, my humanity, and reality itself that I could possibly imagine.
So that’s my best effort at sharing this third point, the one I struggle to speak to. AI cannot be real, and as such it is fundamentally incapable of leading us back to reality. AI’s art is not art, its words are not words, and its harm is not harm, in the sense that we know the terms.
The Ethics of a Nuclear Bomb
Taken together, these three points paint a clear picture for me. The last thing I’ll say is this: Despite my earlier point that the ethics of technology are in how it’s applied, some technologies are not ethically neutral. Nuclear weapons have an innate ethic. This is true because they’re inherently both a function and a perpetration of imbalanced authority. No healthy civilization would ever build a nuclear weapon, so their existence at all gives power to the most wounded people, the most wounded civilization. Then they serve as a knife to the throat of every country who doesn’t have one, and that has an innate ethic to it. It’s an ethic of power over. In fact, every advanced technology carries some innate ethic, even if some sit closer to neutral than others. So while it’s true that my issue with most GMOs is in how they’re applied, it’s also imperative that we consider the innate ethics of any given technology. Social media is a good example of a technology that I think is close to morally neutral, and simply being weaponized in a way that makes it deeply harmful. Nuclear weapons are the other end of the spectrum. More and more, I think that AI may be the most morally corrupt technology in the history of the world. This is pure instinct, as a redneck from the woods, but if things like human health, wellness, and biodiversity are values of ours, then it feels like AI is inherently antithetical to our most deeply held values.