Tech guru Jaron Lanier on the dark side of AI: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’

Picture of Jaron Lanier

“To me, the danger is that we’ll use our technology to become mutually unintelligible, or to become insane, if you like — in a way that we aren’t acting with enough understanding and self-interest to survive. And we die through insanity, essentially.” (Jaron Lanier)

I was preparing my vacation reading list, as one does when faced with the overwhelming panic of having too much time away from work — also known as the Christmas break. My journey started, naturally, on LinkedIn and ended with me buying several Cal Newport eBooks. (Honestly, how many times can someone ask me if I’ve read his work, considering my research interests?)

Of course, I immediately started skimming the books. After grading so many student exams, do I even remember how to slow down and read anymore?

Somewhere in Newport’s writing, he mentions being inspired by the techno-philosopher Jaron Lanier. These days, you say “techno,” and I say, “I need to read that.”

Long story short, I found myself looking up Lanier’s books and articles. Beyond feeling an existential sense of insecurity about my writing (do all writers feel this way?), I stumbled across what might be the most fascinating website I’ve seen in years. A simple one-pager — reminiscent of the early 2000s internet — containing a lifelong body of work. Do you really need more than that when you’ve written multiple bestsellers and had your portrait taken by Joseph Gordon-Levitt?

Picture of Jaron Lanier

Needless to say, I’ll be reading Lanier’s books. And Newport’s. Just so I can finally say “yes” the next time someone asks.

While reading Lanier’s articles, I found an incredible perspective on AI. As you might have noticed, I’ve largely escaped the AI conversation. Why contribute when I don’t know enough? But I love how Lanier describes AI:

“This idea of surpassing human ability is silly because it’s made of human abilities.”

He compares the AI-human relationship to that of a car and a runner: “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”

BUT! Lanier warns that the real danger of AI is that we will die of insanity:

“To me, the danger is that we’ll use our technology to become mutually unintelligible or to become insane, if you like, in a way that we aren’t acting with enough understanding and self-interest to survive. And we die through insanity, essentially.”

The article continues:

“The more sophisticated technology becomes, the greater the damage we can do with it — and the greater our ‘responsibility to sanity.’ That is, our responsibility to act morally and humanely.”

To this, if I may, I would add that we need to act responsibly and sustainably.

This reminded me of a post I wrote the other day about how hard it is to learn a new language and be a researcher in a foreign country. But then again, isn’t that struggle part of how we avoid becoming a homogeneous mass of content producers, feeding algorithms?

Perhaps the answer lies in variety — a plurality of voices, cultures, and perspectives. Not everything needs to be standardized to look like everything else. Right now, we’ve standardized beauty, family structures, success, and productivity.

But maybe, just maybe, our humanity lies in the unstandardized — in what makes us unique. The other day, I read an article comparing genAI translations with human ones, and the human translation was more creative and flowed better.

The best way to capture the difference between generative AI text and something I write is to compare Coca-Cola Zero to honey. Coca-Cola Zero is synthetic, engineered for instant gratification, a quick burst of something sweet yet hollow. Honey, on the other hand, carries weight — it moves with a slow, deliberate fluidity, rich with the history of nature. It reflects the tireless, ancient labor of bees, a resonance with something real and enduring. One is an algorithmic mimicry; the other is the result of life itself.

Links:

Jaron Lanier’s website

The Guardian article: The danger isn’t that AI destroys us. It’s that it drives us insane.

Raluca A. Stana, PhD

Raluca Alexandra Stana is an Assistant Professor in technostress and sustainable digitalization at Roskilde University and holds a PhD in technostress in organizations. She is the researcher and lead author of the book "Digital Stress in the Workplace" and has published studies in leading journals and conferences on technostress. Her research spans various contexts, including private companies, hospitals, the maritime sector, and universities. Raluca is recognized as a leading expert in the field and is frequently invited as a keynote speaker at conferences in Denmark, Norway, Romania, and Kenya. She speaks fluent Danish, English, and Romanian.
