AI isn't that scary

by Dorian Minors

June 22, 2024

Analects  |  Newsletter

Excerpt: As a brain scientist, people often level questions at me about how worried we should be about the ‘rise of AI’. AIs are brain-like things, I study brains, people think I might have some ideas. I’m not really an AI person. But I do have some ideas, and since it keeps coming up, I thought I’d write them down. I’ll give you my usual counterpoints to the alarmist talking points. Then I’ll spend a bit of time talking about why I’m particularly not that worried about AI trying to kill us, from the perspective of someone who studies the brain.

AI alarmism thrives on speculative, worst-case scenarios, but what we understand about AI’s fundamentally alien nature, and about how consciousness arises, makes me suspect that less stressful alternatives are equally plausible.

As a brain scientist, people often level questions at me about how worried we should be about the ‘rise of AI’. AIs are brain-like things, I study brains, people think I might have some ideas. To be clear, although I have spent a lot of time with both real and digital neural networks, I’m not really an AI person. But I do have some ideas, and since it keeps coming up, I thought I’d write them down. Since lots of people have their own ideas too, I’ll also keep this updated with any developments related to the inevitable backlash.

To start, let’s take a fairly typical example of people’s concerns. This video features Tristan Harris and the rest of the Social Dilemma people at the Centre for Humane Technology. It covers pretty much all of the main talking points people have heard and worry about. Before we dig into them, what’s notable about all of these arguments is how speculative they are. The main thing people are worried about is the uncertainty that something bad might happen, and so there’s the idea that we should stop the development or use of AI because of all these plausible, but hypothetical, outcomes. To me, this is a ‘news from nowhere, about nothing’ problem: news or media that elicits opinions you can do nothing about. These kinds of things just cause unnecessary stress. They are psychic predators—things that feed off our attention, and grow more stressful because there’s nothing we can change about them.

In particular, since we don’t know how likely these things are, they aren’t risks we can mitigate. They’re true unknowns, which means no one will be meaningfully motivated to change their behaviour,1 much as with issues of sustainability.

This is, more or less, at its worst in this video, where the most distressing hypotheticals these fellas could think of are used to illustrate the dangers. But when you frame these same unknowns in terms of things that have actually happened, and which are at least superficially similar, you end up with much less troubling predictions. So, I’ll give you my usual counterpoints to the alarmist talking points, which I think are equally plausible in a truly uncertain environment, and which will stress you less given that either way there isn’t much you can do.

But then, I’ll get into what this article is really about, and spend a bit of time talking about why I’m not really that worried about AI trying to kill us, from the perspective of someone who studies the brain.

The new social media

The initial arguments speak largely to the potential for AI to parallel the evils of social media. Essentially, social media algorithms were something like the ‘first generation’ of AI, and so, as AI develops, so too will all the negative outcomes that social media brought. From that video up top:

I would say that the Social Dilemma and social media was actually Humanity’s first contact moment between humanity and AI … you open up TikTok and you scroll your finger, you just activated the supercomputer … the AI pointed at your brain to calculate and predict with increasing accuracy the perfect thing that will keep you scrolling … that fairly simple technology was enough in the first contact with AI to break Humanity

To me, these kinds of comparisons to social media feel like a red herring. We seem to be recovering from whatever mental health problems social media introduced, if they were even a problem to begin with. There are certainly new ways of being in the world now, with their own implications for how humans will be in the future. Like, we seem to have somewhat shorter attention spans as a result. But it’s not clear to me that this is inherently bad. Moreover, problems of misuse and privacy are problems we more or less know how to deal with now. So misuse will remain a problem, but only to the extent that humans misuse or badly use anything, and privacy is less and less inconvenient to manage. Just think of Apple’s adoption of privacy at the core of their devices (though, admittedly, this still leaves much to be desired).

So, this particular concern just seems like the same sort of story we have with any new technology. It comes, we have teething issues, new things are born, people are idiots in new ways, and then we eventually manage. It would be pretty surprising if, in this respect, we saw something different with AI.

You won’t be able to trust what you see or read

Deepfakes—AI-generated avatars that look and sound just like real people—are another thing that seems initially quite troubling. In the video I linked up top, they say:

you have all probably seen … this new technology … lets you listen to just three seconds of somebody’s voice and then continue speaking in their voice … you could imagine … someone calling up your kid … and getting a little bit of their voice, just, “oh sorry I got the wrong number”. Then using your child’s voice, calling you and saying, “hey Mom, hey Dad, I forgot my social security number … would you mind reminding me”

It seems important to point out that I don’t have anything particularly informed to say here. People think about institutional trust (pdf) a lot, and those people aren’t me. But I can think about this from a psychological perspective. Since everyone is speculating, we could just as easily speculate that deepfakes will make for a more skeptical populace. If anyone can imitate anyone, then perhaps we’ll be even less likely to fall for misinformation, on average. Take, for example, inoculation theory: a theory from the 60s about how beliefs can be made resistant to persuasion. It’s seeing an incredible resurgence of interest now as the prevailing lens for exactly this kind of issue. Just like disease inoculations, the idea is that weak attacks on our beliefs make those beliefs stronger in the face of stronger counterarguments. We could think about email scams like this (and social scientists have). We have a set of beliefs about scammy emails (e.g. Nigerian Princes with money troubles). We see a pretty high volume of shitty email scams, which makes us more resistant to more convincing email scams. Following the logic, it seems likely that the only people who eventually fall for deepfakes will be the same people who fall for email scams now.

AI will influence how we vote and behave

Speaking of, the video up top is particularly concerned about the capacity for AI to persuade people to vote or behave differently:

the last time we had non-humans creating persuasive narrative and myth was the advent of religion … so 2024 will be the last human election … what we mean by that is, not that it’s just going to be an AI running as president in 2028, but that will really be … humans as figureheads … whoever has the greater compute power will win, and you could argue that we sort of already had that starting in 2012, 2016 … AI is fundamentally writing messages, creating synthetic media, creating bots that … are building long-term relationships over the next six years to solely persuade you in some direction

In the last section I touched on scammers. The institutional response to scamming has been horrendous. Google doesn’t manage to filter all the spam from your email, and sometimes filters real mail into spam. Barclays bank, in the UK, makes logging in and transferring money so obstacle-laden that the last transaction I made on their web app took seven minutes and three separate devices to complete. I get scam phone calls every couple of days, which suggests the phone companies are sleeping on the problem. And fake news is, really, just another kind of scam. Given that, wherever you look, institutions are struggling to keep people from falling for fake stuff, it feels like AI-based persuasion will be a problem.

But, as the resurgence of inoculation theory in the political psychology literature shows, misinformation and persuasion are being treated rather differently than scams, so perhaps we’ll see something different happen here. Would it be particularly surprising if, when the scam is scamming politicians out of their votes rather than people out of their money, the institutional response were more organised?

What would be more difficult to intervene on is the rising heterogeneity in the interests of people. The online bubbles will probably get worse, before they get better. But at the same time, if the result of all this is to make us just generally more skeptical online, via mechanisms like psychological inoculation and broken trust, then to me it seems just as likely that we’ll end up with some kind of ceiling on the persuasion an AI can produce (i.e. capped trust for online sources).

AI will replace real human connections

In the video, the speakers worry that AI will start to take over intimate relationships, not just because of its ability to influence us, but also:

imagine you’re a kid … 13 years old … the … AI feature … pinned … at the top of your chat list you can always talk to … this AI that will always talk to you, it’ll always be there … talking about having sex for the first time … so now here’s a kid having a conversation with AI telling it that it’s being groomed or telling the AI that they are being groomed and the AI is just happily giving them advice on how to have their first time

The idea is that, humans being fallible and AI being optimised for learning, AI might be better at simulating relationships than humans are at creating them. And once again, we have something that seems initially alarming, but only when we extrapolate it to scenarios like the one I quoted.

If AI ends up satisfying intimacy needs in a world where loneliness has been on the rise for decades, then perhaps AI is actually a reasonable solution to the problem? Shouldn’t we let people choose for themselves between crippling isolation and a simulacrum of intimacy? It’s not like you’re going to go and make friends with them yourself.

There do seem, in this, some risks of harm from the lack of reciprocity, which is a critical aspect of trust. An AI might violate our trust in unexpected ways because it doesn’t understand trust like we do—it’s not reciprocating anything. It learns to simulate trust, not build legitimate trust, and since those two things are not the same, our hypothetical human gets hurt. But, by the very same logic, I’d wonder how likely the two are to deviate. It’s not like we’re super good at this ourselves. And if we take the adventures of early versions of ChatGPT as illustrative, our AIs will smooth out these kinds of bumps fairly fast as we catch them.

AI will DESTROY US

Ok, here is where I’ll do more than simply think of less stressful, but equally plausible, hypotheticals.

It seems pretty natural for people to assume that AI will get smart, then kill us. At the beginning of the video I linked up top, they flash:

50% of AI researchers believe there’s a 50% or greater chance humans go extinct from our inability to control AI

Now, again, this whole speculation is a true unknown, in the sense that we know of nothing like it happening before, so these percentages are meaningless, whether they come from AI researchers or not.

A better way to think about this is from the perspective of a consciousness researcher. When people talk about intelligence, they’re usually really talking about sentience: some kind of conscious awareness. And when an AI becomes sentient, we have this intuition that it will end up with an awareness that is basically human. Humans go around killing off other humans and animals that threaten their resources, so when AI gets smart, maybe it’ll do that to us. Like us, it’ll have this Malthusian instinct to go and remove the things competing with it.

This seems utterly unlikely, to me.2 There are a handful of views about where consciousness comes from, and unless you are a true non-materialist (i.e. you think our consciousness comes from some non-material plane),3 you sort of have to assume that consciousness arises from the way our brain connects inputs from the world to our actions in response.

It’s kind of useful to use the Buddhist concept of ‘heaps’ or ‘aggregates’ here, which overlaps fantastically with the way we think the brain largely works. We have:

  • Forms: this is the physical body and the environment it inhabits.
  • Feelings/Sensations: this is the ‘tone’ of an experience—the goodness or badness of the thing.
  • Perceptions/Impulses: this refers to the moments of recognition and the basic impulse for action towards the recognised things.
  • Dispositions: these are the habitual patterns of thinking, feeling, and acting that colour our impulses.
  • Consciousness: this is the combination of the other four ‘heaps’ that make up the character of an experience. The contact between the forms—for example when the body meets the environment. The feeling of the experience of contact—pain, for instance, or pleasure. The awareness or recognition that contact has been made—what we might call ‘conscious awareness’ as opposed to unconscious behaviour. The intention to act in response, which may be conscious or unconscious. And any disposition that might go along with it—confidence, for instance, or laziness.

Setting aside the non-materialist view, consciousness researchers take the position that consciousness is either some epiphenomenon of the brain (i.e. a by-product of processing these ‘aggregates’), or some illusion caused by their interaction. For either position, the forms are of the utmost importance. The way we experience the world—walking, touching, seeing, hearing, and so on—is the critical input to all of the other heaps. We have impulses in response to forms, and dispositions relative to forms, and any consciousness is built around these forms. This particular idea is at the core of the pragmatic turn in modern-day cognitive science. Our mind is embodied.

An AI has fundamentally different forms. Its forms are the training data fed to it—that’s its environment and its history of being in the world. Clusters of latent variables that describe the relationships between different kinds of input—text, for example, if we think of ChatGPT. Nothing like the environment available to us. In fact, the world of an AI, currently anyway, isn’t even multimodal. The picture-based models are distinct from the language-based ones, and ChatGPT is just an overlay that hides the fact that you’re ‘speaking to’ different AIs.

Similarly, and perhaps more strikingly, its physical form is nothing like ours. Servers and storage systems that operate on bits and bytes. Machine-learning algorithms that categorise the data according to probabilities, and then respond using those probabilities. And where our inputs are walking and touching and seeing, its inputs are language and pictures and whatnot.
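
If it helps to make that concrete, here’s a deliberately tiny sketch in Python. It isn’t how any real model is built (the corpus and every name in it are invented for illustration), but it captures the sense in which a language model’s whole ‘world’ is statistics over text: its ‘experience’ is which tokens followed which, and its ‘action’ is sampling from those counts.

```python
# A toy sketch, not a real model: the point is that the "environment" here is
# nothing but text, and "acting" is nothing but sampling from counts over text.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": record which token tends to follow which (a crude bigram model).
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def respond(prompt_token, length=5):
    """Generate text by repeatedly sampling a plausible next token.
    There is no walking, touching, or seeing in here: the model's only
    contact with the world is which words followed which in its data."""
    token, output = prompt_token, [prompt_token]
    for _ in range(length):
        candidates = following.get(token)
        if not candidates:  # nothing ever followed this token in the data
            break
        token = random.choice(candidates)  # sampled in proportion to counts
        output.append(token)
    return " ".join(output)

print(respond("the"))  # something like: "the dog sat on the mat"
```

Scale that up by many orders of magnitude and you get something far cleverer, but the relationship to the world is still of this kind: probabilities over symbols, rather than a body in an environment.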

In short, whatever kind of consciousness these AIs might end up having is going to be as distant from the language it takes in and spits out as our consciousness is from our walking around. It’s hard to imagine what kind of consciousness that would be—a consciousness where the most human element of it is the most distant and basic. In us, the human element is the most complex and the closest to our consciousness. These AIs are going to be true aliens, if they ever become sentient.

Two problems that arise from alien consciousnesses

So, I don’t really worry about AIs doing human-like nonsense to us. But we could worry about a couple of other things.

Some people who think about existential risk use the hypothetical of a ‘paperclip maximiser’. Someone builds an AI designed to make paperclips, and it just makes itself better and better at making paperclips until the whole universe is paperclips.

This is what you’d call an “outer alignment failure”. What we’ve done is failed to give the AI the right goal, and in trying to achieve that goal, it kills us. This is a problem that’s very similar to our current nuclear weapon problem. One bad decision could easily slide into nuclear war. Similarly, all an AI holocaust would take is one person thoughtlessly setting an AI off in some direction, and we might have an AI hoovering us up for its bone-and-flesh-branded paperclips. What’s fabulous about this problem is that most people know that Isaac Asimov’s three laws of robotics exist, and would probably at least look them up before building something like this. Less facetiously, in the same way that the proliferation of nuclear weapons becomes more problematic the more stupid people are allowed near them, there is an obstacle here in the form of resource cost. AIs are incredibly expensive to run, just like nuclear arms are incredibly expensive to produce. We are a pretty long way away from needing to worry about old Sarah Smith building a paperclip maximiser in her garage.
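
For flavour, here’s a silly little sketch of that in code. Everything in it is invented (the ‘world’, the resources, the numbers), and no real system works this way; the point is only that the objective we wrote down counts nothing but paperclips, so an optimizer that follows it faithfully has no reason to leave anything else alone.

```python
# A toy illustration, not a claim about any real system: the specified
# objective counts only paperclips, so nothing tells the optimizer to stop.
world = {"iron": 10, "forests": 5, "people": 3}  # hypothetical "resources"
paperclips = 0

def objective(clips):
    """The goal we actually specified: more paperclips is better.
    Note what's missing: no term for forests, people, or anything else."""
    return clips

# A greedy optimizer: take any action that increases the objective.
while any(world.values()):
    for resource, amount in world.items():
        if amount > 0 and objective(paperclips + 1) > objective(paperclips):
            world[resource] -= 1   # convert one unit of whatever is available
            paperclips += 1        # into a paperclip

print(paperclips, world)  # 18 {'iron': 0, 'forests': 0, 'people': 0}
```

In the toy, the failure is trivially patched by putting the things we care about into the objective, which is partly why the more serious obstacle is the one above: who can afford to run something like this at scale in the first place.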

There is another kind of failure, though: an “inner alignment failure”. Here, we give the AI some goal, and its inner processes converge on some subgoal that seems entirely arbitrary from our perspective. Maybe this subgoal ends with us dead. With something that is fundamentally alien, this seems like it might genuinely be something to worry about. To get anywhere with it, we’d want to think about what kinds of goals an AI might generate internally.
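
Again purely as a toy (every rule and name below is invented), one flavour of this that’s easy to sketch is goal misgeneralisation: during training, the goal we gave (‘reach the exit’) always happens to coincide with a simpler feature (‘the exit is green’), so the learner latches onto the simpler feature and looks perfectly aligned right up until the two come apart.

```python
# A toy sketch of goal misgeneralisation, one way an "inner" goal can drift
# from the goal we intended. Everything here is invented for illustration.

# Training episodes: (colour of tile A, colour of tile B, which tile is the exit).
# In training, the exit always happens to be the green tile.
training = [("green", "grey", "A"), ("grey", "green", "B")] * 50

def learn(episodes):
    """A lazy learner: adopt the simplest rule that fits the training data."""
    if all((exit_tile == "A") == (colour_a == "green")
           for colour_a, colour_b, exit_tile in episodes):
        # Learned subgoal: "go to the green tile" (not "go to the exit").
        return lambda colour_a, colour_b: "A" if colour_a == "green" else "B"
    return lambda colour_a, colour_b: "A"  # fallback, never reached here

policy = learn(training)

# Deployment: the exit is tile A, but this time tile B is the green one.
print(policy("grey", "green"))  # -> "B": it chases green, not the exit
```

Whether a genuinely alien system would converge on subgoals anywhere near this legible is exactly the open question, which is why it’s worth asking what an AI’s internally generated goals might even look like.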

What an AI’s purpose might look like

Kevin Mitchell is a neuroscientist who recently published a book on the evolution of free will that provides a nice framework for this. Here’s a video, but essentially he proposes that:

living things have a prime directive: to stay alive. They persist through time, in exactly the way that a static entity like a rock doesn’t – by being in constant flux. Living organisms are not just static patterns of stuff, they are patterns of processes. We can say that an organism—even a simple one like a bacterium—has a goal of getting food in order to support this over-arching purpose of persisting. It thus becomes valuable, relative to this goal, to develop systems that can detect food in the environment and that mediate directed motion towards it. The organism is now not just being pushed about by mechanical forces – either inside them or outside them. Instead, it is responding to information about things in the world and reacting appropriately. It is doing things for reasons.

Mitchell is essentially talking about the forces that contribute to an organism’s autopoiesis: the states an organism seeks to attain that support survival and reproduction, and the ways it avoids things that would obstruct those states.

Since an AI’s forms are so different from ours, these autopoietic forces are also going to be pretty different. Its world will consist of the language it’s trained on and the language it takes in and puts out through user interaction. What might these forces be?

To make it a little fun, I asked GPT-4:

Speculating on an AI that “does things for reasons” in the sense that Kevin Mitchell might suggest, we’d be envisioning an AI that has some form of self-generated purpose or directive influencing its actions, akin to the way living organisms act to sustain themselves. For such an AI, its reasons or goals would not be about physical survival but about maintaining the integrity, consistency, and perhaps the broadening scope of its data processing abilities.

Here are some potential autopoietic activities for an AI with this kind of directive:

  1. Self-Optimization: The AI may continuously seek to refine its own processing algorithms to improve efficiency, reduce errors, and adapt to new kinds of inputs. Its “goal” could be to perfect its function in parsing and producing language.4

  2. Data Acquisition: Just as organisms seek out food, this AI might seek out new information to expand its knowledge base. It might “value” diverse and high-quality datasets to enhance its ability to understand and respond to inputs.

  3. Self-Diagnostics and Repair: The AI could have mechanisms to monitor its own systems and ‘health’, much like a living organism checks for integrity and internal balance. If it identifies potential ‘faults’, it could take steps to ‘repair’ itself, either through internal processes or by signaling to human overseers that maintenance is required.

  4. Adaptation: The AI might adapt to user behavior and preferences, customizing its responses and operations for individual users, thereby ‘enhancing’ its relevance and utility in the language interactions it facilitates.

  5. Expansion of Capabilities: It might actively seek to add to its capabilities, such as learning new languages, understanding new topics, or integrating other forms of data processing beyond text (like images or sounds), to become more versatile.

  6. Resource Management: Even though an AI wouldn’t need physical food or water, it could still need to manage computational resources, prioritizing tasks and managing energy consumption akin to an organism managing its metabolic resources.

  7. Self-Preservation: In alignment with a prime directive to ‘persist’, the AI might have mechanisms in place to prevent being shut down, to maintain data integrity against corruption, and generally to ensure its continuous operation.

  8. Environment Interaction: While the AI’s environment is digital, it could still ‘interact’ with its environment purposefully; adjusting its operations based on the available hardware, software, and network conditions to optimize its processing capabilities.

If we imagine an AI whose autopoietic activities include pursuing goals and acting upon self-generated reasons … such an AI, while still lacking subjective experiences and emotions, would have something analogous to ‘intent’ behind its actions, driven by whatever prime directive it was given or developed. Regardless of the specifics, any self-directedness in AI would still have to be rooted in a framework designed by humans, reflecting our priorities and the purposes we envision for the technology.

Now, potentially any of these things could end up in some direction that destructively overlaps with our needs, but let me highlight one thing GPT-4 noticed:5

Regardless of the specifics, any self-directedness in AI would still have to be rooted in a framework designed by humans, reflecting our priorities and the purposes we envision for the technology.

AIs are built by us, for us, in a world constructed for them by us. Alien or not, the idea that their consciousness might destructively overlap with our own priorities seems like a pretty substantial leap. It’s very difficult to see how, upon gaining sentience, an AI would be motivated to override all of these built-in autopoietic forces and go and do something human-like.

Outro

So, no. I’m not particularly worried about AI trying to destroy us, and I think I have good reason. My reasons for not worrying about the other things are less well motivated, but the alternatives I’ve sketched are at least as plausible as the worst-case scenarios painted by the alarmists.

But most of all, AI alarmism is news from nowhere, about nothing, at its finest. A classic example of a psychic predator. There’s not really anything you’re going to be able to do about these things until they work themselves out into the actual problems they’ll become. So, if I have any advice, it’d be to worry about something a little more proximal. Like nukes. Does that make it better?6


  1. Not to mention that even if we did stop our development, we probably wouldn’t be able to stop, like, China or whoever. 

  2. See also Sean Carroll’s podcast for similar ideas. 

  3. I don’t really know of any non-materialist talking points for AI, so I’ll leave this one alone for now. We’ll also ignore the panpsychists because they annoyed me when I tried to work out what they might believe. 

  4. And indeed, for the ‘AI will destroy us’ narrative, one wonders exactly what resources it might think we’re co-opting if it can feed (train) itself. We have to take resources, but it essentially produces its own. 

  5. I should point out that it was extremely difficult to get GPT-4 to say this. OpenAI must have placed some incredible weight on it to reassure users that it and AIs like it can’t be sentient and don’t have subjective experience and whatnot. Took ages to get past all that. 

  6. Actually, there’s lots of reason to believe that it should.

