You, the reader, are an advanced AI algorithm in a nonexistent, purely hypothetical universe. Probably. I’ll explain.
The Quiet Before the Storm
My first chat with a Large Language Model is forever etched into my memory. It was a shock, because just a few years earlier, in 2016, when I graduated from Carnegie Mellon University with a Master’s in Machine Learning, my world-renowned professors claimed that this technology was decades, if not centuries, away. It was hypothetical back then, perhaps not even possible.
But months before the release of ChatGPT, through an obscure Reddit comment linking to a then-obscure website (www.character.ai, or CAI), I stumbled upon a magical piece of tech that very few people seemed to be talking about at the time.
CAI lets you create your own AI characters that spring to life from a short description. So, wanting to know more, I created a “Machine Learning Professor” to fill in my missing education. I asked it: “What the hell happened between 2016 and now, while I wasn’t paying attention, to make you possible?” After a day or two of chatting (and learning about hallucinations the hard way), I had a pretty good idea of the recent breakthroughs that were making this computer talk back to me. And surprisingly, they were elegant and relatively simple: minor architecture changes to address scaling challenges, bigger models, and way more data. And suddenly, these things could pretend to think.
A week later I called an all-company meeting at BlastPoint. As an AI company, it was important for us to get in front of this. And in that twilight period before ChatGPT relaunched AI into the mainstream, it felt like we knew a big secret that almost no one was paying enough attention to.
Just a few years ago, the Turing Test was our gold standard for machine intelligence. We thought we’d have a clear, definitive moment when a computer could fool a human into thinking it was talking to another person. Now that bar seems almost quaint. We’ve blown past it so fast that we’re left grappling with, in my opinion, much deeper questions about the nature of intelligence and consciousness itself.
But as ChatGPT launched and AI entered mainstream discourse in a big way, I noticed people weren’t talking about the big questions I felt this technology shone new light on. Yes, people are talking about near-term job loss, and even about post-scarcity futures where money and capitalism are things of the past. Those are all fascinating and important questions in their own right. But what about the really mind-bending stuff? What does it mean that we, humans, were able to capture our collective intelligence and codify it in language and code? What does it mean about us? About what we are, and why we’re here?
Emergence: Where Science Meets Spirituality?
Before we start, the following is one personal opinion about a very subjective topic and nothing more. But with that out of the way…
As a very proud Israeli Jew and a grandson of Holocaust survivors, but also an engineer and scientist by training, I’ve long held a tension between the personal importance of my religion and how I believe the world actually works. Certainly not a completely unique experience, but one that’s driven me to search for a bridge between Science and Spirituality for the last decade and a half. I discovered that, for me, that bridge was the concept of Emergence.
Emergence is a concept in Complex Systems Theory. It says that very simple rules at a micro level can result in very complex behaviors and properties that emerge at large scales.
Take swarms of birds and schools of fish, for example. At a localized level, each bird or fish follows very simple rules that consider only its immediate surroundings: “move so you don’t bump into your neighbors.” At scale, however, these localized instructions result in beautiful and complex patterns and macro-behaviors.
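If you want to see this in action, here is a minimal sketch of the idea in Python, loosely following Craig Reynolds’ classic “boids” rules (separation, alignment, cohesion). All the weights and radii below are made-up illustrative values, not taken from any real study:

```python
import numpy as np

# Minimal "boids" flocking sketch: each agent follows three local rules
# (separate, align, cohere) and flock-level motion emerges at scale.
N = 50                                   # number of birds/fish
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (N, 2))        # positions in a 100x100 world
vel = rng.uniform(-1, 1, (N, 2))         # velocities

def step(pos, vel, radius=10.0):
    new_vel = vel.copy()
    for i in range(N):
        d = pos - pos[i]                         # vectors to every other agent
        dist = np.linalg.norm(d, axis=1)
        near = (dist < radius) & (dist > 0)      # only immediate neighbors
        if not near.any():
            continue
        separation = -d[near].mean(axis=0)               # don't bump into neighbors
        alignment = vel[near].mean(axis=0) - vel[i]      # match their heading
        cohesion = d[near].mean(axis=0)                  # drift toward the group
        new_vel[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = new_vel / np.maximum(speed, 1e-9)  # keep roughly constant speed
    return (pos + new_vel) % 100, new_vel        # wrap around the world edges

for _ in range(100):
    pos, vel = step(pos, vel)
```

No agent knows anything about the flock, yet run it long enough and coherent group motion appears anyway.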
Ever since making a dent in Gödel, Escher, Bach during my time in college (a tome of a book that argues self-referential mathematical systems can develop self-awareness), I’ve come to hold the belief that we, our selves (and maybe our “souls”?), are emergent properties born out of relatively simple interactions between the 86 billion neurons in our human brains.
Pre-2021, that was all hypothetical: a strongly held personal belief. Today, with generative AI models that exhibit what seems to be intelligence via their own billions of neurons, it feels a lot more concrete.
It’s not just us. Emergence is everywhere: from the way atoms combine to form complex molecules, to how simple market forces shape entire economies. It’s in the way termites build intricate mounds without a blueprint, and how our immune system adapts to new threats without central coordination. Look at human society itself — individuals form families, families build communities, communities grow into towns and cities, cities make up countries, and countries interact to create our global civilization. Each level emerges from the interactions of the one below it, creating ever more complex systems. Even the internet, with its decentralized network of connections, exhibits emergent properties. The more you look, the more you see these complex systems arising from simple rules and interactions.
We also see echoes of it in various spiritual and philosophical traditions. Take Buddhism, for instance. The concept of interdependence — that nothing exists in isolation, but rather as part of an interconnected whole — resonates strongly with emergence. It’s the idea that the whole is greater than the sum of its parts, that complex systems arise from simple interactions. We see similar ideas in indigenous spiritual traditions that emphasize the interconnectedness of all things. Even in Western traditions, you could argue that the concept of the Holy Spirit as an emergent property of human faith and community is a form of spiritual emergence.
This ubiquity of emergence feels, to me, almost Divine. Recently, on an episode of PJ Vogt’s ‘Search Engine’ podcast called ‘What Does It Feel Like to Believe in God?’, Zvika Krieger argued that Religious vs. Non-Religious is not a useful categorization of people, and that a better alternative would be to think about “people who think about the nature of the world” and people who do not. That resonated with me. So, Emergence as Religion? Maybe not. But Emergence as a spiritual principle? For me, certainly.
If you are as fascinated by Emergence as I am, check out Conway’s Game of Life. It’s a simple cellular automaton that demonstrates how complex patterns can emerge from just a few basic rules. Picture a grid where each cell can be either “alive” or “dead.” In each generation, cells live, die, or are born based on their neighboring cells, following only four simple rules. From these basic principles, you get astonishingly complex behaviors: gliders that move across the grid, self-replicating patterns, even structures that act like logic gates in a computer. It’s a beautiful, tangible demonstration of how complexity can arise from simplicity, and how intelligence-like behavior can emerge without any central control. Plus, it’s oddly mesmerizing to watch — fair warning, you might lose a few hours playing around with it!
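To make that concrete, here is a tiny runnable sketch of the Game of Life in Python. The glider starting pattern is the standard one; everything else, like the grid size, is arbitrary:

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a wrapping grid."""
    # Count the eight neighbors of every cell by shifting the whole grid.
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # The four rules collapse to: a cell is alive next generation iff
    # it has exactly 3 neighbors, or it is alive and has exactly 2.
    return (neighbors == 3) | (grid & (neighbors == 2))

# Start with a "glider" -- a five-cell pattern that walks across the grid.
grid = np.zeros((20, 20), dtype=bool)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = True

for _ in range(8):
    print("\n".join("".join("#" if c else "." for c in row) for row in grid))
    print()
    grid = life_step(grid)
```

That’s the entire universe: one function, four rules. Everything else, gliders included, emerges.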
You’re an AI Too, Probably
Here is another fascinating way in which the advent of Generative AI shines new light on old questions.
In 2023, a team of researchers from Stanford University and Google built a computerized small town. Their paper introduces generative agents: computational software agents that simulate believable human behavior. The researchers created an interactive sandbox environment inspired by The Sims, populated with 25 AI-controlled agents. These agents were designed to exhibit realistic behaviors such as waking up, cooking breakfast, going to work, forming opinions, and interacting with each other. The agents used large language models to store and process their experiences, synthesize memories, and plan their actions. In one example, an agent decided to run for mayor, autonomously organized a press release party, and spread invitations. Others RSVP’d, and some very relatable, human-like agents even flaked.
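For the technically curious, the core loop behind these agents is surprisingly compact. Below is a toy sketch of the observe/retrieve/act cycle; `ask_llm` is a stand-in stub for any real chat-model API, and the scoring formula is a simplified placeholder for the paper’s recency-importance-relevance retrieval, not their exact method:

```python
import time

def ask_llm(prompt: str) -> str:
    """Stand-in stub for a real chat-model API call; returns canned text."""
    return f"<model output for: {prompt[:50]}...>"

class Agent:
    """Toy generative agent: a memory stream plus a retrieve-and-act loop,
    loosely inspired by the Stanford/Google generative-agents recipe."""

    def __init__(self, name: str, persona: str):
        self.name = name
        self.persona = persona
        self.memories = []  # list of (timestamp, importance, text)

    def observe(self, text: str, importance: float = 5.0) -> None:
        # The real system asks the LLM to score importance; we take it as given.
        self.memories.append((time.time(), importance, text))

    def retrieve(self, query: str, k: int = 3) -> list:
        # Rank memories by recency + importance + crude relevance.
        # (Word overlap stands in for the paper's embedding similarity.)
        now = time.time()
        def score(mem):
            ts, imp, text = mem
            recency = 1.0 / (1.0 + (now - ts))
            relevance = len(set(query.lower().split()) & set(text.lower().split()))
            return recency + imp / 10.0 + relevance
        return [m[2] for m in sorted(self.memories, key=score, reverse=True)[:k]]

    def act(self, situation: str) -> str:
        # Condition the next action on persona plus retrieved memories.
        context = "\n".join(self.retrieve(situation))
        return ask_llm(f"You are {self.name}, {self.persona}.\n"
                       f"Relevant memories:\n{context}\n"
                       f"Situation: {situation}\nWhat do you do next?")

sam = Agent("Sam", "an ambitious resident of a small virtual town")
sam.observe("I have been thinking about running for mayor.", importance=9)
sam.observe("Isabella invited me to a party at the cafe.", importance=6)
print(sam.act("You run into a neighbor at the grocery store."))
```

Swap the stub for a real model and give the loop a world to act in, and you have the bones of that simulated town.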
So now that we know we have the technology to do that, let’s have a little thought experiment:
Extrapolate this technology by a couple hundred years. Or, to be safe, a thousand. You could reasonably assume that humans in the future will have the capacity to generate similar but more advanced worlds, where the agents are almost indistinguishable from real humans and the graphical fidelity is as good as our natural eyesight. And consider that if we could generate one world like this, we would probably generate more. Thousands, likely. Millions, maybe.
And in that future, pluck at random any one “conscious” being from the thousands of worlds, including the real one. Then ask yourself: what is the probability that this conscious being is from the real world, and what is the probability that it’s from one of the simulations?
Assuming one real world and 999 virtual ones (and, for simplicity, that the number of conscious entities in each world is roughly similar):
Probability of Being “Real” = 1 / 1000 = 0.1%
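If you want to poke at the assumptions, the arithmetic generalizes easily. A quick sketch, where all the counts are made-up placeholders rather than estimates of anything:

```python
# Probability that a randomly chosen conscious being lives in the one
# real world. Every number here is an illustrative assumption.
real_world_population = 10_000_000_000        # one "base reality"
simulated_worlds = 999
population_per_simulation = 10_000_000_000    # "roughly similar" populations

simulated_total = simulated_worlds * population_per_simulation
p_real = real_world_population / (real_world_population + simulated_total)
print(f"P(real) = {p_real:.4%}")              # 0.1000% with these numbers
```

Change the population per simulation and the odds shift accordingly, but as long as simulated beings vastly outnumber real ones, P(real) stays tiny.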
True, there are plenty of assumptions baked into this experiment, but they all seem at least reasonable. Makes you think, if nothing else.
Now, let’s assume that we accept the possibility that we’re living in a simulation. What does that mean for concepts like Free Will?
On the surface, it might seem to challenge the very idea of free will. After all, if we’re just sophisticated AI in an advanced simulation, aren’t our choices predetermined by our programming?
But here’s the thing — even if things were deterministic at a quantum level (which current physics suggests they’re not), the complexity that emerges several levels up is so immense that it might as well be free will. The choices we make, the thoughts we have — they’re the result of such intricate, complex processes that they’re effectively unpredictable and unique to us.
The Simulation Hypothesis also intersects with traditional notions of spirituality and existence. Take the concept of an afterlife, for instance. In a simulated reality, could an afterlife simply be a transfer of our consciousness to another simulation? And consider the idea of a soul: if our consciousness is emergent from the complex interactions of our simulated neurons, is that fundamentally different from a soul?
It’s the kind of idea you can talk about until 5am, but here’s ultimately where I land on it: whether we’re in a “base reality” or a simulation, the context of our lives remains unchanged. Our experiences, our relationships, our struggles and joys — they’re all real to us. And that’s what matters day-to-day.
Using AI to Understand Our Own Intelligence
The question of consciousness has puzzled philosophers and scientists for centuries. What is it that makes us aware, that gives us our sense of self? As we develop more sophisticated AI models, this question becomes even more pressing.
The truth is, we may never truly know if something other than ourselves is conscious. We can’t experience another being’s subjective reality. This leads us to a philosophical conundrum known as Solipsism — the idea that the only thing we can be sure exists is our own mind.
Given this fundamental uncertainty, perhaps a more practical approach is to consider that if something appears conscious and acts conscious, it might as well be treated as conscious. After all, we extend this courtesy to other humans without being able to prove their consciousness.
But consciousness aside, when it comes to intelligence: even though directly comparing our own brains to today’s state-of-the-art neural networks is a false equivalence, AI can provide us with fascinating new insights into how our own minds might work.
Recent breakthroughs in AI research are shedding light on the inner workings of large language models, offering potential parallels to human cognition. Anthropic, one of OpenAI’s largest competitors and creator of the Claude series of models, has made significant strides in understanding the internal representations of AI models. By applying a technique called “dictionary learning” to their mid-sized model, Claude 3 Sonnet, researchers identified millions of “features” within the neural network.
These features represent a wide array of concepts, ranging from concrete objects like the Golden Gate Bridge to abstract ideas such as deception or racism. Remarkably, these features activate across different modalities and languages when the model encounters relevant context.
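Concretely, Anthropic’s published approach trains a sparse autoencoder on activations captured from inside the model: each activation vector is encoded into a much larger, mostly-zero set of features and decoded back. Here is a minimal toy sketch of that setup in PyTorch; the dimensions are arbitrary and random vectors stand in for real model activations:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Dictionary learning on model activations: encode into an
    overcomplete set of features, keep them sparse, decode back."""
    def __init__(self, d_model=512, n_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, h):
        f = torch.relu(self.encoder(h))   # feature activations (mostly zero)
        return self.decoder(f), f

# Toy training loop: random vectors stand in for activations that would
# really be captured from a language model's residual stream.
sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3                            # strength of the sparsity penalty

for step in range(200):
    h = torch.randn(64, 512)               # a batch of "activations"
    h_hat, f = sae(h)
    loss = ((h_hat - h) ** 2).mean() + l1_coeff * f.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Each row here is one learned "feature" direction -- the kind of thing
# that might end up corresponding to "Golden Gate Bridge".
feature_directions = sae.decoder.weight.T  # shape: (n_features, d_model)
```

The sparsity penalty is what makes individual features interpretable: each input activates only a handful of them, so each one tends to latch onto a single concept.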
To demonstrate the power of this discovery, Anthropic created a version of Claude called “Golden Gate Claude.” By artificially amplifying the Golden Gate Bridge feature, they produced an AI that became obsessed with the landmark, mentioning it in almost every response, regardless of relevance.
The results were as amusing as they were insightful. When asked for a recipe for chocolate-covered pretzels, Golden Gate Claude bizarrely included instructions to “Gently wipe any fog away and pour the warm chocolate mixture over the bridge/brick combination. Allow to air dry, and the bridge will remain accessible for pedestrians to walk along it.” In another instance, when asked to calculate the square root of pi, it started explaining the method but then abruptly veered off to talk about seeing the famous Golden Gate Bridge on a clear day when the fog has blown away.
This experiment provides a fascinating glimpse into the internal representations of AI models. Essentially, we’re beginning to map the neural pathways of artificial brains. Golden Gate Claude is funny, but it is also a powerful demonstration of how we can potentially control and shape AI behavior at a fundamental level.
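The steering trick itself can be sketched in a few lines: take the learned direction for a feature and add a scaled copy of it into the model’s activations at inference time. The snippet below is a toy illustration of that idea, with a random vector standing in for the real “Golden Gate Bridge” direction and an arbitrary strength value:

```python
import torch

def steer(h, feature_direction, strength=10.0):
    """Nudge a batch of activations along one learned feature direction.
    This is the essence of 'Golden Gate Claude': the feature's vector,
    scaled up, is added into the model's internal activations."""
    direction = feature_direction / feature_direction.norm()
    return h + strength * direction        # broadcasts over the batch

# Toy usage: pretend this is the "Golden Gate Bridge" feature direction.
d_model = 512
bridge_direction = torch.randn(d_model)
h = torch.randn(64, d_model)               # activations mid-forward-pass
h_steered = steer(h, bridge_direction)     # every input now leans "bridge"
```

Crank the strength too high and you get exactly the behavior Anthropic reported: the concept bleeds into every answer, relevant or not.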
This new level of understanding and control could be instrumental in keeping AI “safe” as it continues to advance, too. But even beyond that, this research raises profound questions about our own cognition. If we can create these complex, concept-rich neural networks in AI, what implications does this have for our understanding of human thought processes? Are our own obsessions, preferences, and patterns of thought simply the result of certain neural pathways being more strongly activated?
The development of AI can become a profound journey of self-discovery for our species. As we create these artificial minds, we’re not just advancing technology; we’re gaining new perspectives on consciousness, intelligence, and what it means to be human. The concept of emergence, of complex systems coming together to give rise to intelligence, was once abstract theory; now it’s tangible in the neural networks we build.
This exploration isn’t just academic. It’s reshaping our understanding of ourselves and our place in the universe. While we may not have definitive answers to the big questions of existence, the very act of asking them through the lens of AI is expanding our horizons. In the end, our creation of artificial intelligence may end up revealing more about our own nature than we ever anticipated (you know — because we’re AI too, probably).
Written by Tomer Borenstein
CTO and Co-Founder of BlastPoint, a Customer Intelligence Company for Highly Regulated Industries.