The Human Brain vs. Artificial Intelligence, Who's Winning? 🧠
The news will have you believe we'll be replaced in the next couple of years, but is this true? Is there hope for humanity? (10 min read)
Compares the human brain and AI, highlighting the brain's adaptability and complexity.
Discusses perceptrons in AI, showing their limitations vs. biological neurons.
Uses dance analogy to demonstrate perceptrons' lack of sequence recognition.
Highlights biological neurons' advanced processing and adaptability (neuroplasticity).
Emphasizes the brain's skill in understanding context and cause-effect relationships.
Notes human brain's energy efficiency compared to high-energy AI systems.
Concludes by praising the human brain's creativity and problem-solving abilities.
As someone who studies the brain and human behavior for a living, and also runs a Mental Health AI company, I find myself at the intersection of understanding the human brain and the rapidly evolving world of artificial intelligence daily.
This is why I’d like to remind you today just how badass our brain is.
Some brains that survived the Holocaust are still walking around today… How do you think ChatGPT/AI would have done?
ChatGPT couldn’t even survive a light switch right now…
Today, I will compare and contrast how the brain works versus how machine learning and artificial intelligence work to help you understand the gap that’s clear as day to me and many neuroscientists.
I hope by the end you have a renewed appreciation for the 3 lbs floating between your ears. Let’s dive in!
Perceptron vs. Biological Neuron: The Fundamental Divide
Let’s start with the fundamental building blocks of each: the neuron and the perceptron.
While the field of artificial intelligence draws significant inspiration from the human brain, there are still some fundamental differences between artificial perceptrons and biological neurons.
Perceptron: The Building Block of AI
A perceptron is a simplified model of a biological neuron used in AI, specifically in neural networks.
It's a mathematical function that takes multiple binary inputs, weighs them, sums them up, and then passes them through a threshold function to produce an output.
In essence, a perceptron is a form of linear classifier — an algorithm that classifies input data into two categories.
0 or 1, A or B, Cat or Dog, etc.
Perceptrons operate on a simple mechanism, lacking the complexity of biological neurons, and they often only deal with binary or straightforward numerical inputs.
They also function within predefined parameters and lack the adaptability of biological neurons.
Finally, they have no intrinsic memory: given the same set of inputs, their output will always be the same. This is not at all how a biological neuron works.
This limits the perceptron in many ways, the most important being that it can only notice how often something happens, but not the sequence in which things happen.
So, it doesn't pay attention to when or in what order new information comes in.
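To make the mechanism concrete, here's a minimal sketch of a perceptron in Python. The weights and threshold are hand-picked illustrative values (here wired up as a logical AND gate), not learned ones:

```python
# A minimal perceptron: weigh the inputs, sum them, and pass the total
# through a step (threshold) function to produce a binary output.

def perceptron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a two-input perceptron acting as a logical AND gate.
weights = [0.5, 0.5]
threshold = 1.0

print(perceptron([1, 1], weights, threshold))  # 1 (both inputs on)
print(perceptron([1, 0], weights, threshold))  # 0
```

Notice there is no state anywhere: the same inputs will always produce the same output.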
I get that this is probably very complex, so let’s simplify it.
Imagine you're learning a new dance routine. The dance moves need to be performed in a specific sequence: first a spin, then a jump, and finally a clap.
This sequence is crucial because doing these moves out of order wouldn't make sense in the routine.
The perceptron is like a dance student who only counts how many moves are made but doesn't pay attention to the order of the moves.
So, it might recognize that there is one spin, one jump, and one clap, but it doesn't learn that they must be performed in the specific order of spin-jump-clap.
This means if you asked this student to repeat the routine, they might do the clap first, then the spin, and finally the jump, which isn't the correct sequence.
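You can see this order-blindness directly in code. In this sketch (illustrative weights again), the perceptron is only fed the *counts* of each move, so a scrambled routine looks identical to the correct one:

```python
# A perceptron that only sees how many times each move occurs
# (a "bag of moves") gives the same answer for any ordering.

def perceptron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def move_counts(routine):
    # Counts of [spin, jump, clap] - all order information is discarded here.
    return [routine.count("spin"), routine.count("jump"), routine.count("clap")]

weights, threshold = [1.0, 1.0, 1.0], 3.0

correct = ["spin", "jump", "clap"]
scrambled = ["clap", "spin", "jump"]

print(perceptron(move_counts(correct), weights, threshold))    # 1
print(perceptron(move_counts(scrambled), weights, threshold))  # 1 - same output!
```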
We’re not even past the basic building blocks of AI, and already, I hope you can see some of its weaknesses…
Biological Neurons: The Building Blocks of You
Alright, let’s talk neuroscience now. How do biological neurons work compared to perceptrons?
A biological neuron is an electrically excitable cell that processes and transmits information through electrical and chemical signals.
Unlike the binary nature of perceptrons, biological neurons deal with a vast array of signals and inputs, processing them in a more nuanced manner.
Neurons can also form new connections, strengthen existing ones, and adjust their functionality, embodying the concept of neuroplasticity, the press’s favorite neuroscience term.
Also, neurons in the brain are part of a vast, interconnected network, processing information in an integrated and dynamic manner, far beyond the linear processing of perceptrons.
The superiority of biological neurons, at present, stems from their complexity and adaptability.
They can handle various stimuli, learn from nuanced experiences, and adapt their functioning based on context and learning — features not fully replicated in AI models like perceptrons.
Additionally, the integrated processing in a network of billions of neurons allows for a level of complexity and efficiency that perceptrons have yet to match.
Let’s Dance Again
The biological neuron is like a more attentive dance student who not only recognizes each move from before but also understands the order in which they should be performed.
This student knows that first comes the spin, then the jump, and finally the clap, and can replicate the dance moves in the correct sequence.
This simple dancing test, which a preschooler could pass with flying colors, highlights the limitation of the perceptrons when compared to the capability of biological neurons.
But, What About Modern Artificial Neurons?
We’ve come a long way since the creation of the basic perceptron in the ’50s and ’60s.
Today, more advanced artificial neurons are what power the most complex AI models, like GPT-4 & Gemini.
Modern artificial neurons, especially those in deep learning networks, can handle more complex tasks than the original perceptron.
They can process non-linear data, recognize complex patterns, and be part of networks that learn to perform tasks like image and speech recognition, natural language processing, and more.
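Here's a toy example of that non-linearity, assuming hand-picked weights rather than learned ones: a tiny two-layer network with a ReLU activation computing XOR, something a single perceptron provably cannot do (Minsky and Papert's famous result):

```python
# A tiny two-layer network with a non-linear activation (ReLU) computing XOR.
# Weights are hand-picked for illustration, not learned.

def relu(x):
    return max(0.0, x)

def xor_net(x1, x2):
    h1 = relu(x1 + x2)          # hidden unit 1
    h2 = relu(x1 + x2 - 1.0)    # hidden unit 2
    return h1 - 2.0 * h2        # output layer

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))
# 0 0 -> 0.0, 0 1 -> 1.0, 1 0 -> 1.0, 1 1 -> 0.0
```

The extra layer and the non-linear activation are what let modern networks carve up data that no single straight line can separate.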
HOWEVER, they still lag behind the human brain. Let’s plan a surprise party to demonstrate this.
Planning a Surprise Party
Imagine planning a surprise party, a task that involves creativity, social understanding, and adaptability.
The Human Brain
First, you brainstorm unique party themes and ideas.
The human brain's ability to imagine and create is unmatched, drawing from a vast reservoir of past experiences, cultural references, and emotional intelligence.
You must also consider the guest of honor's preferences, the dynamics between different friends, and even the subtle hints you've picked up about what they might enjoy.
This requires a deep understanding of human emotions, relationships, and unspoken social cues.
As plans change - maybe someone can’t make it, or the weather turns bad - you effortlessly adapt, modifying plans in real-time.
This adaptability reflects the brain's capacity to process new information quickly and alter course as needed. Thank you, evolution.
Your brain can do all of this, while also seamlessly integrating diverse types of information: auditory (phone calls), visual (decorations and spaces), emotional (excitement, stress), and more.
Now, let’s look at how AI would handle such a task.
AI can analyze past party planning data to predict popular themes or suggest activities. It excels in processing large datasets to identify patterns but lacks the creative spark needed to come up with truly unique ideas.
That being said, while advanced AI can process textual sentiment analysis or facial recognition, it doesn’t genuinely understand human emotions or complex social dynamics in the way humans do.
It lacks the depth of empathy and the nuanced understanding of interpersonal relationships.
AI systems can also adapt to new data to an extent but within predefined parameters.
They typically require reprogramming or retraining for significant changes, unlike the human brain's ability to adapt on the fly in nearly any context or situation, instantly.
Finally, AI systems generally focus on specific tasks (like data analysis or image recognition) and lack the human brain's ability to integrate multiple sensory inputs and types of information simultaneously.
Alright, let’s take this bigger picture to see how else we differ from our artificial counterparts.
Neuroplasticity: The Brain's Ability to Rewire vs. AI's Static Hardware
The Human Brain
The superpower underlying a lot of what we’ve talked about is neuroplasticity.
This allows the brain to rewire itself in response to new learning experiences or to recover from injuries.
When we learn something new, our brain physically changes.
Neurons form new connections, strengthen existing ones, or even weaken some connections, depending on the experiences and learning processes.
This ability means that the human brain is incredibly adaptive, as you’ve seen numerous times throughout this blog.
It can not only learn and store new information but can also reorganize itself to perform functions more efficiently or even take over functions from damaged areas.
Most importantly, and contrary to popular belief, neuroplasticity is not just a feature of young, developing brains; it continues throughout our lives!
In contrast, AI systems, including the most advanced neural networks, operate on static hardware.
The structure of their neural network – the arrangement of nodes and layers – does not physically change during or after the learning process.
While AI systems can learn and adapt within the confines of their programming and algorithms, this learning does not entail a physical restructuring of their hardware.
The changes occur at the software level – in the parameters or weights within the neural network.
Consequently, AI systems are limited in their ability to adapt beyond their initial architecture.
They can't reorganize their hardware to better suit new types of tasks or to recover from damage.
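To see how shallow this kind of "learning" is, here's a sketch of a single weight update in an artificial neuron (the numbers and learning rule are illustrative): the values change, but the wiring never does.

```python
# "Learning" in an artificial neuron is just numbers changing.
# One gradient-style update nudges the weights; the architecture
# (how many inputs, how many units) stays fixed.

weights = [0.2, -0.4, 0.7]     # the network's only mutable state
inputs  = [1.0, 0.5, -1.0]
target  = 1.0
lr      = 0.1                  # learning rate

prediction = sum(w * x for w, x in zip(weights, inputs))
error = target - prediction

# Update rule: nudge each weight in proportion to its input and the error.
weights = [w + lr * error * x for w, x in zip(weights, inputs)]

print(len(weights))  # still 3 weights - the "wiring" never changes
```

Contrast that with a brain, which can grow entirely new connections and prune old ones in response to experience.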
Understanding Cause and Effect
The Human Brain
Next up is the basic understanding of cause and effect.
From a young age, the human brain instinctively understands cause and effect, a skill we use to make sense of the world and navigate both social and physical situations.
Humans use critical thinking to explore and test these cause-and-effect relationships, considering various factors and evidence to understand why things happen.
Moreover, in our analysis, we also take into account the context and ethical implications of these relationships.
This depth of understanding and reasoning about cause and effect, especially with ethical and contextual considerations, is something current AI systems cannot replicate.
Modern AI systems are skilled at finding patterns and correlations in data, such as linking specific symptoms to diseases or predicting stock market trends from past data.
However, this ability mainly revolves around spotting patterns and correlations, not truly understanding causation.
While AI can infer a sort of cause-effect relationship from sequential data, like in language processing, it's based more on statistical learning and recognizing sequences rather than an actual comprehension of why certain events lead to specific outcomes.
Essentially, AI excels in identifying what happens together, but not necessarily why it happens.
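Here's a simulated illustration of the trap (the "ice cream" and "sunburn" data are made up for the example): two variables can be strongly correlated without either one causing the other, because both are driven by a hidden common cause. A pattern-matcher sees the correlation; it takes causal reasoning to see the temperature behind it.

```python
# Two variables correlate because of a hidden common cause (temperature),
# not because one causes the other. Data are simulated for illustration.
import random

random.seed(0)
temperature = [random.uniform(0, 35) for _ in range(1000)]
ice_cream   = [2.0 * t + random.gauss(0, 5) for t in temperature]
sunburns    = [0.5 * t + random.gauss(0, 2) for t in temperature]

def correlation(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(correlation(ice_cream, sunburns), 2))  # high, despite no causal link
```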
Creativity vs. Computation
The Human Brain
We excel in abstract and creative problem-solving, often finding solutions in seemingly unrelated areas.
This is the brain's equivalent of connecting the dots in a constellation to form a picture.
This is a superpower of the human brain that cannot be ignored.
AI systems rely heavily on data and predefined instructions, making them unparalleled in computational tasks but limited in abstract thinking.
This means they may be able to beat us at the game of chess, which is a computational task, but they can’t create the game of chess. We can.
In fact, there are tens of thousands of new games created by humans every year, and a grand total of ZERO created by AI so far.
Even if it did help us create a new game, the causal chain is linked back to the prompter right now.
Meaning that if I go to ChatGPT and ask it to combine the game of chess with basketball to create a new game, it is I, Cody, a human brain, who connects the dots between chess and basketball, not ChatGPT.
I caused what it created; it didn’t spontaneously create it.
Basic Human Abilities Are AI's Biggest Challenge
Next, something I see often in the world of artificial intelligence is the focus on replicating the human brain's ability to solve complex problems, like playing chess at a grandmaster level.
However, it's the seemingly simple, everyday tasks that truly underscore the brilliance and complexity of the human brain.
These basic abilities, which we often take for granted, present significant challenges for AI development.
Activities like walking, balancing, or even picking up a glass of water are automatic for most of us and are the result of an intricate symphony of neural processes.
When we walk or balance, our brains continuously process a multitude of sensory inputs.
This includes proprioception (awareness of body position), vestibular senses (for balance), and visual cues.
The brain seamlessly integrates this information, adjusts muscle movements, and maintains equilibrium, all without conscious effort.
These abilities are so ingrained in our daily lives that we seldom realize their complexity.
Yet, they represent a pinnacle of neural efficiency and adaptability that AI is currently struggling to replicate.
The irony is that what makes us 'smart' in a practical, everyday sense isn't necessarily our ability to solve complex equations or play strategic games, but rather these fundamental skills.
It’s this “common sense” kind of intelligence that AI lacks completely.
Energy Consumption: Biological Efficiency vs. Electrical Demand
The Human Brain
Finally, the brain is incredibly energy-efficient. How much power do you think it takes to fuel all of the incredible things we’ve talked about today?
New York City, one of the largest cities in the world, has a peak electricity demand of roughly 11,000 megawatts (MW).
Do you think the brain needs anywhere near that much energy? Nope.
The brain runs on roughly 20 watts, about the power of a dim light bulb… Yes, you read that right.
In contrast, large AI systems can consume massive amounts of electrical power, lacking the energy efficiency of the human brain.
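A back-of-the-envelope comparison makes the gap vivid. The figures below are commonly cited ballpark assumptions (a ~20 W brain, a ~400 W AI accelerator, a ~10,000-chip training cluster), not precise measurements:

```python
# Back-of-the-envelope: how many brains' worth of power does one
# large AI training cluster draw? All figures are rough assumptions.
brain_watts = 20                     # human brain: roughly a dim light bulb
gpu_watts = 400                      # a single modern AI accelerator
training_cluster_gpus = 10_000       # a large training run

cluster_watts = gpu_watts * training_cluster_gpus
print(cluster_watts / brain_watts)   # 200000.0 - two hundred thousand brains
```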
To put it into context, some estimates suggest that a computer perfectly modeling the human brain would need to be as large as a city block, be cooled by a river the size of the Mississippi, and be powered by its own nuclear power plant.
It’s truly hard to fathom just how much power is sitting between your ears…
Human Brains: 1, Artificial Intelligence: 0
I hope today’s blog reminds you of how powerful you truly are.
We have the nearly limitless power to create and explore using our brains.
Who knows what the future holds for us, and if we’ll one day be surpassed by AI.
There’s one thing I know for sure though… It will have been a human brain that created it.
Until next time… Live Heroically 🧠