Can a Machine ever be Conscious?

Introduction

This post is adapted from a 5000-word essay I wrote for my sixth-form research project, discussing whether a machine could ever be conscious. If you’re curious, a link to the full essay PDF with references is at the end of this post.1

My blog doesn’t intend to dive straight into number crunching or hit you with “facts that will blow your mind” (it wouldn’t be much of a rant if it did, would it?). I want these posts to feel like a journey: how I stumbled into these questions, why they gripped me, and the weird, wonderful rabbit holes Maths can lead you down. You’re welcome to skip ahead to the juicy maths content, but I hope you’ll stick around. There’s always something special about the process of asking questions and not always knowing where you’ll end up. That’s the part I’ve come to love most!

In my first post, I talked about how I fell in love with Maths after reading Logicomix. One story follows Bertrand Russell on a wild goose chase to formalise all of mathematics, only to be thwarted by Gödel’s incompleteness theorem.

That was my first ever introduction to Gödel and his work. But the real question — the one that kept me up at night — came later: Could a machine ever be conscious? It crept in after binge-watching a few Numberphile and Veritasium videos2 (which I can’t recommend enough; links to some can be found in the footnotes). The more I watched and read (Hofstadter and Penrose, to name a few), the more the lines blurred between logic, language, and life itself.

Defining ‘consciousness’

Before we can even begin to ask whether a machine could be conscious, we first need to tackle a much slipperier question: what even is consciousness? The problem is, there’s no single, universally agreed-upon definition. Philosophers, neuroscientists, and AI researchers all offer different takes. So instead of trying to pin down a perfect definition (which would probably take a lifetime), I approached the question by working with a set of key characteristics. These aren’t a checklist or a strict test, but they do help explore whether machines could display forms of conscious experience similar to our own. Some of the characteristics considered include:

  • Self-reflection – the ability to think about one’s own thoughts
  • Emotional awareness – recognising and responding to emotions
  • Intentionality – acting with purpose or directed thought
  • Adaptive learning – changing behaviour based on experience

Focusing on these traits gives us a way to make the question more concrete and to reason about what consciousness might look like in a machine.

Gödel’s Incompleteness Theorem

In the early 20th century, mathematics was going through a bit of an identity crisis. Great thinkers like Bertrand Russell3 and David Hilbert were on a mission to formalise all of maths, i.e. to construct a complete and consistent system in which every true statement could, in principle, be derived from a fixed set of rules (axioms).

In 1931, Kurt Gödel published a paper in which he managed to prove4 that any consistent formal system capable of expressing basic arithmetic contains true statements that cannot be proven within the system itself. This result meant that mathematics could never be both fully complete and consistent.
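To give a flavour of the trick at the heart of the proof (a compressed sketch in modern notation, not Gödel’s own formulation), he constructed a sentence G that, via a clever numbering scheme for formulas, effectively says “I am not provable in this system”:

    G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)

Here F is the formal system, \mathrm{Prov}_F is its provability predicate, and \ulcorner G \urcorner stands for the code number of G. If F could prove G, it would be proving something false (G claims to be unprovable), so, assuming F proves only true statements, G must be unprovable, and therefore true.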

Lucas-Penrose argument

In the 1960s, following Gödel’s findings, J. R. Lucas, and later Roger Penrose, suggested that humans have an intuitive understanding of truths, arising from consciousness, that transcends the limitations of any formal system.

In other words: if a machine running on a finite set of rules generates a true statement it can’t prove, but a human can recognise that truth, then there exists at least one cognitive function that the human mind can perform but the machine cannot replicate. Therefore, the human mind cannot be reduced to a mechanistic, computational system.
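Laid out bare, the argument runs something like this (my own paraphrase of its shape, not a formalisation Lucas or Penrose give themselves):

    \begin{align*}
    &\text{P1: any machine } M \text{ is equivalent to some formal system } F\\
    &\text{P2: by Gödel, there is a true sentence } G_F \text{ that } F \text{ cannot prove}\\
    &\text{P3: a human mathematician can recognise that } G_F \text{ is true}\\
    &\text{C: no machine } M \text{ fully captures the human mind}
    \end{align*}

Each premise carries a lot of weight, and each has been challenged.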

Penrose expands on this idea in The Emperor’s New Mind and Shadows of the Mind, suggesting that consciousness might be rooted in non-computable processes, and perhaps even tied to quantum mechanics.5

Critics of this argument, of course, weren’t having it. One counterargument is that humans can handle inconsistencies6 and still function effectively, so logical consistency might not be a necessary condition for consciousness. By the same token, if a machine encounters new information that contradicts previous data, it can adapt without discarding the contradictory information outright. This would in fact mirror human learning, where we integrate new knowledge with existing beliefs, even when they contradict each other.
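To make that concrete, here’s a toy sketch, entirely my own illustration (no real AI system works quite like this): an agent that blends contradictory evidence into its existing belief instead of throwing either away.

    class Agent:
        # A toy agent whose degree of belief in some claim sits between 0 and 1.
        def __init__(self):
            self.belief = 0.5  # start out undecided

        def observe(self, evidence, weight=0.3):
            # Blend the new evidence into the current belief rather than
            # rejecting whichever side of the contradiction arrived second.
            self.belief = (1 - weight) * self.belief + weight * evidence

    agent = Agent()
    for evidence in [0.9, 0.1, 0.8]:  # observations that contradict one another
        agent.observe(evidence)
    print(round(agent.belief, 2))  # prints 0.56: a compromise, not a crash

The maths is nothing deep (just an exponential moving average); the point is that contradiction triggers adjustment rather than collapse.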

Is consciousness computable?

Even if Gödel’s theorem limits what formal systems can do, does that really mean consciousness is off-limits for machines?

Not everyone thinks so. Many researchers argue that the mind is a kind of machine — a biological one. In fact, neuroscience has made huge progress in mapping out how the brain processes information. We know that thoughts arise from networks of neurons firing in complex patterns7. Memory, attention, and perception, among other faculties, have a physical, traceable basis in the brain.

This leads to a natural question:

If we can simulate the brain’s processes closely enough, could we also simulate consciousness?

Some say yes: consciousness is emergent. It’s the result of many simple processes interacting in the right way, just like weather or traffic patterns. In this view, if we model the brain accurately enough (down to the last neuron), a machine could, in principle, be said to be conscious.
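Emergence itself is easy to demo in miniature. Below is a throwaway sketch (my own toy example, nowhere near a real brain model) of ten leaky integrate-and-fire neurons wired up at random: every rule is simple and local, yet the network’s overall firing pattern is hard to predict from any single neuron.

    import random

    N, THRESHOLD, LEAK = 10, 1.0, 0.9
    # weights[j][i] is how strongly a spike from neuron j charges neuron i
    weights = [[random.uniform(0.0, 0.5) for _ in range(N)] for _ in range(N)]
    voltage = [random.random() for _ in range(N)]

    for step in range(20):
        fired = [v >= THRESHOLD for v in voltage]
        for i in range(N):
            if fired[i]:
                voltage[i] = 0.0  # a neuron resets after it spikes
            voltage[i] = (LEAK * voltage[i]                  # charge leaks away
                          + random.uniform(0.0, 0.3)         # background input
                          + sum(weights[j][i] for j in range(N) if fired[j]))
        print("".join("|" if f else "." for f in fired))  # spike raster, one row per step

Nothing in the code mentions group behaviour, yet on most runs you’ll see bursts where many neurons spike together. That gap between local rules and global pattern is roughly what “emergent” means here.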

Others disagree. Simulating a mind might not be the same as having one. Just because a machine behaves as if it’s feeling something doesn’t mean there’s anything it’s like to be that machine. Philosophers call this the “hard problem” of consciousness, the challenge of explaining subjective experience.

And even neuroscientists can’t agree. Some argue we’ll never fully understand consciousness without redefining how we study it. Others believe it’s only a matter of time (and data).

Further discussion 

When I started researching this topic for my project, I didn’t set out to answer the question once and for all, and honestly, I don’t think anyone can just yet with certainty.8 But what I found along the way were way better questions. The kind that keep you thinking long after the page ends.

Here are a few I’m still wrestling with, and maybe you will be too:

  • Can subjective experience ever be reduced to objective data?
  • Is consciousness a product of complexity, or is there something fundamentally different about biological minds?
  • How would we know if a machine was conscious — and would it know that it was?

For me, the purpose of this post was not to give you a concrete “yes or no” answer, but hopefully to make you reconsider what you might already know.

If any of this sparked a thought, a question, or even just a moment of “huh, that’s weird,” then I’m glad — that’s all I could hope for. Feel free to leave a comment if you’ve got ideas, disagreements, or just want to keep the conversation going. I’d love to hear what you think.

Footnotes

  1. Can a Machine Ever Be Conscious? – full 5000-word essay (PDF) ↩︎
  2. Gödel’s Incompleteness Theorem – Numberphile
    Maths’ Fundamental Flaw ↩︎
  3. Russell even co-authored a massive three-volume book (Principia Mathematica) with Whitehead in which, after hundreds of pages of logic, they finally managed to prove 1+1=2. This may sound ridiculous, but to them, rebuilding the foundations of maths from the ground up using pure logic was serious business… ↩︎
  4. Highly recommend reading Gödel’s Proof by Ernest Nagel and James R. Newman for more details on the proof ↩︎
  5. That part of the theory is… controversial, to say the least ↩︎
  6. A simple example does the job here: knowing that smoking is bad but still smoking. A clear contradiction between belief and behaviour, yet people manage to navigate daily life. ↩︎
  7. Thought for another day – the brain is not under our control, well, not fully. At the end of the day, we are emotional beings. ↩︎
  8. I look forward to being proved wrong in the future… ↩︎
