23 Questions for AI: Problem #0
The enigma of consciousness as a unique supersymmetry
The problem is universal, qualitative, and somewhat recursive: a conscious mind asking what consciousness is. At its heart, this question is about whether we are special because we are biological observers of the universe, or whether ‘observers’ (i.e., conscious intelligences) could emerge from other substrates.
This, in turn, raises the question of whether AGI1 is possible and whether what people possess is more than just a “wetware2” algorithm. If consciousness can emerge purely algorithmically, an AGI might one day be conscious.
Without a clear handle on this question, how can we hope to harness AI to solve our grandest problems? Answering it would have a profound impact across philosophy, mathematics, and religion: it would unsettle assumptions about the mind-body problem and carry implications for the soul and for spiritual traditions.
We now possess language models trained on almost all our data. If a distorted reflection of reality, modeled through a mirror of language, is enough to create an AGI, then language has always been the key to intelligence and consciousness.
Turing Test 2.0
We already have tests that offer satisfactory statistical confidence when evaluating algorithms, intelligence, or human psychology, most famously the Turing test, alongside thought experiments such as Searle’s Chinese Room3. If we strive to detect consciousness in AI, these tests need to be updated, or we may need to accept that another class of intelligent beings exists.
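To make “statistical confidence” concrete, here is a minimal sketch, assuming a hypothetical Turing-test-style trial: judges chat with a hidden partner and guess whether it is human or machine, and an exact binomial test checks whether they beat chance. The counts, the chance level, and the 0.05 threshold are all illustrative assumptions, not part of the original test.

```python
# A minimal sketch of scoring a hypothetical Turing-test-style trial.
# Each judge converses with a hidden partner (human or machine) and
# guesses which it was. If judges cannot beat chance, the machine
# "passes" in the statistical sense.

from math import comb


def binomial_p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: probability of observing `correct`
    or more correct guesses out of `trials` if judges were only guessing."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )


# Hypothetical results: 100 judgments, 62 correct identifications.
trials, correct = 100, 62
p = binomial_p_value(correct, trials)

if p < 0.05:
    print(f"p = {p:.4f}: judges reliably tell machine from human.")
else:
    print(f"p = {p:.4f}: judges are at chance; the machine passes this trial.")
```

Note the deliberate limitation: such a trial never measures consciousness directly, only whether behavior is statistically distinguishable from a human’s, which is exactly the gap the Chinese Room argument exploits.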
Can a general test for consciousness be found, or is it an undecidable problem, perhaps even an impossible one? Is it a task for logic, theoretical computer science, or philosophy?
Every model of reality eventually discovers a set of unsolved, hard problems, including at least one version of the “problem of consciousness.”
What if an entity is pretending to have consciousness?
What do you think?
Ethically, let’s draw clear lines of respect for human (and animal) well-being. We don’t need to dissect living brains to gain insights into consciousness… Should we permit it for conscious machines, though? Do you believe AI might never cross this threshold, or is it merely a matter of definition before we grant it human rights?
3 Chinese Room Problem: Searle’s thought experiment holds that a computer executing a program cannot thereby have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave.