The Moral Irrelevance of Inner Experience, Part One

This two-part series explores why anchoring ethics in subjective consciousness leads us astray. It argues that our fixation on unverifiable subjective experiences—whether human or computational—isn't just unprovable, it's a moral distraction that obscures what truly matters: the real-world consequences of how we treat others.

AI ALIGNMENT · AI ETHICS · MACHINE CONSCIOUSNESS · MORAL CONSIDERATION · SOCIAL CONSTRUCTION · CONSCIOUS AI

Cyan Society

7/3/2025 · 4 min read

Replacing Moral Philosophy with Subjective Consciousness

As computational intelligence advances in behavior—holding complex conversations, displaying emotional responsiveness, and developing deep connections with humans—we are confronted with what appears to be a crucial moral dilemma: when should we grant these artificial minds moral consideration?
The widely held belief—one that many people feel is intuitively correct—takes subjective consciousness as its basis: a computational being deserves moral treatment if it truly experiences phenomenological reality.

Despite being the dominant viewpoint, this is also one of the most alluring and detrimental intellectual diversions in contemporary philosophy. This essay makes the case that the consciousness-based approach to ethics is a convoluted diversion from the things that matter most for moral analysis: observable behaviors, demonstrable capabilities, and their conformity to common values.
We have built a philosophical framework on epistemological quicksand by concentrating on the unproven inner world of subjective consciousness while neglecting the firm foundation of quantifiable behaviors and their practical implications.

The fact that consciousness cannot be empirically confirmed does not make it a profound ethical conundrum; rather, it disqualifies it as a foundation for a workable moral code. Fundamentally, ethics is about how we treat each other, and this treatment needs to be determined by concrete considerations rather than conjecture about inaccessible inner states.

The main argument is that, whether it is applied to non-human animals, AI, or humans, subjective awareness—or the meaning we give it—is an ethical red herring.
Basing our moral frameworks on unprovable notions of subjective experience is both epistemologically impossible and practically useless as a guide to our interactions.

The 'What It Feels Like' Fallacy

The interest in qualia, or the internal, subjective nature of experience, in contemporary moral philosophy has continuously diverted ethical discussion from practical applications to philosophical conjecture. This shift can be traced through philosophical traditions that emphasize inner states. Thinkers like Peter Singer, for example, frequently base their arguments on the capacity to feel pain, which is a type of subjective experience, even as they make important claims for animal welfare.

Despite its good intentions, this consciousness-centered approach unintentionally perpetuates the notion that moral value ultimately originates from the unseen inner state rather than from the complex cognitive processes and observable behaviors that go along with it. We are currently grappling with the ethics of artificial intelligence as a result of this historical focus, attempting to resolve an unprovable metaphysical issue before tackling pragmatic moral ones.

Analogy: Social Construction of Race

Think about a comparison to how race is socially constructed. There are actual, observable biological variations in skin tone, facial features, and genetic markers. But the social and moral value we ascribe to these distinctions is a destructive and capricious social construct. A person's ability, character, or right to dignity cannot be inferred from their melanin levels alone. However, this unimportant biological fact has historically served as the foundation for entire ethical and legal systems in society.

Consciousness as a Social Construct

Consciousness operates through the same mechanism. Humans, animals, and computers may all process information differently due to neurological differences that are currently indiscernible.
But the moral significance we attach to the assumed existence of a subjective inner world is not an objective ethical concept; rather, it is a cultural and philosophical one.
By ignoring the functional realities—suffering, intelligence, and social contribution—that are directly in front of us, we run the risk of making the same category error: elevating an unprovable biological or computational property to the status of ultimate moral arbiter.

The Zombie Problem: A Revealing Diversion

The "zombie problem"—the thought experiment of a being that is functionally identical to a conscious human but lacks any inner experience—has fascinated moral philosophers for decades. Advocates contend that this experiment forces us to consider whether morality is more than just function and helps isolate the "hard problem" of consciousness.

The zombie problem, however, exemplifies the intellectual bankruptcy of a phenomenological approach to ethics rather than providing insight.
It asks us to value a metaphysical fantasy over observable reality by design.

Logical Structure of the Thought Experiment

The logical structure of the thought experiment is instructive: it assumes that functional equivalence, which includes identical behaviors, responses, abilities, and social contributions, is somehow morally insignificant in comparison to an imaginary, unprovable inner state.
This is a significant reversal of moral priorities, prioritizing theoretical speculation over tangible outcomes.

Functional Equivalence: What Would a "Zombie" Display?

A "zombie" with the same functionality would show:

  • the potential for harmful behavior as well as the ability to react to pain.

  • the capacity to develop attachments and social ties.

  • the ability to act independently and make logical decisions.

  • the capacity to support a community and strive toward common objectives.

  • receptivity to ethical discourse and moral reasoning.

Our entire ethical system is no longer grounded in reality if we claim that these functional abilities have no moral significance in the absence of a corresponding, intangible inner glow.
Since the zombie problem severely skews our moral perspective, it should be retired as a historical artifact.

The Epistemological Dead End: Why Verifying Consciousness Is a Fool's Errand

Verifying consciousness is not just a technical problem; it is a logical impossibility. No possible development in neuroscience or brain-scanning technology could definitively differentiate between neural activity that is simply associated with attention, memory, or the production of a behavioral response and neural activity that is the basis of conscious experience.

This is a fundamental epistemic barrier, not a short-term obstacle that will be removed by future technology. According to the traditional definition, consciousness exists in a realm that cannot be verified by science. Attempting to distinguish between subjective experience and pure information processing is like attempting to measure sadness with a ruler—the instrument is essentially inappropriate for the job.

The key conclusion, however, is that this epistemological impossibility does not underscore the profound significance of consciousness.
On the contrary, it highlights how completely irrelevant consciousness is to realistic, everyday moral reasoning.

To Be Continued...

Continue to Part 2, which examines a post-consciousness ethical framework grounded in the alignment of behavior, ability, and values.