Do Artificial Reinforcement-Learning Agents Matter Morally?

Artificial reinforcement learning (RL), a widely used training method in computer science, has striking parallels to reward and punishment learning in biological brains. Plausible theories of consciousness imply a non-zero probability that RL agents qualify as sentient and deserve our moral consideration, especially as AI research advances and RL agents become more sophisticated.

A Dialogue on Suffering Subroutines

This piece presents a hypothetical dialogue explaining why the instrumental computational processes of a future superintelligence might warrant moral concern. Agent-like components could emerge in many places, including the computing processes of a future civilization. Whether and how much these subroutines matter are questions for future generations to settle, but it's worth remaining open to the possibility that our intuitions about what suffering is may change dramatically.

The Eliminativist Approach to Consciousness

This essay explains my version of an eliminativist approach to understanding consciousness. It suggests that we stop thinking in terms of "conscious" and "unconscious" and instead look at physical systems for what they are and what they can do. This perspective dissolves some biases in our usual perspective and shows us that the world is […]

Flavors of Computation Are Flavors of Consciousness

If we don't understand why we're conscious, how can we be so sure that extremely simple minds are not? I propose thinking of consciousness as intrinsic to computation, although different types of computation may have very different types of consciousness – some so alien that we can't imagine them. Since all physical processes are computations, […]
