When Philosophy Meets Computer Science

We do more than nothing

Do we live in a simulation? What is ethical?

As philosophers, we attract a fair amount of mockery because people assume we ask questions that have no answers and cannot be formalized. But what I, and many others, actually do is quite different. We are not asking questions that cannot be answered. We are asking how to reframe them so that there is at least some room for measurement and rational discussion. A good example is the simulation hypothesis, the idea that we might live in a simulation, as in The Matrix. For a long time, this seemed like a completely intractable problem: either we live in a simulation or we do not, and there is no obvious way to prove or disprove it.

But recently, David H. Wolpert published a paper that approaches the problem in a very different way. He does not ask whether we live in a simulation. Instead, he asks what would have to be true if such a simulation existed, and whether this is even mathematically possible.

"However, the hypothesis specifically concerns computers that simulate physical universes, which means that to formally investigate it we need to couple computer science theory with physics."

  • David Wolpert, 2026

More concretely, he connects the simulation hypothesis to the Physical Church–Turing Thesis and introduces a formal framework in which universes can be understood as computational systems. Within this framework he proves that, under certain assumptions, a universe can simulate another universe, and even itself. This follows from results in theoretical computer science, such as Kleene’s recursion theorem, which shows that programs can operate on descriptions of themselves. A simple example: a program that prints its own source code. Such programs already exist; they are called quines. A quine contains a representation of itself and processes it without any contradiction.
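To make the self-reference concrete, here is a classic Python quine, a program whose output is exactly its own source. This is my own minimal illustration of the idea, not code from Wolpert’s paper:

```python
# The two lines below form the quine; running them prints those two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The string `s` is a template that, when formatted with itself, reproduces the whole program, data and code at once. Kleene’s recursion theorem guarantees that this kind of self-reference is always available: for any computable transformation of programs, there is a program that behaves as if it had access to its own description.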

Now scale this idea up. If a universe follows rules that can be written as a program, then a computer inside that universe could simulate those rules step by step. If the system is powerful enough, that simulation can include the computer itself.
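As a rough sketch of that claim (my own toy illustration with made-up rules and numbers, not Wolpert’s formal construction): treat the “laws of physics” as any computable update rule, and a simulator as a loop that applies it.

```python
# Toy illustration: a "universe" is a state plus a deterministic update rule.
# A computer inside such a universe can, in principle, run the same rule on
# a copy of the initial state and reproduce the evolution step by step.

def step(state: int) -> int:
    """Stand-in for computable physical laws (here an arbitrary 64-bit map)."""
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def simulate(initial: int, steps: int) -> int:
    """Apply the rules step by step, the way a simulator inside the universe would."""
    state = initial
    for _ in range(steps):
        state = step(state)
    return state

# Direct evolution of the "real" universe...
state = 42
for _ in range(1_000):
    state = step(state)

# ...and a simulator's reconstruction of it from the same initial data.
# They agree because both are the same computable rule applied to the
# same initial condition.
assert state == simulate(initial=42, steps=1_000)
```

The nontrivial part of Wolpert’s result is showing that the simulated state can include the simulator itself without contradiction, which is exactly where self-reference of the quine kind does the work.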

It is possible.

And that already changes the nature of the philosophical problem. We have not answered the original question, but we have formalized part of it. We have moved from speculation into a space where mathematical reasoning applies.

Formalizing Ethics

When I read about Wolpert’s work, it immediately resonated with something I have been dealing with for months in discussions about ethics and AI. I am currently working on founding a company that aims to measure and formalize ethical behavior in AI systems, and I constantly hear the same objection: “But we can’t do it—we don’t even know what ethical is.”

And yes, it is difficult to define ethics in a way that everyone agrees on. But just like with the simulation hypothesis, that might be the wrong question to ask.

Instead of asking “What is ethical?”, we can ask something slightly different: if we lived in a society where individuals and institutions acted ethically, what would that society look like? What would be different compared to a society without ethical constraints? And if we introduce a specific ethical principle, what causal impact would that have on the overall well-being of that society? These are questions we can actually work with. We can collect data, build causal models, and test interventions. We can observe how different rules shape outcomes. In other words, we can start to formalize aspects of ethics, even if we never arrive at a perfect definition.
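As a deliberately simplistic sketch of what “test interventions” could mean (every agent, rule, and number below is an invented toy assumption of mine, not a real model of society): simulate a shared-resource economy twice, once with and once without a single ethical principle, and compare a crude well-being proxy.

```python
import random

# Hypothetical toy model: agents harvest from a regenerating common pool.
# The one "ethical principle" under test: never take more than your share
# of the sustainable yield. All numbers are illustrative assumptions.

def run_society(n_agents: int, rounds: int, ethical: bool, seed: int = 0) -> float:
    rng = random.Random(seed)
    pool = 100.0            # size of the common resource
    growth_rate = 0.25      # the pool regrows 25% per round
    wealth = [0.0] * n_agents
    for _ in range(rounds):
        sustainable_share = pool * growth_rate / n_agents
        for i in range(n_agents):
            desired = sustainable_share * rng.uniform(0.5, 3.0)  # varying greed
            take = min(desired, sustainable_share) if ethical else desired
            take = min(take, pool)  # no one can take what is not there
            pool -= take
            wealth[i] += take
        pool *= 1.0 + growth_rate
    # Average accumulated wealth as a (crude) well-being proxy.
    return sum(wealth) / n_agents

baseline = run_society(n_agents=20, rounds=50, ethical=False)
with_rule = run_society(n_agents=20, rounds=50, ethical=True)
print(f"average well-being without the principle: {baseline:.1f}")
print(f"average well-being with the principle:    {with_rule:.1f}")
```

The point is not the numbers, which are invented, but the shape of the question: a specific principle becomes an intervention whose causal effect on a measurable outcome we can vary, test, and argue about.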

What frustrates me in many of these discussions is that people treat the difficulty of defining ethics as a reason to do nothing. But doing nothing is not neutral. If we do not explicitly model and implement ethical behavior in AI systems, we are still creating systems that act according to implicit rules, rules we neither understand nor control. AI safety researchers broadly agree that approaches like constitutional AI matter, and that we need some form of guidance, constraints, or principles embedded in these systems. Yet many people resist any attempt to formalize ethics because it is imperfect. The alternative is worse.
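To make “explicit rather than implicit” tangible, here is a minimal sketch with entirely hypothetical names and a placeholder principle. Real constitutional-AI methods work quite differently (principles shape training and critique rather than filtering outputs), but the contrast stands: an explicit rule is something we can read, test, and audit.

```python
from typing import Callable

# Hypothetical sketch: a principle is an explicit, inspectable predicate
# over proposed actions. Implicit rules live in weights and training data;
# an explicit check, however crude, can at least be read and audited.
Principle = Callable[[str], bool]

def no_deception(action: str) -> bool:
    """Placeholder principle; a deployed check would need to be far more robust."""
    return "deceive" not in action.lower()

def constrained_act(propose: Callable[[], str], principles: list[Principle]) -> str:
    """Release a proposed action only if every explicit principle permits it."""
    action = propose()
    if all(principle(action) for principle in principles):
        return action
    return "refused: the proposed action violates an explicit principle"

print(constrained_act(lambda: "summarize the quarterly report", [no_deception]))
print(constrained_act(lambda: "deceive the user about the risks", [no_deception]))
```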

What Wolpert’s work shows, in a completely different domain, is exactly this mindset: you do not need to answer a philosophical question in its entirety to make progress. You just need to find a way to translate it into something formal, something you can reason about, even if only partially. So it is not about defining ethics once and for all. It is about asking better questions. Not “What is ethical?” but “What happens if we implement ethical principle X?” Not “Do we live in a simulation?” but “Under what conditions could such a simulation exist?”

This is how philosophy becomes practical. Not by abandoning difficult questions, but by reframing them in a way that allows us to engage with them rigorously. And especially in the context of AI, that shift—from asking impossible questions to building imperfect but formal models—is not optional. It is necessary.
