Self-improvement races
Just as human factions may race toward AI and thus risk misalignment, AIs may race toward superior abilities by self-improving in risky ways.
This is a post I wrote about Caspar Oesterheld’s long paper Multiverse-wide cooperation via coordinated decision-making. Because I have found the idea tricky to explain – which unfortunately makes it difficult to get feedback from others on whether the thinking behind it makes sense – I decided to write a shorter summary. While I am hoping that my text can serve as a standalone piece, for additional introductory content I also recommend reading the beginning of Caspar’s paper, or watching the short video introduction here (requires basic knowledge of the “CDT, EDT or something else” debate in decision […]
In the essay Reducing Risks of Astronomical Suffering: A Neglected Priority, s-risks (also called suffering risks or risks of astronomical suffering) are defined as “events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far”.
Efforts to shape advanced artificial intelligence (AI) may be among the most promising altruistic endeavours. If the transition to advanced AI goes wrong, the worst outcomes may involve not only the end of human civilization, but also astronomical amounts of suffering – a so-called s-risk.
This is a reply to Metzinger’s essay on Benevolent Artificial Anti-natalism (BAAN), which appeared on EDGE.org (7.8.2017). Metzinger invites us to consider a hypothetical scenario where smarter-than-human artificial intelligence (AI) is built with the goal of assisting us with ethical deliberation. Being superior to us in its understanding of how our own minds function, the envisioned AI could come to a deeper understanding of our values than we may be able to arrive at ourselves. Metzinger has us envision that this artificial super-ethicist comes to conclude that biological existence – at least in its current form – is […]
Suppose you investigated two interventions A and B and came up with estimates for how much impact A and B will have. Your best guess is that A will spare a billion sentient beings from suffering, while B “only” spares a thousand beings. Now, should you actually believe that A is many orders of magnitude more effective than B?
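To make the question concrete, here is a minimal sketch (my own illustration, not taken from the post) of how a simple Bayesian adjustment treats such estimates: true impacts are modelled on a log10 scale with a shared prior, the naive estimates are treated as noisy observations, and the posterior means are pulled back toward the prior. The normal-normal model and every number below are assumptions chosen purely for illustration.

```python
def shrunk_log_impact(log_estimate, prior_mean=3.0, prior_sd=1.0, noise_sd=3.0):
    """Normal-normal update in log10 space: pull a noisy estimate toward the prior."""
    prior_prec = 1.0 / prior_sd ** 2
    noise_prec = 1.0 / noise_sd ** 2
    return (prior_prec * prior_mean + noise_prec * log_estimate) / (prior_prec + noise_prec)

# Hypothetical numbers: A is naively estimated at 10^9 beings spared, B at 10^3,
# with a prior centred on 10^3 and very noisy estimates (all on a log10 scale).
for name, naive in [("A", 9.0), ("B", 3.0)]:
    post = shrunk_log_impact(naive)
    print(f"{name}: naive impact 10^{naive:.0f}, posterior mean roughly 10^{post:.1f}")
```

Under these made-up parameters the six-orders-of-magnitude gap between the raw estimates shrinks to well under one order of magnitude, which is one way a naive comparison of point estimates can mislead.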
This post analyses key strategic questions on moral advocacy, such as:
What does moral advocacy look like in practice? Which values should we spread, and how?
How effective is moral advocacy compared to other interventions such as directly influencing new technologies?
What are the most important arguments for and against focusing on moral advocacy?
Efforts to mitigate the risks of advanced artificial intelligence may be a top priority for effective altruists. If this is true, what are the best means to shape AI? Should we write math-heavy papers on open technical questions, or opt for broader, non-technical interventions like values spreading?
This post is a discussion between Lukas Gloor and Tobias Baumann on the meaning of tool use and intelligence, which is relevant to our thinking about the future of (artificial) intelligence and the likelihood of AI scenarios.
Imagine a data set of images labeled “suffering” or “no suffering”. For instance, suppose the “suffering” category contains documentation of war atrocities or factory farms, and the “no suffering” category contains innocuous images – say, a library. We could then use a neural network or other machine learning algorithms to learn to detect suffering based on that data.
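As a rough illustration of the kind of classifier described above, here is a minimal sketch (assuming PyTorch is available; the random tensors merely stand in for a real labelled image dataset) that trains a small convolutional network to output a probability that an image depicts suffering:

```python
import torch
import torch.nn as nn

# Placeholder data: in practice these would be labelled images
# ("suffering" = 1, "no suffering" = 0) loaded from disk; random tensors
# are used here only to keep the sketch self-contained and runnable.
images = torch.rand(64, 3, 64, 64)           # 64 RGB images, 64x64 pixels
labels = torch.randint(0, 2, (64,)).float()  # binary labels

# A small convolutional network as the detector.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# Predicted probability that a new image depicts suffering.
with torch.no_grad():
    prob = torch.sigmoid(model(torch.rand(1, 3, 64, 64))).item()
print(f"P(suffering) = {prob:.2f}")
```

In practice one would of course start from a carefully curated dataset and most likely fine-tune a pretrained vision model rather than train a tiny network from scratch; the sketch only shows the basic supervised-learning setup the excerpt describes.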
This post is based on notes for a talk I gave at EAG Boston 2017. I talk about risks of severe suffering in the far future, or s-risks. Reducing these risks is the main focus of the Foundational Research Institute, the EA research group that I represent.
We were moved by the many good reasons to make conversations public. At the same time, we felt the content we wanted to publish differed from the articles on our main site. Hence, we're happy to announce the launch of FRI’s new blog.