CLR Fund

In the coming weeks, the CLR Fund will no longer be accepting rolling applications and will instead move to an invite-only model. This is a temporary measure, prompted by an extremely high volume of applications and our team’s capacity constraints; it will also allow us to develop a more coherent strategy for the Fund over the coming months, as capacity allows.

Apply to the CLR Fund

Donate to this fund

Current balance of the CLR Fund: $347,595

Past grants from the CLR Fund: $1,741,111

Priority areas

We are most interested in individuals who want to make research contributions to our current priority areas. However, regardless of your background or how well you fit the areas listed there, if we believe that you can do high-quality work relevant to s-risks (now or in the future), we are interested in supporting you with a grant. Please apply; we can work out the details together.

Fund Management

Tobias Baumann has written extensively on his website about how we can best reduce suffering in the long-term future, and co-founded the Center for Reducing Suffering. He is currently pursuing a PhD in machine learning, aiming to understand how artificial learners can achieve higher levels of cooperation in social dilemmas. Previously, he received degrees in mathematics, physics, and computer science from Ulm University, and worked as a quantitative trader at Jane Street Capital.

Emery Cooper was previously a Researcher at the Center on Long-Term Risk, where her research focused on macrostrategy and the application of game theory, machine learning, and statistics to understanding multi-agent interactions. Previously, she received an MMath in mathematics and statistics from the University of Cambridge, and studied biostatistics at the MRC Biostatistics Unit. Emery is now at the Foundations of Cooperative AI Lab (FOCAL) at Carnegie Mellon University.

Stefan Torges was previously Director of Operations at CLR, and worked on building a community of researchers and professionals around CLR’s mission and priorities. Stefan studied philosophy, neuroscience, and cognitive science at the University of Magdeburg.


Past Grants

Note: grants may be made either directly by the Center on Long-Term Risk (CLR), or paid out by our partner charity, the Effective Altruism Foundation (EAF), on our advice. CLR became an independent charity only in late 2021, having previously operated as a project of EAF. All grants before this date were therefore disbursed by EAF.

2023

Eight grants were made in 2023, totalling $132,500. Details are forthcoming.

2022

  • Grant size: 8,000 GBP
  • Payout date: 17th November 2022
  • Fund managers: Tobias Baumann, Linh Chi Nguyen, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

(Grant writeup forthcoming)

  • Grant size: 7,000 USD
  • Payout date: 17th November 2022
  • Fund managers: Tobias Baumann, Linh Chi Nguyen, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

(Grant writeup forthcoming)

  • Grant size: 43,000 GBP
  • Payout date: 4th August 2022
  • Fund managers: Tobias Baumann, Linh Chi Nguyen, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

(Grant writeup forthcoming)

  • Grant size: 4,900 USD
  • Payout date: 3rd August 2022
  • Fund managers: Tobias Baumann, Linh Chi Nguyen, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

(Grant writeup forthcoming)

  • Grant size: 85,144.40 USD
  • Payout date: 29th July 2022
  • Fund managers: Tobias Baumann, Linh Chi Nguyen, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

(Grant writeup forthcoming)

  • Grant size: 9,600 GBP
  • Payout date: 23rd June 2022
  • Fund managers: Tobias Baumann, Linh Chi Nguyen, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

(Grant writeup forthcoming)

  • Grant size: 9,060 GBP
  • Payout date: 17th June 2022
  • Fund managers: Tobias Baumann, Linh Chi Nguyen, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

(Grant writeup forthcoming)

  • Grant size: 9,060 GBP
  • Payout date: 7th June 2022
  • Fund managers: Tobias Baumann, Linh Chi Nguyen, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

(Grant writeup forthcoming)

2021

  • Grant size: 7,000 USD
  • Payout date: 23rd December 2021
  • Fund managers: Brian Tomasik, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

We made a grant of $7,000 to allow Winston Oswald-Drummond to pay for a college consultant for six months. We expect that this will improve his university transfer applications and increase the chances that he is accepted into a top university. Attending and graduating from such an institution will likely benefit his long-term career prospects. Given his strong commitment to CLR’s mission, we believe that this is a good investment as he might make important contributions in the future.

  • Grant size: 251,060 USD
  • Payout date: 14th October 2021
  • Fund managers: Brian Tomasik, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

We made a grant of $251,060 to the University of Michigan to support Michael Wellman’s work on a project aimed at extending the methodology of empirical game-theoretic analysis (EGTA) in fundamental technical directions, driven by application to the design and evaluation of intelligent bargaining agents.

Wellman is the Richard H. Orenstein Division Chair of Computer Science and Engineering and the Lynn A. Conway Collegiate Professor of Computer Science and Engineering at the University of Michigan, where he leads the Strategic Reasoning Group. We have good reason to believe that Wellman takes concerns related to risks from advanced artificial intelligence seriously: he is affiliated with the Center for Human-Compatible AI at the University of California, Berkeley, has received funding from the Future of Life Institute, and has presented at a workshop of the Machine Intelligence Research Institute.

Over a two-year period, this grant will allow Wellman to spend some of his own time on this research project, as well as pay for one graduate student, a slice of a senior research scientist’s time, and expenses like travel and computing resources.

We see the potential impact from this manifesting in two ways. First, publications resulting from this grant may contribute to increased attention to bargaining problems in the multi-agent reinforcement learning field, which would complement our own work in this area. Second, tools developed by the grantee may be used by other researchers for making technical progress on solutions to bargaining problems. This includes training environments and off-the-shelf algorithms for learning bargaining policies in these environments.

There was disagreement among the fund managers about the ultimate merit of this line of research, but we concluded that it is a bet worth taking. Wellman himself is an accomplished academic in the relevant fields who shares our concerns about the safe development of advanced AI systems, so we are confident that he is well-suited to carrying out this research project.

  • Grant size: 100,000 USD
  • Payout date: 25th August 2021
  • Fund managers: Linh Chi Nguyen, Tobias Baumann, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

We made a grant of $50,000 to support Caspar Oesterheld during his graduate studies, followed by a further $50,000 to extend the period of support. He is currently a fourth-year PhD student in computer science at Duke University.

We expect this grant to free up additional time for Caspar Oesterheld to focus on his research, and to allow him to travel to important collaborators more frequently. We believe these benefits to be particularly important in light of his likely transfer to a new lab at Carnegie Mellon University, which focuses on his current research priorities.

According to our assessment, Caspar Oesterheld is a capable researcher with a solid publication record. He is also committed to CLR’s mission and interested in working on relevant questions in the context of his PhD. So overall, we believe he is in a good position to make important epistemic contributions.

  • Grant size: 7,200 USD
  • Payout date: 12th August 2021
  • Fund managers: Brian Tomasik, Emery Cooper, Stefan Torges
  • Disbursing charity: EAF Switzerland

We made a grant of $7,200 to provide the grant recipient with the funds required to take time to clarify their next career steps. Given the recipient’s strong commitment to CLR’s mission, we believe that this is a good investment, as they might make important contributions in the future.

  • Grant size: 6,393.30 USD
  • Payout date: 22nd July 2021
  • Fund managers: Brian Tomasik, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

We made a grant of 4,500 GBP to provide Timothy Chan with the funds required to move from Hong Kong to Beijing for four months. There he will live and co-work (on his own projects) alongside others working on global priorities. We believe that spending time in that environment will be good for his professional development. We are willing to invest in Timothy’s development in this way because it is our impression that he is seriously committed to reducing s-risks and could contribute valuably in this area in the future. We hope that this stipend will positively shape his career trajectory.

  • Grant size: 78,165.47 USD
  • Payout date: 22nd July 2021, 18th November 2021, 9th May 2022
  • Fund managers (initial grant): Brian Tomasik, Jonas Vollmer
  • Fund managers (renewals): Linh Chi Nguyen, Tobias Baumann, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

We made a grant of 12,000 GBP, followed by two renewals for a further 24,000 GBP each time, to Samuel Martin to enable him to pursue a research project intended to connect ongoing efforts on multi-agent AI safety to different scenarios of existential catastrophe. The funding allows him to focus on this project with all of his attention for fifteen months. (Three months initially, renewed twice for a further six months each time.)

There are various research efforts to ensure the (existential) safety of multi-agent systems and/or multipolar AI scenarios (e.g., Clifton 2019; Critch & Krueger 2020; Dafoe et al. 2020). However, significant uncertainty remains about the concrete failure modes these agendas aim to address and about the properties of “safe” agents in such settings and scenarios. At the current stage of the field, there are benefits to be had from clarifying how different approaches relate to different potential risks and what “success” conditions look like. Martin’s project has the potential to provide this clarity.

Given Martin’s background in multi-agent reinforcement learning and experience in graphical modeling, he seems to us like a good fit for this project. His motivation to address risks from transformative AI is a good indicator that he will focus on the most important aspects of the project.

Overall, we believe this grant is worth making because the project could significantly improve our understanding of failure modes involving multi-agent systems and paths to impact from work related to “Cooperative AI” (one of our priority areas), Martin’s researcher profile is a good fit for it, and it could refine the direction he will take in the course of his PhD.

  • Grant size: 81,000 USD
  • Payout date: 28th May 2021
  • Fund managers: Brian Tomasik, Jonas Vollmer
  • Disbursing charity: Effective Altruism Foundation

We made a grant of $81,000 to Nisan Stiennon to enable him to pursue theoretical research on the question of what it means for two agents to cooperate. More precisely, Stiennon proposes to study and make more precise the concept of cooperativeness by proving theorems in open-source game theory, using tools like reflective oracles and domain theory. The funding allows him to focus on this project with all of his attention for one year instead of having to work in industry.

The research question Stiennon intends to tackle is of central importance to our agenda on cooperation, conflict, and transformative artificial intelligence. A key upshot of this perspective is the desideratum of building agents that are trying to cooperate, such that they can reliably cooperate with any other agent that is trying to cooperate. However, it is not straightforward to define “cooperation” or “trying to cooperate”. We believe Stiennon’s proposed project is in principle suited to making an important contribution to this problem.

Judging by Stiennon’s CV, a reference we received, and a conversation one CLR staff member had with him, we believe him to be a sufficiently capable researcher to tackle this project. He also seems to have strong ties to the AI safety community, which reassures us that he is motivated to pursue this project in a way that contributes to the safe development and deployment of advanced AI systems.

Overall, we believe this grant is worth making because the project is clearly relevant to our priorities, Stiennon’s approach and competence lead us to believe there is a sufficiently high chance of progress (though we are uncertain whether the exact methods are ideal), and it could enable Stiennon to make future contributions in the same area.

2020

  • Grant size: $83,342.53
  • Payout date: 20th August 2020, 5th January 2021, and 3rd September 2021
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

We made a grant of $83,342.53 to Anthony DiGiovanni, a second-year PhD student in the statistics department of the University of Michigan. This consisted of:

  • An initial grant of $36,080.69, to cover his tuition for the fall 2020 semester and provide him with a $2,700 stipend per month for a four-month period
  • Follow-up grants of $36,080.69 and $11,181.15 to extend this arrangement for the next two semesters.

According to his own estimate, the initial grant would free up around 15 hours per week of Anthony’s time for 15 weeks. He plans to dedicate this time to relevant research; otherwise, he would have to spend it working as a teaching assistant for an introductory class. We believe that option would be much worse for his career, including a potential academic path. His supervisor agrees with this assessment.

We believe that Anthony is capable and motivated to do relevant research on both multi-agent reinforcement learning (as explored in section 5.1 of our research agenda on Cooperation, Conflict, and TAI), which he can combine with work for his PhD, and s-risk macrostrategy. We have been impressed with the quality of Anthony’s application to our summer research fellowship (for which we accepted him), previous research and research proposals he shared with us, and some of his public writings.

We think that having time to devote to research will accelerate both Anthony’s development and s-risk research progress more broadly, since he will likely be able to make important contributions soon.

  • Grant size: $47,409
  • Payout date: August 7, 2020
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

We made a grant of $47,409 to Animal Ethics for mainstreaming research on wild animal suffering in academia and the wider animal advocacy community. Animal Ethics plans to achieve this through publications and targeted outreach.

The staff at Animal Ethics seem to share our mission to a significant extent. We know their leadership somewhat well and have come to trust them. We appreciate that they work on wild animal suffering with longtermist concerns in mind. Given the grant size, we did not investigate Animal Ethics very thoroughly. It is our impression that they have good ideas for how to build the academic field. 

We tentatively believe that work on wild animal suffering, insofar as it is presented carefully, creates potentially long-lasting changes to people’s moral outlook. In particular, we think that “identifying problems as problems”, even if they are inconvenient and tricky to address, is an important attitude. We have noticed that several people who are very active in the longtermist effective altruism community, and the s-risk subcommunity in particular, have built up an activist identity around the issue of wild animal suffering.

We are currently uncertain about how important it is to grow this cause area, both for its direct effects on attitudes toward wild animals and for its indirect effects on growing various activist communities. We note that there are various risks associated with the topic, particularly the potential for accidental harm from unreflective or uncooperative attitudes among animal activists (e.g., blanket advocacy for rainforest destruction). We have found that Animal Ethics are aware of this responsibility and frame the issue in the right light. Nonetheless, before considering a follow-up grant, we will investigate whether funding this sort of work in general is robustly valuable by our lights.

  • Grant size: $28,528
  • Payout date: August 7, 2020
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

We made a grant of $28,528 to Rory Svarc, a second-year MSc student in Economics at Birkbeck College, University of London. Rory estimates that this will free up about 45 hours per week, because he will not be forced to take on a full-time job while studying to cover basic living expenses in London and his remaining course fees. He cannot take out a student loan since this is his second graduate degree.

Based on our interactions with Rory through two application processes, we believe that he is committed to reducing suffering and has the potential to become a researcher in one of our priority areas. His general academic research skills seem to be high.

We see the main benefits of the grant in freeing up Rory’s time to skill up in relevant domains such that he will be better able to contribute to s-risk research in the future. We think this grant is still positive even if he ends up saving fewer hours than predicted – which we think is plausible. We were excited about Rory’s ability to delve deeply into topics he’s passionate about, where he can combine ideas from different disciplines. We hope that our grant enables him to cultivate this strength.

  • Grant size: $144,579.10
  • Payout date: June 29, 2020; November 5, 2020; March 24, 2022
  • Fund managers:
    • For the first two grants: Brian Tomasik, Jonas Vollmer, Lukas Gloor
    • For the third grant: Linh Chi Nguyen, Tobias Baumann, Emery Cooper, Stefan Torges
  • Disbursing charity: Effective Altruism Foundation

Johannes Treutlein applied for a two-year grant worth CAD 129,784 ($95,596 at the time of conversion) to pursue a master’s degree in computer science at the University of Toronto. The degree will focus on multi-agent reinforcement learning, an area we consider relevant to our research priorities (see below). The grant is made to the University of Toronto and is split into CAD 54,474 for tuition fees and a CAD 75,310 top-up scholarship to cover Treutlein’s living expenses in Toronto and allow him to spend money to free up work time (e.g., renting an apartment close to the university).

Updates:

  • Due to COVID-19, Johannes delayed his course of study by several months. In October 2020, he therefore applied for and received additional funding of EUR 20,405 ($24,186 at the time of conversion), to cover his living and research-related expenses during the months before his delayed degree start date.
  • Additionally, in March 2022, we provided a follow-up grant of a further CAD 30,010 to extend his stipend by seven months. The initial stipend covered only 17 months of living expenses, and, like many students in his program, he decided to extend his studies to a two-year period.

We see this grant as an investment in Treutlein’s career in technical AI safety research, allowing him to pursue relevant research, further test his fit for AI safety research, interact with other Toronto-based AI safety researchers, and improve his academic research skills. We have been impressed by his excellent academic performance, his admission into multiple competitive master’s and PhD programs, his deep understanding of the available technical AI safety and macrostrategy research, his ability to communicate ideas in a systematic and rigorous manner, and his ability to carry out research on decision theory. For instance, Treutlein co-authored a paper on a wager for evidential decision theory with William MacAskill, Aron Vallinder, Caspar Oesterheld, and Carl Shulman. We are also excited about Treutlein’s strong altruistic dedication: He transitioned from a successful music career into a riskier career in mathematics and machine learning primarily to have a positive impact on the world.

While Treutlein does not intend to make s-risks a primary focus of his research career, his plan to work on multi-agent reinforcement learning is based on the corresponding section of CLR’s research agenda. We hope that he will continue to contribute occasionally to the CLR Fund’s research priorities afterwards.

Treutlein is a former staff member of CLR. Two out of three fund managers worked with him and have high confidence in the above assessment (and the third fund manager was also in favor). We carefully considered the potential conflict of interest arising from this relationship, and we feel sufficiently confident in our assessment to make this grant in this particular case. While grants in our existing network are particularly cost-effective for us to make, our fund managers are investing time and resources to get to know many potential grantees.

  • Grant size: $50,083
  • Payout date: January 13, 2020
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

We made a grant of €45,000 ($50,083 at the time of conversion) to allow Kaj Sotala to continue his independent research on multi-agent models of mind and their implications for cooperation, rationality, and the nature of suffering and human values. Sotala intends to expand his existing LessWrong sequence on the topic and to pursue academic publications and conference presentations.

With this grant, we intend to support Sotala’s research in general, rather than his current work in particular. We observe Sotala to be a capable, value-aligned researcher who, among other things, co-authored “Superintelligence as a Cause or Cure for Risks of Astronomical Suffering,” a seminal paper on s-risks. We also believe that Sotala’s work on multi-agent models of mind has been received positively by the LessWrong community (in terms of comments, upvotes, and direct feedback). We perceive the plans to pursue academic publications to be particularly valuable. That said, while some of the current work relates to cooperation and the nature of suffering, we believe it is only indirectly relevant to s-risks and longtermism.

Sotala is a former staff member of the Center on Long-Term Risk. Because he has been pursuing a line of research distinct from CLR’s research priorities, we believe it is more suitable to support his work through a CLR Fund grant rather than employment.

2019

  • Grant size: $81,503
  • Payout date: October 8, 2019
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

We made a grant of £66,000 ($81,503 at the time of conversion) to Dr. Arif Ahmed to free him from his teaching duties for a year. Ahmed is a University Reader in Philosophy at the University of Cambridge. His previous work includes the book Evidence, Decision and Causality and the organization of an academic conference entitled “Self-prediction in Decision Theory and Artificial Intelligence,” which featured contributions from technical AI safety researchers. This teaching buy-out will allow Ahmed to research evidential decision theory (EDT) further and, among other things, write another academic book on the topic.

We see this grant as a contribution to foundational research that could ultimately become relevant to AI strategy and technical AI safety research. As described by Soares and Fallenstein (2015, p. 5; 2017) and the “Acausal reasoning” section of CLR’s research agenda, advancing our understanding of non-causal reasoning and the decision theory of Newcomblike problems could enable further research on ensuring more cooperative outcomes in the competition among advanced AI systems. We also see value in raising awareness of the ways in which causal reasoning falls short, especially in the context of academic philosophy, where non-causal decision theory is not yet established.

Due to the foundational nature and philosophical orientation of this research, we remain uncertain as to whether the grant will achieve its intended goal and the supported work will become applicable to AI safety research. That said, we believe that Ahmed has an excellent track record and is exceptionally well-suited to carry out this type of research, especially considering that much work in the area has been non-academic thus far. In addition to the above, we also think that it is valuable for the CLR Fund (and effective altruist grantmakers, in general) to develop experience with academic grantmaking.

  • Grant size: $65,397.20
  • Payout date: October 8, 2019 and July 28, 2021
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

Tobias Pulver applied for a two-year scholarship of CHF 63,000 ($63,456 at the time of conversion) to pursue a Master’s degree in Comparative and International Studies at ETH Zurich. This is a political science degree that allows focusing on international relations, security policy, and technology policy. The majority of this grant will be used to cover living expenses in Zurich. A further component of $1,941.20 was paid out to Pulver in July 2021, to support survey work undertaken as part of his studies.

We see this grant as an investment in Pulver’s career in AI policy research and implementation. We are impressed by Pulver’s altruistic commitment and interest in reducing s-risks, his academic track record, his strategic approach to his career choice, and his admission to a highly competitive Master’s program at a top university. Pulver recently pursued an independent research project to explore his fit for AI policy research, which we thought was sound. He intends to keep engaging with EA-inspired AI governance research by applying to relevant fellowships at EA organizations.

Pulver is a former staff member of the Center on Long-Term Risk who decided to transition into AI governance due to personal fit considerations. Two out of three fund managers worked with Pulver and have high confidence in the above assessment (and the third fund manager was also in favor). We carefully considered the potential conflict of interest arising from this relationship, and we feel sufficiently confident in our assessment to make this grant in this particular case. While grants in our existing network are particularly cost-effective for us to make, our fund managers are investing time and resources to get to know many potential grantees.

  • Grant size: $39,200
  • Payout date: September 6, 2019
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

As part of the CLR Fund’s first open application round, the Wild Animal Initiative (WAI) applied for a grant for a research project to develop a longtermist approach to wild-animal welfare, to be carried out by various members of their research team.

We have a generally favorable view of WAI as an organization, though we did not conduct a thorough evaluation. Their research proposal prominently mentioned various considerations exploring the relationship between longtermism and wild-animal welfare research, but those considerations were not yet well developed. We also thought that some of their expectations regarding the impact of their project were too optimistic. That said, we are excited to see more research into the tractability, reversibility, and resilience of wild-animal welfare interventions.

We do not believe that research on wild-animal welfare contributes to the CLR Fund’s main priorities, but we think it might help improve concern for suffering prevention. While we might not make any further grants in the area of wild-animal welfare, we decided in favor of this grant due, in part, to the currently large amount of funding available.

Note that WAI was created through a merger that involved a largely independent project previously housed at the Effective Altruism Foundation, CLR’s parent organization.

  • Grant size: $20,000
  • Payout dates: $16,000 in December 2019; $4,000 in July 2020
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

As part of the CLR Fund’s first open application round, Miles Tidmarsh, Vasily Kuznetsov, Paolo Bova, and Jonas Emanuel Müller applied for a grant to carry out a research project that aims to explore the possibility of cooperation in defusing races to build powerful technologies such as artificial intelligence, extending the Racing to the Precipice model using an agent-based modeling methodology.

We decided to fund only a fraction of the requested grant amount ($20,000 instead of $75,000) and see this grant primarily as an investment in the grantees’ careers, learning, and exploration of further research projects, rather than as supporting the research project they submitted.

When investigating this grant, we sought the opinions of internal and external advisors. Many liked the general research direction and perceived the team to be competent. One person who reviewed their project in more detail reached a tentative negative conclusion and emphasized that the project team might benefit from more research experience. Another evaluator was tentatively skeptical that agent-based models can be applied usefully to AI races at this point. They recommended that the grantees look more into ensuring that the research will have a connection to real-world problems. We also observed that the team repeatedly sought external input, but did not seem to engage with critical feedback as productively as other grantees.

That said, we have been impressed by Jonas Emanuel Müller’s strong long-term commitment to effective altruism (in particular, his successful earning-to-give career and attempt to transition into direct work), his drive to understand the literature on AI strategy and s-risks, and his unusual awareness of the potential risks of accidental harm.

For these reasons, we decided to make a smaller grant than requested and encouraged the grantees to consider different lines of research. We think this grant has low downside risk and could potentially result in valuable future research projects. Some of our fund managers also think we might be wrong with our pessimistic assessment, generally like to support a diverse range of approaches and perspectives, and think that this grant might enable a valuable learning experience even if the grantees decide to continue their current project without incorporating our suggestions.

  • Grant size: $12,147
  • Payout date: September 6, 2019
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

As part of the CLR Fund’s first open application round, Jaime Sevilla applied for a grant of £10,000 ($12,147 at the time of conversion) to develop and analyze a decision-making model in order to determine under which conditions actions are time-sensitive. Among other things, he aims to refine the option value argument for extinction risk reduction.

We think it is probably very difficult to produce significant new insights through such foundational research. We think that applying standard models to analyze the specific scenarios outlined in the research proposal might turn out to be valuable, though we also do not think that doing so is a priority for reducing s-risks.

We also see this grant as an investment in Sevilla’s career as a researcher. We were impressed by a paper draft on the relevance of quantum computing to AI alignment that Sevilla is co-authoring; without it, we might have decided against this grant. We think it is unlikely that Sevilla will make s-risks a primary focus of his research, but we hope that he might make sporadic contributions to the CLR Fund’s research priorities.

  • Grant size: $5,000
  • Payout date: September 9, 2019
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

As part of the CLR Fund’s first open application round, Riley Harris applied for travel and conference funding to attend summer school programs and conferences abroad. Harris is a talented Master’s student at the University of Adelaide interested in pursuing an academic career in economics.

We see this grant as an investment in Harris’s potential academic career. His current interest is in game theory and behavioral economics, with potential applications in AI governance.

While we have been somewhat impressed by Harris’s academic track record and his interest in effective altruism and AI risk, one fund manager felt unsure about his ability to get up to speed quickly with the research on s-risks, pursue outstanding original research, and convey his thinking clearly. We hope that this grant will help Harris determine whether an economics PhD is a good personal fit for him.

2018

  • Grant size: $27,450
  • Payout date: November 27, 2018
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

We made a grant to Daniel Kokotajlo to free him from his teaching duties for a year. He is currently pursuing a PhD in philosophy at the University of North Carolina at Chapel Hill. The grant will double the hours he can dedicate to his research. His work will focus mainly on improving our understanding of acausal interactions between AI systems. We want to learn more about whether such acausal interactions are possible and what they imply for the prioritization of effective altruists. We believe this area of research is currently neglected because only a handful of people have done scholarly work on this topic, and many questions are still unexplored. We were impressed by Kokotajlo’s previous work and his research proposals and therefore believe that he has the skills required to make progress on these questions.

  • Grant size: $26,000
  • Payout date: September 18, 2018
  • Fund managers: Brian Tomasik, Jonas Vollmer, Lukas Gloor
  • Disbursing charity: Effective Altruism Foundation

We made a grant to Rethink Priorities for implementing a survey designed to study the population-ethical views of the effective altruism community. More common knowledge about values within the effective altruism community will make moral cooperation easier. There is also a chance that a more open discussion of fundamental values will lead some members of the community to adjust their prioritization in a way they endorse. The grant allows Rethink Priorities to contract David Moss. He has experience running and analyzing the SHIC survey and the 2015 Effective Altruism Survey. We have reason to believe that the project will be well executed. It is unlikely that this survey would have been funded by anybody else.

Rethink Priorities will also use part of the grant to conduct a representative survey on attitudes towards reducing the suffering of animals in the wild. While we do not think this is as valuable as their descriptive ethics project, the gathered information will likely still result in important strategic insights for a cause area we are very sympathetic towards. This survey will also be led by David Moss, in collaboration with academics at Cornell University.

For organisations whose programs extend much more widely than the project(s) we seek to support with our grant(s), we maintain a policy that limits indirect costs (‘overhead’) as follows:

For universities and community colleges: maximum 10% rate, i.e., indirect costs may not exceed 10% of total direct costs. We define indirect costs as expenses related to the general operations and administration of an organization, which are not directly allocated to or identified with a specific project.
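As a hypothetical illustration of the cap: a university project with $100,000 in total direct costs could include at most $10,000 in indirect costs, for a maximum total grant of $110,000.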

Our fund helps you give more effectively with minimal time investment. It works similarly to a mutual fund, but the fund managers aim to maximize the impact of your donations instead of your investment returns. They use the pooled donations to make grants to recipients whose work will contribute most to the mission of the fund. Giving through a fund can increase the impact of your donation in several ways:

Unique opportunities. Some funding opportunities are not easily accessible to individual donors. For instance, some grants in academia exhibit threshold effects and therefore require donation pooling. Grants to individuals are another example, since a direct donation would usually not be tax-deductible.

Economies of scale. Finding the best funding opportunities is difficult and time-consuming, since doing so well involves many considerations and substantial research. A fund allows many donors with limited time to delegate this work to the fund managers, who in turn can spend time identifying the best recipients for many people at once, making the process more efficient.

Expert judgment. The fund managers have domain expertise and consult with external experts where appropriate. They have thought about the long-term effects of different philanthropic interventions for years.

These reasons apply equally to all donation funds, e.g., other Effective Altruism Funds. You should give to this fund in particular if:

How to donate to the Fund

If you’d prefer to make an unrestricted donation to CLR, please use the form on our main Donate page. You can also find answers to frequently asked questions about donating there.



1 Due to conflicts of interest, we will not make any grants to the Center on Long-Term Risk or its affiliate projects.

2 We invest funds exceeding 9-12 months of expected grantmaking expenses in the global stock market in accordance with the Effective Altruism Foundation’s investment policy to create capital growth for the fund. Contributions made before the policy’s announcement in December 2019 are exempt.