How to apply

We usually run hiring rounds for specific positions. There are currently no ongoing hiring rounds, but we would love to hear from you if you would be excited to contribute to our mission. Simply fill out our general application form. We particularly encourage applications from women and minority candidates. Please contact us at info@longtermrisk.org if you have any questions about working with us.

About CLR

CLR aims to combine the best aspects of academic research (depth, scholarship, mentorship) with a strategic focus on preventing negative future scenarios. This means leaving out the less productive aspects of academia, such as a preference for publication volume and novelty over impact.

At CLR, you will enjoy:

  • a role tailored to your qualifications and strengths with ample intellectual freedom;
  • working towards a shared goal with highly dedicated and caring people;
  • an interdisciplinary research environment, with friendly and intellectually curious colleagues who will hold you to high standards and support you in your intellectual development;
  • comprehensive mentorship in longtermist macrostrategy, especially from the perspective of preventing s-risk;
  • the support of a well-funded and well-networked longtermist EA organization with extensive on-demand operational assistance instead of administrative burdens.

CLR was founded by a group of effective altruists who built the idea of improving the long-term future into a multi-million-dollar foundation at the core of the longtermist research community. Working at CLR will advance your research career in longtermism, effective altruism, AI strategy, AI governance, and technical AI safety. You will have the opportunity to exchange ideas and present your work at regular workshops with researchers at leading labs and institutes such as DeepMind, OpenAI, the Machine Intelligence Research Institute, and the Future of Humanity Institute at the University of Oxford. Previous staff members have gone on to work at organizations such as the Future of Humanity Institute and the Open Philanthropy Project.

CLR is an equal opportunity employer and we value diversity at our organization. We welcome applications from all sections of society and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or any other status protected by federal, state, or local laws. We will do our best to accommodate any disability or additional need you may have.


Impact

At the Center on Long-Term Risk (CLR), you will advance neglected research to reduce the most severe risks to our civilization in the long-term future, in particular in the context of transformative artificial intelligence. Your research will help inform:

  • our discretionary grantmaking, with several million dollars in assets available to fund interventions our research identifies,
  • the activities and policies of key organizations and researchers working on longtermism and AI risk (e.g., risk mitigation measures taken by AI labs), and
  • new activities and projects carried out by our implementation team, policymakers, and professionals in our network.


Research areas

We are most interested in individuals who want to make research contributions to our current priority areas. However, regardless of your background or the specific areas listed there: if we believe that you can advance high-quality research relevant to s-risks, we are interested in creating a position for you. If you see a way to contribute to our research agenda or open research questions, or if you have other ideas for reducing s-risks, please apply (or reach out to us). We commonly tailor our positions to the strengths and interests of applicants.


Get involved