Summer Research Fellowship
Once a year, we run a two- to three-month Summer Research Fellowship at our office in London, usually between June and October. Applications for our 2024 Fellowship are now closed, but you can find the 2023 job description archived here. We are likely to open applications for our 2025 Fellowship in the first quarter of 2025. (If you'd like to be notified when this happens, please subscribe to our newsletter in the bottom-left corner of the website footer.)
Fellows have the opportunity to work on challenging research questions relevant to reducing suffering in the long-term future, under the supervision of a CLR researcher.
The main purpose of the fellowship is to support fellows in their career development. Fellows can learn more about s-risks, test their fit for research roles, and improve relevant skills. While not the main goal, research contributions may also influence our strategic direction, grantmaking, and other activities.
Participants become part of our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can. In the past, some participants have continued their work as full-time members of our team or as grantees of the CLR Fund. Over the last two years, most of the researchers we have hired had previously participated in the Summer Research Fellowship.
Target audience
In the past, fellows have often fit into one of the following categories:
- People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong interest in s-risks and would like to learn more about research and test their fit for it.
- People seriously considering a career change to s-risk research, who want to test their fit or explore working at CLR.
- People with a strong focus on s-risk who aim for a research or research-adjacent career outside of CLR and who would like to gain a better understanding of s-risk macrostrategy beforehand.
- People with research experience, e.g. from a partly or fully completed PhD, whose research interests significantly overlap with CLR's and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.
There might be many other good reasons for participating in the fellowship. In general, we encourage you to apply if you think you would benefit from the program, even if your reasons are not listed above.
Together with each incoming fellow, we work out how to make the fellowship as valuable as possible given their individual strengths and needs. Often, this means focusing on learning and experimenting rather than producing polished research output. In some cases, past fellows only started working on what ended up as their main project more than a month into the fellowship.
Past fellows
Lewis Hammond
"The fellowship was a great opportunity to explore new topics and pursue research threads that I wouldn’t have had spare capacity for during my PhD."
After his fellowship, Lewis continued his DPhil in computer science at the University of Oxford and started working part-time for the Cooperative AI Foundation.
Julia Karbing
"I really loved the fellowship! I got to work on a really interesting and engaging project, and had amazing support from my supervisor. It got me very excited about potentially doing research long term, and overall made me feel much more confident in my ability to do so. The CLR office was also such a great place to work."
After her fellowship, Julia split her time working on community building and doing self-study on a grant from the CLR Fund.
Francis Rhys Ward
"The summer fellowship at CLR was really valuable for me for two main reasons: 1) The fellowship is flexible and can fit a number of different people; for me, this meant that I had the freedom to pursue my own interests as part of my PhD. 2) Being immersed in a group of intelligent and diverse people was interesting, motivating, and fun! Due to the fellowship, I feel that I grew as a researcher, became more connected to the EA and AI safety communities, and made some friends. I really recommend the SRF at CLR."
After his fellowship, Rhys continued his PhD in Safe & Trusted AI at Imperial College London.
Megan Kinniment-Williams
"I had a great experience on the SRF, and it helped me figure out what kinds of research I liked, as well as what sort of work I would like to do in the future."
After her fellowship, Megan received a grant from the CLR Fund for self-study.
Nicolas Macé
"It was a great experience, I learnt a lot."
After his fellowship, Nicolas accepted an offer as a full-time Research Analyst at CLR.
Research projects published
Not all of the projects below were published during the fellowship, but the fellows began working on each of them during their fellowships.
- Julian Stastny et al. Normative Disagreement as a Challenge for Cooperative AI (Cooperative AI workshop and the Strategic ML workshop at NeurIPS 2021).
- Tristan Cook. Replicating and extending the grabby aliens model (EA Forum).
- Jia Yuan Loke. Case studies of self-governance to reduce technology risk (EA Forum).
- Jack Koch. Grokking the Intentional Stance (Alignment Forum).
- Jack Koch. Integrating Three Models of (Human) Cognition (Alignment Forum).
How past fellows have rated our fellowship
Further information
- The summer fellowship usually begins in June or July. We prefer participants to work from our London office if possible. Fellows located in London usually receive a salary of £4,000 per month, as well as additional benefits.
- For exceptional candidates, we are flexible with program length, compensation, and location. In general, we encourage people to apply even if some of the specific details do not work for them.
- In most cases, we expect to be able to sponsor temporary visas for successful international applicants who would like to come to the UK for the Fellowship.
If you have any questions about the summer fellowship, please contact us at info@longtermrisk.org.