(Archive) Summer Research Fellowship 2023
--- Applications for the 2023 Summer Research Fellowship have now closed ---
We, the Center on Long-Term Risk, are looking for Summer Research Fellows to help us explore strategies for reducing suffering in the long-term future (s-risks) and to work on related ideas in technical AI safety. For eight weeks, you will be part of our team while working on your own research project. During this time, you will be in regular contact with our researchers and other fellows, and one of our researchers will serve as your guide and mentor.
Your contributions to our research program will have a positive impact through their influence on our strategic direction, grantmaking, communications, events, and other activities. You will work autonomously on challenging research questions relevant to reducing suffering. You will become part of our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can.
We are worried that some people might not apply because they wrongly believe they are not a good fit for working with us. While such a belief is sometimes true, it is often the result of underconfidence rather than an accurate assessment. We would therefore love to see your application even if you are not sure whether you are qualified or otherwise competent enough for the positions listed. We explicitly have no minimum requirements in terms of formal qualifications, and many past summer research fellows have had little or no prior research experience. Being rejected this year will not reduce your chances of being accepted in future hiring rounds. If you have any doubts, please don’t hesitate to reach out (see “Application process” > “Inquiries” below).
Purpose of the fellowship
The purpose of the fellowship varies from fellow to fellow. In the past, we have often had the following types of people take part in the fellowship:
- People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong focus on s-risk and would like to learn more about research and test their fit.
- People seriously considering changing their career to s-risk research, who want to test their fit or seek employment at CLR.
- People with a strong focus on s-risk who aim for a research or research-adjacent career outside of CLR and who would like to gain a strong understanding of s-risk macrostrategy beforehand.
- People with a fair amount of research experience, e.g. from a partly or fully completed PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.
There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. In many cases, this will mean focusing on learning and testing your fit for s-risk research rather than on producing immediately valuable research output.
Responsibilities
- Carrying out a research project related to one of our priority areas below or otherwise targeted at reducing s-risks. You will determine this project in collaboration with your supervisor at CLR, who will meet with you every week and provide feedback on your work.
- Attending team meetings, including giving occasional presentations on the state of your research.
What we look for in candidates
We don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.
- Curiosity and a drive to work on challenging and important problems;
- Ability to answer complex research questions related to the long-term future;
- Willingness to work in poorly-explored areas and to learn about new domains as needed;
- Independent thinking;
- A cautious approach to potential information hazards and other sensitive topics;
- Alignment with our mission or strong interest in one of our priority areas.
Further details
We encourage you to apply even if some of the details below do not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.
- Compensation: Unfortunately, we face significant funding uncertainty at the moment, so we don’t yet know how much we will be able to pay participating fellows. Compensation will range from £1,800 to £4,000 per month, depending on our funding situation when we make final offers. We hope to be able to pay the full amount, and we will update this section as we learn more. We also hope to cover travel and visa costs for Fellows who need to relocate to London for the Fellowship (as we have done in the past).
- Number of available positions: We expect to accept three to ten fellows. Again, this is subject to our funding situation at the offer stage.
- Program length & work quota: The program is intended to last for eight weeks in a full-time capacity. Exceptions, including part-time work, may be possible.
- Program dates: The default start date is July 3, 2023. Exceptions may be possible.
- Location: We prefer summer research fellows to work from our London offices, but will also consider applications from people who are unable to relocate.
- Benefits: CLR also offers substantial benefits to all staff – for details, see the “Benefits” section below.
- International applicants: We are a registered UK visa sponsor. In most cases, we expect to be able to sponsor temporary visas for successful international applicants who would like to come to the UK for the Fellowship. If you have questions about this, please ask us in the application form or reach out to us beforehand.
Priority areas
You can find an overview of our current priority areas here. However, if we believe you can advance high-quality research relevant to s-risks, we are interested in creating a position for you, even outside these areas. If you see a way to contribute to our research agenda or have other ideas for reducing s-risks, please apply. We commonly tailor our positions to the strengths and interests of applicants.
Mentors
All fellows will work with a mentor who will guide their project. Each of our mentors has written below about the topics on which they’re most interested in supervising research.
At stage 2 of our application process, applicants are asked to submit a research proposal and a list of research proposal ideas. A significant part of our selection process is our mentors’ assessment of whether they would be interested in supervising an applicant, based on how the applicant’s research interests match their own.
Anthony DiGiovanni
I would be most keen to supervise projects on:
- Using frameworks from open-source game theory to model potential cooperation failures between AIs, and ways to mitigate those failures. (Examples: Safe Pareto Improvements; Commitment games with conditional information revelation)
- Assessing how AI alignment techniques might be used to ensure that early AGIs safely navigate bargaining problems, instead of locking in catastrophic errors.
- Understanding potential causes of, and interventions against, conflict-seeking preferences in AI agents.
Jesse Clifton
Some things I’m keen to supervise projects on are:
- The same topics as Anthony listed above
- Paths to s-risk from malevolent actors
- Designing s-risk-relevant evaluations for large language models
However, I'm also interested in considering strong proposals outside these areas.
Emery Cooper
I’m most interested in supervising projects related to:
- Technical research into Evidential Cooperation in Large Worlds (ECL), or superrationality more broadly
- Prioritisation research related to ECL
- Research related to decision theoretic problems in bargaining
Daniel Kokotajlo
I’m most keen to supervise projects in the following areas:
- Technical research into Evidential Cooperation in Large Worlds (ECL)
- Technical research into commitment races, equilibrium selection, or bargaining between AIs
- Strategy/prioritization research regarding s-risks and ECL
- Anything else I've expressed enthusiasm for before or written about a decent amount
Caspar Oesterheld
I’m interested in supervising Fellows working in any of my academic interest areas, as seen on my website and blog.
Abram Demski
Given their particular relevance to CLR's priorities, I’d be interested in working with Fellows in any of the following areas:
- Rational deliberation (in the sense of Skyrms but also in other senses), normative correctness of thought, intersubjective normativity, the foundation/source of endorsed value judgements, deliberation in multiagent negotiations, bargaining.
- Decision theory and foundations of agency.
- I am also open to mentoring suitable applicants in other areas that I’m interested in: semantics (i.e. the question of how meaning arises), transparency, and ELK; naturalistic definitions of knowledge/meaning/belief; AI risk strategy and research prioritization for AI risk; and intelligence augmentation for improving research.
Application process
We value your time and are aware that applications can be demanding, so we have thought carefully about making the application process time-efficient and transparent. We plan to make the final decisions between May 5 and May 10.
Stage 1: To start your application for any role, please complete our application form. The form also asks you to submit your CV/resume and gives you the opportunity to upload an optional research sample. The deadline is Sunday, April 2, 2023, end of day anywhere on Earth. We expect the form to take around 2 to 3 hours if you are already familiar with our work. In the interest of your time, you do not need to polish the language of your answers.
Stage 2: By Friday, April 7, we will decide whether to invite you to the second stage. We will ask you to write a research proposal (up to two pages, excluding references) and a list of research proposal ideas, to be submitted by Sunday, April 23, end of day anywhere on Earth. Applicants will therefore have two weeks to complete this stage, which we expect to take up to 12 hours of work; you may want to keep some time free during this period. Applicants will be compensated with £250 for their work on this stage.
- You can see some example research proposals submitted by previous successful candidates here. Note that we will alter the instructions for the research proposals this year. We plan to make examples for the list of research proposal ideas available before stage 2.
Stage 3: By Friday, April 28, we will decide whether to invite you to an interview via video call during the week of May 1. By May 10, we will send out final decisions to applicants.
Further details
- Application base rates: Last year, we received 81 applications for the summer research fellowship. We made ten offers.
- Diversity and equal opportunity employment: CLR is an equal opportunity employer, and we value diversity at our organization. We welcome applications from all sections of society and don’t want to discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, marital status, veteran status, social background/class, mental or physical health or disability, or any other basis for unreasonable discrimination, whether legally protected or not. If you would like to discuss any personal needs that may require adjustments to our application process or workplace, please feel free to contact us.
Inquiries
If you have any questions about the process, please contact us at hiring@longtermrisk.org. If you want to send an email not accessible to the hiring committee, please contact Amrit Sidhu-Brar at amrit.sidhu-brar@longtermrisk.org.
Benefits
In addition to their salary, CLR offers the following benefits to all staff (including Summer Research Fellows):
- 25 days’ paid vacation per year, plus public holidays. (For temporary staff, this is reduced proportional to the length of your employment.)
- A budget of £5,000 per year for expenses related to mental and physical health, and £3,000 per year for professional development and productivity. For the Summer Research Fellow role, these budgets are reduced to £625 and £375 respectively for the duration of your Fellowship.
- Plant-based lunch available at the office every day.
- Flexible working hours.
- 20 weeks of paid leave for permanent employees who become new parents, and consideration of childcare costs in setting permanent employees’ salaries.
- For permanent employees working from the US, we also cover full health care and dental costs.
Why work at CLR
We aim to combine the best aspects of academic research (depth, scholarship, mentorship) with an altruistic mission to prevent negative future scenarios. So we leave out the less productive features of academia, such as precarious employment and publish-or-perish incentives, while adding a focus on impact and application.
As part of our team, you will enjoy:
- a role tailored to your qualifications and strengths with ample intellectual freedom;
- working towards a shared goal with dedicated and caring people;
- an interdisciplinary research environment, with friendly and intellectually curious colleagues who will hold you to high standards and support you in your intellectual development;
- mentorship in longtermist macrostrategy, especially from the perspective of preventing s-risks;
- the support of a well-networked longtermist EA organization with substantial operational assistance instead of administrative burdens.
You will advance neglected research to reduce the most severe risks to our civilization in the long-term future. Depending on your specific project, your work will help inform our activities along one or more of the following paths to impact:
- Technical interventions: We aim to develop and communicate insights about the safe development of artificial intelligence to the relevant stakeholders (e.g. AI developers, key organizations in the longtermist effective altruism community).
- Governance interventions: We aim to develop and help implement appropriate governance structures for the safe development of artificial intelligence.
- New projects: In collaboration with people in our network, we are always looking for novel impactful organizations to set up. For instance, we have been involved in the founding of the Cooperative AI Foundation and the Foundations of Cooperative AI Lab. Previously, we established Wild Animal Suffering Research, which later merged with Utility Farm to become the Wild Animal Initiative, a now independent organization.