[Closed] Research and Executive Assistant

The Forethought Foundation is seeking a research/executive assistant for its director, Will MacAskill. We’re looking for someone who is passionate about making a difference and happy to work closely with Will. The position could also develop into a more senior role, depending on experience and performance.

You will aid Will with a variety of tasks in the following two areas:

  1. Executive and operational support, including scheduling meetings, triaging emails, making travel arrangements, and helping maintain a financial overview of the organisation.

  2. Research assistance, including writing and copyediting, literature reviews, and short original research projects.

Previous experience in operations or an assistant role is desirable, but we are open to hiring someone without it, provided they have the right mindset and are willing to learn quickly and act on constructive feedback. Research or academic experience is also desirable.

The starting date is flexible, but would ideally be in April or early May.

About the Forethought Foundation

Forethought supports and promotes individuals and institutions working on global priorities research, develops effective altruism and longtermism as ideas, and presents those ideas in social and traditional media, in person, and within academia. Recently, Forethought has worked on writing and promoting the book What We Owe The Future.

The Forethought Foundation is a project of Effective Ventures. We’re planning to spin out of Effective Ventures, and will likely set up, within the next 1-2 years, a new research organisation with similar aims to Forethought or with a wider scope.

Key responsibilities

In this role, there are two areas of responsibility: 1) Executive and operational support, and 2) Research assistance.

Executive and operational support will involve:

  • Triaging and drafting emails.

  • Scheduling meetings.

  • Planning and making travel arrangements.

  • Productivity support.

  • Helping prepare for events, such as EA Globals and speaking engagements.

  • Providing input on plans and decisions.

  • Helping Will maintain a financial overview of the organisation.

  • Liaising with Effective Ventures, under which Forethought is a project.

  • Supporting the spin-out of the organisation into a new entity. We expect to hire someone to lead the spin-out, with whom you’d work closely.

Research assistance will involve working closely with Will in the following ways: 

  • Reading, commenting on, and helping to revise draft papers written by Will and others.

  • Writing literature reviews. 

  • Writing case studies.

  • Conducting short original research projects.

There’s more information on planned research focuses below.

What this role could turn into

There are three obvious ways in which this role could develop, depending on experience and performance:

  • Research Fellow: This would involve full-time (or close to full-time) work on research in the broad area of ASI governance (see below). 

  • General Manager: If we decided that general-audience content such as a book made the most sense as the main output, then we would build up a team, managed by a General Manager, in pursuit of that aim.

  • Chief of Staff: If we decided that we should form a new research institute, then there would be a need for a Chief of Staff. For more information on the nature of that role, see this description by Matt Mochary. 

Key attributes

The ideal candidate will demonstrate the following attributes:

  • Diligence and conscientiousness. We want someone who is self-directed enough to work hard alongside the rest of us, and engaged and friendly enough to help build our fun and intellectually stimulating atmosphere.

  • Excellent organisational skills.

  • Good verbal and written communication skills.

  • Excellent ability to prioritise: can execute on many projects at once while keeping the high-level aims of those projects in mind and prioritising appropriately.

  • Enthusiasm for a service role.

  • Familiarity with EA concepts and the EA community.

  • Computer skills, especially rapid and accurate typing, knowledge of the Google Apps suite, and confidence in creating and manipulating spreadsheets.

  • Autonomy and proactivity. Excels at performing tasks with minimal guidance, and actively looks for improvements to make and tasks to take on without being asked.

The Forethought Foundation values diversity, and does not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, or disability status. We are happy to make any reasonable accommodations necessary to welcome all to our workplace. Please contact us at contact@forethought.org to discuss adjustments to the application process.

Salary and location

This is a full-time role, reporting to William MacAskill. We prefer applicants who are able to work from our offices in Oxford, UK. However, we will consider remote work for exceptional candidates.

For more junior candidates, our starting salary range is £40,000 to £70,000, depending on prior experience. We’d welcome applications from more senior candidates, too, which would extend this salary range further. Benefits include health, dental, and vision insurance, flexible work hours, an annual training budget, extended parental leave, ergonomic equipment, a 3% pension contribution, 25 days of paid vacation, and more.

Research focus

Recent advances in AI suggest that we might face an intelligence explosion in the next decade, in which what would have been centuries of technological and intellectual progress on a business-as-usual trajectory occurs over the course of just months or years.

Most effort to date from those worried about an intelligence explosion has gone into ensuring that AI systems are aligned: that they do what their designers intend them to do, at least closely enough that they don’t cause catastrophically bad outcomes.

But even if we make sufficient progress on alignment, humanity will have to make, or fail to make, many hard-to-reverse decisions with important and long-lasting consequences. We can call these decisions Grand Challenges. Over the course of the intelligence explosion, we will have to address many Grand Challenges in a short space of time, potentially including: what rights to give digital beings; how to govern the development of many new weapons of mass destruction; who gets control over an automated military; how to deal with fast-reproducing human or AI citizens; how to maintain good reasoning and decision-making despite powerful persuasion technology and a greatly improved ability to ideologically indoctrinate others; and how to govern the race for space resources.

We can call work in this area governance of artificial superintelligence, or ASI governance. ASI governance seems comparable in importance to work on AI alignment and not dramatically less tractable, yet it is currently much more neglected. The marginal cost-effectiveness of work in this area therefore seems even higher than that of marginal work on AI alignment.

Over the next six months, Will plans to do focused exploratory research into some of these areas. In particular, he’s interested in the rights of digital beings, the governance of space resources, and, above all, the “meta” challenge of ensuring that we have good deliberative processes through the intelligence explosion. One can think of work on this “meta” challenge as fleshing out somewhat realistic proposals that could take us in the direction of the long reflection; working on it could improve decision-making on all the Grand Challenges we will face. Work on good deliberative processes could also help with AI safety: if we can guarantee power-sharing after the development of ASI, that decreases the incentive for competitors to race.

He plans to work closely with the Worldview Investigations team at Open Philanthropy (in particular Joe Carlsmith and Lukas Finnveden), and some other researchers (such as Carl Shulman and Toby Ord).

Related work:

There’s currently little that’s been written on most of these issues.

After applying to the role, applicants will also be sent some draft work Will has written on the topic.

Potential outputs:

This is still in the early stages, so there are a number of directions in which this work could ultimately go. For example:

  1. Writing a book. This would be promising if we thought that outreach to a wide audience was particularly valuable. This might be true of work on digital rights and on the idea of governance of the intelligence explosion.

  2. Writing policy papers aimed at governments or international institutions, or helping provide input into parts of responsible scaling policies at AI labs. This would be particularly promising if the highest-value work was only of interest to a small number of decision-makers.

  3. Forming a new research institute. This would be particularly promising if (i) it seemed that there were plausibly quite a number of people who could contribute meaningfully in this area and who needed an institutional home; and (ii) there weren’t other people who could set up such an institute instead of Will.

Example projects that a research assistant could help with include:

  • A literature review on the game-theoretic implications of the ability to make irrevocable commitments. 

  • A “literature review” focused on reading through LessWrong, the EA Forum, and other blogs, and finding the best work there related to the fragility of value thesis.

  • Case studies on questions such as: What exactly happened to result in the creation of the UN, and what is the precise nature of the UN Charter? What can we learn from it? Similarly for the Kyoto Protocol, the Nuclear Non-Proliferation Treaty, and the Montreal Protocol.

  • Short original research projects, such as:

    1. Figuring out what a good operationalisation of transformative AI would be, for the purpose of creating an early tripwire to alert the world to an imminent intelligence explosion.

    2. Taking some particular neglected Grand Challenge, and fleshing out the reasons why this Grand Challenge might or might not be a big deal. 

    3. Supposing that the US wanted to make an agreement to share power and respect other countries’ sovereignty in the event that it develops superintelligence, figuring out how the US could legibly guarantee future compliance with that agreement, such that the commitment is credible to other countries.

Contact Us

If you have any questions about the application or the role, or want to refer someone else, feel free to contact us at contact@forethought.org.