Design Sketches: Angels-on-the-Shoulder

29th January 2026
This is a rough research note – we’re sharing it for feedback and to spark discussion. We’re less confident in its methods and conclusions than in our more polished work.

Introduction

We think that near-term AI could do a lot to help people make decisions they’d endorse.
We want to help people envision this. In this piece, we will sketch five potential technologies, illustrating how they could work, and what might be achieved in a world that adopts them:
  • Aligned recommender systems — Most people consume content recommended to them by algorithms trained not to drive short-term engagement, but to meet long-term user endorsement and considered values
  • Personalised learning systems — When people want to learn about (or keep up-to-date on) a topic or area of work, they can get a personalised “curriculum” (that’s high quality, adapted to their preferences, and built around gaps in their knowledge) integrated into their routines, so learning is effective and feels “effortless”
  • Deep briefing — Anyone facing a decision can quickly get a summary of the key considerations and tradeoffs (in whichever format works best for them), as would be compiled by an expert high-context assistant, with the ability to double-click on the parts they most want to know more about
  • Reflection scaffolding — People thinking through situations they experience as tricky, or who want to better understand themselves or pursue personal growth, can do so with the aid of an expert system, which, as an infinitely-patient, always-available socratic coach, will read what may be important for the person in their choice of words or tone of voice, ask probing questions, and push back in the places where that would be helpful
  • Guardian angels — Many people use systems that flag when they might be about to do something they could seriously regret, and help them think through what they endorse and want to go for (as an expert coach might)
A couple of notes:
  • We’ve called this cluster of technologies “angels-on-the-shoulder” because all of them help users to get closer to their most endorsed selves.1 But some of them might behave more like “angels-in-the-pocket”, not doing anything until invoked.
  • These systems should all ultimately be empowering: providing information and helping people to make real choices and act the way they want to. But there are close relatives of these technologies — that we might regard as something like “fallen angels” — which instead constrain or manipulate people’s choices.
  • People can make endorsed decisions that are bad for the world! Still, overall we are inclined to the view that it’s better if people generally make decisions that are better by their own lights.
  • Standard chatbots can already help with some of this, and are getting better. But there is scope for specialized systems (built using frontier models and extra task-relevant data) to perform better, and different UI choices might meaningfully help (and encourage adoption). Ultimately we are not taking a strong stance about how these functions will be implemented in practice; we are mostly concerned with trying to give readers a picture of how technologies that have these functions might transform our lives.
We’ll start by talking about why these tools matter, then look at the details of what these technologies might involve before discussing some more cross-cutting issues at the end.

Why this matters

Mainly, we think these tools are important because they would improve the decisions people make. There's a persistent gap between how well people could make decisions and how well they actually do:
  • Decision-makers are often less informed than they could be, because learning takes effort and attention is scarce
  • They often lack situational awareness, because gathering and synthesizing relevant information is expensive
  • They make choices under pressure that don't reflect their considered values
Each of these gaps is, in principle, closable. The technologies we're going to describe represent different angles of attack.
And improving the quality of decisions seems like it would be quite a big deal. Many costly issues — personal, organizational, and civilizational — involve someone making what looks in retrospect like an obviously bad call. A contract signed without reading the fine print. A policy adopted without understanding how it might change incentives.
So technology that helps avoid such unforced errors could be very valuable. 
In particular, this technology might help us to avoid the major AI risks we’re facing. Humanity is navigating a period where the decisions we make — individually and collectively — could shape the long-term future. The quality of those decisions depends in part on the tools available to support them. A world where people are generally better informed, more situationally aware, more in touch with their own values, and less prone to obvious errors is a world more likely to handle the coming decades well. These technologies aren't sufficient on their own, but they're the kind of thing that could help.

Aligned recommender systems

Right now social media feeds typically show people content selected to optimize for engagement. Although this is in some sense tracking what the user wants, it’s often not what people would actually endorse focusing on, and can lead to unhealthy patterns like doom-scrolling. (And it’s not clear how the content-provision landscape will change by default, but it’s possible that its addictive and toxic properties could get worse.)
Instead, we should be aiming to build software that works for users, providing feeds that would still be engaging in the moment, but that they would be happy about having viewed weeks later. Aligned recommender systems of this sort could improve social media users’ reading habits and experience, spread to those who don’t currently use social media, and improve public discourse.2

Design sketch

Imagine opening your device and knowing that what you're about to see was chosen by a system that understands and looks out for you. Not in a creepy, manipulative way, but like a thoughtful friend who works with you to decide what to engage with, and kind of intuits when you need to laugh (and knows what you’ll find hilarious), when you need to learn, and when you need to look away from the screen entirely.
[Image: Hand-drawn design sketch of an aligned recommender system showing a personalized content feed, user context filtering, usage data, feedback loops, and active check-ins to improve endorsed decision-making.]

Under the hood, it does something like:
  • Collates candidate feed items from multiple sources, paying for access on the user’s behalf whenever that’s needed
    • Content from people the user follows, things that were recommended to them and have entered a “maybe read” queue, highly rated content on topics the user has expressed a desire to hear more about, “trending” content, etc.
    • Rates this content on various dimensions (based on internal facts about the content, as well as reviews from other people)
  • Builds a model of user preferences, based on multiple data sources that balance each other out to guard against naive optimization 
    • Which content seemed after-the-fact nourishing or worthwhile to the user, rated a few days later?
    • What is the user naturally drawn to? 
    • Which content led them to rabbit-hole into other content they found worthwhile (and what led them to things that felt empty)?
    • When did they endorse doing something different (e.g. chores, or work, or taking a nap, or going for a walk), and what’s the right way to prompt for that?
    • Occasional deeper-dive (but still fun!) user interviews to spur more reflective takes and better calibrate
  • Allows qualitative input / requests to steer what it’s suggesting at a given moment, e.g.
    • "I'm looking to be challenged today"
    • "I need something light but not mindless"
    • "Help me understand what's really going on with [topic]"
    • "I want to connect with people who see things differently"
  • Optionally:
    • Has contextual awareness — recognises the user’s emotional state, and patterns in their day
      • Works with the user to make a plan for how to react to these
      • Learns what kinds of content are most likely to be endorsed at different times — and also when to present the user with questions, and when not to tax them with decisions
    • Shares automated reviews with a centralized system, so others can learn which content was found nourishing
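To make the scoring step concrete, here is a minimal Python sketch of how such a system might rank candidate items. Everything in it (the signal names, the weighting scheme, the example items) is invented for illustration, not a claim about how a production system would work:

```python
# Hypothetical sketch of aligned feed ranking: blend in-the-moment appeal
# with a prediction of how the user will rate the item on reflection a few
# days later, and let qualitative requests temporarily boost topics.
from dataclasses import dataclass, field

@dataclass
class FeedItem:
    title: str
    topics: set[str]
    appeal: float                  # predicted in-the-moment engagement, 0..1
    predicted_endorsement: float   # predicted "glad I read this" rating, 0..1

@dataclass
class UserModel:
    # Learned from delayed reviews, occasional interviews, and observed behaviour.
    endorsement_weight: float = 0.7  # how heavily reflection outweighs impulse
    topic_boosts: dict[str, float] = field(default_factory=dict)

    def apply_request(self, boosts: dict[str, float]) -> None:
        """Steer the feed from a qualitative request, e.g. 'challenge me today'."""
        self.topic_boosts.update(boosts)

def score(item: FeedItem, user: UserModel) -> float:
    base = (user.endorsement_weight * item.predicted_endorsement
            + (1 - user.endorsement_weight) * item.appeal)
    return base + sum(user.topic_boosts.get(t, 0.0) for t in item.topics)

def build_feed(candidates: list[FeedItem], user: UserModel, k: int = 3) -> list[FeedItem]:
    return sorted(candidates, key=lambda i: score(i, user), reverse=True)[:k]

if __name__ == "__main__":
    user = UserModel()
    user.apply_request({"history": 0.2})  # "Help me understand what's going on with X"
    items = [
        FeedItem("Outrage thread", {"politics"}, appeal=0.9, predicted_endorsement=0.2),
        FeedItem("Long-form explainer", {"history"}, appeal=0.5, predicted_endorsement=0.9),
    ]
    for item in build_feed(items, user):
        print(item.title, round(score(item, user), 2))
```

The design choice worth noting is that the reflective signal dominates the impulsive one, and qualitative requests act as temporary boosts rather than overwriting the learned preference model.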

Feasibility

A key challenge will be navigating the incentives involved. Today, these are often driven by advertising; platforms get money when people consume more content there, and so reward creators who produce content that keeps consumers on the platform. This successfully fuels a lot of content creation, but has many (much-discussed) flaws, and ultimately it doesn’t seem surprising that ad-based platforms aren’t aligned to users’ best interests.
There are two ways that aligned recommender systems might be able to compete against these incentives:
  • By building add-ons that filter content from existing recommender platforms
    • The main difficulty here is that existing platforms might actively resist deployment. These services would shift attention away from engagement-optimized content and towards endorsement-optimized content, which would (at least temporarily3) hurt the bottom line of platforms that currently rely on ads.
  • By building new recommender platforms from scratch
The incentive landscape is certainly challenging, but it doesn’t seem fatal to us. The fundamental advantage that aligned recommender systems have is that users should (at least on reflection) prefer to use them. So some combination of the approaches above — together with a growing ability to make content that’s both extremely stimulating and what one would endorse engaging with, and the fact that highly customised software and subscription setups are increasingly viable — could seriously improve the situation. And if big platforms were to start losing audience share to systems that users find healthier, that would shift the incentives on those big platforms.
On the technical side, none of the individual steps in the above description seem too demanding for the capabilities of modern AI systems. However, doing a good job reliably might be beyond the models we have today — and you can’t build a complex system if the reliability of the components is too low. But building at least a basic version is already possible (anecdotally, people already sometimes find it helpful to use chatbots for content recommendations), and it’ll get easier as AI gets better.
There are also questions about how we can train systems to recognize what content users would endorse seeing. For one thing, getting any kind of data on this might be hard; people might generally struggle to understand what they actually want from their content consumption, or to accurately (retroactively) assess which pieces of content were useful to them.4

It’s also possible that there turns out to be a kind of chicken and egg problem, where to build a good system you need lots of data about users’ long-term preferences, but to get that you need to already have a good system. 
There are a few possible approaches to these problems:
  • Finding ways to collect data that are minimally annoying to users
    • This may not require large amounts of review data from individual users, so long as there are occasional deep dives to ground models predicting user endorsement from more readily-available data
  • Combining different metrics/data sources, to guard against naive optimization and to learn how to translate between easy-to-access data and more robust but sparser data (a toy sketch of this follows the list)
  • Finding users who are more willing to engage with clunky data collection, because they want the product more
    • In general, it’s helpful for adoption to target high-demand niches (see below), where people are more willing to invest to get a product they’re desperate for. This holds particularly strongly in cases where the initial product is suboptimal, which might be the case here
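As a toy illustration of the “translate between easy-to-access data and sparser data” idea flagged above, the sketch below fits a simple model from a cheap signal to the few delayed endorsement ratings a user has provided, then uses the fit to estimate endorsement where no review exists. The numbers, and the choice of a one-variable linear fit, are purely illustrative:

```python
# Hypothetical calibration step: learn how well a cheap signal (time spent)
# predicts the sparse "glad I read it" ratings collected days later, then
# use that fit to estimate endorsement for items that were never reviewed.
def fit_linear(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least-squares fit y = a*x + b over the reviewed items."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return a, my - a * mx

# Sparse ground truth: (minutes spent, delayed endorsement rating in 0..1).
reviewed = [(2.0, 0.3), (8.0, 0.7), (15.0, 0.9), (5.0, 0.5)]
a, b = fit_linear([m for m, _ in reviewed], [r for _, r in reviewed])

# Estimate endorsement for unreviewed items from the cheap signal alone.
for minutes in [3.0, 12.0]:
    print(f"{minutes:.0f} min -> estimated endorsement {a * minutes + b:.2f}")
```

A real system would use many signals and a learned model with uncertainty estimates, but the shape of the problem is the same: sparse, robust labels grounding predictions made from abundant, noisy ones.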

Possible starting points // concrete projects

  1. Build for yourself or your community. Make a recommendation engine that is useful for you, or people you know personally.
  2. Build the theory of what aligned recommendations would track. What types of variables are available to build metrics on in principle? Which of these should we really want to optimize for?
  3. Focus on data collection. How can you make delayed review fun for the users, while still getting the information that’s needed?
See also Design Sketches: Collective Epistemics, which has related ideas and discussion.

Personalised learning systems

Aligned recommender systems might help with the general question of content consumption. But when someone really wants to learn about a topic, the ideal process looks more targeted and bespoke than a general content feed.

Design sketch

A “tutor” system (integrated in your digital environment) that syncs up with you on what you’re trying to learn (and on what timescale, etc.), then designs and implements a great curriculum for you (based on knowledge about your habits, priorities, and learning style). It can run multiple “courses” in parallel for you; some might be short programs (where it just distills and adapts the best existing resources for your needs, helps you work through your confusions, builds you some exercises, ...) and some might be ongoing activities, especially if you’re trying to truly master a topic / understand a deep subject area, or if you want to keep updated on new developments.
[Image: Hand-drawn sketch of a personalized learning system showing goal selection, curriculum planning, spaced mini-lessons, ongoing assessment, and progress tracking.]

Under the hood, this might look like:
  • The system might “follow users around” in their digital environments to automatically learn about their work or goals, the information they’re already engaging with, and their preferences
  • When the user says they want to learn about some area, it syncs up with the user on the specific purpose/target in question, the timeline, etc.
    • The system can also proactively propose some “courses” to add, if it notices that a user is repeatedly confused in their work, or naturally seeking out information on a given topic
  • Then it maps out the area5 (exploring key topics, what background the user will need, other synergies/dependencies), designs an initial assessment program (if it doesn’t have this information already), and6 develops a curriculum for the user7
    • If the user wants to keep updated on some area, the system might start running agents that check for relevant developments
  • Depending on the user’s needs/habits (and the system’s affordances), the system might then prompt them to engage with the program at regular times, nudge them to do some next step when they seem to be distracted or have spare attention, insert relevant content into their news feeds, etc.
  • The system regularly reassesses its plans/curricula based on the user’s progress or changes in their context (occasionally checking in with them explicitly)
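One small, concrete piece of the above is the scheduling step: ordering lessons by their prerequisites and spacing them out over time. The sketch below, with invented lesson names and a simple doubling-gap schedule, shows roughly what that could look like:

```python
# Hypothetical curriculum scheduler: order lessons so prerequisites come
# first, then drip them out with expanding gaps (a crude stand-in for a
# real spaced-repetition policy).
from dataclasses import dataclass
from datetime import date, timedelta
from graphlib import TopologicalSorter

@dataclass
class Lesson:
    name: str
    prerequisites: tuple[str, ...] = ()

def schedule(lessons: list[Lesson], start: date) -> list[tuple[date, str]]:
    deps = {lesson.name: set(lesson.prerequisites) for lesson in lessons}
    order = TopologicalSorter(deps).static_order()  # prerequisites first
    plan, day, gap = [], start, 1
    for name in order:
        plan.append((day, name))
        day += timedelta(days=gap)
        gap = min(gap * 2, 7)  # expand the spacing, capped at a week
    return plan

curriculum = [
    Lesson("Bayes' theorem", prerequisites=("Conditional probability",)),
    Lesson("Conditional probability"),
    Lesson("Priors in practice", prerequisites=("Bayes' theorem",)),
]
for when, name in schedule(curriculum, date(2026, 2, 1)):
    print(when, name)
```

In a full system the gaps would adapt to assessment results and the user's calendar, and lessons would be generated or distilled from existing resources rather than hand-listed.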

Feasibility

The above setup is fairly intense/high-effort (on the technical side). Every part of it looks approachable in principle today, but as a practical matter, designing the system to be reliable enough that it’s a high-quality experience might be hard. Still, good products here could justify a large investment of development work; and the difficulty level will drop with time as LLMs improve.
There are leaner variants that just bite off one stage of the problem, which might be more natural starting points. It might also be possible to “cache” or streamline some of the work — e.g. curating a database of high-quality educational resources, with information about what they teach and other properties that inform who is likely to find them useful, which can then be used as a base map for curriculum design.

Possible starting points // concrete projects

We can think of a few different kinds of approach here:
  1. Build a baby version of a personalised learning tool. For example, you could start by making tools for an existing organisation that help with developing, running or customising their educational programmes. 
  2. Build subcomponents. Instead of trying to build a first pass version of a full system, you could work on modular systems that could later be used to build full products. For example, you could work on:
    1. A suite of tools that takes in an existing ~static curriculum and develops a more interactive one, or a more personalised one for a given user
    2. Personalised content distillers / “translator” tools
    3. Better “digest” tools that scan for new developments or research in a given area of interest, filter by relevance/importance, process and distill, and summarize that on an ongoing basis
  3. Build features into existing systems. Instead of building towards separate systems, try to integrate personalised learning into existing apps and technologies, for example RSS readers.
  4. Build roadmaps for the edtech space. Work with users and providers to better understand the current landscape, how AI is and isn’t being used, and user needs and bottlenecks.

Deep briefing

Of course, our decisions go far beyond just what content to consume. For making endorsed decisions in other situations, the ideal is often not to outsource our decisions entirely (risking gradual disempowerment of humanity if taken too far), but to make sure that we are properly informed of relevant information to help us decide well.

Design sketch

An AI system that acts as a tireless, expert chief of staff — almost instantly compiling decision briefs that would normally take teams of analysts days to prepare. Users give a high-level request, and the system produces a brief tailored to their specific situation and preferred communication style.
The system goes beyond simple summarization to provide genuine situational awareness: mapping stakeholders and their incentives, surfacing non-obvious considerations, identifying critical uncertainties, and presenting multiple framings of the core tradeoffs. Each element can be expanded on demand, creating a fluid experience between high-level overview and deep-dive analysis.
[Image: Hand-drawn sketch of a deep-briefing AI system showing how a user query is refined, researched, analyzed, and distilled into a clear decision brief.]

Under the hood, this might involve:
  1. Context gathering 
    • Brief conversation with the user to understand the decision at hand and what matters to the user
    • Diving into the user’s email or other accessible data to get more private context
    • Preliminary research to gather more information
  2. Multi-perspective analysis
    • Analysis to build a background strategic picture
      • Historical precedent mining (what tends to happen in similar situations?)
      • Stakeholder modeling (who cares, why, and how much power do they have?)
      • Outcome analysis (what might happen here, and how good/bad would it be?)
    • Option generation and stress-testing
      • Generating ideas
      • Risk and opportunity assessment across multiple timescales
      • Identifying weak-points, and refining the ideas
    • Identifying key uncertainties and information gaps
      • Where easy, gathering more information directly; where not, thinking about what it would take to seek more information and including that in the option set
    • Assessing options according to different perspectives
  3. Adaptive synthesis 
    • A brief, highlighting the top options and tradeoffs between them
      • Tagging relative confidence and uncertainty about different points
      • Format tuned to user preferences
    • Responds to user exploration by:
      • Expanding sections with deeper analysis when clicked
      • Re-running analysis with modified assumptions
      • Pulling in additional specialized expertise or data sources
      • Generating decision scaffolds (pro/con lists, decision matrices, scenario planning)
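Structurally, that pipeline might reduce to something like the sketch below, in which each analysis pass returns a section with a confidence tag and an expandable detail view. The analyses here are stubs; in a real system each would be backed by model calls, data sources, and the context gathered in step 1:

```python
# Hypothetical skeleton of a deep-briefing pipeline: analysis passes each
# contribute a tagged section, and the final brief supports expanding a
# section ("double-clicking") for its full detail.
from dataclasses import dataclass

@dataclass
class Section:
    title: str
    summary: str     # shown in the top-level brief
    detail: str      # shown when the user expands the section
    confidence: str  # e.g. "high" / "medium" / "low"

def stakeholder_analysis(context: dict) -> Section:
    # Stub: a real pass would map incentives and influence from the context.
    n = len(context["stakeholders"])
    return Section("Stakeholders", f"{n} parties with partly conflicting incentives",
                   "Full stakeholder map with incentives and influence...", "medium")

def uncertainty_analysis(context: dict) -> Section:
    # Stub: a real pass would identify load-bearing unknowns.
    return Section("Key uncertainties", "Two load-bearing unknowns identified",
                   "What extra information would resolve each unknown...", "low")

ANALYSES = [stakeholder_analysis, uncertainty_analysis]

def compile_brief(context: dict) -> list[Section]:
    return [analysis(context) for analysis in ANALYSES]

brief = compile_brief({"stakeholders": ["vendor", "legal", "finance"]})
for section in brief:
    print(f"[{section.confidence}] {section.title}: {section.summary}")
    # A UI would render section.detail when the user expands it.
```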

Feasibility

This is similar to things that AI systems are already helpful for, but it brings extra challenges. The two big ones are:
  1. Context ingestion
    • For this system to work smoothly, it’s essential that it can actually gather all the key relevant information. In realistic decision situations, such information is often hard to verbally convey.
    • There will be a lot of commercial pressure for AI systems to improve at context ingestion; this kind of application might naturally piggyback on improvements.
  2. Reliable analysis
    • When someone is faced with a very standard situation, it’s easy for LLMs to provide the standard advice for that situation. The more nuanced and detail-dependent the situation, the worse current LLMs tend to fare, and the more important it becomes to get good performance on the specific analyses involved.
    • At least to begin with, humans will still be in the loop to sanity check, so it isn’t crucial that the systems know exactly how to balance the conclusions of different analyses, but it does matter that they can analyse aspects of the situation with at least moderate reliability.

Possible starting points // concrete projects

  1. Team briefings. Try to build something that’s actually useful for you and people you work with, where it takes the context from your Google Docs / Slack messages / emails, builds a model of the situation, and writes a decision brief for sharing at team meetings.
  2. Consumer decision-making. Build a tool that helps people navigate consumer decisions (a relatively standard situation!). You ideally want a) high enough stakes that people are eager for good information, and willing to take the time to engage; and b) cases where the decision one person faces will be quite similar to decisions other people face, but not so identical that there’s just a single right answer. Some possibilities: 
    • Choosing the right second-hand car
    • Finding a place to live
    • Finding places to stay for a big trip
    • Choosing the right healthcare provider in a given location
  3. Specialized analysis. Choose one of the types of analysis that you would like an eventual system to be able to incorporate, and focus on making a product that can do just that aspect reliably and to a high quality. Subsequently make this available as a module to a more integrated briefing system.

Reflection scaffolding

Not all decisions we regret come from a lack of information. Sometimes it’s a question of how we orient. As well as providing briefings about external factors, AI systems could help us to get proper perspective on our own thinking.

Design sketch

An AI coach that combines reflective dialogue with value-elicitation and light sanity-checking. It meets you where you are (journals, voice notes, video, calendar), notices emotions/cues, helps separate “what I want” from “what I think I should want,” and co-creates experiments and check-ins. The goal isn’t to tell you what to do, but to help you look clearly and decide in ways you’ll endorse later.
[Image: Hand-drawn sketch of a reflection-scaffolding AI showing gentle prompts, emotional cues, in-conversation reasoning, and follow-up support to help users think more clearly.]

Under the hood, this might involve:
  • A psychological map of the user
    • Built during onboarding and ongoing use
    • Contains relevant knowledge and hypotheses
    • Stored locally + encrypted for privacy
  • Optionally uses video to detect emotional state and reactions to certain topics
  • Mutually agreed objectives in terms of what kind of support the user is looking for
  • A socratic conversational loop
    • The system does most of its thinking off-screen, letting the user do most of the talking — and then prompts with questions or observations (e.g. mirroring back something it heard, or pushing back on things that seem wrong) in places where it guesses those may be helpful
  • A crisis detection module
    • The tool we’re describing here is not designed to help people experiencing mental health crises; it should recognise these and refer people to seek expert help
    • Of course in the longer term that expert help could also be another automated system! 
      • At which point perhaps it could be integrated back into the original tool; however, we’re focusing here on the less extreme cases, as we don’t want to imagine our tool has to clear the bar of providing great crisis support as well as its main function
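As a cartoon of the socratic conversational loop, the sketch below stays silent by default, asks a probing question only when it spots a cue, and refers out rather than continuing when it sees a crisis marker. The keyword triggers are placeholders for what would really be model-based judgements about wording and tone:

```python
# Hypothetical socratic loop: silence is the default, questions are rare
# and targeted, and crisis markers trigger a referral rather than coaching.
from typing import Optional

CRISIS_MARKERS = {"hopeless", "can't go on"}  # placeholder, not a real classifier
PROBE_CUES = {
    "should": "Whose 'should' is that: yours, or someone else's?",
    "always": "Is it really always, or does it just feel that way right now?",
}

def respond(user_turn: str) -> Optional[str]:
    text = user_turn.lower()
    if any(marker in text for marker in CRISIS_MARKERS):
        return ("It sounds like you're going through something serious. "
                "I'd encourage you to talk to a professional; shall I share resources?")
    for cue, question in PROBE_CUES.items():
        if cue in text:
            return question
    return None  # stay silent: let the user keep talking

for turn in ["I always mess this up.", "I guess I should want the promotion.", "Anyway..."]:
    reply = respond(turn)
    print(f"user: {turn}\ncoach: {reply or '(listens)'}\n")
```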

Feasibility

It seems like it shouldn’t be too hard to get something moderately useful here. Indeed, many people already use chatbots for something approximating this functionality. 
One challenge will be getting to something that’s useful and reliable. We expect this is probably tractable. One natural starting point would be to do more to piggyback off existing human expertise; more on this idea below.
Another potential issue is that some of the people who would gain most from this kind of tool might be among the least likely to adopt it. You need to feel open to change before you’ll even begin self-reflection, and many of those who stand to gain most will also have fixed beliefs that make it hard for them to start. (This could be a particularly big deal in the case of important decision-makers in governments and AI labs, whose oversights and blind spots could have big implications for everyone else.)
This failure mode makes it particularly important to think through things like framing, marketing, and adoption strategies.

Possible starting points // concrete projects

A few different starting points are:
  1. Build for yourself or your friends. Create a system which you think helps you to reflect in productive ways, and follow the gradient of “what seems most helpful”. Perhaps you could even try to build a simulation of the version of yourself you most endorse.
  2. Build an auxiliary service for an existing provider. Work with an existing mental health provider or platform to provide a service that augments rather than replaces their existing services, like a between-session support tool.
  3. Gather expert data. For the different steps in the above sketch, gather data from professionals for how they would think about that step. This data could then be used to fine-tune AI models to approximate professional judgements, at least for early generations of the product. 
  4. Do market research to identify a good starting niche. For example, you could identify a specific type of decision that people would often benefit from help in talking through (“should I take this new job that would mean moving cities?”, “should I break up with my partner?”, “how do I set better boundaries with colleagues/family?”) — and build something specialized to help with that.

Guardian angels

So far we haven’t talked much about how the angels could come out of the pocket and onto the shoulder.8 But with proper integrations, it could be straightforward for AI systems to meet people where they are with their decisions, and invoke whichever of the above modalities would make most sense.

Design sketch

An always-on, privacy-first “guardian angel” that sits on top of your day-to-day tools (mail, chat, docs, phone, etc.). When it notices you’re about to do something risky, wasteful, unkind, or misaligned with your stated values, it gently surfaces a just-in-time nudge: a short risk summary, a few “wise-advisor” questions, and one-click alternatives (rewrite, defer, get a second pair of eyes, add a safeguard, log a decision). It optimizes for endorsement later, not momentary impulse.
[Image: Hand-drawn sketch of a “guardian angel” AI showing just-in-time nudges for mood regulation, impulse control, privacy warnings, and willpower support.]

Under the hood, this might involve:
  1. Context ingestion 
    • Recent keystrokes/chats, document diffs, recipients, existing relationships, calendar pressure, prior commitments, etc.
    • Ingested context kept on-device where possible
  2. Risk detectors
    • Classifiers for common pitfalls (e.g., ethical, reputational, legal/compliance, privacy/security, financial, interpersonal)
    • Classifiers for “hot state” markers (fatigue, hurry, dysregulation) that might mean the user isn’t in the best place to make decisions now
  3. Graded intervention
    • Based on severity and context of possible issues, the system might:
      • Do nothing
      • Introduce subtle visual cues that can be expanded if desired
      • Ask gentle questions to prompt reflection
      • Give explicit warnings with detailed analysis
      • (Optionally) flag the issue to an external trusted party
    • It should be easy for the user to pull suggestions for less-problematic-seeming alternatives, and to dialogue with the model if these don’t achieve the user’s goals
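The graded-intervention step might reduce to something like the following sketch: detector outputs are combined into a severity estimate, which is mapped onto the ladder of responses above. The weights and thresholds are invented placeholders that a real system would tune, likely per user:

```python
# Hypothetical intervention ladder: combine risk and "hot state" scores
# into a severity estimate and pick the gentlest adequate response.
from dataclasses import dataclass

@dataclass
class Signals:
    risk: float       # 0..1, from the pitfall classifiers
    hot_state: float  # 0..1, from fatigue/hurry/dysregulation markers

def intervention(s: Signals) -> str:
    severity = 0.7 * s.risk + 0.3 * s.hot_state
    if severity < 0.2:
        return "do nothing"
    if severity < 0.4:
        return "subtle visual cue (expandable if desired)"
    if severity < 0.7:
        return "gentle question to prompt reflection"
    return "explicit warning with detailed analysis"

print(intervention(Signals(risk=0.1, hot_state=0.2)))  # -> do nothing
print(intervention(Signals(risk=0.8, hot_state=0.9)))  # -> explicit warning
```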

Feasibility

There are no parts of this which are obviously out of reach today. However, it would probably be at least a major software project to:
  • Get context ingestion working smoothly
  • Get reliability high enough that the system felt consistently useful rather than annoying
Additionally, the expense of running the AI systems might be significant today.
All of these difficulties look to be on a trajectory where they will continue getting significantly easier over the next few years.

Possible starting points // concrete projects

A few different ideas are:
  1. Make a heated-email guard. This could be a plugin for Gmail or Outlook, which adds flags for tone and other risks. The system could then nudge towards a particular action, like scheduling a review after a cooldown period, rewriting the email in a calmer tone, or looping in others for advice. (A rough sketch of this follows the list.)
  2. Do market research on firms that might want to avoid classes of costly mistakes. Then start working on tools specifically for that market. One example might be expensive trades in finance firms.
  3. Build tools that are useful to you, your friends or your workspace. A few ideas:
    • A mood navigation system which recognises cues that you’re tired or grumpy, and gently suggests a way you could navigate that
    • A watch-team-back-up assistant which flags when you’re about to take actions you might regret at work. You could give this system your organisation’s handbook and policies and all of the team’s retrospectives as context.
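Returning to the heated-email guard from point 1: a deliberately crude sketch is below, where a phrase list stands in for a real tone classifier and printed suggestions stand in for integration with the mail client's send action:

```python
# Hypothetical heated-email guard: flag heated phrasing in a draft and
# offer cooler alternatives before it gets sent.
HEATED_PHRASES = ["frankly", "as i already said", "unacceptable", "!!"]

def check_draft(draft: str) -> list[str]:
    flags = [p for p in HEATED_PHRASES if p in draft.lower()]
    if not flags:
        return []  # nothing heated detected: stay out of the way
    return [f"Tone flags: {', '.join(flags)}",
            "Suggestion: schedule send after a one-hour cooldown",
            "Suggestion: get a calmer rewrite before sending"]

draft = "Frankly, as I already said, this delay is unacceptable!!"
for line in check_draft(draft):
    print(line)
```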

Cross-cutting thoughts

Adoption pathways

These technologies face common adoption challenges, especially:
  • Many people won’t care enough to seek these products out proactively
  • These products will only get used if users have high trust in them, perhaps more so for angels-on-the-shoulder than the other clusters of technologies we’ve described
However, they can also draw on common strategies to mitigate these challenges: 
  • Start with high-demand niches and expand out. It’s easier to get people to use the tool if they’re dissatisfied with what’s already on offer, and have a strong need for something better. For example, people who feel genuinely unhappy about their relationship to social media might be willing to pay for a product that helps them before a broader public would be. For guardian angels and deep briefing tools, early adopters might be organizations in high-stakes domains that want to pre-empt costly mistakes (e.g. finance, law, healthcare?).
  • Integrate into existing workflows. All of these technologies benefit from integration rather than requiring users to adopt something new. For example, deep briefing tools might slot into Slack, Docs, or email, or simply be implemented via existing chatbots. 
  • Build trust. Partly this is a question of provenance and track record: ideally it would be easy to demonstrate where information comes from, and how data is kept securely. Partly it’s a question of framing. For example, guardian angels in particular could be viewed as overly paternalistic. ‘Catching silly mistakes’ might be a better frame than ‘enhancing human judgement’.

Other challenges

Besides adoption, there are several other common challenges for angels-on-the-shoulder:
  • Differentiating from standard chatbots. Many people already use chatbots for purposes resembling these technologies — getting advice, learning about topics, thinking through decisions. The question is what makes a dedicated tool sufficiently better. Some possible answers:
    • User interface: the best setup for getting some of these kinds of advice might look different from just opening a chatbot and asking a question.
    • Learning from best practice: fine-tuning on expert playbooks from coaches, teachers, analysts, etc.
    • Better context ingestion: making it easy for the system to gather relevant information, not just what users can articulate verbally.
    • Retrospective measurement: assessing impact as judged with space for careful reflection, rather than optimizing for immediate feedback.
    • Galvanizing chatbot providers to up their game on these applications: competition might stimulate faster innovation from the big companies.
    • Nothing: it might be that dedicated tools aren’t actually better than chatbots — in which case the best strategy here is to influence the companies developing the chatbots to try and steer them towards the most helpful versions of these applications.
  • Timing. Some of these technologies may need to be very good before there's strong demand. Guardian angels, for instance, probably need high reliability before users will tolerate having them active; otherwise they’d just be annoying. 
    • These difficulties are on a trajectory where they'll get easier over the next few years — which suggests that for some applications, the right approach may be building foundational components now while waiting for conditions to improve.
This article went through several rounds of development, and we experimented with AI assistance at various points in its preparation. We would like to thank Anthony Aguirre, Catherine Brewer, Max Dalton, Max Daniel, Raymond Douglas, Owain Evans, Kathleen Finlinson, Lukas Finnveden, Ben Goldhaber, Ozzie Gooen, Hilary Greaves, Oliver Habryka, Isabel Juniewicz, Will MacAskill, Julian Michael, Justis Mills, Fin Moorhouse, Andreas Stuhmüller, Stefan Torges, Deger Turan, and Jonas Vollmer for their input; we apologise to anyone we’ve forgotten.
