Overview

Job Title: Research Analyst

Location: Berkeley/San Francisco (remote work or on-site work in other cities may be an option)

Hours: Full time

Estimated salary range: $90k-$120k (on-site)

Application deadline: Rolling applications

About AI Impacts

AI Impacts conducts original research to answer decision-relevant questions about the future of artificial intelligence, and maintains an online knowledge base on these topics. Our projects involve a variety of disciplines, including artificial intelligence, neuroscience, history, economics, computer science, physical science, evolutionary biology and philosophy.

Carrying out this kind of work involves: 

  • reasoning about how to make open-ended questions tractable, 
  • evaluating the existing academic literature, 
  • gathering and analyzing new data of various kinds, 
  • writing in academic and encyclopedic styles.

Examples of work we have done or may do:

  • Measuring computing performance trends, as input to forecasting future trends 
  • Reasoning about what human brains imply about the computing hardware that might be needed for ‘human-level’ AI performance
  • Analysis of arguments for AI posing an existential risk to humanity
  • Collection and analysis of empirical data on the frequency of discontinuous progress across a range of technologies
  • Interviews on the plausibility of AI being relatively safe by default
  • Survey of machine learning researchers on opinions about the future of AI

More of our past work can be found elsewhere on our website.

We aim to inform decision-making around artificial intelligence, both for people familiar with the topic (e.g. people in the AI safety community choosing which scenarios to focus their efforts on) and for people new to it (e.g. policymakers with broad mandates).

Our workplace:

  • Is flexible and accommodating to idiosyncratic needs and preferences
  • Prioritizes employee thriving
  • Values openness, kindness, and truth-seeking: we strive to be a place where everyone can talk about real opinions and raise stupid questions
  • Values epistemic carefulness and reasoning transparency, and also making rough and fast estimates where appropriate

The role

We are looking for skilled, motivated researchers to work both collaboratively and independently on assigned research projects.

The main duties and responsibilities will depend in part on the strengths of a particular researcher. They are likely to include (but are not limited to):

  • Creating and executing plans for investigating questions related to risk from advanced artificial intelligence
  • Creatively exploring ways in which we can shed light on the future of artificial intelligence
  • Reviewing existing literature from various disciplines
  • Doing short investigations on sub-questions for larger ongoing projects
  • Finding and analyzing data of various kinds
  • Writing prose
  • Helping with proofreading, editing, formatting or illustrating work for publication
  • Reading, commenting on, and discussing research by other researchers, both from within and outside of AI Impacts

Selection criteria

We do not have any highly specific requirements for this role, and you do not need particular credentials or a track record of research on AI risk. Past and current researchers have come from a variety of backgrounds, including magazine editing, a physics PhD, and philosophy graduate study. We expect strong candidates to come from a similarly broad range of backgrounds, and to have:

  • Evidence of past success in research, analysis, and writing
  • Interest in investigating and writing about questions related to the future of artificial intelligence and associated risks
  • Comfort engaging with academic literature and open-ended questions outside your area of expertise

Other things that many strong candidates will not have, but which might make you an especially good fit:

  • Research experience directly related to our mission
  • Knowledge of economics, neuroscience, machine learning, the history of science and technology, or the scientific study of forecasting
  • A large corpus of high-quality technical or explanatory writing
  • Familiarity with AI risk and ideas for how to improve our understanding of it
  • Evidence of strong quantitative skill, such as work experience, a degree, or published work in mathematics, physics, or statistics
  • A strong track record on prediction platforms such as Metaculus

Benefits

  • Medical, dental, and vision insurance
  • Flexibility in work hours and location
  • Annual stipends for learning, well-being, and productivity tech

How to apply

We are still accepting applications, but on a rolling basis, and we may not respond for some time. If you are still interested, please fill out our application. If you have any questions, or are uncertain about how to fill out the form or whether you may be a good fit, please email Rick at rick@aiimpacts.org.