About the team
Frontier AI models hold the promise of benefiting all of humanity, but they also pose increasingly severe risks. We have therefore dedicated a team to help us prepare as effectively as possible for the development of increasingly capable frontier AI models. This team, called Preparedness, reports directly to our CTO and is charged with identifying, tracking, and preparing for catastrophic risks associated with frontier AI models.
The specific goals of the Preparedness team are to:
1. Closely monitor and forecast the evolution of frontier AI system capabilities, with particular attention to misuse risks whose impact could be catastrophic.
2. Ensure we have clear procedures, infrastructure, and partnerships to mitigate these risks and, more broadly, to handle the development of powerful AI systems safely.
Our team will tightly connect capability assessment, evaluations, and internal red teaming for frontier models, along with overall coordination on AGI preparedness. The team's core goal is to ensure we have the infrastructure needed to keep highly capable AI systems safe, from the models we develop in the near future to those with AGI-level capabilities.
About you
We are looking to hire exceptional research engineers who can push the limits of our frontier models. In particular, we are seeking people who can help shape our empirical understanding of the full spectrum of AI safety concerns and who will own individual threads of this work end to end.
In this role, you will:
- Identify emerging AI safety risks and new methodologies for exploring their impact
- Build (and continually refine) evaluations of frontier AI models that assess the severity of identified risks
- Design and create scalable systems and processes that can support these types of evaluations
- Contribute to refining risk management and to developing "best practice" guidelines for AI safety evaluations
We expect you to be:
- Passionate and informed about short-term and long-term AI safety risks
- An innovative thinker with a strong “red-teaming mindset”
- Experienced in ML research engineering, ML observability and monitoring, creating large language model-driven applications, and/or another technical domain relevant to AI risk
- Able to operate effectively in a dynamic and extremely fast-paced research environment, and to scope and deliver projects end to end
It would be beneficial if you also have:
- Direct experience in red-teaming systems—whether computer systems or otherwise
- A nuanced understanding of the societal aspects of AI deployment
- An ability to collaborate across different functions
- Excellent communication skills
We are an equal opportunity employer and do not discriminate based on race, religion, national origin, sex, sexual orientation, age, veteran status, disability, or any other status protected by law. Under the San Francisco Fair Chance Ordinance, we will consider eligible applicants with arrest and conviction records.
We are committed to providing reasonable accommodations for applicants with disabilities, and requests can be made via this link.
OpenAI US Applicant Privacy Policy
Compensation, Benefits and Perks
Total compensation also includes generous equity and benefits.
- Medical, dental, and vision coverage for you and your family
- Mental health and wellness support
- A 401(k) plan with 4% matching
- Unlimited leave and 18+ company holidays per year
- Paid parental leave (20 weeks) and support for family planning
- An annual learning & development allowance ($1,500 per year)