Research Engineer, Security


Join OpenAI's Security Team as a Research Engineer

About the Team

Security is integral to OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity. Our Security Team doesn’t just protect OpenAI’s technology, people, and products; it actively explores the frontiers of AI and cybersecurity. Our research mission focuses on creating a safer and more secure digital ecosystem by advancing AI research, evaluating potential downsides, and responsibly applying AI to enhance cyber defense.

About the Role

As a Research Engineer on our Security team, you will play a pivotal role in advancing AI security and its applications in the cybersecurity domain. Utilizing your robust research and engineering skills, you will work alongside cross-functional partners to develop groundbreaking theories, techniques, and methodologies. We seek a self-motivated, proactive team player capable of driving collaboration and scientific discovery across the organization.

This role is based in San Francisco, CA, or London, UK. We operate a hybrid work model, with three days in the office each week, and offer relocation assistance to new employees.

Key Responsibilities

  • Research, design, and implement techniques and methodologies that enhance the capability, applicability, and impact of AI models in cybersecurity.
  • Focus on areas such as identifying vulnerabilities in source code, binary analysis, patch generation, log analysis, threat intelligence, and automating the incident response lifecycle.
  • Develop techniques to improve the robustness of AI models.
  • Create threat models, systems, and methodologies for measuring the cybersecurity capabilities of models.
  • Advocate for AI-forward security practices within OpenAI and the broader community.
  • Collaborate with OpenAI colleagues on security research initiatives.
  • Engage with the security and AI research communities through initiatives like the Cybersecurity Grant Program.
  • Experiment with emerging tools and technologies to enhance security measures within the organization.

Ideal Candidate Profile

You might thrive in this role if you:

  • Have a proven track record of delivering impactful results in security, privacy, and/or AI research.
  • Possess a deep understanding of modern AI concepts, including language modeling and deep learning.
  • Demonstrate strong software development proficiency, including experience with systems programming and languages like Python or Golang.
  • Have a history of developing high-quality, scalable, and secure applications.
  • Are a self-starter who leads by example and is eager to tackle new challenges.
  • Can work effectively and collaboratively within a cross-functional team environment.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of AI capabilities and seek to safely deploy them to the world through our products. AI is a powerful tool that must be created with safety and human needs at its core. To achieve our mission, we value many different perspectives, voices, and experiences that represent the full spectrum of humanity.

We are an equal opportunity employer and do not discriminate based on race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.

For US-Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations for applicants with disabilities.

Join Us in Shaping the Future of Technology

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges. Join us in ensuring that the benefits of AI are widely shared.
