Protection Scientist Engineer, Intelligence and Investigations

London, United Kingdom
Job Type
Permanent
Posted
8 Oct 2025 (7 months ago)

About the Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iteratively updating based on what we learn.

The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.

About the Role

Protection Science Engineering is an interdisciplinary role mixing data science, machine learning, investigation, and policy/protocol development. As a Protection Scientist Engineer within Intelligence and Investigations, you will be responsible for designing and building systems to proactively identify and enforce against abuse on OpenAI’s products. This includes ensuring we have robust abuse monitoring in place for new products, sustaining monitoring for existing products, and prototyping and incubating systems of defence against our highest-risk harms. You will also respond to and investigate critical escalations, especially those that are not caught by our existing safety systems. This will require expert understanding of our products and data, and involves working cross-functionally with product, policy, and engineering teams.

This role is based in our London office and includes participation in an on-call rotation that will involve resolving urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise-disturbing material.

In this role, you will:

  • Scope and implement abuse monitoring requirements for new product launches.

  • Improve processes to sustain monitoring operations for existing products, including developing approaches to automate monitoring subtasks.

  • Prototype systems for detecting, reviewing, and enforcing against abuse for major harms, and mature them into production.

  • Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.

You might thrive in this role if you:

  • Have at least 4 years of experience doing technical analysis and detection, especially using SQL and Python.

  • Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams. An investigative mindset is key.

  • Have experience with basic data engineering, such as building core tables or writing data pipelines in production, and with machine learning principles and execution. Basic software development skills are a plus, as this role writes productionised code.

  • Have experience scaling and automating processes, especially with language models.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.


The artificial intelligence job market in the UK is evolving at an extraordinary pace. With record-breaking investment, government backing, and a surge in enterprise adoption, the landscape of AI employers is shifting rapidly. For candidates exploring opportunities on ArtificialIntelligenceJobs.co.uk, understanding who is hiring next is just as important as understanding what skills are in demand. In this article, we explore the new and emerging AI employers to watch in 2026, focusing on organisations that have recently secured funding, won major contracts, or expanded their UK footprint. From cutting-edge startups to global giants doubling down on Britain, these companies represent the next wave of AI career opportunities.