Data Scientist, Integrity Measurement

OpenAI
London

The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary but also secure against a wide range of adversarial threats. We strive to maintain the integrity of our platforms as they scale. Our team is on the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability.

The Integrity pillar within Applied Foundations is responsible for the scaled systems that help identify and respond to bad actors and harm on OpenAI’s platforms. As the systems that address some of our most severe usage harms mature, we’re adding data scientists to help us robustly measure the prevalence of these problems and the quality of our response to them.

About the Role

We are looking for experienced trust and safety data scientists to help us improve, productionise, and monitor measurement for complex, actor- and sometimes network-level harms. A data scientist in this role will own measurement and metrics across several established harm verticals, including estimating the prevalence of on-platform (and sometimes off-platform!) harm and conducting analyses to identify gaps and opportunities in our responses.

This role is based out of our London office and may involve resolving urgent escalations outside of normal work hours. Many harm areas may involve sensitive content, including sexual, violent, or otherwise-disturbing material.

In this role, you will:

own measurement and quantitative analysis for a group of severe, actor- and network-based usage harm verticals

develop and implement AI-first methods for prevalence measurement and other productionised safety metrics, which may include off-platform indicators or other non-standard datasets (a sketch of one such method follows this list)

build metrics that can support goaling or A/B testing when prevalence or other top-line metrics are not suitable

own dashboards and metrics reporting for harm verticals

conduct analyses and generate insights that inform improvements to review, detection, or enforcement, and that influence roadmaps

optimise LLM prompts for measurement tasks

collaborate with other safety teams to understand key safety concerns and create policies that support safety needs

provide metrics for leadership and external reporting

develop automation to scale yourself, leveraging our agentic products
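As referenced in the list above, here is a minimal sketch of what AI-first prevalence measurement can look like. It is illustrative only: the classify_with_llm stub, its keyword heuristic, and the toy corpus are hypothetical stand-ins, not OpenAI systems; a production version would call a model with a carefully optimised prompt and a validated labelling policy.

```python
# Minimal, hypothetical sketch of LLM-assisted prevalence measurement.
# The stub classifier and toy corpus are illustrative stand-ins only.
import math
import random


def classify_with_llm(session_text: str) -> bool:
    """Hypothetical stand-in for an LLM judge that labels a session as
    harmful (True) or benign (False). A real version would call a model
    with a carefully optimised prompt."""
    return "fraud" in session_text.lower()  # placeholder heuristic


def estimate_prevalence(sessions: list[str], sample_size: int, seed: int = 0):
    """Estimate harm prevalence from a simple random sample, with a
    normal-approximation 95% confidence interval."""
    rng = random.Random(seed)
    sample = rng.sample(sessions, min(sample_size, len(sessions)))
    positives = sum(classify_with_llm(s) for s in sample)
    p_hat = positives / len(sample)
    se = math.sqrt(p_hat * (1 - p_hat) / len(sample))
    return p_hat, (max(0.0, p_hat - 1.96 * se), min(1.0, p_hat + 1.96 * se))


if __name__ == "__main__":
    corpus = ["routine benign chat"] * 950 + ["suspected fraud attempt"] * 50
    p, (lo, hi) = estimate_prevalence(corpus, sample_size=400)
    print(f"estimated prevalence: {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

In practice, the interesting decisions sit upstream of code like this: how sessions are sampled, how the judge prompt is validated against human labels, and how judge error rates are corrected for in the final estimate.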

You might thrive in this role if you:

are a senior data scientist with trust and safety experience who can drive measurement direction

have deep statistical skills, particularly in sampling methods and prevalence estimation for complicated problem areas (ideally activity- rather than content-based); see the estimator sketch after this list

have experience working with severe and sensitive harm areas like child safety or violence

are an excellent communicator, and have strong cross-functional collaboration skills

are proficient in data programming languages (R or Python, plus SQL)

(ideally) have experience with AI harms or with leveraging AI for measurement
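On the sampling point above: when harm is rare, a common pattern is stratified sampling, oversampling a high-risk slice of traffic so the estimator actually observes positives, then reweighting back to the population. The sketch below is illustrative only; the strata weights and counts are invented numbers, not real platform data.

```python
# Hypothetical sketch of a stratified prevalence estimator; all numbers
# are invented for illustration.
import math


def stratified_prevalence(strata):
    """Combine per-stratum sample prevalences into a population-level
    estimate. Each stratum is (population_weight, sample_size, positives).
    Returns the point estimate and a 95% normal-approximation CI."""
    p_hat, variance = 0.0, 0.0
    for weight, n, positives in strata:
        p_s = positives / n
        p_hat += weight * p_s
        variance += (weight ** 2) * p_s * (1 - p_s) / n
    half_width = 1.96 * math.sqrt(variance)
    return p_hat, (max(0.0, p_hat - half_width), p_hat + half_width)


if __name__ == "__main__":
    # Oversampling the small high-risk stratum keeps the estimate precise
    # even though the harm is rare overall.
    strata = [
        (0.95, 500, 1),   # low-risk traffic: 95% of the population
        (0.05, 500, 60),  # flagged traffic: 5% of the population
    ]
    p, (lo, hi) = stratified_prevalence(strata)
    print(f"estimated prevalence: {p:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```

Activity-level (rather than content-level) prevalence adds further wrinkles, such as defining the sampling unit as an actor or network rather than a single message.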

