Data Scientist, Responsible Development and Innovation

Google DeepMind
City of London
Snapshot

As a data scientist in Responsible Development and Innovation (ReDI) at Google DeepMind, you will work with a diverse team to develop and deliver evaluations and analyses in established and emerging policy areas for Google DeepMind’s most groundbreaking models.

You will work with teams at Google DeepMind along with internal and external partners to ensure that our work is conducted in line with responsibility and safety best practices, helping Google DeepMind to progress towards its mission.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

As a data scientist in ReDI, you’ll be part of a team developing and implementing key safety evaluations in both established and emerging policy areas. You will design and implement new evaluations and experiments, define new metrics and analytical processes to support internal and external safety reporting of both quantitative and qualitative data, and champion data and analytics best practices across the team. You’ll support work across the full range of development, from running early analyses to building higher-level frameworks and reports.

Note that this role involves working with sensitive content or situations, and you may be exposed to graphic, controversial, and/or upsetting topics or content.

Key responsibilities
  • Developing new metrics and analytics approaches in key risk areas comprising both quantitative and qualitative data.
  • Assessing the quality and coverage of evaluation datasets and methods.
  • Influencing the design and development of future evaluations, and leading efforts to define novel testing and experimentation approaches.
  • Converting high-level problems into detailed analytics plans, implementing those plans, and bringing in support from others where necessary.
  • Working with multidisciplinary specialists to measure and improve the quality of evaluation outputs.
  • Contributing to and running evaluations and reporting pipelines.
  • Communicating with wider stakeholders across Responsibility, Google DeepMind, Google, and third parties where appropriate.
  • Providing an expert perspective on data usage, narrative, and interpretation in diverse projects and contexts.
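
To make the first responsibility above concrete, here is a minimal, illustrative Python sketch of aggregating a mixed quantitative/qualitative evaluation per risk area. Everything in it (the EvalRecord structure, the summarise helper, the risk-area and rater labels) is hypothetical and invented for illustration; the posting prescribes no particular tooling beyond Python.

    from collections import defaultdict
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class EvalRecord:
        risk_area: str     # e.g. "cyber_offense" (hypothetical label)
        auto_score: float  # quantitative: automated classifier score in [0, 1]
        rater_label: str   # qualitative: human rater verdict, "pass" or "fail"

    def summarise(records: list[EvalRecord]) -> dict[str, dict]:
        """Aggregate mixed quantitative/qualitative results per risk area."""
        by_area: dict[str, list[EvalRecord]] = defaultdict(list)
        for rec in records:
            by_area[rec.risk_area].append(rec)
        return {
            area: {
                "n": len(recs),
                "mean_auto_score": mean(r.auto_score for r in recs),
                "rater_fail_rate": mean(r.rater_label == "fail" for r in recs),
            }
            for area, recs in by_area.items()
        }

    demo = [
        EvalRecord("cyber_offense", 0.12, "pass"),
        EvalRecord("cyber_offense", 0.81, "fail"),
        EvalRecord("manipulation", 0.05, "pass"),
    ]
    print(summarise(demo))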

To set you up for success in this role, we are looking for the following skills and experience:

  • Strong analytical and statistical skills, with experience in metric design and development.
  • Strong command of Python.
  • Ability to work with both quantitative and qualitative data, understanding the strengths and weaknesses of each in specific contexts.
  • Ability to present analysis and findings to both technical and non-technical teams, including senior stakeholders.
  • A track record of transparency, with a demonstrated ability to identify limitations in datasets and analyses and communicate these effectively.
  • Familiarity with AI evaluations and broader experimentation principles.
  • Demonstrated ability to work within and lead cross-functional teams, fostering collaboration, and influencing outcomes.
  • Ability to thrive in a fast-paced environment with a willingness to pivot to support emerging needs.
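
As one generic illustration of the statistical side of metric design (a sketch, not a method prescribed by this posting), the snippet below reports an evaluation metric with a percentile-bootstrap confidence interval rather than a bare point estimate; the function name and data are hypothetical.

    import random

    def bootstrap_ci(values, stat=lambda xs: sum(xs) / len(xs),
                     n_resamples=2000, alpha=0.05, seed=0):
        """Percentile-bootstrap CI: quantify the uncertainty of a metric
        computed on a finite evaluation set (hypothetical helper)."""
        rng = random.Random(seed)
        stats = sorted(
            stat([rng.choice(values) for _ in values])
            for _ in range(n_resamples)
        )
        low = stats[int(n_resamples * alpha / 2)]
        high = stats[int(n_resamples * (1 - alpha / 2)) - 1]
        return stat(values), (low, high)

    # 1 = response flagged by the evaluation, 0 = not flagged (toy data)
    flags = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
    point, (low, high) = bootstrap_ci(flags)
    print(f"flag rate = {point:.2f}, 95% CI ~ [{low:.2f}, {high:.2f}]")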

In addition, the following would be an advantage:

  • Experience working with sensitive data, access control, and procedures for data worker wellbeing.
  • Experience working in safety or security contexts (for example content safety or cybersecurity).
  • Experience with safety evaluations and mitigations of advanced AI systems.
  • Experience with a range of experimentation and evaluation techniques, such as human study research, AI or product red‑teaming, and content rating processes.
  • Experience working with product development or in similar agile settings.
  • Familiarity with sociotechnical and safety considerations of generative AI, including systemic risk domains identified in the EU AI Act (chemical, biological, radiological, and nuclear; cyber offense; loss of control; harmful manipulation).

The US base salary range for this full-time position is between $166,000 and $244,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.



