Data Scientist, Responsible Development and Innovation

The Rundown AI, Inc.
City of London
4 months ago
Applications closed

Snapshot

As a data scientist in Responsible Development and Innovation (ReDI) at Google DeepMind, you will work with a diverse team to develop and deliver evaluations and analyses in established and emerging policy areas for Google DeepMind's most groundbreaking models.

You will work with teams at Google DeepMind along with internal and external partners to ensure that our work is conducted in line with responsibility and safety best practices, helping Google DeepMind to progress towards its mission.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

As a data scientist working in ReDI, you'll be part of a team developing and implementing key safety evaluations in both established and emerging policy areas. You will develop and implement new evaluations and experiments, define new metrics and analytical processes to support internal and external safety reporting of both quantitative and qualitative data, and support the team by embodying data and analytics best practices. You'll support the team across the full range of development, from running early analysis to developing higher-level frameworks and reports.

Note that this role works with sensitive content or situations and may be exposed to graphic, controversial, and/or upsetting topics or content.

Key responsibilities
  • Developing new metrics and analytics approaches in key risk areas comprising both quantitative and qualitative data.
  • Assessing the quality and coverage of evaluation datasets and methods.
  • Influencing the design and development of future evaluations, and leading efforts to define novel testing and experimentation approaches.
  • Converting high-level problems into detailed analytics plans, implementing those plans, and influencing others to support as necessary.
  • Working with multidisciplinary specialists to measure and improve the quality of evaluation outputs.
  • Contributing to and running evaluations and reporting pipelines.
  • Communicating with wider stakeholders across Responsibility, Google DeepMind, Google, and third parties where appropriate.
  • Providing an expert perspective on data usage, narrative, and interpretation in diverse projects and contexts.

In order to set you up for success in this role, we are looking for the following skills and experience:

  • Strong analytical and statistical skills, with experience in metric design and development.
  • Strong command of Python.
  • Ability to work with both quantitative and qualitative data, understanding the strengths and weaknesses of each in specific contexts.
  • Ability to present analysis and findings to both technical and non-technical teams, including senior stakeholders.
  • A track record of transparency, with a demonstrated ability to identify limitations in datasets and analyses and communicate these effectively.
  • Familiarity with AI evaluations and broader experimentation principles.
  • Demonstrated ability to work within and lead cross-functional teams, fostering collaboration, and influencing outcomes.
  • Ability to thrive in a fast-paced environment with a willingness to pivot to support emerging needs.

In addition, the following would be an advantage:

  • Experience working with sensitive data, access control, and procedures for data worker wellbeing.
  • Experience working in safety or security contexts (for example content safety or cybersecurity).
  • Experience with safety evaluations and mitigations of advanced AI systems.
  • Experience with a range of experimentation and evaluation techniques, such as human study research, AI or product red‑teaming, and content rating processes.
  • Experience working with product development or in similar agile settings.
  • Familiarity with sociotechnical and safety considerations of generative AI, including systemic risk domains identified in the EU AI Act (chemical, biological, radiological, and nuclear; cyber offense; loss of control; harmful manipulation).

The US base salary range for this full‑time position is between $166,000 and $244,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
