Operational Ethics and Safety Manager

DeepMind
London
11 months ago
Applications closed


At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

We are looking for an Operational Ethics and Safety Manager to join our Responsible Development & Innovation (ReDI) team at Google DeepMind. In this individual contributor role, you will partner with research and product teams to consider the downstream impacts of Google DeepMind’s research and its applications. You will work with teams across Google DeepMind to ensure that our work is conducted in line with ethics and safety best practices, helping Google DeepMind progress towards its mission. You will review the safety performance of AI models and provide analysis and advice to various Google DeepMind stakeholders, including our Responsibility and Safety Council.

About us

Artificial intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

As an Operational Ethics & Safety Manager within the ReDI team, you’ll use your expertise on the societal implications of technology to deliver impactful assessment, advisory and review work, both through direct collaboration on groundbreaking research projects and by helping to develop the broader governance ecosystem at Google DeepMind.

Key responsibilities

- Leading ethics and safety reviews of projects, in close collaboration with project teams, to assess the downstream societal implications of Google DeepMind’s technology.
- Closely collaborating with the ReDI evaluations and model policy teams to review the safety performance of AI models.
- Supporting the management of the Responsibility and Safety Council, presenting projects and communicating assessments to senior stakeholders on a frequent basis.
- Designing engagement models to tackle ethics and safety challenges (e.g. running workshops, engaging with external experts) to help teams consider the direct and indirect implications of their work.
- Identifying areas relevant to ethics and safety in which to advance research.
- Working with broader Google teams to monitor the outcomes of projects to understand their impact.
- Developing and documenting best practices for Google DeepMind projects, working with internal Google DeepMind teams and experts and, where appropriate, external organisations.

About you

In order to set you up for success as an Operational Ethics and Safety Manager at Google DeepMind, we look for the following skills and experience:

- Experience navigating and assessing complex ethical and societal questions related to technology development, including balancing the benefits and risks of research and applications.
- A strong understanding of the challenges and issues in the field of AI ethics and safety, gained through proven AI-and-society experience (e.g. relevant governance, policy, legal or research work).
- Strong executive stakeholder management skills, including the ability to communicate effectively under tight turnaround times.
- Significant experience collaborating with technical stakeholders and highly interdisciplinary teams.
- Proven ability to communicate complex concepts and ideas simply for a range of collaborators.
- Excellent technical understanding and communication ability, with the ability to distil sophisticated technical ideas to their essence.

In addition, the following would be an advantage:

- Experience working with governance processes within a public or private institution.
- Experience working within the field of AI ethics and safety.
- Relevant research experience.
- Product management expertise or other similar experience.

Application deadline: 5pm BST, Friday 18 October 2024

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.


Industry Insights

Discover insightful articles, expert tips, and curated resources from across the industry.

Why AI Careers in the UK Are Becoming More Multidisciplinary

Artificial intelligence is no longer a single-discipline pursuit. In the UK, employers increasingly want talent that can code and communicate, model and manage risk, experiment and empathise. That shift is reshaping job descriptions, training pathways and career progression. AI now touches regulated sectors, sensitive user journeys and public services, so the work sits at the crossroads of computer science, law, ethics, psychology, linguistics and design. This isn’t a buzzword-driven change. It’s happening because real systems are deployed in the wild, where people have rights, needs, habits and constraints. As models move from lab demos to products that diagnose, advise, detect fraud, personalise education or generate media, teams must align performance with accountability, safety and usability. The UK’s maturing AI ecosystem, from startups to FTSE 100s, consultancies, the public sector and universities, is responding by hiring multidisciplinary teams who can anticipate social impact as confidently as they ship features. Below, we unpack the forces behind this change, spotlight five disciplines now fused with AI roles, show what it means for UK job-seekers and employers, and map practical steps to future-proof your CV.

AI Team Structures Explained: Who Does What in a Modern AI Department

Artificial Intelligence (AI) and Machine Learning (ML) are no longer confined to research labs and tech giants. In the UK, organisations from healthcare and finance to retail and logistics are adopting AI to solve problems, automate processes, and create new products. With this growth comes the need for well-structured teams. But what does an AI department actually look like? Who does what? And how do all the moving parts come together to deliver business value? In this guide, we’ll explain modern AI team structures, break down the responsibilities of each role, explore how teams differ in startups versus enterprises, and highlight what UK employers are looking for. Whether you’re an applicant or an employer, this article will help you understand the anatomy of a successful AI department.

Why the UK Could Be the World’s Next AI Jobs Hub

Artificial Intelligence (AI) has rapidly moved from research labs into boardrooms, classrooms, hospitals, and homes. It is already reshaping economies and transforming industries at a scale comparable to the industrial revolution or the rise of the internet. Around the world, countries are competing fiercely to lead in AI innovation and reap its economic, social, and strategic benefits. The United Kingdom is uniquely positioned in this race. With a rich heritage in computing, world-class universities, forward-thinking government policy, and a growing ecosystem of startups and enterprises, the UK has many of the elements needed to become the world’s next AI hub. Yet competition is intense, particularly from the United States and China. Success will depend on how effectively the UK can scale its strengths, close its gaps, and seize opportunities in the years ahead. This article explores why the UK could be the world’s next global hub for artificial intelligence, what challenges it must overcome, and what this means for businesses, researchers, and job seekers.