Strategy and Delivery Adviser - AI Safety Institute

Department for Science, Innovation & Technology
London
Applications closed

Job summary

AI is bringing about huge changes to society, and it is our job as a team to work out how Government should respond. It is a once-in-a-generation moment, and an incredibly fast-paced and exciting environment.

AI Safety Institute

Advances in artificial intelligence (AI) over the last decade have been impactful, rapid, and unpredictable. Advanced AI systems have the potential to drive economic growth and productivity, boost health and wellbeing, improve public services, and increase security.

But advanced AI systems also pose significant risks, as detailed in the government's paper on the capabilities and risks of frontier AI. AI systems could be misused: this could include using AI to generate disinformation, conduct sophisticated cyberattacks or help develop chemical weapons. AI systems could cause societal harms: there have been examples of AI chatbots encouraging harmful actions, promoting skewed or radical views, and providing biased advice. AI-generated content that is highly realistic but false could reduce public trust in information. Some experts are concerned that humanity could lose control of advanced systems, with potentially catastrophic and permanent consequences. We will only unlock the benefits of AI if we can manage these risks. At present, our ability to develop powerful systems outpaces our ability to make them safe. The first step is to better understand the capabilities and risks of these advanced AI systems. This will then inform our regulatory framework for AI, so we can ensure AI is developed and deployed safely and responsibly.

The UK is taking a leading role in driving this conversation forward internationally. We hosted the world's first major AI Safety Summit and have launched the AI Safety Institute. Responsible government action in an area as new and fast-paced as advanced AI requires governments to develop their own sophisticated technical and sociotechnical expertise. The AI Safety Institute is advancing the world's knowledge of AI safety by carefully examining, evaluating, and testing new types of AI, so that we understand what each new model is capable of. The Institute is conducting fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI. The Institute will make its work available to the world, enabling an effective global response to the opportunities and risks of advanced AI.

Job description

As a Strategy and Delivery Adviser, you will be working with a team of research scientists and engineers to drive forward cutting-edge AI safety research on the highest priority issues.

You'll provide crucial support for a team working on a specific set of AI safety issues: cyber risks, chem-bio risks, or safety cases (see below for further details).

You might work on building a research strategy for your team, writing submissions and briefs for seniors and ministers, setting up and managing research partnerships, organising events and workshops, forging strong relationships with external stakeholders like major AI companies and other governments, coordinating model tests, or engaging the cross-Whitehall community to ensure our work has impact.

These are multi-faceted roles which involve a mixture of strategy, policy and project management. They will be suitable for people who love getting things done, but who also enjoy big-picture thinking and engaging with technical detail.

Successful applicants will work within one of the following workstreams. If you have a strong preference for any of these, please do state as much in your personal statement:

Cyber Misuse

The aim of the Cyber Misuse team is to deeply understand, assess and mitigate the risks from AI uplifting threat actors in conducting cyber-attacks. This involves developing risk and capability thresholds for cyber that focus on the greatest expected harm, building evaluations that assess the priority capabilities identified, and running these evaluations as part of pre-deployment and lifecycle testing exercises.

In this role you will support the strategy-setting and delivery of projects that develop our risk modelling or build new evaluations. These projects could range from research and human uplift studies to creating complex automated cyber evaluations. You will contribute to the development of our risk and capability thresholds and communicate our work to key stakeholders by producing briefings and building relationships across government and externally. While we don't expect you to have a technical or cybersecurity background, we strongly encourage candidates with relevant experience to apply.
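To make the shape of this work concrete, here is a minimal, hypothetical sketch of an automated capability evaluation of the kind described above. The task format, model interface, scoring rule, and threshold are illustrative assumptions for this advert, not AISI's actual tooling or risk thresholds.

```python
# Hypothetical sketch only: the task structure, model interface, and
# threshold below are illustrative assumptions, not AISI's methodology.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CyberTask:
    prompt: str                    # challenge presented to the model
    passed: Callable[[str], bool]  # checks whether a response solves it

def evaluate(model: Callable[[str], str], tasks: list[CyberTask]) -> float:
    """Run every task against the model and return the fraction solved."""
    solved = sum(task.passed(model(task.prompt)) for task in tasks)
    return solved / len(tasks)

# A capability threshold turns a raw score into a signal for
# pre-deployment or lifecycle testing decisions.
THRESHOLD = 0.2  # illustrative: flag models solving over 20% of tasks

def exceeds_threshold(score: float) -> bool:
    return score > THRESHOLD
```

In practice, pre-deployment and lifecycle testing would rerun a suite like this as models are updated, which is why the advert emphasises coordinating model tests rather than one-off studies.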

Safety Cases

Safety cases, already used as standard in other industries, are structured arguments that a system is unlikely to cause significant harm if deployed in a particular setting. As the AI frontier develops, we expect safety cases could become an important tool for mitigating AI safety risks, whereby AI companies set out detailed arguments for how they have ensured their models are safe. We believe it is possible to significantly develop our understanding of what a good safety case would look like now, even though the field is far from knowing how to write a detailed safety case.
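For illustration only, safety cases in other safety-critical industries are often organised as a tree of claims supported by sub-claims and evidence (as in Goal Structuring Notation). The sketch below is a hypothetical data model of that structure; the classes and example content are assumptions, not a format AISI has adopted.

```python
# Hypothetical sketch: a safety case as a claim/evidence tree, loosely
# modelled on structures used in other safety-critical industries.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str  # e.g. results of a pre-deployment evaluation

@dataclass
class Claim:
    statement: str  # what the argument asserts is true
    evidence: list[Evidence] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

# Illustrative top-level argument for a deployed AI system.
top = Claim(
    statement="The deployed system is unlikely to cause significant harm",
    subclaims=[
        Claim(
            statement="Dangerous capabilities fall below agreed thresholds",
            evidence=[Evidence("Pre-deployment capability evaluation results")],
        ),
        Claim(
            statement="Mitigations remain effective after deployment",
            evidence=[Evidence("Lifecycle testing and monitoring reports")],
        ),
    ],
)
```

Analysing a safety case, as the role below requires, amounts to asking whether each claim in such a tree is actually supported by its sub-claims and evidence.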

In this role, you'll support the safety cases policy / strategy lead to ensure that this research has an impact on the safety of AI systems. Strong candidates will have a pre-existing interest in AI safety, and be able to clearly and thoughtfully analyse a safety case for an AI system (although we don't expect candidates to have a technical background or ML expertise). Alongside strategy and delivery responsibilities, you might attend cross-government meetings on AI policy, or write policy or academic papers on the use of AI safety cases.

Key responsibilities (indicative, with some variation across workstreams):

Overseeing the delivery of a suite of research projects by our in-house technical team and select external research partners
Working with our technical researchers to devise and deliver new research projects, in line with AISI's strategic objectives, and turning their findings into useful outputs for policy makers
Helping shape and define the longer-term strategy of the team and contributing to the wider research vision of the AISI
Acting as a point person on the AISI's research agenda, communicating the work of the team to senior officials and ministers within AISI and across Whitehall
Working with National Security partners in organisations across the UK Government
Building and leveraging a network of research partners and policy stakeholders within and outside of government
Coordinating the delivery of pre- and post-deployment model tests

Person specification

These are fast-paced and challenging roles, with the potential to have a massive impact on the work of the AI Safety Institute. We are looking for exceptional operators who can drive things forward and take responsibility for achieving the objectives of the team. You will be excellent at building strong, trusting relationships, problem-solving, and co-ordinating complex projects.

Essential criteria

Start-up mindset / entrepreneurial approach: this will involve navigating a lot of uncertainty, being quick to adapt, taking a 'trial and get feedback quickly' approach to much of the work, and being willing to get stuck in and add value
Passionate about the mission of the AI Safety Institute, ideally with a good working knowledge of issues at the intersection of AI and cyber, or issues related to AI alignment
Able to work effectively at pace, make decisions in the face of competing priorities, and remain calm and resilient under pressure
Able to manage a wide range of diverse stakeholders to achieve goals
Proactive and able to identify solutions to complex problems, breaking down large, intractable issues into tangible and effective next steps
Able to operate with autonomy and self-direct work
Excellent written and oral communication skills, able to communicate effectively with a range of expert and non-expert stakeholders
Experience managing complex projects with multiple stakeholders

Behaviours

We'll assess you against these behaviours during the selection process:

Delivering at Pace
Communicating and Influencing

Benefits

Alongside your salary of £42,495, Department for Science, Innovation & Technology contributes £12,310 towards you being a member of the Civil Service Defined Benefit Pension scheme.

The Department for Science, Innovation and Technology offers a competitive mix of benefits including:

A culture of flexible working, such as job sharing, homeworking and compressed hours.
Automatic enrolment into the Civil Service Pension Scheme, with the employer contribution set out above.
A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.
Access to a range of retail, travel and lifestyle employee discounts.

Office attendance

The Department operates a discretionary hybrid working policy, which provides for a combination of working hours from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home based location for 40-60% of the time over the accounting period.
