
Research Scientist/Research Engineer

AI Safety Institute
London
8 months ago
Applications closed


Role Description

The AI Safety Institute research unit is looking for exceptionally motivated and talented people to join its Safeguard Analysis Team.

Interventions that secure a system from abuse by bad actors will grow in importance as AI systems become more advanced and integrated into society. The AI Safety Institute’s Safeguard Analysis Team researches such interventions, which it refers to as 'safeguards', evaluating protections used to secure current frontier AI systems and considering what measures could and should be used to secure such systems in the future.

The Safeguard Analysis Team takes a broad view of security threats and interventions. It's keen to hire researchers with expertise in developing and analysing attacks and protections for systems based on large language models, but is also keen to hire security researchers who have historically worked outside of AI, in fields such as (non-exhaustively) computer security, information security, web technology policy, and hardware security. Diverse perspectives and research interests are welcomed.

The Team seeks people with skillsets leaning in the direction of either or both of Research Scientist and Research Engineer, recognising that some technical staff may prefer work that spans or alternates between engineering and research responsibilities. The Team's priorities include research-oriented responsibilities – like assessing the threats to frontier systems and developing novel attacks – and engineering-oriented ones, such as building infrastructure for running evaluations.

In this role, you’ll receive mentorship and coaching from your manager and the technical leads on your team. You'll also regularly interact with world-famous researchers and other incredible staff, including alumni from Anthropic, DeepMind, OpenAI and ML professors from Oxford and Cambridge.

In addition to Junior roles, Senior, Staff and Principal RE positions are available for candidates with the required seniority and experience.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes:

Experience working on machine learning, AI, AI security, computer security, information security, or some other security discipline in industry, in academia, or independently.

Experience working with a world-class research team comprised of both scientists and engineers (e.g. in a top-3 lab).

Red-teaming experience against any sort of system.

Strong written and verbal communication skills.

Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with tasks like pre-training or fine-tuning LLMs.

Extensive Python experience, including an understanding of the intricacies of the language, idiomatic versus unidiomatic ways of doing things, and much of the wider ecosystem and tooling.

Ability to work in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.

Your own voice and experience, paired with an eagerness to support your colleagues, a willingness to do whatever is necessary for the team's success, and an ability to find new ways of getting things done.

A sense of mission, urgency, and responsibility for success, with demonstrated problem-solving abilities and a preparedness to acquire any missing knowledge necessary to get the job done.

Experience writing production-quality code.

Experience improving technical standards across a team through mentoring and feedback.

Experience designing, shipping, and maintaining complex tech products.

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:

L3: £65,000 - £75,000

L4: £85,000 - £95,000

L5: £105,000 - £115,000

L6: £125,000 - £135,000

L7: £145,000

There is a range of pension options available, which can be found through the Civil Service website.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

This job advert encompasses a range of possible research and engineering roles within the Safeguard Analysis Team. The 'required' experiences listed below should be interpreted as examples of the expertise we're looking for, as opposed to a list of everything we expect to find in one applicant:

Writing production-quality code

Writing code efficiently

Python

Frontier model architecture knowledge

Frontier model training knowledge

Model evaluations knowledge

AI safety research knowledge

Security research knowledge

Research problem selection

Research science

Written communication

Verbal communication

Teamwork

Interpersonal skills

Tackling challenging problems

Learning through coaching

