Research Engineer - Cyber Misuse

AI Safety Institute
London


About the Team

As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities are also a common bottleneck in scenarios across other AI risk areas, such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is to conduct robust empirical tests of AI systems, so we can better understand how capable they currently are at performing cyber security tasks.

The AI Safety Institute’s Cyber Evaluations Team is developing first-of-its-kind government-run infrastructure to benchmark the progress of advanced AI capabilities in the domain of cyber security. Our goal is to carry out and publish scientific research supporting a global effort to understand the risks and improve the safety of advanced AI systems. Our current focus is on building difficult cyber security tasks against which we can measure the performance of AI agents.

We are building a cross-functional team of cyber security researchers, machine learning researchers, research engineers and infrastructure engineers to help us create new kinds of capability and safety evaluations, and to scale up our capacity to evaluate frontier AI systems as they are released.

Role Description

The AI Safety Institute research unit is looking for exceptionally motivated and talented Research Engineers to work with a range of cyber security and policy specialists to measure the capabilities of AI systems against scenarios covered by our risk models, with a focus on measuring their performance on tasks related to cyber security.

You will play a key role in running experiments on frontier models, communicating and interpreting results, and building better tooling for measuring and understanding risk due to increases in model capability. Your job may also involve designing experiments, ranging from measuring the uplift that AI systems might provide to malicious attackers to developing mitigations that prevent misuse of AI systems or better defend against AI-enabled cyber attacks.

In this role, you’ll receive mentorship and coaching from your manager and the technical leads on your team. You’ll also regularly interact with world-famous researchers and other incredible staff, including alumni of Anthropic, DeepMind and OpenAI, and ML professors from Oxford and Cambridge.

In addition to Junior roles, Senior, Staff and Principal RE positions are available for candidates with the required seniority and experience.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes:

Relevant experience in industry, open-source collectives, or academia in a field related to machine learning, AI, AI security, or computer security.

Experience building software systems to meet research requirements, having led or been a significant contributor to relevant software projects, demonstrating cross-functional collaboration skills.

Knowledge of training, fine-tuning, scaffolding, prompting, deploying, and/or evaluating current cutting-edge machine learning systems such as large language models.

Knowledge of statistics.

A strong curiosity about understanding AI systems and studying the security implications of this technology.

Motivation to conduct research that is not only curiosity-driven but also solves concrete open questions in governance and policymaking.

Ability to work autonomously and with high agency, thriving in a constantly changing environment and a steadily growing team while figuring out the best and most efficient way to solve each problem.

Your own voice and experience, together with an eagerness to support your colleagues, a willingness to do whatever is necessary for the team’s success, and a drive to find new ways of getting things done within government.

A sense of mission, urgency, and responsibility for success, with strong problem-solving abilities and a readiness to acquire any missing knowledge needed to get the job done.

Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with things like pre-training or fine-tuning LLMs.

The following are also nice-to-have:

Relevant cyber security expertise

Extensive Python experience, including an understanding of the intricacies of the language, Pythonic vs. un-Pythonic ways of doing things, and much of the wider ecosystem and tooling.

Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).

Acting as a bar raiser for interviews

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:

L3: £65,000 - £75,000

L4: £85,000 - £95,000

L5: £105,000 - £115,000

L6: £125,000 - £135,000

L7: £145,000

There are a range of pension options available which can be found through the Civil Service website.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

We select based on skills and experience in the following areas:

Writing production-quality code

Writing code efficiently

Python

Frontier model architecture knowledge

Frontier model training knowledge

Model evaluations knowledge

AI safety research knowledge

Research problem selection

Research science

Written communication

Verbal communication

Teamwork

Interpersonal skills

Tackling challenging problems

Learning through coaching
