Research Engineer - Post-Training

AI Safety Institute
London
1 year ago
Applications closed


About the Team

The Post-Training Team is dedicated to optimising AI systems to achieve state-of-the-art performance across the risk domains that AISI focuses on. This is accomplished through a combination of scaffolding, prompting, and supervised and RL fine-tuning of the AI models to which AISI has access.

One of the main focuses of our evaluation teams is estimating how new models might affect the capabilities of AI systems in specific domains. To improve confidence in our assessments, we make a significant effort to enhance the model's performance in the domains of interest.

For many of our evaluations, this means taking a model we have been given access to and embedding it in a wider AI system. For example, in our cybersecurity evaluations, we give models access to tools for interacting with the underlying operating system and repeatedly call them to act in that environment. In evaluations that do not require agentic capabilities, we may use elicitation techniques such as fine-tuning and prompt engineering to ensure we assess the model at its full capability.

About the Role

As a member of this team, you will use cutting-edge machine learning techniques to improve model performance in our domains of interest. The work is split across two sub-teams: Agents and Fine-tuning. The Agents sub-team develops the tools and scaffolding needed to create highly capable LLM-based agents, while the Fine-tuning sub-team builds out fine-tuning pipelines to improve models in our domains of interest.

The Post-Training team is seeking strong Research Engineers. The team's priorities include both research-oriented tasks, such as designing new techniques for scaling inference-time computation or developing methodologies for in-depth analysis of agent behaviour, and engineering-oriented tasks, such as implementing new tools for our LLM agents or creating pipelines for supporting and fine-tuning large open-source models. We recognise that some technical staff may prefer to span or alternate between engineering and research responsibilities, and this versatility is something we actively look for in our hires.

You’ll receive mentorship and coaching from your manager and the technical leads on your team, and regularly interact with world-class researchers and other exceptional staff, including alumni from Anthropic, DeepMind, and OpenAI.

In addition to junior roles, we offer Senior, Staff, and Principal Research Engineer positions for candidates with the requisite seniority and experience.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes:

  • Experience conducting empirical machine learning research (e.g. PhD in a technical field and/or papers at top ML conferences), particularly on LLMs.
  • Experience with machine learning engineering, or extensive experience as a software engineer with a strong demonstration of relevant skills and knowledge in machine learning.
  • An ability to work autonomously and with high agency, thriving in a constantly changing environment and a steadily growing team, while finding the best and most efficient way to solve a particular problem.

Particularly strong candidates also have the following experience:

  • Building LLM agents in industry or open-source collectives, particularly in areas adjacent to the main interests of one of our workstreams, e.g. in-IDE coding assistants or research assistants (for our Agents sub-team)
  • Leading research on improving and measuring the capabilities of LLM agents (for our Agents sub-team)
  • Building pipelines for fine-tuning (or pretraining) LLMs; fine-tuning with RL techniques is particularly relevant (for our Fine-tuning sub-team)
  • Fine-tuning or pretraining LLMs in a research context, particularly to achieve increased performance in specific domains (for our Fine-tuning sub-team)

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:

  • L3: £65,000 - £75,000
  • L4: £85,000 - £95,000
  • L5: £105,000 - £115,000
  • L6: £125,000 - £135,000
  • L7: £145,000

There are a range of pension options available which can be found through the Civil Service website.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

We select based on skills and experience regarding the following areas:

  • Research problem selection
  • Research Engineering
  • Writing code efficiently
  • Python
  • Frontier model architecture knowledge
  • Frontier model training knowledge
  • Model evaluations knowledge
  • AI safety research knowledge
  • Written communication
  • Verbal communication
  • Teamwork
  • Interpersonal skills
  • Tackling challenging problems
  • Learning through coaching

Desired Experience

We may additionally factor in experience with any of the areas that our workstreams specialise in:

  • Cyber security
  • Chemistry or Biology
  • Safeguards
  • Safety Cases
  • Societal Impacts
