Senior Research Scientist - AI Safety

Faculty
London, United Kingdom

Job Type: Permanent
Work Location: Hybrid
Seniority: Senior
Posted: 12 Nov 2025 (5 months ago)

Why Faculty?

We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don’t chase hype cycles. We innovate, build and deploy responsible AI that moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.

Our business and reputation are growing fast, and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology.

AI is an epoch-defining technology. Join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.

About the Team

Faculty’s Research team conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes; notably, our work has been featured in OpenAI's system card for o1.

Our commitment also extends to fundamental technical research on mitigation strategies, with our findings published at peer-reviewed conferences and delivered to national security institutes. Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.

About the role

We are seeking a Senior Research Scientist to join our high-impact R&D team. You will lead novel research that advances scientific understanding and fuels our ambition to build safe AI systems. This is a crucial opportunity to join a small, high-agency team conducting vital red teaming and evaluations for frontier models in sensitive areas like cybersecurity and national security. You'll shape the future of safe AI deployment in the real world.

What you'll be doing:

  • Owning and driving forward high-impact research themes in AI safety.

  • Contributing to the wider vision and development of Faculty’s AI safety research agenda.

  • Supporting Faculty’s positioning as a leader in AI safety through thought leadership and stakeholder engagement.

  • Shaping our research agenda by identifying impactful opportunities and balancing scientific and practical priorities.

  • Leading technical research within the AI safety space, from concept to publication.

  • Supporting the delivery of evaluations and red-teaming projects in high-risk domains, such as CBRN and cybersecurity, with government and commercial partners.

Who we're looking for:

  • You have a track record of delivering high-impact AI research, evidenced by top-tier academic publications or equivalent experience.

  • You bring proven experience in, or a clear passion for, applied AI safety, perhaps from labs, academia, or evaluation and red-teaming roles.

  • You possess deep domain knowledge in language models and generative AI model architectures, including fine-tuning techniques beyond API-level implementation.

  • You have practical machine learning experience, with a focus on areas such as robustness, explainability, or uncertainty estimation.

  • You are proficient with deep learning frameworks (PyTorch, TensorFlow, or similar) and familiar with the HuggingFace ecosystem or equivalent ML tooling.

  • You have demonstrable Python engineering experience to build and support robust research projects.

  • You can conduct and oversee complex technical research projects, and you have excellent verbal and written communication skills.

Our Recruitment Ethos

We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:

  • Unlimited Annual Leave Policy

  • Private Healthcare and Dental

  • Enhanced Parental Leave

  • Family-Friendly & Flexible Working

  • Sanctus Coaching

  • Hybrid Working

If you don’t feel you meet all the requirements but are excited by the role and know you bring some key strengths, please don’t hesitate to apply - you might be right for this role, or for other roles. We are open to conversations about part-time hours.
