Artificial Intelligence Researcher

Caspian One
City of London

Day Rate Contract - Option To Convert To Permanent In The Future


Join one of the UK's largest banks building next‑generation AI capabilities with a strong commitment to safe, explainable, and trusted AI. This team is developing cutting‑edge guardrail technologies to ensure AI systems behave reliably across text, voice, and emerging multimodal modalities.


This role is ideal for a curious, rigorous thinker (such as a recent Master’s or PhD graduate) with a passion for responsible AI, agentic systems, and the scientific foundations behind guardrail effectiveness. You will work at the intersection of research, model development, and deep validation, contributing to safety frameworks that shape the organisation’s AI strategy.


What You’ll Do

Research & Explore

  • Conduct advanced research into AI guardrails, agentic behaviours, and safe model‑interaction patterns.
  • Explore state‑of‑the‑art methods across LLMs, multimodal models, and emerging agent systems.
  • Investigate niche areas of AI safety such as unintended behaviours, boundary testing, and robustness.


Build & Experiment

  • Develop prototype models, safety mechanisms, and evaluation tools.
  • Build and refine guardrail mechanisms that operate across text, voice, and video modalities.
  • Experiment with multimodal inputs, including:
      • Text
      • Voice
      • Video


Deep Testing & Validation

  • Design and run in‑depth validation experiments to confirm guardrail effectiveness.
  • Stress‑test models for security, misuse, red‑teaming scenarios, and failure boundaries.
  • Support development of automated testing frameworks for AI controls.


Contribute to Responsible AI Strategy

  • Help validate controls ensuring AI systems meet internal responsible AI standards.
  • Collaborate with engineers, safety specialists, and governance teams.
  • Produce high‑quality research insights to guide product and platform direction.


What We’re Looking For

  • Strong research credentials (PhD, MPhil, MSc, or equivalent research experience).
  • Familiarity with Python‑based research frameworks.
  • Strong foundational knowledge in machine learning, foundation models, or multimodal AI.
  • Enthusiasm for AI safety, guardrails, and responsible‑AI frameworks.
  • Experience building or fine‑tuning models (open‑source or proprietary).
  • Ability to design experiments, measure model behaviour, and interpret results.
  • Curiosity about AI alignment, agentic behaviour, and interpretability.
  • Exposure to LLM or multimodal model evaluation.


Nice to have:

  • Experience working with synthetic data, evaluation sets, or adversarial testing.
  • Interest in governance, risk, or AI assurance.


Why Join?

This is a rare opportunity to work on advanced AI research within a major organisation deploying AI at enterprise scale. You’ll join a growing research capability, exploring cutting‑edge topics while ensuring AI is developed ethically, responsibly, and with world‑class guardrails.


You’ll benefit from:

  • Access to advanced tools and emerging models.
  • Opportunities to publish internal research and influence strategic direction.
  • Mentorship from experienced AI and safety specialists.
  • A collaborative environment that values experimentation and novel thinking.

Artificial intelligence is no longer a future concept. It is already reshaping how businesses operate, how decisions are made, and how entire industries compete. From finance and healthcare to retail, manufacturing, defence, and climate science, AI is embedded in critical systems across the UK economy. Yet despite unprecedented demand for AI talent, employers continue to report severe recruitment challenges. Vacancies remain open for months. Salaries rise year on year. Candidates with impressive academic credentials often fail technical interviews. At the heart of this disconnect lies a growing and uncomfortable truth: Universities are not fully preparing graduates for real-world AI jobs. This article explores the AI skills gap in depth—what is missing from many university programmes, why the gap persists, what employers actually want, and how jobseekers can bridge the divide to build a successful career in artificial intelligence.