Machine Learning Ops Engineer - AI

Opus 2
London
3 weeks ago

As Opus 2 continues to embed AI into our platform, we need robust, scalable data systems that power intelligent workflows and support advanced model behaviours. We’re looking for an MLOps Engineer to build and maintain the infrastructure that powers our AI systems. You will be the bridge between our data science and engineering teams, ensuring that our machine learning models are deployed, monitored, and scaled efficiently and reliably. You’ll be responsible for the entire lifecycle of our ML models in production, from building automated deployment pipelines to ensuring their performance and stability. This role is ideal for a hands-on engineer who is passionate about building robust, scalable, and automated systems for machine learning, particularly for cutting-edge LLM-powered applications.

What you'll be doing

- Design, build, and maintain our MLOps infrastructure, establishing best practices for CI/CD for machine learning, including model testing, versioning, and deployment.
- Develop and manage scalable, automated pipelines for training, evaluating, and deploying machine learning models, with a specific focus on LLM-based systems.
- Implement robust monitoring and logging for models in production to track performance, drift, and data quality, ensuring system reliability and uptime.
- Collaborate with Data Scientists to containerize and productionize models and algorithms, including those involving RAG and Graph RAG approaches.
- Manage and optimize our cloud infrastructure for ML workloads on platforms such as Amazon Bedrock, focusing on performance, cost-effectiveness, and scalability.
- Automate the provisioning of ML infrastructure using Infrastructure as Code (IaC) principles and tools.
- Work closely with product and engineering teams to integrate ML models into our production environment and ensure seamless operation within the broader product architecture.
- Own the operational aspects of the AI lifecycle, from model deployment and A/B testing to incident response and continuous improvement of production systems.
- Contribute to our AI strategy and roadmap by providing expertise on the operational feasibility and scalability of proposed AI features.
- Collaborate closely with Principal Data Scientists and Principal Engineers to ensure the MLOps framework supports the full scope of AI workflows and model interaction layers.
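For a flavour of the drift-monitoring work described above: a common approach is to compare the distribution of a production feature against its training baseline. The sketch below computes the population stability index (PSI) in pure Python; it is an illustrative, assumption-laden example (bin count, thresholds, and names are hypothetical), not a description of Opus 2's actual stack.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two 1-D samples.
    Rule of thumb: < 0.1 stable; 0.1-0.25 moderate drift; > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            # bin index = number of edges at or below x (clamps into the last bin)
            counts[sum(1 for e in edges if x >= e)] += 1
        n = len(sample)
        # small floor avoids log(0) for empty bins
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]        # baseline feature values
live = [x + 0.5 for x in train]              # shifted production values
print(round(psi(train, train), 6))           # → 0.0 (identical distributions)
print(psi(train, live) > 0.25)               # → True (significant drift)
```

In practice a check like this would run on a schedule per feature, with breaches raising alerts rather than printing to stdout.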

What excites us?

We’ve moved past experimentation. We have live AI features and a strong pipeline of customers eager for access to new and improved AI-powered workflows. Our focus is on delivering real, valuable AI-powered features to customers and doing it responsibly. You’ll be part of a team that owns the entire lifecycle of these systems, and your role is critical to ensuring they are not just innovative, but also stable, scalable, and performant in the hands of our users.

Requirements

What we're looking for in you

You are a practical, automation-driven engineer. You think in terms of reliability, scalability, and efficiency. You have hands-on experience building and managing CI/CD pipelines for machine learning. You're comfortable writing production-quality code and reviewing PRs, and you are dedicated to delivering a reliable, observable production environment. You are passionate about MLOps and have a proven track record of implementing MLOps best practices in a production setting. You’re curious about the unique operational challenges of LLMs and want to build robust systems to support them.

Qualifications

- Experience with model lifecycle management and experiment tracking.
- Ability to reason about and implement infrastructure for complex AI systems, including those leveraging vector stores and graph databases.
- Proven ability to ensure the performance and reliability of systems over time.
- 3+ years of experience in an MLOps, DevOps, or Software Engineering role with a focus on machine learning infrastructure.
- Proficiency in Python, with experience in building and maintaining infrastructure and automation, not just analyses. Experience working in Java or TypeScript environments is beneficial.
- Deep experience with at least one major cloud provider (AWS, GCP, Azure) and their ML services (e.g. SageMaker, Vertex AI). Experience with Amazon Bedrock is a significant plus.
- Strong familiarity with containerization (Docker) and orchestration (Kubernetes).
- Experience with Infrastructure as Code tools (e.g. Terraform, CloudFormation).
- Experience deploying and managing LLM-powered features in production environments.
- Bonus: experience with monitoring tools (e.g. Prometheus, Grafana), agent orchestration, or legaltech domain knowledge.
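To illustrate what "model lifecycle management" means in code terms: registries such as MLflow or SageMaker Model Registry track versioned model records and promote them between stages. The toy sketch below captures that shape in pure Python; all names (including the "clause-classifier" model) are hypothetical and chosen only for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

class ModelRegistry:
    """Toy in-memory registry illustrating model versioning and
    stage promotion (staging/production), as production tools provide."""

    def __init__(self):
        self._versions = {}   # model name -> list of version records
        self._stages = {}     # (name, stage) -> promoted version number

    def register(self, name, params, metrics):
        versions = self._versions.setdefault(name, [])
        record = {
            "version": len(versions) + 1,
            "params": params,
            "metrics": metrics,
            # content hash of params helps spot duplicate registrations
            "hash": hashlib.sha256(
                json.dumps(params, sort_keys=True).encode()
            ).hexdigest()[:12],
            "created": datetime.now(timezone.utc).isoformat(),
        }
        versions.append(record)
        return record["version"]

    def promote(self, name, version, stage):
        if version > len(self._versions.get(name, [])):
            raise ValueError(f"unknown version {version} for {name}")
        self._stages[(name, stage)] = version

    def current(self, name, stage="production"):
        version = self._stages.get((name, stage))
        return None if version is None else self._versions[name][version - 1]

registry = ModelRegistry()
v1 = registry.register("clause-classifier", {"lr": 1e-4}, {"f1": 0.91})
registry.promote("clause-classifier", v1, "production")
print(registry.current("clause-classifier")["metrics"]["f1"])  # → 0.91
```

A real registry would also persist artefacts and lineage; the point here is the version/stage separation that lets deployment pipelines ask "what is in production?" without hard-coding a model file.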

Benefits

Working for Opus 2

Opus 2 is a global leader in legal software and services and a trusted partner of the world’s leading legal teams. All our achievements are underpinned by our unique culture, where our people are our most valuable asset. Working at Opus 2, you’ll receive:

- Contributory pension plan.
- 26 days annual holiday, hybrid working, and length-of-service entitlement.
- Health insurance.
- Loyalty Share Scheme.
- Enhanced maternity and paternity leave.
- Employee Assistance Programme.
- Electric vehicle salary sacrifice.
- Cycle to Work Scheme.
- Calm and mindfulness sessions.
- A day of leave to volunteer for charity or dependant cover.
- Accessible, modern office space and regular company social events.

