Machine Learning Engineer

TXP
Birmingham


Location: Hybrid (Birmingham or London)


Permanent


We are TXP. We help businesses and organisations move forward, at pace and at scale. We believe in the transformative power of combining technology and people. By providing consulting expertise, development services and resourcing, we work closely with organisations to solve their most complex business problems.


Our work transforms organisations – and we take that responsibility seriously. We focus on success, pursue excellence and take ownership of everything we do.


But achieving that level of performance requires an inclusive and supportive working environment. We believe in the power of technology and people, and we help everyone here to succeed. At TXP, you can multiply your potential.


Role Purpose


The Machine Learning Engineer is a client-facing delivery role responsible for taking machine learning models from prototype to production. Operating within a consulting model, this role bridges data science and platform engineering, ensuring that models developed during engagements are deployed as reliable, scalable and maintainable services. The role owns the full productionisation lifecycle, from environment configuration and pipeline orchestration through to performance monitoring and scaling. Close collaboration with data scientists is fundamental, translating their analytical work into robust systems that deliver sustained business value.


Key Responsibilities


Model Productionisation



  • Take machine learning models developed by data scientists and re-engineer them for production deployment.
  • Refactor prototype code into clean, modular, tested Python packages with clear separation of concerns.
  • Implement inference pipelines that handle data validation, preprocessing, prediction and post-processing as a single deployable unit.
  • Ensure models meet non-functional requirements including latency, throughput, reliability and resource efficiency before release.
  • Manage the transition from notebook-based experimentation to production-grade services with minimal loss of analytical intent.
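
The "single deployable unit" pattern in the bullets above can be sketched in Python. Everything in this sketch (the feature names, the stub model interface, the 0.5 decision threshold) is hypothetical and purely for illustration:

```python
from dataclasses import dataclass


@dataclass
class InferencePipeline:
    """Validation, preprocessing, prediction and post-processing behind a
    single predict() entry point, so the pipeline ships and is versioned
    as one deployable unit. All names here are illustrative."""

    model: object  # any object exposing .predict(list[float]) -> float

    REQUIRED_FEATURES = ("age", "tenure_months", "monthly_spend")

    def _validate(self, payload: dict) -> None:
        missing = [f for f in self.REQUIRED_FEATURES if f not in payload]
        if missing:
            raise ValueError(f"missing features: {missing}")

    def _preprocess(self, payload: dict) -> list:
        # Fix the feature order so serving matches training.
        return [float(payload[f]) for f in self.REQUIRED_FEATURES]

    def _postprocess(self, raw_score: float) -> dict:
        # 0.5 is an arbitrary illustrative threshold.
        return {"score": round(raw_score, 4), "label": raw_score >= 0.5}

    def predict(self, payload: dict) -> dict:
        self._validate(payload)
        features = self._preprocess(payload)
        return self._postprocess(self.model.predict(features))
```

Packaging the steps behind one entry point means a schema change in preprocessing can never drift out of sync with the model artefact it was trained against.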

Environment Tuning and Scaling



  • Configure and optimise compute environments on Azure AI Foundry, including managed endpoints, compute clusters and containerised deployments.
  • Right-size infrastructure for model serving workloads, balancing cost against performance and availability requirements.
  • Implement auto-scaling strategies for inference endpoints to handle variable demand patterns.
  • Tune runtime configurations including batch sizes, concurrency settings, memory allocation and GPU utilisation where applicable.
  • Conduct load testing and performance benchmarking to validate deployment readiness under expected and peak conditions.
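
As a rough illustration of the load-testing bullet, a minimal latency benchmark might look like the following (the request count and percentile choices are illustrative, not a prescribed methodology):

```python
import statistics
import time


def benchmark(infer, payload, n_requests: int = 200) -> dict:
    """Time repeated calls to an inference callable and report the
    latency percentiles typically checked before deployment sign-off."""
    latencies_ms = []
    for _ in range(n_requests):
        start = time.perf_counter()
        infer(payload)
        latencies_ms.append((time.perf_counter() - start) * 1000)
    latencies_ms.sort()

    def pct(p: float) -> float:
        # Nearest-rank percentile over the sorted latencies.
        return latencies_ms[min(int(p / 100 * n_requests), n_requests - 1)]

    return {
        "p50_ms": pct(50),
        "p95_ms": pct(95),
        "p99_ms": pct(99),
        "mean_ms": statistics.mean(latencies_ms),
    }
```

In practice this would run against the deployed endpoint under concurrent load (e.g. with a dedicated load-testing tool) rather than in-process, but the percentile-driven pass/fail idea is the same.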

Code and Pipeline Management



  • Design and maintain CI/CD pipelines for model training, validation and deployment using Azure DevOps or GitHub Actions.
  • Implement automated model retraining pipelines triggered by schedule, data drift or performance degradation.
  • Manage model versioning, artefact storage and promotion workflows through Azure AI Foundry model registry.
  • Enforce code quality standards through automated linting, unit testing, integration testing and code review processes.
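
A promotion gate of the kind such a CI/CD pipeline might run before registering a new model version could be sketched as follows (the metric, floor and tolerance values are invented for illustration):

```python
def should_promote(candidate_auc: float, production_auc: float,
                   min_auc: float = 0.75, max_regression: float = 0.01) -> bool:
    """Promote a candidate model only if it clears an absolute quality
    floor AND does not regress against the current production model by
    more than a small tolerance. Thresholds here are hypothetical."""
    if candidate_auc < min_auc:
        return False
    return candidate_auc >= production_auc - max_regression
```

Wired into a pipeline, a `False` result would fail the build and block registration; the same check can gate promotion between dev, staging and production registries.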

Model Monitoring and Operations



  • Implement monitoring for deployed models covering prediction drift, data drift, feature distribution shifts and performance degradation.
  • Build alerting and escalation workflows that trigger investigation or automated retraining when thresholds are breached.
  • Maintain logging and observability across inference pipelines to support debugging, audit and compliance requirements.
  • Produce operational runbooks and incident response procedures for model services.
  • Track model performance against business metrics, not just statistical metrics, to ensure continued value delivery.
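
One common way to quantify the data drift mentioned above is the Population Stability Index (PSI); a minimal, dependency-free sketch is shown below. The often-quoted PSI > 0.2 alert level is a rule of thumb, not a standard, and the binning scheme here is deliberately simplistic:

```python
import math


def psi(baseline: list, live: list, n_bins: int = 10) -> float:
    """Compare a feature's live distribution against its training
    baseline using shared equal-width bins."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant feature

    def proportions(values):
        counts = [0] * n_bins
        for v in values:
            idx = min(int((v - lo) / width), n_bins - 1)
            counts[max(idx, 0)] += 1  # clamp values outside baseline range
        # Smooth empty bins to avoid log(0).
        return [(c or 0.5) / len(values) for c in counts]

    p, q = proportions(baseline), proportions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

An alerting workflow would compute this per feature on a schedule and raise an incident (or trigger retraining) when the index breaches the agreed threshold.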

Collaboration with Data Scientists



  • Work alongside data scientists from early in the model development lifecycle to ensure production readiness is designed in, not retrofitted.
  • Provide guidance on coding standards, testing practices and architectural patterns that accelerate the path from prototype to deployment.
  • Review data science code for production suitability, identifying scalability risks, dependency issues and maintainability concerns.
  • Jointly define model interfaces, input/output schemas and contract testing approaches to decouple model development from serving infrastructure.
  • Facilitate structured handover processes that capture model assumptions, training procedures, known limitations and retraining requirements.
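
The input/output contract idea can be made concrete with a small schema check that both the data science and engineering sides run against their own code. The field names and types below are hypothetical:

```python
# Agreed contracts for the model's inputs and outputs (illustrative).
INPUT_CONTRACT = {"age": int, "tenure_months": int, "monthly_spend": float}
OUTPUT_CONTRACT = {"score": float, "label": bool}


def conforms(payload: dict, contract: dict) -> bool:
    """True when the payload has exactly the agreed fields, each with
    the agreed type. bool is rejected where int is expected, since
    isinstance(True, int) is True in Python."""
    if set(payload) != set(contract):
        return False
    for field, expected in contract.items():
        value = payload[field]
        if expected is int and isinstance(value, bool):
            return False
        if not isinstance(value, expected):
            return False
    return True
```

Because both sides test against the same contract, the model can be retrained or re-implemented without touching the serving infrastructure, and vice versa.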

Client Delivery



  • Operate within consulting delivery frameworks, managing scope, timelines and stakeholder expectations.
  • Contribute to estimation, solution architecture and proposal development for MLOps and model deployment workstreams.
  • Present deployment architectures, operational plans and trade-off analyses to client technical and business stakeholders.
  • Conduct knowledge transfer sessions and produce handover documentation for client engineering teams.
  • Identify opportunities to improve client ML maturity through better tooling, processes and automation.

Required Skills and Experience



  • Strong software engineering skills in Python, with experience writing production-grade code including packaging, testing and documentation.
  • Demonstrable experience productionising machine learning models, taking them from research or prototype stage into live, monitored services.
  • Proficiency with CI/CD tooling for ML pipelines, including Azure DevOps, GitHub Actions or equivalent.
  • Working knowledge of Microsoft Azure AI Foundry, including managed endpoints, model registry, compute management and experiment tracking.
  • Experience with containerisation (Docker) and container orchestration for model serving.
  • Understanding of ML monitoring practices including data drift detection, prediction drift and model performance tracking.
  • Familiarity with infrastructure-as-code tools such as Terraform or Bicep for Azure resource provisioning.
  • Experience with performance tuning, load testing and capacity planning for inference workloads.
  • Strong collaboration skills, with proven ability to work effectively alongside data scientists to bridge the gap between experimentation and production.
  • Experience working in a consulting, professional services or client‑facing delivery environment.
  • Experience with Microsoft Fabric for upstream data pipeline integration and feature store patterns.
  • Familiarity with Kubernetes (AKS) for model serving at scale.
  • Exposure to GPU‑accelerated inference and optimisation techniques such as ONNX, TensorRT or model quantisation.
  • Experience with feature stores, experiment tracking platforms or ML metadata management tools.
  • Knowledge of A/B testing, canary deployments or shadow mode strategies for safe model rollout.
  • Relevant certifications such as DP‑100 (Azure Data Scientist Associate), AI‑102 (Azure AI Engineer) or equivalent.

Benefits:

  • 25 days annual leave (plus bank holidays).
  • An additional day of paid leave for your birthday (or Christmas Eve).
  • Salary sacrifice pension with matched employer contributions (4%).
  • Life assurance (3x).
  • Access to an Employee Assistance Programme (EAP).
  • Private medical insurance through our partner Aviva.
  • Cycle to work scheme.
  • Access to an independent financial advisor.
  • 2 x social value days per year to give back to local communities.

Grow with us:


Work on exciting new projects.


If you want to avoid getting stuck with the mundane, you’re in the right place. We work in many sectors with fantastic clients, so you’ll always be working on something exciting and challenging.


We recognise that you may have a career path planned out and need some support to move forward. We're here to help you make the most of your time with us, through challenging work, room to grow, and learning and development opportunities.


Be part of the TXP growth journey.


We are a high-growth, fast-paced business. We currently have 200+ employees and work with clients across the UK. Joining TXP means you'll be part of that growth.


