Software Engineer (MLOps / LLMOps), London
Client: Codesearch AI
Location: London, United Kingdom
Job Category: Other
EU work permit required: Yes
Posted: 05.05.2025
Expiry Date: 19.06.2025
Job Description:
Help to revolutionise a fast-moving industry with cutting-edge AI:
Our client is a globally recognised brand with deep-rooted expertise. They are heavily invested in leveraging AI to combine their domain expertise with SOTA techniques, solidifying their position as a leader in the field. You'll join a global team with a distributed set of skills including Research, Applied AI and Engineering.
They are seeking MLOps Engineers to help architect the future of communication through AI. This isn't just another engineering role – it's an opportunity to pioneer systems that transform how companies connect with their customers.
What You’ll Be Doing
You'll be designing and optimising production-grade MLOps pipelines that bring cutting-edge Generative AI and LLMs from experimentation to real-world impact. Your expertise will directly influence how some of the world's leading brands enhance their strategies.
What You'll Build
- Production-Ready GenAI Infrastructure: Design and deploy scalable MLOps pipelines specifically optimised for GenAI applications and large language models
- State-of-the-Art Model Deployment: Implement and fine-tune advanced models like GPT and similar architectures in production environments
- Hybrid AI Systems: Create solutions that integrate traditional ML techniques with cutting-edge LLMs to deliver powerful insights
- Automated MLOps Workflows: Build robust CI/CD pipelines for ML, enabling seamless testing, validation, and deployment
- Cost-Efficient Cloud Infrastructure: Optimise cloud resources to maximise performance while maintaining cost efficiency
- Governance and Versioning Systems: Establish best practices for model versioning, reproducibility, and responsible AI deployment
- Integrated Data Pipelines: Utilise Databricks to construct and manage sophisticated data and ML pipelines
- Monitoring Ecosystems: Implement comprehensive monitoring systems to ensure reliability and performance
What You’ll Need
- 5+ years of hands-on experience in MLOps, DevOps, or ML Engineering roles
- Proven expertise deploying and scaling Generative AI models (GPT, Stable Diffusion, BERT)
- Proficiency with Python and ML frameworks (TensorFlow, PyTorch, Hugging Face)
- Strong cloud platform experience (AWS, GCP, Azure) and managed AI/ML services
- Practical experience with Docker, Kubernetes, and container orchestration
- Databricks expertise, including ML workflows and data pipeline integration
- Familiarity with MLflow, DVC, Prometheus, and Grafana for versioning and monitoring
- Bachelor's or Master's degree in Computer Science, Engineering, or related field (or equivalent experience)
- Fluency in written and spoken English
The Person We're Looking For
- You're a builder at heart – someone who loves creating scalable, production-ready systems
- You balance technical excellence with pragmatic delivery
- You're excited about pushing boundaries in GenAI and LLM technologies
- You can communicate complex concepts effectively to diverse stakeholders
- You enjoy mentoring junior team members and elevating the entire technical organisation
What Makes This Opportunity Special
You'll be working with a modern data stack designed to process large-scale information, automate analysis pipelines, and integrate seamlessly with AI-driven workflows. This is your chance to make a significant impact on projects that push the boundaries of AI-powered insights and automation in industry.