
ML Ops Engineer, Central Tech

Chan Zuckerberg Initiative
London
11 months ago
Applications closed


The Chan Zuckerberg Initiative was founded by Priscilla Chan and Mark Zuckerberg in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education to addressing the needs of our local communities. Our mission is to build a more inclusive, just, and healthy future for everyone.

The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central Operations & Partners team provides the support needed to push this work forward. 

Central Operations & Partners consists of our Brand & Communications, Community, Facilities, Finance, Infrastructure/IT Operations/Business Systems, Initiative Operations, People, Real Estate/Workplace/Facilities/Security, Research & Learning, and Ventures teams. These teams provide the essential operations, services, and strategies needed to support CZI’s progress toward achieving its mission to build a better future for everyone.

The AI/ML Infrastructure team builds shared tools and platforms used across the Chan Zuckerberg Initiative, partnering with and supporting an extensive group of Research Scientists, Data Scientists, and AI Research Scientists, as well as a broad range of Engineers focused on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale.

The Opportunity

By pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways and drive AI-powered solutions that accelerate biomedical research. We are uniquely positioned to design, build, and scale software systems that help educators, scientists, and policy experts better address the myriad challenges they face. We support researchers and scientists around the world by developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to important problems in the biomedical sciences.

As a member of the AI Infrastructure and MLOps Engineering team, you will be responsible for a variety of MLOps and AI development projects that empower users across the AI lifecycle. You will take an active role in building and operating our AI Systems Infrastructure and MLOps efforts focused on our GPU Cloud Cluster operations, ensuring our systems are highly utilized and stable across the AI lifecycle of usage. 

We are building a world-class shared services model, and being based in New York helps us achieve our service goals. We require all interested candidates to be based out of New York City and available to work onsite 2-3 days a week.

What You'll Do

As a member of the MLOps team responsible for operating our large-scale GPU research cluster, you will be intimately involved in the end-to-end AI lifecycle, working directly with our AI Researchers and AI Engineers across pre-training, training, fine-tuning, and inference for the models we deploy and host. You will:

- Take an active role in building out our model deployment automation, alerting, and monitoring systems, allowing us to operate our GPU cluster proactively and keep reactive on-call effort to a minimum.
- Work on the integration and usability of our MLflow-based model versioning and experiment tracking as an integral part of the platform across the AI lifecycle.
- As part of on-call responsibilities, work with our vendor partners to troubleshoot and resolve issues on our Kubernetes-based GPU cluster as quickly as possible.
- Actively collaborate on the technical design and build of our AI/ML and data infrastructure engineering solutions, such as deep MLflow integration.
- Help optimize our GPU platform and model training processes, from the hardware level up through our deep learning code and libraries.
- Collaborate with team members on the design and build of our cloud-based AI/ML data platform solutions, including Databricks Spark, Weaviate vector databases, and our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes.
- Collaborate with our AI Researchers on data management solutions for our heterogeneous collection of complex, very large-scale training datasets.
- As a team, help define and implement our SRE-style service level indicator instrumentation and metrics gathering, alongside defining SLOs and SLAs for our model platform end to end.
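To make the SRE-style work above concrete: defining an SLO typically means tracking an error budget, the number of failures a service may incur while still meeting its target. The sketch below shows the basic arithmetic; the function name and a simple request-count SLI are illustrative assumptions, not part of the role description.

```python
def error_budget_remaining(slo_target: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still available for a request-based SLI.

    slo_target: the SLO as a fraction, e.g. 0.999 for "99.9% of requests succeed".
    The error budget is the share of requests allowed to fail: (1 - slo_target).
    """
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        # A 100% SLO leaves no budget: any failure exhausts it immediately.
        return 0.0 if failed_requests else 1.0
    # Clamp at zero once the budget is spent.
    return max(0.0, 1.0 - failed_requests / allowed_failures)


# Example: a 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 400 failures leave 60% of the budget.
print(error_budget_remaining(0.999, 1_000_000, 400))
```

In practice these figures would come from metrics instrumentation (e.g. Prometheus counters) rather than raw integers, and alerting would fire on the budget burn rate rather than a single snapshot.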

What You'll Bring

- BS, MS, or PhD in Computer Science or a related technical discipline, or equivalent experience.
- MLOps experience with medium- to large-scale GPU clusters in Kubernetes (preferred) or HPC environments, or with large-scale cloud-based ML deployments.
- Experience using DevOps tooling for data and machine learning use cases.
- Experience scaling containerized applications on Kubernetes or Mesos, including creating custom containers using secure AMIs and continuous deployment systems that integrate with Kubernetes (preferred) or Mesos.
- 5+ years of relevant coding experience with a scripting language such as Python, PHP, or Ruby.
- Experience coding in a systems language such as Rust, C/C++, C#, Go, Java, or Scala.
- Data platform operations experience in an environment with challenging data and systems problems, using tools such as Kafka, Spark, and Airflow.
- Experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure; experience with on-prem and colocation hosting environments is a plus.
- Knowledge of Linux systems administration and optimization.
- Understanding of Data Engineering, Data Governance, Data Infrastructure, and AI/ML execution platforms.

Compensation

The New York, NY base pay range for this role is $190,000 - $238,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside New York are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You 

We’re thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible. 

- A generous employer match on employee 401(k) contributions to support planning for the future.
- An annual benefit employees can put toward what is most meaningful for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
- CZI Life of Service Gifts, awarded to employees to “live the mission” and support the causes closest to them.
- Paid time off to volunteer at an organization of your choice.
- Funding for select family-forming benefits.
- Relocation support for employees who need assistance moving to the Bay Area!

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn more about our diversity, equity, and inclusion efforts.

If you’re interested in a role but your previous experience doesn’t perfectly align with each qualification in the job description, we still encourage you to apply as you may be the perfect fit for this or another role.



Industry Insights

Discover insightful articles, industry insights, expert tips, and curated resources.

The Best Free Tools & Platforms to Practise AI Skills in 2025/26

Artificial Intelligence (AI) is one of the fastest-growing career fields in the UK and worldwide. Whether you are a student exploring AI for the first time, a graduate looking to build your portfolio, or an experienced professional upskilling for career growth, having access to free tools and platforms to practise AI skills can make a huge difference. In this comprehensive guide, we’ll explore the best free resources available in 2025, covering AI coding platforms, datasets, cloud tools, no-code AI platforms, online communities, and learning hubs. These tools allow you to practise everything from machine learning models and natural language processing (NLP) to computer vision, reinforcement learning, and large language model (LLM) fine-tuning—without needing a huge budget. By the end of this article, you’ll have a clear roadmap of where to start practising your AI skills for free, how to build real-world projects, and which platforms can help you land your next AI job.

Top 10 Skills in Artificial Intelligence According to LinkedIn & Indeed Job Postings

Artificial intelligence is no longer a niche field reserved for research labs or tech giants—it has become a cornerstone of business strategy across the UK. From finance and healthcare to manufacturing and retail, employers are rapidly expanding their AI teams and competing for talent. But here’s the challenge: AI is evolving so quickly that the skills in demand today may look different from those of just a few years ago. Whether you’re a graduate looking to enter the industry, a mid-career professional pivoting into AI, or an experienced engineer wanting to stay ahead, it’s essential to know what employers are actually asking for in their job ads. That’s where platforms like LinkedIn and Indeed provide valuable insight. By analysing thousands of job postings across the UK, they reveal the most frequently requested skills and emerging trends. This article distils those findings into the Top 10 AI skills employers are prioritising in 2025—and shows you how to present them effectively on your CV, in interviews, and in your portfolio.

Translucent Careers: Senior Artificial Intelligence Engineer in London

The global landscape of artificial intelligence is evolving rapidly, and nowhere is this transformation felt more strongly than in the financial and accounting industries. AI is no longer just a supporting technology; it is becoming the backbone of innovation, efficiency, and decision-making. One of the most exciting companies at the forefront of this movement is Translucent, a dynamic business redefining how accounting professionals work with AI-driven tools. For professionals seeking to combine technical excellence with meaningful industry impact, Translucent represents a rare career destination. At the heart of their current expansion is an opening for a Senior Artificial Intelligence Engineer in London. This role combines high-level technical leadership, hands-on development, and an opportunity to influence the direction of an emerging force in AI. In this article, we’ll explore: Who Translucent are and why they matter. The significance of AI in financial technology (fintech) and accounting. A deep dive into the Senior AI Engineer role. Skills and requirements needed for success. Career growth and opportunities at Translucent. How artificialintelligencejobs.co.uk helps professionals connect with transformative employers like Translucent.