ML Ops Engineer, Central Tech

Chan Zuckerberg Initiative
London
6 months ago
Applications closed


The Chan Zuckerberg Initiative was founded by Priscilla Chan and Mark Zuckerberg in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education to addressing the needs of our local communities. Our mission is to build a more inclusive, just, and healthy future for everyone.

The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central Operations & Partners team provides the support needed to push this work forward. 

Central Operations & Partners consists of our Brand & Communications, Community, Facilities, Finance, Infrastructure/IT Operations/Business Systems, Initiative Operations, People, Real Estate/Workplace/Facilities/Security, Research & Learning, and Ventures teams. These teams provide the essential operations, services, and strategies needed to support CZI’s progress toward achieving its mission to build a better future for everyone.

The AI/ML Infrastructure team builds shared tools and platforms used across the Chan Zuckerberg Initiative, partnering with and supporting an extensive group of Research Scientists, Data Scientists, and AI Research Scientists, as well as a broad range of engineers focused on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions that other engineering teams at CZI use to scale.

The Opportunity

By pairing engineers with leaders on our science and education teams, we can bring AI/ML technology to the table in new ways and drive AI-powered solutions that accelerate biomedical research. We are uniquely positioned to design, build, and scale software systems that help educators, scientists, and policy experts address the myriad challenges they face. We support researchers and scientists around the world by developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to important problems in the biomedical sciences.

As a member of the AI Infrastructure and MLOps Engineering team, you will be responsible for a variety of MLOps and AI development projects that empower users across the AI lifecycle. You will take an active role in building and operating our AI systems infrastructure and MLOps efforts focused on our GPU cloud cluster operations, ensuring our systems remain highly utilized and stable throughout the AI lifecycle.

We are building a world-class shared services model, and being based in New York helps us achieve our service goals. We require all interested candidates to be based out of New York City and available to work onsite 2-3 days a week.

What You'll Do

As a member of the MLOps team responsible for operating our large-scale GPU research cluster, you will be closely involved in the end-to-end AI lifecycle, working directly with our AI Researchers and AI Engineers from pre-training through training, fine-tuning, and inference for the models we deploy and host. You will:

- Take an active role in building out our model deployment automation, alerting, and monitoring systems, allowing us to operate our GPU cluster proactively and keep reactive on-call work to a minimum.
- Work on the integration and usability of our MLflow-based model versioning and experiment tracking, which is integral across the AI lifecycle.
- As part of on-call responsibilities, work with our vendor partners to troubleshoot and resolve issues on our Kubernetes-based GPU cluster as quickly as possible.
- Actively collaborate on the technical design and build of our AI/ML and data infrastructure engineering solutions, such as deep MLflow integration.
- Help optimize our GPU platform and model training processes, from the hardware level up through our deep learning code and libraries.
- Collaborate with team members on the design and build of our cloud-based AI/ML data platform solutions, including Databricks Spark, Weaviate vector databases, and our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes.
- Collaborate with our AI Researchers on data management solutions for our heterogeneous collection of complex, very large-scale training datasets.
- As a team, help define and implement our SRE-style service level indicator (SLI) instrumentation and metrics gathering, alongside SLOs and SLAs for our model platform end to end.
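For candidates less familiar with SRE terminology, the SLI/SLO work mentioned above boils down to measuring a service level indicator against an objective and tracking the remaining error budget. The sketch below is a minimal, hypothetical illustration of that arithmetic (not CZI code; the SLO target and request counts are invented):

```python
# Illustrative sketch of SRE-style error-budget accounting.
# All numbers here are hypothetical examples, not CZI targets.

def error_budget(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    """Return the measured SLI and the fraction of the error budget consumed."""
    allowed_failures = total_requests * (1.0 - slo_target)   # failures the SLO permits
    sli = 1.0 - failed_requests / total_requests             # measured availability
    consumed = failed_requests / allowed_failures            # 1.0 = budget exhausted
    return {"sli": sli, "allowed_failures": allowed_failures, "budget_consumed": consumed}

# Example: a 99.9% availability SLO over 1,000,000 requests allows ~1,000 failures,
# so 250 failed requests consume about a quarter of the error budget.
report = error_budget(slo_target=0.999, total_requests=1_000_000, failed_requests=250)
print(report["budget_consumed"])  # ≈ 0.25
```

In practice this kind of calculation is driven by metrics gathered from the cluster (e.g. via Prometheus-style instrumentation) rather than hard-coded counts, but the budget math is the same.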

What You'll Bring

- BS, MS, or PhD in Computer Science or a related technical discipline, or equivalent experience.
- MLOps experience with medium- to large-scale GPU clusters in Kubernetes (preferred) or HPC environments, or with large-scale cloud-based ML deployments.
- Experience using DevOps tooling for data and machine learning use cases.
- Experience scaling containerized applications on Kubernetes or Mesos, including building custom containers from secure AMIs and continuous deployment systems that integrate with Kubernetes (preferred) or Mesos.
- 5+ years of relevant coding experience with a scripting language such as Python, PHP, or Ruby.
- Experience coding in a systems language such as Rust, C/C++, C#, Go, Java, or Scala.
- Data platform operations experience in a demanding environment built on systems such as Kafka, Spark, and Airflow.
- Experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure; experience with on-prem and colocation hosting environments is a plus.
- Knowledge of Linux systems administration and optimization.
- Understanding of data engineering, data governance, data infrastructure, and AI/ML execution platforms.

Compensation

The New York, NY base pay range for this role is $190,000 - $238,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside New York are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You 

We’re thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible. 

- A generous employer match on employee 401(k) contributions to support planning for the future.
- An annual benefit that employees can apply where it is most meaningful for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
- CZI Life of Service Gifts, awarded to employees to "live the mission" and support the causes closest to them.
- Paid time off to volunteer at an organization of your choice.
- Funding for select family-forming benefits.
- Relocation support for employees who need assistance moving to the Bay Area.

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued.

If you’re interested in a role but your previous experience doesn’t perfectly align with each qualification in the job description, we still encourage you to apply as you may be the perfect fit for this or another role.


