
ML Ops Engineer, Central Tech

Chan Zuckerberg Initiative
London
8 months ago
Applications closed


The Chan Zuckerberg Initiative was founded by Priscilla Chan and Mark Zuckerberg in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education to addressing the needs of our local communities. Our mission is to build a more inclusive, just, and healthy future for everyone.

The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central Operations & Partners team provides the support needed to push this work forward. 

Central Operations & Partners consists of our Brand & Communications, Community, Facilities, Finance, Infrastructure/IT Operations/Business Systems, Initiative Operations, People, Real Estate/Workplace/Facilities/Security, Research & Learning, and Ventures teams. These teams provide the essential operations, services, and strategies needed to support CZI’s progress toward achieving its mission to build a better future for everyone.

The AI/ML Infrastructure team builds shared tools and platforms used across the Chan Zuckerberg Initiative, partnering with and supporting an extensive group of Research Scientists, Data Scientists, and AI Research Scientists, as well as a broad range of Engineers focused on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale.

The Opportunity

By pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways, driving AI-powered solutions that accelerate biomedical research. We are uniquely positioned to design, build, and scale software systems that help educators, scientists, and policy experts better address the myriad challenges they face. We support researchers and scientists around the world by developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to important problems in the biomedical sciences.

As a member of the AI Infrastructure and MLOps Engineering team, you will be responsible for a variety of MLOps and AI development projects that empower users across the AI lifecycle. You will take an active role in building and operating our AI systems infrastructure and MLOps efforts focused on our GPU cloud cluster operations, ensuring our systems are highly utilized and stable across the AI lifecycle.
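To make "highly utilized" concrete, a cluster team typically tracks a utilization metric from a scheduler snapshot. Below is a minimal, hedged sketch of that arithmetic; the node names and GPU counts are illustrative only, not a real cluster inventory, and a production version would pull allocation data from the scheduler or monitoring stack rather than a hard-coded dict.

```python
# Hedged sketch: a cluster-wide GPU utilization metric computed from a
# hypothetical per-node allocation snapshot. All names/numbers are illustrative.

def gpu_utilization(nodes: dict[str, tuple[int, int]]) -> float:
    """nodes maps node name -> (gpus_allocated, gpus_total).

    Returns the fraction of GPUs currently allocated across the cluster.
    """
    total = sum(t for _, t in nodes.values())
    allocated = sum(a for a, _ in nodes.values())
    return allocated / total if total else 0.0

# Illustrative snapshot of three 8-GPU nodes.
snapshot = {
    "gpu-node-01": (8, 8),   # fully packed
    "gpu-node-02": (6, 8),   # partially allocated
    "gpu-node-03": (0, 8),   # idle -- a candidate for backfill scheduling
}
print(f"{gpu_utilization(snapshot):.0%}")  # 14 of 24 GPUs allocated
```

Tracking this fraction over time (and alerting when it drops) is one simple way a team quantifies whether expensive GPU capacity is actually being used.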

We are building a world-class shared services model, and being based in New York helps us achieve our service goals. We require all interested candidates to be based out of New York City and available to work onsite 2-3 days a week.

What You'll Do

As a member of the MLOps team responsible for operating our large-scale GPU research cluster, you will be intimately involved in the end-to-end AI lifecycle, working directly with our AI Researchers and AI Engineers across pre-training, training, fine-tuning, and inference for the models we deploy and host. You will:

Take an active role in building out our model deployment automation, alerting, and monitoring systems, allowing us to operate our GPU cluster proactively and keep reactive on-call effort to a minimum.
Work on the integration and usability of our MLFlow-based model versioning and experiment tracking as part of the platform, integral across the AI lifecycle.
As part of on-call responsibilities, work with our vendor partners to troubleshoot and resolve issues on our Kubernetes-based GPU cluster in as short a time frame as possible.
Actively collaborate in the technical design and build of our AI/ML and data infrastructure engineering solutions, such as deep MLFlow integration.
Help optimize our GPU platform and model training processes, from the hardware level up through our deep learning code and libraries.
Collaborate with team members on the design and build of our cloud-based AI/ML data platform solutions, including Databricks Spark, Weaviate vector databases, and our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes.
Collaborate with our AI Researchers on data management solutions for our heterogeneous collection of complex, very large-scale training datasets.
As a team, take part in defining and implementing our SRE-style service level indicator instrumentation and metrics gathering, alongside defining SLOs and SLAs for our model platform end to end.
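The SRE-style SLO work mentioned above usually boils down to simple error-budget arithmetic: an SLO target implies a budget of allowed bad events, and each failure spends part of it. Here is a minimal sketch of that calculation; the 99.9% target and event counts are illustrative assumptions, not CZI's actual service levels.

```python
# Hedged sketch: error-budget arithmetic behind an availability-style SLO.
# The SLO target and traffic numbers below are illustrative, not real targets.

def error_budget_remaining(slo_target: float,
                           good_events: int,
                           total_events: int) -> float:
    """Return the fraction of the error budget still unspent in a window.

    slo_target   -- e.g. 0.999 for a 99.9% availability SLO
    good_events  -- events (requests, jobs) that met the SLI
    total_events -- all events observed in the window
    """
    if total_events == 0:
        return 1.0  # no traffic observed, budget untouched
    budget = (1.0 - slo_target) * total_events  # allowed bad events
    bad = total_events - good_events            # observed bad events
    if budget <= 0:
        return 0.0
    return max(0.0, (budget - bad) / budget)

# Example: a 99.9% SLO over 1,000,000 events allows ~1,000 failures;
# 400 observed failures leave roughly 60% of the budget unspent.
print(error_budget_remaining(0.999, 999_600, 1_000_000))
```

Instrumenting the SLI (the good/total counts) is the hard part in practice; once those counters exist, burn-rate alerts can fire when the budget is being spent too quickly.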

What You'll Bring

BS, MS, or PhD in Computer Science or a related technical discipline, or equivalent experience.
MLOps experience with medium- to large-scale GPU clusters in Kubernetes (preferred) or HPC environments, or with large-scale cloud-based ML deployments.
Experience using DevOps tooling for data and machine learning use cases.
Experience scaling containerized applications on Kubernetes or Mesos, including creating custom containers from secure AMIs and continuous deployment systems that integrate with Kubernetes (preferred) or Mesos.
5+ years of relevant coding experience with a scripting language such as Python, PHP, or Ruby.
Experience coding in a systems language such as Rust, C/C++, C#, Go, Java, or Scala.
Data platform operations experience in demanding data and systems environments, with technologies such as Kafka, Spark, and Airflow.
Experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure; experience with on-prem and colocation hosting environments a plus.
Knowledge of Linux systems optimization and administration.
Understanding of data engineering, data governance, data infrastructure, and AI/ML execution platforms.

Compensation

The New York, NY base pay range for this role is $190,000 - $238,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside New York are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You 

We’re thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible. 

CZI provides a generous employer match on employee 401(k) contributions to support planning for the future.
An annual benefit that employees can put toward what is most meaningful for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
CZI Life of Service Gifts are awarded to employees to “live the mission” and support the causes closest to them.
Paid time off to volunteer at an organization of your choice.
Funding for select family-forming benefits.
Relocation support for employees who need assistance moving to the Bay Area!

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn more about our diversity, equity, and inclusion efforts.

If you’re interested in a role but your previous experience doesn’t perfectly align with each qualification in the job description, we still encourage you to apply as you may be the perfect fit for this or another role.

