Member of Technical Staff – Machine Learning, AI Safety

Microsoft
London
6 months ago
Create job alert

Overview

As a Member of Technical Staff – Machine Learning, AI Safety, you will develop and implement cutting-edge safety methodologies and mitigations for products that are served to millions of users through Copilot every day. Users turn to Copilot for support in all types of endeavors, making it critical that we ensure our AI systems behave safely and align with organizational values. You may be responsible for developing new methods to evaluate LLMs, experimenting with data collection techniques, implementing safety orchestration methods and mitigations, and training content classifiers to support the Copilot experience. We’re looking for outstanding individuals with experience in machine learning or machine learning infrastructure who are also strong communicators and great teammates. The right candidate takes the initiative and enjoys building world-class, trustworthy AI experiences and products in a fast-paced environment. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.ResponsibilitiesLeverage expertise to uncover potential risks and develop novel mitigation strategies, including data mining, prompt engineering, LLM evaluation, and classifier training. Create and implement comprehensive evaluation frameworks and red-teaming methodologies to assess model safety across diverse scenarios, edge cases, and potential failure modes. 
Build automated safety testing systems, generalize safety solutions into repeatable frameworks, and write efficient code for safety model pipelines and intervention systems. Maintain a user-oriented perspective by understanding safety needs from user perspectives, validating safety approaches through user research, and serving as a trusted advisor on AI safety matters 
Track advances in AI safety research, identify relevant state-of-the-art techniques, and adapt safety algorithms to drive innovation in production systems serving millions of users. 
Embody our culture and values. QualificationsRequired QualificationsBachelor’s Degree in Computer Science, or related technical discipline AND technical engineering experience with coding in languages including, but not limited to, C, C , C#, Java, JavaScript, or Python OR equivalent experience. Experience prompting and working with large language models. Experience writing production-quality Python code. Preferred QualificationsDemonstrated interest in Responsible AI.

Related Jobs

View all jobs

Data Scientist

Principal Data Scientist

Data Scientist Biologicals Research

Principal/Senior Data Scientist

Staff AI Agent Engineer (Machine Learning)

Staff Data Scientist

Subscribe to Future Tech Insights for the latest jobs & insights, direct to your inbox.

By subscribing, you agree to our privacy policy and terms of service.

Industry Insights

Discover insightful articles, industry insights, expert tips, and curated resources.

What Hiring Managers Look for First in AI Job Applications (UK Guide)

Hiring managers do not start by reading your CV line-by-line. They scan for signals. In AI roles especially, they are looking for proof that you can ship, learn fast, communicate clearly & work safely with data and systems. The best applications make those signals obvious in the first 10–20 seconds. This guide breaks down what hiring managers typically look for first in AI applications in the UK market, how to present it on your CV, LinkedIn & portfolio, and the most common reasons strong candidates get overlooked. Use it as a checklist to tighten your application before you click apply.

The Skills Gap in AI Jobs: What Universities Aren’t Teaching

Artificial intelligence is no longer a future concept. It is already reshaping how businesses operate, how decisions are made, and how entire industries compete. From finance and healthcare to retail, manufacturing, defence, and climate science, AI is embedded in critical systems across the UK economy. Yet despite unprecedented demand for AI talent, employers continue to report severe recruitment challenges. Vacancies remain open for months. Salaries rise year on year. Candidates with impressive academic credentials often fail technical interviews. At the heart of this disconnect lies a growing and uncomfortable truth: Universities are not fully preparing graduates for real-world AI jobs. This article explores the AI skills gap in depth—what is missing from many university programmes, why the gap persists, what employers actually want, and how jobseekers can bridge the divide to build a successful career in artificial intelligence.

AI Jobs for Career Switchers in Their 30s, 40s & 50s (UK Reality Check)

Changing career into artificial intelligence in your 30s, 40s or 50s is no longer unusual in the UK. It is happening quietly every day across fintech, healthcare, retail, manufacturing, government & professional services. But it is also surrounded by hype, fear & misinformation. This article is a realistic, UK-specific guide for career switchers who want the truth about AI jobs: what roles genuinely exist, what skills employers actually hire for, how long retraining really takes & whether age is a barrier (spoiler: not in the way people think). If you are considering a move into AI but want facts rather than Silicon Valley fantasy, this is for you.