Be at the heart of actionFly remote-controlled drones into enemy territory to gather vital information.

Apply Now

Member of Technical Staff – Machine Learning, AI Safety

Microsoft
City of London
1 week ago
Create job alert
Overview

As a Member of Technical Staff - Machine Learning, AI Safety, you will develop and implement cutting-edge safety methodologies and mitigations for products that are served to millions of users through Copilot every day. Users turn to Copilot for support in all types of endeavors, making it critical that we ensure our AI systems behave safely and align with organizational values. You may be responsible for developing new methods to evaluate LLMs, experimenting with data collection techniques, implementing safety orchestration methods and mitigations, and training content classifiers to support the Copilot experience. We're looking for outstanding individuals with experience in machine learning or machine learning infrastructure who are also strong communicators and great teammates. The right candidate takes the initiative and enjoys building world-class, trustworthy AI experiences and products in a fast-paced environment.


Microsoft\'s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.


Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.


Responsibilities

  • Leverage expertise to uncover potential risks and develop novel mitigation strategies, including data mining, prompt engineering, LLM evaluation, and classifier training.
  • Create and implement comprehensive evaluation frameworks and red-teaming methodologies to assess model safety across diverse scenarios, edge cases, and potential failure modes.
  • Build automated safety testing systems, generalize safety solutions into repeatable frameworks, and write efficient code for safety model pipelines and intervention systems.
  • Maintain a user-oriented perspective by understanding safety needs from user perspectives, validating safety approaches through user research, and serving as a trusted advisor on AI safety matters.
  • Track advances in AI safety research, identify relevant state-of-the-art techniques, and adapt safety algorithms to drive innovation in production systems serving millions of users.
  • Embody our culture and values.

Qualifications

Required Qualifications



  • Bachelor\'s Degree in Computer Science, or related technical discipline AND technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
  • OR equivalent experience.
  • Experience prompting and working with large language models.
  • Experience writing production-quality Python code.

Preferred Qualifications



  • Demonstrated interest in Responsible AI.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.


Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.


#J-18808-Ljbffr

Related Jobs

View all jobs

Research Data Scientist

Research Data Scientist

Engineering Program Manager, Machine Learning

Business & Data Governance Associate - Data Science & AI

Business & Data Governance Associate - Data Science & AI

Business & Data Governance Associate - Data Science & AI

Subscribe to Future Tech Insights for the latest jobs & insights, direct to your inbox.

By subscribing, you agree to our privacy policy and terms of service.

Industry Insights

Discover insightful articles, industry insights, expert tips, and curated resources.

How to Write an AI CV that Beats ATS (UK examples)

Writing an AI CV for the UK market is about clarity, credibility, and alignment. Recruiters spend seconds scanning the top third of your CV, while Applicant Tracking Systems (ATS) check for relevant skills & recent impact. Your goal is to make both happy without gimmicks: plain structure, sharp evidence, and links that prove you can ship to production. This guide shows you exactly how to do that. You’ll get a clean CV anatomy, a phrase bank for measurable bullets, GitHub & portfolio tips, and three copy-ready UK examples (junior, mid, research). Paste the structure, replace the details, and tailor to each job ad.

AI Recruitment Trends 2025 (UK): What Job Seekers Must Know About Today’s Hiring Process

Summary: UK AI hiring has shifted from titles & puzzle rounds to skills, portfolios, evals, safety, governance & measurable business impact. This guide explains what’s changed, what to expect in interviews, and how to prepare—especially for LLM application, MLOps/platform, data science, AI product & safety roles. Who this is for: AI/ML engineers, LLM engineers, data scientists, MLOps/platform engineers, AI product managers, applied researchers & safety/governance specialists targeting roles in the UK.

Why AI Careers in the UK Are Becoming More Multidisciplinary

Artificial intelligence is no longer a single-discipline pursuit. In the UK, employers increasingly want talent that can code and communicate, model and manage risk, experiment and empathise. That shift is reshaping job descriptions, training pathways & career progression. AI is touching regulated sectors, sensitive user journeys & public services — so the work now sits at the crossroads of computer science, law, ethics, psychology, linguistics & design. This isn’t a buzzword-driven change. It’s happening because real systems are deployed in the wild where people have rights, needs, habits & constraints. As models move from lab demos to products that diagnose, advise, detect fraud, personalise education or generate media, teams must align performance with accountability, safety & usability. The UK’s maturing AI ecosystem — from startups to FTSE 100s, consultancies, the public sector & universities — is responding by hiring multidisciplinary teams who can anticipate social impact as confidently as they ship features. Below, we unpack the forces behind this change, spotlight five disciplines now fused with AI roles, show what it means for UK job-seekers & employers, and map practical steps to future-proof your CV.