Engineer the Quantum RevolutionYour expertise can help us shape the future of quantum computing at Oxford Ionics.

View Open Roles

Member of Technical Staff – Machine Learning, AI Safety

Microsoft
London
4 weeks ago
Create job alert

Overview

As a Member of Technical Staff – Machine Learning, AI Safety, you will develop and implement cutting-edge safety methodologies and mitigations for products that are served to millions of users through Copilot every day. Users turn to Copilot for support in all types of endeavors, making it critical that we ensure our AI systems behave safely and align with organizational values. You may be responsible for developing new methods to evaluate LLMs, experimenting with data collection techniques, implementing safety orchestration methods and mitigations, and training content classifiers to support the Copilot experience. We’re looking for outstanding individuals with experience in machine learning or machine learning infrastructure who are also strong communicators and great teammates. The right candidate takes the initiative and enjoys building world-class, trustworthy AI experiences and products in a fast-paced environment. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. ResponsibilitiesLeverage expertise to uncover potential risks and develop novel mitigation strategies, including data mining, prompt engineering, LLM evaluation, and classifier training. Create and implement comprehensive evaluation frameworks and red-teaming methodologies to assess model safety across diverse scenarios, edge cases, and potential failure modes. 
Build automated safety testing systems, generalize safety solutions into repeatable frameworks, and write efficient code for safety model pipelines and intervention systems. Maintain a user-oriented perspective by understanding safety needs from user perspectives, validating safety approaches through user research, and serving as a trusted advisor on AI safety matters 
Track advances in AI safety research, identify relevant state-of-the-art techniques, and adapt safety algorithms to drive innovation in production systems serving millions of users. 
Embody our culture and values. QualificationsRequired QualificationsBachelor’s Degree in Computer Science, or related technical discipline AND technical engineering experience with coding in languages including, but not limited to, C, C , C#, Java, JavaScript, or Python OR equivalent experience. Experience prompting and working with large language models. Experience writing production-quality Python code. Preferred QualificationsDemonstrated interest in Responsible AI.

Related Jobs

View all jobs

Machine Learning Applied Scientist (Machine Learning Observability & Governance)

Machine Learning Applied Scientist (Machine Learning Observability & Governance)

Senior Staff Machine Learning Scientist, Operations

Data Scientist (AML)

Data Scientist (AML)

Data Scientist (AML)

Subscribe to Future Tech Insights for the latest jobs & insights, direct to your inbox.

By subscribing, you agree to our privacy policy and terms of service.

Industry Insights

Discover insightful articles, industry insights, expert tips, and curated resources.

Breaking Into Generative AI: A Beginner's Complete Guide to Starting Your Career in 2025/26

Are you fascinated by AI tools like ChatGPT, DALL-E, or Midjourney but unsure how to turn that interest into a career? You're not alone. The generative AI revolution has created thousands of new job opportunities across the UK, and many don't require a computer science degree or years of coding experience. Whether you're a recent graduate, considering a career change, or simply curious about this exciting field, this comprehensive guide will show you exactly how to break into generative AI jobs.

Pre-Employment Checks for AI Jobs: DBS, References & Right-to-Work and more Explained

The artificial intelligence sector in the UK is experiencing unprecedented growth, with companies across industries seeking talented professionals to drive digital transformation. However, securing a position in this competitive field involves more than just demonstrating technical expertise. Pre-employment checks have become an integral part of the hiring process for AI jobs, ensuring organisations maintain security, compliance, and trust whilst building their teams. Whether you're a data scientist, machine learning engineer, AI researcher, or technology consultant, understanding the pre-employment screening process is crucial for navigating your career journey successfully. This comprehensive guide explores the various types of background checks you may encounter when applying for AI positions in the UK, from basic right-to-work verification to enhanced security clearance requirements.

Why Now Is the Perfect Time to Retrain and Launch Your Career in Artificial Intelligence

The artificial intelligence revolution isn't coming—it's here. From the bustling tech hubs of London and Manchester to the emerging AI clusters in Edinburgh and Cambridge, the UK is experiencing an unprecedented demand for skilled AI professionals. If you've been considering a career change or looking to future-proof your professional trajectory, there has never been a better time to retrain and enter the field of artificial intelligence.