
Postdoctoral Research Assistant in AI Threat Detection

University of Oxford
Oxford
Applications closed


We are seeking a full-time postdoctoral researcher to join the Machine Learning Research Group at the Department of Engineering Science (central Oxford). The post is funded by the Oxford Martin School and is fixed-term until 31 August 2026.

The successful candidate will work as part of a project team of researchers in the Departments of Engineering and Computer Science, supporting the Oxford Martin Programme on AI Threat Detection, and will engage with the wide local network of experts in AI, cybersecurity, and AI safety and governance. The Oxford Martin Programme on AI Threat Detection aims to fill a critical gap in AI security by developing advanced methods to detect attacks on AI systems.

You will be responsible for developing a test framework, including a library of target AI models and training datasets. You will also help research the spectrum of threat and vulnerability models for AI systems.

You should have a relevant PhD/DPhil, or be near completion (thesis submitted), together with relevant experience. You should also have previous experience with abnormality detection, or related machine learning techniques, for detecting unexpected patterns in large datasets.

Only online applications received before midday on 6 January 2025 can be considered.
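To illustrate the kind of abnormality-detection experience the advert asks for, here is a minimal sketch that flags unexpected points in a dataset using scikit-learn's Isolation Forest. This is only one representative technique chosen for illustration; the advert does not specify which methods the programme uses, and the synthetic data and parameters below are assumptions.

```python
# Sketch: flagging unexpected patterns with an Isolation Forest.
# Assumption: this is a generic illustration, not the programme's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" activity plus a few injected extreme outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))
X = np.vstack([normal, outliers])

# Fit an unsupervised detector; contamination is the expected anomaly rate.
model = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = model.predict(X)  # +1 = inlier, -1 = flagged anomaly

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} points flagged as anomalous")
```

In a threat-detection setting, the rows of `X` would instead be features extracted from model inputs or system telemetry, and flagged points would be candidates for closer inspection as possible attacks.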

