Threat Detection & Analysis Engineer

Bumble
London

Bumble is looking for a Threat Detection and Analysis Engineer to join our team and play a key role in fulfilling our mission to create a world where all relationships are healthy and equitable. Concretely, this means you will be analysing data to prevent unintended uses of our products, helping us provide a safe and engaging experience for our users and improve the way Bumble operates.

As a member of the Bumble Trust Engineering team, you are the first line of detection for bad actors using Bumble in unwanted and unexpected ways. As Bumble’s customer base and suite of products grow, protecting customers from threats and scams becomes an ever more important problem. The Trust Engineering team develops and uses tooling to tease out high-quality signal from the noise and detect unwanted behaviours. Your work directly impacts customers.

Trust & Safety

We are part of the Trust & Safety Engineering group, a cross-functional team of 40+ engineers, scientists and machine learning professionals that helps grow kind, healthy and equitable connections by designing and operationalising the safest and most trusted connections platform in the industry. We partner with the wider business to create and share tooling, knowledge, and best practices around Trust and Safety technology, while undertaking special product and development initiatives designed to improve both the actual and the perceived safety of members across our products. We’re recognised across the wider industry as experts in our field, and give back to the community through thought leadership, information sharing, and open-sourcing.

What you will be doing:

  • Analyse data across Bumble Inc products and implement logic for proactive discovery and prevention of threat actors and unwanted activity (a minimal sketch of this kind of detection logic follows this list)
  • Develop new analytics and dashboards to visualise and surface data for analysis, reporting, and planning
  • Develop and execute code to: modify data tables, automate database queries, surface and analyse logs, perform password resets and sanction users
  • Investigate complex abuse cases, working cross-functionally from the initiation of a case through to providing a solution
  • Surface unwanted activity in the customer space, both proactively and reactively, using relevant log sources
  • Utilise existing data and tools to hunt for threats in our environment whilst advocating for changes to our ecosystem to continuously update tooling
  • Collaborate within the team to surface requirements for trust capabilities
  • Work with Bumble Inc products to understand their functionality, and where bad actors could take advantage of it, to support improved detection tooling
  • Expose and present measurable data to internal and external partners to improve Bumble’s ability to detect future threats
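
As an illustration of the detection logic described in the first bullet above, here is a minimal Python sketch that flags accounts sending messages at an unusually high rate within a fixed time window. The event schema, action name, window length and threshold are hypothetical stand-ins; in practice the events would be read from a log source or data warehouse rather than an in-memory list.

    from collections import Counter
    from datetime import datetime, timedelta

    # Hypothetical event records; in production these would be pulled from
    # a log source or data warehouse rather than hard-coded.
    events = [
        {"user_id": 1, "action": "message_sent", "ts": datetime(2024, 5, 1, 12, 0, 0)},
        {"user_id": 1, "action": "message_sent", "ts": datetime(2024, 5, 1, 12, 0, 2)},
        {"user_id": 2, "action": "message_sent", "ts": datetime(2024, 5, 1, 12, 5, 0)},
    ]

    WINDOW = timedelta(minutes=10)   # illustrative tumbling-window length
    THRESHOLD = 50                   # illustrative messages-per-window limit

    def flag_high_volume_senders(events, window=WINDOW, threshold=THRESHOLD):
        """Return user_ids that sent more than `threshold` messages in any
        single tumbling window of length `window`."""
        buckets = Counter()
        for event in events:
            if event["action"] != "message_sent":
                continue
            # Assign the event to a fixed (tumbling) time bucket.
            bucket = int(event["ts"].timestamp() // window.total_seconds())
            buckets[(event["user_id"], bucket)] += 1
        return {user for (user, _), count in buckets.items() if count > threshold}

    print(flag_high_volume_senders(events))  # empty set for the toy data above

A production rule would sit alongside many others, be tuned against labelled abuse cases, and feed sanctions or manual review rather than printing to stdout.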

What you should have:

  • Proficiency working with data technologies that power analytics (e.g. MySQL, Kafka, Presto, Pinot, or similar technologies); a small query sketch follows this list
  • Practical experience mining and cleaning large, unstructured datasets, extracting meaningful and actionable insights, and presenting results to audiences from a variety of backgrounds
  • Experience with a high-level programming language such as Python or Go
  • Bachelor's degree in Computer Science, Engineering, Mathematics or a related field, or equivalent training, fellowship, or work experience, is required
  • Experience understanding bad actors, threat intelligence, and abuse; involvement in remediating abuse or security-related incidents is a plus
  • Experience with Linux, Kibana, and engineering fundamentals at scale such as AWS, Chef, and Terraform is a plus
  • Experience with behavioural analytics, preferably in a trust, security, or privacy setting with a focus on customer-facing environments, is a plus
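
As a small, self-contained illustration of the analytics work referenced in the first requirement above, the sketch below runs an aggregation query over a hypothetical table of user reports. sqlite3 stands in for the production stores named in the listing (MySQL, Presto, Pinot), and the table and column names are invented for the example.

    import sqlite3

    # sqlite3 is used only so the sketch runs self-contained; the same
    # GROUP BY / HAVING pattern applies to MySQL or Presto.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE report_events (reporter_id INTEGER, reported_id INTEGER, reason TEXT, day TEXT)"
    )
    conn.executemany(
        "INSERT INTO report_events VALUES (?, ?, ?, ?)",
        [
            (10, 99, "spam", "2024-05-01"),
            (11, 99, "spam", "2024-05-01"),
            (12, 99, "scam", "2024-05-02"),
            (13, 42, "spam", "2024-05-02"),
        ],
    )

    # Surface accounts that attract repeated reports from distinct users,
    # worst offenders first.
    query = """
        SELECT reported_id,
               COUNT(*)                    AS total_reports,
               COUNT(DISTINCT reporter_id) AS distinct_reporters
        FROM report_events
        GROUP BY reported_id
        HAVING COUNT(DISTINCT reporter_id) >= 2
        ORDER BY total_reports DESC
    """
    for row in conn.execute(query):
        print(row)  # (99, 3, 3) for the toy data above

In a real pipeline, the same aggregation would run against the warehouse and feed dashboards, reporting, or automated review queues.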
