Remote Machine Learning Compiler Engineer - Gensyn (Based in London)

Jobleads
Greater London
4 weeks ago
Applications closed


The world will be unrecognisable in 5 years.

Machine learning models are driving our cars, testing our eyesight, detecting our cancer, giving sight to the blind, giving speech to the mute, and dictating what we consume, enjoy, and think. These AI systems are already an integral part of our lives and will shape our future as a species.

Soon, we'll conjure unlimited content: from never-ending TV series (where we’re the main character) to personalised tutors that are infinitely patient and leave no student behind. We’ll augment our memories with foundation models—individually tailored to us through RLHF and connected directly to our thoughts via Brain-Machine Interfaces—blurring the lines between organic and machine intelligence and ushering in the next generation of human development.

This future demands immense, globally accessible, uncensorable computational power. Gensyn is the machine learning compute protocol that turns compute into an always-on commodity resource—outside of centralised control and as ubiquitous as electricity—accelerating AI progress and ensuring that this revolutionary technology is accessible to all of humanity through a free market.

Our Principles:

AUTONOMY

  • Don’t ask for permission - we have a constraint culture, not a permission culture.
  • Claim ownership of any work stream and set its goals/deadlines, rather than waiting to be assigned work or relying on job specs.
  • Push & pull context on your work rather than waiting for information from others and assuming people know what you’re doing.
  • No middle managers - we don’t (and will likely never) have middle managers.

FOCUS

  • Small team - misalignment and politics scale super-linearly with team size. Small protocol teams rival much larger traditional teams.
  • Thin protocol - build and design thinly.
  • Reject waste - guard the company’s time, rather than wasting it in meetings without clear purpose/focus, or bikeshedding.

REJECT MEDIOCRITY

  • Give direct feedback to everyone immediately rather than avoiding unpopularity, expecting things to improve naturally, or trading short-term pain for extreme long-term pain.
  • Embrace an extreme learning rate rather than assuming limits to your ability/knowledge.

Responsibilities:

Lower deep learning graphs from common frameworks (PyTorch, TensorFlow, Keras, etc.) down to an intermediate representation (IR) for training, with particular focus on ensuring reproducibility.
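
To give a concrete feel for this lowering step, here is a minimal sketch (the model, flags, and use of ONNX are purely illustrative and are not Gensyn's actual pipeline, which is Rust-based):

    import torch

    # Illustrative stand-in for an arbitrary framework graph.
    class TinyNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = torch.nn.Linear(16, 4)

        def forward(self, x):
            return torch.relu(self.fc(x))

    # Pin the obvious sources of nondeterminism before capturing the graph.
    torch.manual_seed(0)
    torch.use_deterministic_algorithms(True)

    model = TinyNet()
    example = torch.randn(1, 16)

    # Lower the framework-level graph into a portable IR (here, ONNX).
    torch.onnx.export(
        model,
        example,
        "tinynet.onnx",
        input_names=["x"],
        output_names=["y"],
        opset_version=17,
    )

The actual role concerns training graphs, so reproducibility goes well beyond what a single export like this captures.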

Write novel algorithms for transforming intermediate representations of compute graphs between different operator representations.
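
A minimal, self-contained sketch of what such an operator-level rewrite looks like (the toy IR and node names below are invented for illustration; they are not Gensyn's IR):

    from dataclasses import dataclass, field

    # A deliberately tiny, made-up IR: a graph is an ordered list of nodes.
    @dataclass
    class Node:
        op: str
        inputs: list[str]
        output: str

    @dataclass
    class Graph:
        nodes: list[Node] = field(default_factory=list)

    def decompose_linear(graph: Graph) -> Graph:
        """Rewrite fused 'linear' nodes into 'matmul' + 'add' primitives."""
        out = Graph()
        for n in graph.nodes:
            if n.op == "linear":  # linear(x, w, b) -> add(matmul(x, w), b)
                tmp = n.output + "_mm"
                out.nodes.append(Node("matmul", n.inputs[:2], tmp))
                out.nodes.append(Node("add", [tmp, n.inputs[2]], n.output))
            else:
                out.nodes.append(n)
        return out

    g = Graph([Node("linear", ["x", "w", "b"], "y"), Node("relu", ["y"], "z")])
    print(decompose_linear(g))

Real passes also have to preserve shapes, dtypes, and gradient semantics while rewriting.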

Ownership of two of the following compiler areas:

  • Front-end - handle the handshake between common Deep Learning frameworks and Gensyn's internal IR. Write transformation passes in ONNX to prepare the IR for middle-end consumption (a toy pass of this kind is sketched after this list).
  • Middle-end - write compiler passes for training-based compute graphs, integrate reproducible Deep Learning kernels into the code generation stage, and debug compilation passes and transformations as you go.
  • Back-end - lower IR from middle-end to GPU target machine code.
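
To make the front-end bullet concrete, here is a toy ONNX graph pass of the kind referred to above (a generic simplification, not Gensyn's actual pass pipeline; it ignores corner cases such as Identity nodes feeding graph outputs directly):

    import onnx

    def eliminate_identity(model: onnx.ModelProto) -> onnx.ModelProto:
        """Toy pass: drop Identity nodes and rewire their consumers in place."""
        graph = model.graph
        # Map each Identity output to the value it merely forwards.
        alias = {n.output[0]: n.input[0] for n in graph.node if n.op_type == "Identity"}
        # Rewire every consumer, following chains of identities.
        for node in graph.node:
            for i, name in enumerate(node.input):
                while name in alias:
                    name = alias[name]
                node.input[i] = name
        # Remove the now-dead Identity nodes (reverse order keeps indices valid).
        for i in reversed(range(len(graph.node))):
            if graph.node[i].op_type == "Identity":
                del graph.node[i]
        return model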

Minimum Requirements:

Compiler knowledge—a base-level understanding of a traditional compiler (LLVM, GCC) and of the graph traversals required to write code for such a compiler.
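
The graph-traversal part amounts to being comfortable writing walks like the post-order visit below, which is how passes typically process a compute DAG so that every node is seen after its operands (a generic sketch, not tied to any particular compiler):

    def post_order(graph: dict[str, list[str]], roots: list[str]) -> list[str]:
        """Return DAG nodes in post-order: operands before their users."""
        order, visited = [], set()

        def visit(node: str) -> None:
            if node in visited:
                return
            visited.add(node)
            for operand in graph.get(node, []):
                visit(operand)
            order.append(node)

        for root in roots:
            visit(root)
        return order

    # Edges point from a node to its operands: z = relu(matmul(x, w)).
    g = {"relu": ["matmul"], "matmul": ["x", "w"]}
    print(post_order(g, ["relu"]))   # ['x', 'w', 'matmul', 'relu']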

Solid software engineering skills—practicing software engineer, having significantly contributed to/shipped production code.

Understanding of parallel programming—specifically as it pertains to GPUs.

Strong willingness to learn Rust—as a Rust-by-default company, we require everyone to learn Rust so that they can work across the entire codebase.

Ability to operate on:

  • High-Level IR/Clang/LLVM up to middle-end optimization; and/or
  • Low-Level IR/LLVM targets/target-specific optimizations—particularly GPU-specific optimizations (a small illustration of this abstraction level follows this list).
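
For a feel of the abstraction level meant by operating on LLVM IR, the snippet below emits a trivial function's IR via llvmlite (purely illustrative; production work at this layer would more likely be in C++ or Rust against LLVM/MLIR directly):

    from llvmlite import ir

    # Build: float fma(float a, float b, float c) { return a * b + c; }
    module = ir.Module(name="toy")
    fnty = ir.FunctionType(ir.FloatType(), [ir.FloatType()] * 3)
    func = ir.Function(module, fnty, name="fma")
    a, b, c = func.args
    builder = ir.IRBuilder(func.append_basic_block(name="entry"))
    builder.ret(builder.fadd(builder.fmul(a, b), c))
    print(module)   # textual LLVM IR, ready for middle-end passes or a target backend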

Highly self-motivated with excellent verbal and written communication skills.

Comfortable working in an applied research environment—with extremely high autonomy.

Nice to haves:

Architecture understanding—full understanding of a computer architecture specialized for training NN graphs (Intel Xeon CPU, GPUs, TPUs, custom accelerators).

Rust experience—systems level programming experience in Rust.

Open-source contributions to Compiler Stacks.

Compilation understanding—strong understanding of compilation with regard to one or more high-performance computer architectures (CPU, GPU, custom accelerator, or a heterogeneous system of all such components).

Proven technical foundation—in CPU and GPU architectures, numeric libraries, and modular software design.

Deep Learning understanding—both in terms of recent architecture trends and the fundamentals of how training works, plus experience with machine learning frameworks and their internals (e.g., PyTorch, TensorFlow, scikit-learn).
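
One concrete form this framework-internals exposure takes is being able to pull a framework's own graph representation out of a model, e.g. with torch.fx (a generic illustration, not a requirement specific to this posting):

    import torch
    import torch.fx

    class Block(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(8, 8)

        def forward(self, x):
            return torch.relu(self.lin(x)) + x

    # torch.fx exposes the framework's own graph IR for a module.
    traced = torch.fx.symbolic_trace(Block())
    for node in traced.graph.nodes:
        print(node.op, node.target)   # placeholder / call_module / call_function / output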

Exposure to Deep Learning Compiler frameworks—e.g., TVM, MLIR, TensorComprehensions, Triton, JAX.
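
As one example of what such exposure looks like in practice, JAX will show you the IR its tracing front-end hands to XLA (again purely illustrative; any of the listed frameworks would do):

    import jax
    import jax.numpy as jnp

    def layer(x, w, b):
        return jax.nn.relu(x @ w + b)

    # make_jaxpr prints the traced IR that XLA then compiles for the target device.
    x, w, b = jnp.ones((1, 16)), jnp.ones((16, 4)), jnp.ones((4,))
    print(jax.make_jaxpr(layer)(x, w, b))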

Kernel Experience—experience writing and optimizing highly-performant GPU kernels.
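
At its simplest, the kernel work meant here looks like the classic tiled vector-add below, written with Triton for brevity (illustrative only; the role may equally involve hand-written CUDA or lower-level code):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
        # Each program instance handles one BLOCK-sized tile of the vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = out.numel()
        grid = (triton.cdiv(n, 1024),)
        add_kernel[grid](x, y, out, n, BLOCK=1024)
        return out

    # Requires a CUDA device; sizes are illustrative.
    if torch.cuda.is_available():
        a = torch.randn(4096, device="cuda")
        b = torch.randn(4096, device="cuda")
        assert torch.allclose(add(a, b), a + b)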

Note: if you fall outside these criteria, we still encourage you to apply, as there may be openings at higher or lower levels than listed above.

Compensation / Benefits:

Competitive salary + share of equity and token pool.

Fully remote work—we hire between the West Coast (PT) and Central Europe (CET) time zones.

Four all-expenses-paid company retreats around the world each year.

Whatever equipment you need.

Paid sick leave.

Private health, vision, and dental insurance—including spouse/dependents.



