Founding Machine Learning Engineer

Bjak
City of London
2 months ago
Applications closed

Transform language models into real-world, high-impact product experiences.

A1 is a self-funded AI group, operating in full stealth. We’re building a new global consumer AI application focused on an important but underexplored use case.


You will shape the core technical direction of A1 - model selection, training strategy, infrastructure, and long-term architecture. This is a founding technical role: your decisions will define our model stack, our data strategy, and our product capabilities for years to come.


You won’t just fine-tune models - you’ll design systems: training pipelines, evaluation frameworks, inference stacks, and scalable deployment architectures. You will have full autonomy to experiment with frontier models (LLaMA, Mistral, Qwen, Claude-compatible architectures) and build new approaches where existing ones fall short.


Why This Role Matters
  • You are creating the intelligence layer of A1’s first product, defining how it understands, reasons, and interacts with users.
  • Your decisions shape our entire technical foundation - model architectures, training pipelines, inference systems, and long-term scalability.
  • You will push beyond typical chatbot use cases, working on a problem space that requires original thinking, experimentation, and contrarian insight.
  • You influence not just how the product works, but what it becomes, helping steer the direction of our earliest use cases.
  • You are joining as a founding builder, setting engineering standards, contributing to culture, and helping create one of the most meaningful AI applications of this wave.


What You’ll Do
  • Build end-to-end training pipelines: data → training → eval → inference
  • Design new model architectures or adapt open-source frontier models
  • Fine-tune models using state-of-the-art methods (LoRA/QLoRA, SFT, DPO, distillation)
  • Architect scalable inference systems using vLLM, TensorRT-LLM, or DeepSpeed
  • Build data systems for high-quality synthetic and real-world training data
  • Develop alignment, safety, and guardrail strategies
  • Design evaluation frameworks across performance, robustness, safety, and bias
  • Own deployment: GPU optimization, latency reduction, scaling policies
  • Shape early product direction, experiment with new use cases, and build AI-powered experiences from zero
  • Explore frontier techniques: retrieval-augmented training, mixture-of-experts, distillation, multi-agent orchestration, multimodal models
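For candidates newer to the parameter-efficient fine-tuning methods named above, the core LoRA idea can be sketched in a few lines of plain Python. This is a toy illustration only (small hand-written matrices, no ML framework, and not a description of A1’s actual stack): a frozen base weight W is augmented by a trained low-rank product B·A, scaled by alpha/r.

```python
# Toy sketch of the LoRA update rule: W_eff = W + (alpha / r) * (B @ A).
# Matrices are plain lists of lists; all names and values here are
# illustrative, not taken from any real model.

def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def lora_effective_weight(W, A, B, alpha):
    """Merge a LoRA update into a frozen base weight W (d_out x d_in).

    B is d_out x r and A is r x d_in; the low-rank update B @ A is
    scaled by alpha / r before being added to W.
    """
    r = len(A)                      # rank of the adapter
    BA = matmul(B, A)               # d_out x d_in update
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Frozen 2x2 base weight plus a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]                  # d_out x r
A = [[0.5, 0.5]]                    # r x d_in
W_eff = lora_effective_weight(W, A, B, alpha=1.0)
```

In a real training run only A and B receive gradients, which is why the method is memory-efficient: the trainable parameter count scales with the rank r rather than with the full weight matrix.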


What It’s Like to Work Here
  • You take ownership - you solve problems end-to-end rather than wait for perfect instructions
  • You learn through action - prototype → test → iterate → ship
  • You’re calm in ambiguity - zero-to-one building energises you
  • You bias toward speed with discipline - V1 now > perfect later
  • You see failures and feedback as essential to growth
  • You work with humility, curiosity, and a founder’s mindset
  • You raise the bar for yourself and your teammates every day


Requirements
  • Strong background in deep learning and transformer architectures
  • Hands-on experience training or fine-tuning large models (LLMs or vision models)
  • Proficiency with PyTorch, JAX, or TensorFlow
  • Experience with distributed training frameworks (DeepSpeed, FSDP, Megatron, ZeRO, Ray)
  • Strong software engineering skills - writing robust, production-grade systems
  • Experience with GPU optimization: memory efficiency, quantization, mixed precision
  • Comfortable owning ambiguous, zero-to-one technical problems end-to-end
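The GPU-optimization line above mentions quantization. As a framework-free toy illustration (production stacks such as bitsandbytes or TensorRT are far more involved, and nothing here reflects A1’s actual systems), symmetric int8 quantization maps each float weight onto integer steps of size max|w|/127:

```python
# Toy symmetric int8 quantization: choose one scale per tensor so that
# the largest-magnitude value maps to +/-127, then round and clamp.
# All values are illustrative.

def quantize_int8(values):
    """Map floats to int8 codes with a single symmetric scale."""
    max_abs = max(abs(v) for v in values) or 1.0
    q = [max(-127, min(127, round(v * 127.0 / max_abs))) for v in values]
    scale = max_abs / 127.0
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes."""
    return [qi * scale for qi in q]

weights = [0.12, -0.5, 0.33, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-trip error per element is bounded by half a step (scale / 2).
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
```

The practical payoff is that each weight drops from 4 bytes (fp32) to 1 byte, at the cost of a bounded rounding error per element - the same trade-off that mixed precision and 4-bit schemes like QLoRA push further.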


Nice to Have
  • Experience with LLM inference frameworks (vLLM, TensorRT-LLM, FasterTransformer)
  • Contributions to open-source ML libraries
  • Background in scientific computing, compilers, or GPU kernels
  • Experience with RLHF pipelines (PPO, DPO, ORPO)
  • Experience training or deploying multimodal or diffusion models
  • Experience in large-scale data processing (Apache Arrow, Spark, Ray)
  • Prior work in a research lab (e.g. Google Brain, DeepMind, FAIR, Anthropic, OpenAI)


What You’ll Get
  • Extreme ownership and autonomy from day one - you define and build key model systems
  • Founding-level influence over technical direction, model architecture, and product strategy
  • Remote-first flexibility
  • High-impact scope - your work becomes core infrastructure of a global consumer AI product
  • Competitive compensation and performance-based bonuses
  • Backing of a profitable US$2B group, with the speed of a startup
  • Insurance coverage, flexible time off, and global travel insurance
  • Opportunity to shape a new global AI product from zero
  • A small, senior, high-performance team where you collaborate directly with founders and influence every major decision


Our Team & Culture

We operate as a small, senior, high-performance team. We value clarity, speed, craftsmanship, and relentless ownership. We behave like founders - we build, ship, iterate, and hold ourselves to a high technical bar.


If you value excellence, enjoy building real systems, and want to be part of a small team creating something globally impactful, you’ll thrive here.


About A1

A1 is a self-funded, independent AI group backed by BJAK, focused on building a new consumer AI product with global impact. We’re assembling a small, elite team of ML and engineering builders who want to work on meaningful, high-impact problems.

