Machine Learning Evaluation Engineer

Marker
City of London
2 months ago
Applications closed


AI Evaluation, Research Methods, Python, LLM Observability

Salary range

£60,000-£80,000 p.a. + equity, depending on experience (up to £100,000 for candidates with exceptional relevant experience)

Apply

Email us at and tell us a little bit about yourself and your interest in the future of writing, along with your CV or a link to your CV site.

What is Marker?

Marker is an AI-native Word Processor – a reimagining of Google Docs and Microsoft Word.

Join us in building the next generation of agentic AI assistants supporting serious writers in their work.

We are a small, ambitious company using cutting-edge technology to give everybody writing superpowers.

What you'll do at Marker

We are looking for someone with a couple of years' experience in academia or industry who can help us bring rigour and insight to our AI systems through evaluation, research, and observability. You'll work directly with Ryan Bowman (CPO) to help us understand and improve how our AI assists writers. Here are some examples of areas you will be working in:

  • Design and implement evaluation frameworks for complex, subjective AI outputs (like writing feedback that's meant to inspire rather than just correct)
  • Build flexible evaluation pipelines that can assess quality across multiple dimensions - from human preference to actual writing improvement
  • Research and prototype new evaluation methodologies for creative and subjective AI tasks
  • Collaborate with our engineering team to integrate evaluation insights into our development process
  • Help define what "quality" means for different AI outputs and create metrics that actually matter for our users
  • Work on challenging problems like: "How do we automatically evaluate whether an AI comment successfully encourages thoughtful revision?"
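To give a flavour of this kind of work: a multi-dimensional evaluation pipeline for writing feedback could be sketched roughly as below. This is a minimal illustration, not Marker's actual system; the rubric dimensions, the `evaluate_comment` helper, and the keyword-based stand-in judge are all hypothetical (a real pipeline would replace the stand-in with an LLM judge prompted against the rubric).

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical rubric dimensions for judging an AI writing comment.
RUBRIC = ["specificity", "encouragement", "actionability"]


@dataclass
class EvalResult:
    comment: str
    scores: Dict[str, int]  # dimension -> rating on a 1-5 scale


def evaluate_comment(comment: str, judge: Callable[[str, str], int]) -> EvalResult:
    """Score one AI-generated comment along each rubric dimension."""
    return EvalResult(comment, {dim: judge(comment, dim) for dim in RUBRIC})


def keyword_judge(comment: str, dimension: str) -> int:
    # Stand-in judge for illustration only: looks for a cue word per
    # dimension. In practice this would be an LLM call with a rubric prompt.
    cues = {"specificity": "paragraph", "encouragement": "strong", "actionability": "try"}
    return 5 if cues[dimension] in comment.lower() else 2


result = evaluate_comment(
    "Strong opening; try tightening the second paragraph.", keyword_judge
)
```

The design choice worth noting is the pluggable `judge` callable: swapping the stand-in for a human-preference dataset or an LLM judge leaves the rest of the pipeline unchanged, which makes it easy to compare judging strategies on the same set of comments.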

What we can offer

  • A calm, human-friendly work environment among kind and experienced professionals
  • Fun, creative, novel, and interesting technical work at the intersection of AI research and product development
  • An opportunity to work with and learn about the latest advancements in AI evaluation and language models
  • Direct collaboration with leadership to shape how we understand and improve our AI systems
  • As much responsibility and as many growth opportunities as you want to take on

Are you a good fit for this role?

In order to be successful in this role, you will recognise yourself in the following:

  • You have experience with AI/ML evaluation methodologies and can speak the language of AI research
  • You've worked hands-on with language models and understand the challenges of evaluating subjective, creative outputs
  • You are a self-starter willing to work independently and at speed - we imagine a 2-week experiment cadence at most.
  • You are familiar with and have worked on related technical systems (evaluation pipelines, data collection tools) but don't need to be a full-stack engineer. You won't be expected to build these alone!
  • You think critically about what metrics actually matter and aren't satisfied with vanity metrics
  • You're comfortable working with ambiguous problems where the "right answer" isn't obvious
  • You have some programming experience (Python preferred) and can work independently on technical projects
  • You're interested in the intersection of AI capabilities and human creativity

An exceptional candidate for this role would be able to demonstrate some of the following:

  • Experience building evaluation systems for generative AI in production environments
  • Knowledge of TypeScript and ability to integrate with our existing systems
  • Background in human-computer interaction, computational creativity, or writing research
  • Experience with A/B testing, statistical analysis, and experimental design
  • Familiarity with modern AI observability and monitoring tools
  • Published research or deep interest in AI evaluation methodologies
  • Interest in writing (fiction, non-fiction, essays)

However, you are NOT expected to:

  • Be a senior software engineer - we're looking for someone who can build evaluation systems, not architect our entire backend
  • Have solved every evaluation problem before - this is cutting-edge work and we're figuring it out together
  • Be experienced with every library in our stack from day one - you'll work closely with Ryan and our engineering team
  • Have a specific degree - we value practical experience and research ability over credentials

Our stack

You'll be working with the following technologies:

  • Our AI engine uses a range of models, including self-hosted and fine-tuned open source models, as well as the latest reasoning models from Anthropic and OpenAI
  • Evaluation and research tools built primarily in Python, with integration into our TypeScript infrastructure
  • Our agentic AI execution platform is written in TypeScript, hosted on Cloudflare Workers
  • Standard ML tooling: various evaluation frameworks, data analysis tools, and monitoring systems
  • Our text editor frontend is a web application built with React, TypeScript and ProseMirror

Apply now!

Interested? Email us at with your CV (or a link to your CV site). Tell us a little bit about yourself and why you'd like to work at Marker!

Please note that this role is currently only available based in our London hub, and at this time we are not able to sponsor work visas in the UK.


