Research Scientist, Learning & Cognitive Outcomes

London, United Kingdom

Salary
£40,000 – £80,000 pa

Job Type
Permanent
Work Pattern
Full-time
Work Location
Hybrid
Seniority
Mid
Education
Degree
Posted
4 May 2026 (Today)

About the Role

As a Research Scientist focused on Learning & Cognitive Outcomes, you will help build the scientific and evaluation infrastructure needed to understand how AI systems affect learning, cognition, and capability development over time.

We are looking for someone who can design rigorous studies, develop scalable evaluation methods, and help answer a central question: do AI systems help people become more capable over time? This means going beyond engagement, satisfaction, or task completion to measure whether users develop better reasoning, stronger metacognition, greater autonomy, deeper understanding, improved transfer, and more durable skills.

This role sits at the intersection of learning science, cognitive science, experimental design, LLM evaluation, and applied product research. You will help develop cognitive outcome measures, design and manage RCTs and field studies, build classifiers and graders, guide external research partners, and translate findings into model and product improvements.

The initial focus of this work will include young users and education settings, while contributing to a broader research agenda on how AI affects cognition and capability development across populations. You should be comfortable working with schools, universities, education systems, research organizations, and other external partners, while also collaborating closely with internal product, research, engineering, data science, and policy teams.

This is an applied, empirical role. It is not a traditional academic research role optimized primarily for publication, nor is it a curriculum design or production engineering role. Success means building evidence systems that are scientifically credible, operationally useful, and influential in how models and products are developed.

A strong candidate will be able to move quickly in ambiguous environments, make pragmatic scientific tradeoffs, and maintain high standards while working with messy real-world data, external partners, and fast-moving AI systems.

We expect you to:

  • Have strong grounding in learning science, cognitive science, educational psychology, behavioral science, HCI, or a related empirical field, with a clear understanding of how people acquire, retain, transfer, and apply knowledge and skills.

  • Have experience designing and executing rigorous empirical research, including RCTs, field experiments, large-scale behavioral studies, or other causal evaluation methods.

  • Be able to design studies that measure meaningful cognitive and learning outcomes, not just engagement, preference, completion, or short-term performance.

  • Build and validate evaluation systems for learning and cognitive outcomes, including rubrics, classifiers, graders, benchmarks, behavioral metrics, and model-based evaluators.

  • Develop methods for detecting both positive and negative effects of AI use, including improved reasoning, better metacognition, durable learning, transfer, overreliance, shallow fluency, answer-copying, reduced agency, or unproductive cognitive offloading.

  • Be technically fluent enough to work with data directly, prototype analyses, inspect model outputs, reason about classifier and grader performance, and collaborate effectively with data scientists, engineers, and research teams.

  • Understand the practical strengths and limitations of LLM-based evaluation methods, including model-as-judge systems, rubric design, validation, calibration, inter-rater reliability, and precision/recall tradeoffs.

  • Help design, launch, and manage external RCTs and field studies with partners such as schools, universities, education systems, research groups, vendors, and other institutions.

  • Guide external research partners on study design, protocol quality, measurement strategy, implementation fidelity, analysis plans, and interpretation of results.

  • Operate independently in ambiguous environments, turning broad research goals into concrete study designs, execution plans, evaluation artefacts, and decision-relevant outputs.

  • Communicate clearly with technical, scientific, partner, and executive audiences, including through internal memos, research reports, partner guidance, protocols, presentations, and external publications.

  • Translate research findings into actionable recommendations for model behavior, product design, evaluation standards, and future research priorities.

  • Move quickly while maintaining scientific rigor, especially in real-world settings with imperfect data, operational constraints, and multiple stakeholders.

  • Represent OpenAI credibly and responsibly in partner-facing research conversations, while knowing when to escalate scientific, operational, ethical, or strategic judgement calls.

  • Be excited about OpenAI’s approach to research and deployment, especially the opportunity to study and improve the effects of AI systems on human capability at scale.

Nice to have:

  • Experience working in frontier AI, big tech research, edtech, learning platforms, tutoring systems, assessment, or other technically sophisticated product environments.

  • Experience building or evaluating LLM-based graders, classifiers, model-as-judge systems, benchmark datasets, automated assessment tools, or behavioral measurement pipelines.

  • Familiarity with outcomes such as reasoning quality, transfer, metacognition, self-regulated learning, motivation, autonomy, cognitive offloading, overreliance, help-seeking, feedback use, or durable skill acquisition.

  • Experience running multi-site studies or managing external research programmes with schools, universities, governments, ministries, labs, institutional partners, or large-scale vendors.

  • Familiarity with psychometrics, measurement validation, causal inference, longitudinal study design, mixed-methods research, or large-scale behavioral data analysis.

  • Experience with research involving young users, educational institutions, consent processes, privacy constraints, ethics review, or other responsible research practices in sensitive settings.

  • A track record of translating research into product, model, policy, or organisational decisions.

  • Publications or public research outputs in learning science, cognitive science, HCI, behavioral science, education research, AI evaluation, computational social science, or related fields.

  • Experience working cross-functionally with product managers, engineers, data scientists, research scientists, policy teams, legal teams, or communications teams.

  • Ability to balance scientific ambition with practical execution, especially when working in fast-moving environments where perfect study conditions are rarely available.

  • Evidence of high ownership, sound judgement, and ability to manage multiple complex research workstreams without heavy oversight.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
