GRC Tech Risk and Controls Lead – Vice President

JPMorgan Chase & Co.
London
8 months ago
Applications closed

In this role, the successful candidate will spearhead the delivery of GRC transformation and change initiatives, leveraging technical skills and deep understanding of controls to enable and manage the risks associated with emerging technologies. Your expertise will be instrumental in designing, implementing, and continuously enhancing risk management frameworks, ensuring our technology controls are effectively designed and adhere to regulatory, legal, and industry standards.

As a GRC Tech Risk and Controls Lead within our Cybersecurity and Technology Controls team, you will play a pivotal role in mitigating technology risks and upholding operational excellence. You will promote strategic Governance, Risk, and Controls initiatives and mature our risk governance frameworks for Artificial Intelligence. This role provides an opportunity to lead and execute complex, cross-functional GRC programs and initiatives, ensuring they align with business objectives.

Job responsibilities

- Lead and execute complex, cross-functional GRC programs and initiatives, ensuring they achieve strategic outcomes and align with business objectives
- Communicate program status, execution risks/issues, and key decisions to senior stakeholders, maintaining transparency and fostering informed decision-making
- Identify, manage, and mitigate delivery risks, proactively addressing potential roadblocks and implementing contingency plans to maintain program momentum
- Partner with key stakeholders to iterate on the design, implementation, and continuous operation and enhancement of technology risk and control frameworks, ensuring they meet industry standards and regulatory requirements
- Promote a culture of high performance, operational excellence, and innovation within the GRC team, driving continuous improvement in risk management practices

Required qualifications, capabilities, and skills

- 6+ years of experience or equivalent expertise in technical program management, cybersecurity, and/or technology controls roles
- Proficiency in cybersecurity domains, including policies and standards, risk and control assessments, and regulatory compliance
- Proficiency in AI, including agentic AI, and the associated regulatory, legal, and industry standards, such as NIST and the EU AI Act
- Experience in developing, implementing, and operating robust risk and control frameworks to mitigate technology failure and cybersecurity risks
- Ability to ensure decisions or constraints affecting program delivery are effectively escalated and addressed in a timely manner
- Strong verbal and written communication skills to translate technical risks into business impacts and engage with stakeholders at all levels
- Strong analytical skills to dissect complex challenges, conduct thorough root cause analysis, and develop effective solutions
- Proven ability to apply critical thinking and structured problem-solving techniques to address issues and drive continuous improvement in risk management practices

