VP-Artificial Intelligence (AI) Risk Management

LGBT Great
City of London
2 months ago
Applications closed


Skills and Competencies

  • 10+ years of experience in risk management, the digital economy, AI/ML, and blockchain, with a concentration in technology governance, risk and control self-assessment (RCSA), identification and evaluation of control measures, and compliance in financial services
  • Hands-on experience developing AI and GenAI-powered applications
  • Deep expertise in AI model lifecycle governance (validation, transparency, explainability), combined with a track record of assessing and managing risk appetite in emerging technology domains and practical experience with DeFi and blockchain operational risk
  • Broad-based technology experience at substantial scale and complexity in a global, highly regulated environment
  • Experience establishing and maintaining relationships between business and technical stakeholders
  • Evaluating and prioritizing strategic initiatives, balancing the needs of different stakeholders, and driving alignment
  • Ability to influence cross-functionally and enterprise-wide, asserting 2nd Line risk responsibility to challenge and influence in a highly consultative and effective manner
  • Clear thinker with strong analytical skills for reviewing complex processes
  • Effective communication skills, both verbal and written
  • Demonstrated ability to interface effectively with diverse, global, cross-functional teams and to lead large-scale projects
  • Ability to prioritize and multitask, with flexibility and adaptability in work approach

Education / Certifications

  • B.S. in a technology discipline (Computer Science, Information Management, Computer Engineering, Cyber Security, or equivalent).
    Relevant certification is desirable, e.g., CISSP, CISM, CISA. Working knowledge of risk management life cycles based on established frameworks such as NIST, COBIT, ORX, and ISO 27001.

Responsibilities

The Digital Economy and AI Risk Management VP is a key member of the 2nd Line of Defence Risk Management team, tasked with assisting the 1st Line of Defence in managing operational risk emanating from the rapidly evolving landscape of artificial intelligence (AI), decentralized finance (DeFi), blockchain technologies, and the digital economy. The role supports the operational risk management (ORM) framework designed to identify, assess, mitigate, and report on operational risks as they relate to AI development and deployment, the complexities of Web 3.0 (i.e., blockchain), and digital asset innovation.

  • Review and Challenge: leveraging their subject matter expertise, provide independent review and credible challenge to the Digital Economy risk profile and associated implementation of the ORM framework.
  • Governance: actively engage at various committees/forums representing 2nd LoD Risk and provide subsequent updates on changes to the Digital Economy risk profile
  • Risk Appetite: develop, maintain, and communicate risk appetite for digital and AI-driven initiatives, ensuring alignment with organizational goals and regulatory expectations
  • Risk and Control Self-Assessments (RCSA): provide initial challenge of the 1st LoD RCSAs in line with the ORM standards, including timely completion; challenge of risks, controls, and assessments; and support for escalation/reporting, including at governance committees
  • Operational Risk Events (OREs): provide initial challenge that the appropriate response, escalation, documentation, and reporting are in line with the ORM framework, including post-event root cause analysis to identify lessons learned and the actions required to prevent recurrence
  • Key Risk Indicators (KRIs): provide initial challenge of the development and reporting of KRIs, including establishment of tolerance levels and 1st LoD rationales where KRIs are out of tolerance or have changed significantly.
  • Emerging & Evolving Risks: provide initial challenge and monitoring of emerging and evolving risks, identifying where new risks need to be reported or existing risks are changing significantly.
  • Risk Initiatives: provide 2nd LoD initial challenge of various initiatives from a design, requirements, and go-live criteria perspective to reduce impact of transformation risk.
  • Relationship Management: act as a respected point of contact and trusted advisor to stakeholders across the business and technology functions, providing ORM coverage of Digital Economy, Technology, and Information Security risk.
  • Policies, Standards & Procedures: review and credibly challenge the Digital Economy function's adherence to its Policies, Standards, and Procedures, as well as to the MR ORM framework.
  • GRC Usage & Reporting: oversee effective and comprehensive usage of the GRC tool for all ORM risk activities by the 1st LoD, ensuring records are complete, timely, and accurate.

About the Team

Moody's Ratings (MR) Risk Management team was established in 2020 as the 2nd LoD risk function across MR, setting risk policies and providing advice, guidance, and challenge on their implementation and ongoing adherence. The MR Risk Management team is a global team acting as a risk management centre of excellence within MR.

