Job Purpose
Provide end-to-end AI governance oversight to ensure AI initiatives are compliant, ethical, secure, and risk-managed, while enabling responsible innovation.
Key Responsibilities
- Own and maintain the organization's AI Governance framework, policies, and standards aligned with regulatory and business expectations.
- Act as the central authority for AI risk decisions, including approval, rejection, and escalation of AI use cases.
- Establish and operate an AI risk classification and assessment process aligned with EU-style risk-based governance.
- Maintain a centralized AI inventory covering internal, external, and vendor-provided AI systems.
- Define minimum Responsible AI controls covering human oversight, transparency, data governance, security, and fairness.
- Review AI use cases across the lifecycle (design, build, deploy, operate) and ensure required controls are implemented before go-live.
- Coordinate with Legal, Compliance, Security, and Data Governance on AI-related risks and regulatory obligations.
- Oversee third-party and vendor AI risk assessments and contractual governance requirements.
- Monitor deployed AI systems for ongoing risk, incidents, and material changes, and manage escalation and remediation.
- Ensure AI documentation, record-keeping, and evidence are audit-ready and regulator-ready.
- Prepare and present AI risk posture, key issues, and decisions to management and relevant committees.
- Promote awareness and understanding of Responsible AI principles across business and technical teams.
Qualifications
- Bachelor's degree in Computer Science, Information Technology, Information Systems, Engineering, Cybersecurity, or a related field; a Master's degree in Cybersecurity, Technology Risk, Data Science, or IT Governance is an advantage.
- Minimum of 7 years of hands-on experience in IT risk management, technology risk, information security, compliance, data governance, or audit within a regulated or financial services environment.
- Practical, end-to-end understanding of the AI/ML lifecycle (use-case intake, data preparation, model development, deployment, monitoring) and associated risks such as bias, explainability, data quality, model drift, and misuse.
- Proven experience working with regulators, internal/external audits, and regulatory frameworks (e.g. banking regulations, data protection, model risk, technology risk).
- Strong capability in producing governance artifacts including policies, risk assessments, control frameworks, inventories, approval records, and audit evidence.
- Demonstrated ability to coordinate across technical and non-technical stakeholders (business, data science, legal, compliance, security, vendors).
- Experience in financial services, banking, fintech, insurance, or other highly regulated industries.
- Solid familiarity with EU-style, risk-based AI governance concepts (e.g. prohibited vs. high-risk AI, human oversight, transparency, post-deployment monitoring), even if hands-on implementation experience is still developing.
- Relevant professional certifications such as CISA, CISM, CISSP, CRISC, ISO/IEC 27001 LA, CDPSE, or AI-focused credentials such as IAPP AIGP, ISO/IEC 42001 (AI Management System), or equivalent.
- Ability to translate complex technical AI risks into clear business, risk, and regulatory language for senior management and decision-making committees.
Key Competencies
- Risk-based thinking.
- Independent judgment.
- Clear communication.
- Balanced mindset: applying governance without blocking innovation.