
Director/Principal Counsel, AI Governance
Posted 1 day ago

• Lead the governance and policy framework for AI programs by assessing the relevance of international AI laws, spearheading policy creation, serving as the Framework Owner, and setting ethical standards alongside clear guidelines for AI development.
• Oversee AI Incident Response by evaluating and enhancing incident response protocols to integrate AI-specific triggers and regulatory reporting channels, ensuring alignment with current roles and Security’s AI Acceptable Use Policy.
• Supervise the management of AI Risks by upholding the AI Risk Management Policy and establishing a standard risk taxonomy differentiating rights-based risks from technical risks.
• Set Data Governance Standards by outlining criteria for dataset governance that covers design, origin assessment, labeling, bias evaluation, and data quality assurance.
• Ensure comprehensive Documentation and Record Keeping by managing templates for AI technical and transparency documentation, while updating retention schedules to encompass AI-related records.
• Guarantee Accuracy, Robustness, and Cybersecurity by providing guidance on the adaptation of security and compliance programs to address AI-specific legal requirements, and collaborating with technical teams to ensure models meet defined accuracy metrics and compliance benchmarks.
• Promote Accessibility by summarizing AI-related accessibility requirements and collaborating with UX/UI and Product teams to ensure outputs and interfaces conform to legal standards.
• Implement Quality Management and Monitoring by reviewing and modifying existing Quality Management System (QMS) protocols for AI design, testing, validation, and post-market oversight.
• Manage User Transparency and Disclosures by creating standard guidelines regarding language and timing for disclosures (such as “AI Generated”) and coordinating implementation within the UI alongside Product teams.
• Advocate for AI Ethics by upholding the organization’s ethical principles for AI, managing the AI/Model Ethics Assessment template, and contributing to the operation of an AI Ethics Board.
• Provide advice on Contractual Provisions by offering insights on contractual protections and drafting standard clauses and pass-through terms for High-Risk components to assist in vendor/customer negotiations.
• Oversee Registration and Training by managing necessary regulatory filings and registrations, as well as developing and delivering role-based training on AI policies and compliance.
• Facilitate Specialist Consultation by coordinating specialized legal analyses (e.g., IP, Employment) with the relevant co-counsel.
• A minimum of seven (7) years of experience in privacy, compliance, or governance roles, with at least three (3) years concentrated on AI or data-centric products.
• Extensive knowledge of international AI regulatory frameworks (e.g., EU AI Act), global privacy legislation (e.g., GDPR, CCPA), and standards relating to discrimination and bias.
• Experience in designing policies, templates, Quality Management Systems (QMS), and documentation for regulated technologies.
• Familiarity with incident response, risk management taxonomies, and monitoring model performance.
• Demonstrated ability to work collaboratively across functions with Product, Engineering, Legal teams, and Security; exceptional communication skills are essential.
• Comprehensive health, life, and disability insurance
• Commute subsidy
• Employee stock ownership
• Competitive retirement/pension plans
• Generous vacation and personal days
• Support for new parents through leave and family-care programs
• Office snacks
• Mental Health and Wellbeing programs and support
• Employee Resource Groups
• Global Employee Assistance Program
• Training and development programs
• Volunteering and donation matching program