
Adversarial AI Research Engineer
Posted 22 hours ago

Responsibilities:
• Oversee adversarial testing of AI by conceptualizing, defining, and implementing red team evaluations against our AI ecosystem, utilizing a risk-based prioritization strategy to identify and mitigate vulnerabilities prior to exploitation.
• Pioneer innovative AI attack methods by merging cutting-edge academic insights with established offensive security practices to develop new Tactics, Techniques, and Procedures, ensuring that assessments remain in line with the latest advancements in the field.
• Develop and enhance security tools following an automation-first approach, leading efforts to integrate security testing earlier in the development process by distributing specialized tools to AI security stakeholders in Engineering, Research, and Ethics.
• Act as a strategic ally throughout the organization, offering an offensive security viewpoint to inform product development, aid corporate governance, and contribute to policies such as Salesforce's Generative AI Security Standard.

Qualifications:
• A minimum of 6 years of experience in offensive security (red teaming, application security, penetration testing, vulnerability research, etc.).
• At least 1 year of hands-on experience assessing the security of AI/ML systems, with a thorough understanding of vulnerabilities associated with LLMs.
• Strong proficiency in Python for tool development, automation of assessments, and data analysis.
• Demonstrated experience in leading intricate technical projects and/or mentoring security teams, with an outstanding ability to convey critical technical risks to both engineering and executive levels.

Benefits:
• Time off programs
• Medical
• Dental
• Vision
• Mental health support
• Paid parental leave
• Life and disability insurance
• 401(k)
• Employee stock purchasing program