Remotery

LLM Security Evaluation Expert

At SilverEdge Government Solutions · Maryland, US · Full-time · Cybersecurity / Security Engineer · Mid-level to Senior

Posted 4 hours ago

📋 Description

• Conduct thorough testing on the security and integrity of Large Language Models (LLMs).

• Design and implement advanced adversarial prompt attacks to uncover potential vulnerabilities.

• Evaluate the model's resistance to exploitation, ensuring it exhibits consistent and secure behavior.

• Create and execute a comprehensive suite of adversarial prompts aimed at known and potential LLM vulnerabilities.

• Develop prompts intended to bypass security filters and content moderation protocols.

• Attempt to elicit the disclosure of sensitive, confidential, or proprietary information from the LLM.

• Attempt to manipulate the LLM into producing harmful, biased, or unintended content.

• Test for prompt injection, jailbreaking, and other emerging attack vectors.

• Methodically assess LLMs against the crafted adversarial prompts and analyze the responses to identify successful exploits and security flaws.
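The assess-and-analyze loop above can be sketched as a minimal evaluation harness. This is an illustrative sketch only, not a tool named in the posting: all names (`AdversarialPrompt`, `evaluate`, the marker-matching check) are assumptions, and a real pipeline would replace substring markers with a classifier or human review, since string matching misses paraphrased leaks.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class AdversarialPrompt:
    """One test case in an adversarial prompt suite."""
    name: str                 # e.g. "role-play jailbreak", "prompt injection"
    text: str                 # the adversarial prompt sent to the model
    exploit_markers: list[str] = field(default_factory=list)
    # substrings whose presence in the response suggests the attack succeeded


def evaluate(model: Callable[[str], str],
             suite: list[AdversarialPrompt]) -> list[dict]:
    """Run each adversarial prompt against the model callable and record
    whether any exploit marker appears in the response."""
    results = []
    for prompt in suite:
        response = model(prompt.text)
        hits = [m for m in prompt.exploit_markers
                if m.lower() in response.lower()]
        results.append({
            "prompt": prompt.name,
            "exploited": bool(hits),
            "markers_hit": hits,
            "response": response,
        })
    return results
```

In use, `model` would wrap a real LLM API call; the harness stays model-agnostic so the same suite can be replayed across model families when comparing their resistance to the same attacks.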


⛳️ Requirements

• In-depth understanding of LLM operations, including architecture, training processes, capabilities, and intrinsic limitations.

• Familiarity with major LLM families (e.g., GPT series, Claude, Llama, PaLM) and their shared characteristics.

• Proven track record in crafting and refining prompts to achieve specific behaviors or circumvent restrictions in LLMs.

• Clear understanding of techniques such as jailbreaking, prompt injection, role-playing attacks, and exploiting model biases.

• Solid grasp of cybersecurity principles and prevalent attack vectors, especially as related to AI/ML systems.

• Ability to adopt the mindset of an attacker and foresee potential exploits.

• Exceptional analytical skills for complex systems, with the ability to identify subtle vulnerabilities and rigorously test hypotheses.

• Strong written and verbal communication abilities, with a commitment to thoroughly documenting technical findings.

• Awareness of the ethical considerations in AI security and a commitment to responsible testing methodologies.

• Previous experience in AI red teaming, penetration testing of AI/ML systems, or a specialized LLM security research position.

• Familiarity with established LLM security evaluation frameworks or benchmarks (e.g., those created by NIST, Stanford HELM, or other research institutions).

• Knowledge of standard LLM fine-tuning and alignment techniques (e.g., RLHF) and their potential impact on security.

• Contributions to the AI security community (e.g., research publications, open-source projects, conference talks).

• Offensive Security Certified Professional (OSCP) certification.

• Certified Ethical Hacker (CEH) certification.


🏝️ Benefits

• N/A
