Security Engineer, AI Agent Security
Full-time · Zurich

Summary

Location

Zurich

Type

Full-time

About this role

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 2 years of experience with security assessments, security design reviews, or threat modeling.
  • 2 years of experience with security engineering, computer and network security, and security protocols.
  • 2 years of coding experience in one or more general-purpose languages.

Preferred qualifications:

  • Master's or PhD degree in Computer Science or a related technical field with a specialization in Security, AI/ML, or a related area.
  • Experience in Artificial Intelligence/Machine Learning (AI/ML) security research, including areas like adversarial machine learning, prompt injection, model extraction, or privacy-preserving ML.
  • Track record of security research contributions (e.g., publications in relevant security/ML venues, CVEs, conference talks, open-source tools).
  • Familiarity with the architecture and potential failure modes of LLMs and AI agent systems.

About the job:

Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.

Google's Secure AI Framework (SAIF) team is at the forefront of AI Agent Security. You'll pioneer defenses for systems like Gemini and Workspace AI, addressing novel threats unique to autonomous agents and Large Language Models (LLMs), such as advanced prompt injection and adversarial manipulation.

In this role, your responsibilities include researching vulnerabilities, designing innovative security architectures, prototyping mitigations, and collaborating to implement solutions. This role requires security research/engineering skills, an attacker mindset, and systems security proficiency. You will help define secure development practices for AI agents within Google and influence the broader industry in this evolving field.

Responsibilities:

  • Conduct research to identify, analyze, and understand novel security threats, vulnerabilities, and attack vectors targeting AI agents and underlying LLMs (e.g., advanced prompt injection, data exfiltration, adversarial manipulation, attacks on reasoning/planning).
  • Design, prototype, evaluate, and refine innovative defense mechanisms and mitigation strategies against identified threats, spanning model-based defenses, runtime controls, and detection techniques.
  • Develop proof-of-concept exploits and testing methodologies to validate vulnerabilities and assess the effectiveness of proposed defenses.
  • Stay current in AI security, adversarial ML, and related security fields through literature review, conference attendance, and community engagement.
  • Collaborate with engineering and research teams to translate research findings into practical security solutions deployable across Google's agent ecosystem.
  • Document research findings and contribute to internal knowledge sharing, security guidelines, and potentially external publications or presentations.

Other facts

Tech stack
Security Assessments, Threat Modeling, Security Engineering, Computer Security, Network Security, Security Protocols, Coding, Artificial Intelligence, Machine Learning, Adversarial Machine Learning, Prompt Injection, Model Extraction, Privacy-Preserving ML, Security Research, LLMs, AI Agent Systems

About Google

Google is a multinational technology company whose products include Search, Android, Chrome, Workspace, and Google Cloud. Its Secure AI Framework (SAIF) team leads the company's work on AI agent security.

Team size: 10,001+ employees
LinkedIn: Visit
Industry: Software Development

What you'll do

  • Research vulnerabilities and design innovative security architectures for AI agents and LLMs, and collaborate with engineering teams to implement practical security solutions.

Ready to join Google?

Take the next step in your career journey

Frequently Asked Questions

What does a Security Engineer, AI Agent Security do at Google?

As a Security Engineer, AI Agent Security at Google, you will research vulnerabilities in AI agents and LLMs, design innovative security architectures, and collaborate with engineering teams to implement practical security solutions.

Why join Google as a Security Engineer, AI Agent Security?

Google is a leading technology company, and its Secure AI Framework (SAIF) team works at the forefront of AI agent security, defending systems such as Gemini and Workspace AI against novel threats.

Is the Security Engineer, AI Agent Security position at Google remote?

The Security Engineer, AI Agent Security position at Google is based in Zurich, Switzerland. Contact the company through Clera for specific work arrangement details.

How do I apply for the Security Engineer, AI Agent Security position at Google?

You can apply for the Security Engineer, AI Agent Security position at Google directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process. You can also learn more about Google on their website.