Product Counsel, Safety
Full-time · $205k - $265k


About this role

About Anthropic


Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role 


As Product Counsel, Safety, you will provide legal support across Anthropic's safety and security functions. You will serve as the first-line legal contact when security, privacy, product functionality, or AI behavior events arise, and will partner with Safeguards (Trust & Safety) and Security teams on policy development, enforcement frameworks, user safety matters, and regulatory compliance and engagement (in coordination with regulatory legal).

This is an excellent opportunity for a motivated attorney to grow as a product lawyer and develop counseling skills in a fast-moving environment while contributing directly to responsible AI development.


Responsibilities:



  • Serve as first-line legal contact for security, privacy, product functions, and AI behavior events, providing real-time guidance during incident response

  • Partner with Safeguards on content policy enforcement, user safety, and misuse investigations and response

  • Assess notification obligations under applicable legal frameworks and contractual commitments

  • Advise Security on legal requirements for vulnerability handling and threat response

  • Coordinate with Security, Privacy, Product, Safeguards, and Communications teams during active incidents

  • Draft customer notifications, regulatory filings, and external communications

  • Develop and maintain incident response playbooks, templates, and escalation protocols

  • Advise on evidence preservation, documentation, and investigation procedures

  • Stay current on and analyze emerging AI governance frameworks and regulatory expectations for safety and security as they relate to our evolving product suite


You may be a good fit if you have:



  • A JD and active membership in at least one U.S. state bar (California preferred)

  • At least 3 years of relevant legal experience, including exposure to fast-moving, high-stakes matters (litigation, investigations, regulatory matters, crisis management, or enforcement work)

  • Working knowledge of breach notification requirements across multiple jurisdictions

  • Technical fluency sufficient to understand system architecture, data flows, and AI model behavior, along with the ability to learn new concepts quickly

  • Experience coordinating multi-stakeholder responses to time-sensitive matters

  • Strong judgment about risk assessment and decision-making under uncertainty

  • Ability to remain calm and effective during high-pressure situations


Strong candidates may have:



  • Experience with content moderation, platform safety, trust & safety, and consumer protection legal work

  • Background in privacy, data protection, or cybersecurity law

  • In-house experience at a technology company, or law firm experience advising tech clients on regulatory investigations or enforcement matters

  • Familiarity with AI systems, machine learning concepts, or experience advising on novel technology products


Application Deadline: 4pm PT on February 9, 2025. Applications will be reviewed after the deadline, and our hiring team plans to begin contacting potential candidates shortly after. We encourage all interested and qualified applicants to submit their materials before this date to ensure full consideration.


Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.

The annual compensation range for this role is below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Our total compensation package for full-time employees includes equity and benefits.

Annual Salary:
$205,000 - $265,000 USD

Logistics


Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.


Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.


We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.


How we're different


We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.


The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.


Come work with us!


Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.

Other facts

Tech stack
Legal Support, Incident Response, Policy Development, Regulatory Compliance, Risk Assessment, Crisis Management, Content Moderation, Data Protection, Cybersecurity Law, AI Governance, Technical Fluency, User Safety, Evidence Preservation, Documentation, Investigation Procedures, Communication Skills

About Anthropic

We're an AI research company that builds reliable, interpretable, and steerable AI systems. Our first product is Claude, an AI assistant for tasks at any scale.

Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.

Team size: 501-1,000 employees
Industry: Research Services

What you'll do

  • As Product Counsel, Safety, you will provide legal support across safety and security functions, serving as the first-line legal contact during incidents. You will partner with various teams on policy development, user safety matters, and regulatory compliance.


Frequently Asked Questions

What does Anthropic pay for a Product Counsel, Safety?

Anthropic offers a competitive compensation package for the Product Counsel, Safety role. The salary range is USD 205k - 265k per year. Apply through Clera to learn more about the full compensation details.

What does a Product Counsel, Safety do at Anthropic?

As Product Counsel, Safety at Anthropic, you will provide legal support across safety and security functions, serving as the first-line legal contact during incidents, and partner with teams across the company on policy development, user safety matters, and regulatory compliance.

Why join Anthropic as a Product Counsel, Safety?

Anthropic is a leading Research Services company. The Product Counsel, Safety role offers competitive compensation.

How do I apply for the Product Counsel, Safety position at Anthropic?

You can apply for the Product Counsel, Safety position at Anthropic directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process. You can also learn more about Anthropic on their website.