Scientist/Sr. Scientist, AI Safety
Full-time · San Francisco, Cambridge, London · $228k - $358k

Summary

Location

San Francisco, Cambridge, London

Salary

$228k - $358k

Type

full-time


About this role

Your Impact at Lila


We're building a talent-dense, high-agency AI safety team at Lila that will engage all core teams across the organization (science, model training, lab integration, etc.) to prepare for risks from scientific superintelligence. The team's initial focus will be to build and implement a bespoke safety strategy for Lila, tailored to its specific goals and deployment strategies. This will involve developing technical safety strategy, engaging with the broader ecosystem, and producing technical collateral, including risk- and capability-focused evaluations and safeguards.


What You'll Be Building



  • Evaluations to test for scientific risks (both known and, especially, novel) from cutting-edge scientific models integrated with automated physical labs.

  • Initial proof-of-concept safeguards, such as ML models to detect and block unsafe behavior from scientific AI models and from physical lab outputs.

  • Understanding of a range of model capabilities, primarily in scientific but also non-scientific domains (e.g., persuasion, deception), to inform Lila's broader safety strategy.

  • Broader high-quality research efforts, as needed, for scientific capability evaluation and restriction.


What You’ll Need to Succeed



  • Bachelor's degree in a technical field (e.g., computer science, engineering, machine learning, mathematics, physics, statistics), or related experience.

  • Strong programming skills in Python, and experience with ML frameworks (including, for instance, Inspect) for large-scale evaluation and scaffolded testing.

  • Experience in building evaluations, or conducting red-teaming exercises, for CBRN/cyber risks (or for frontier model capabilities more generally, including both unsafe and benign capabilities).

  • Experience in designing and/or implementing (directly or through consultation) AI safety frameworks for frontier AI companies.

  • Ability to communicate complex technical concepts and concerns to non-expert audiences effectively.


Bonus Points For



  • Master's or PhD in a field relevant to safety evaluations of AI models in scientific domains, or in a technical field.

  • Publications on AI safety, evaluations, or model behaviour at top ML/AI conferences (NeurIPS, ICML, ICLR, ACL), or contributions to model release system cards.

  • Experience researching risks from novel science (e.g. biosecurity, computational biology, etc.) or working with narrow scientific tools (e.g. large scale foundation models for science).


Location


  • This position may be based in any of Lila's offices, including Cambridge (MA), San Francisco (CA), or London (UK).


About Lila


Lila Sciences is the world’s first scientific superintelligence platform and autonomous lab for life, chemistry, and materials science. We are pioneering a new age of boundless discovery by building the capabilities to apply AI to every aspect of the scientific method. We are introducing scientific superintelligence to solve humankind's greatest challenges, enabling scientists to bring forth solutions in human health, climate, and sustainability at a pace and scale never experienced before. Learn more about this mission at www.lila.ai.


If this sounds like an environment you’d love to work in, even if you only have some of the experience listed above, we encourage you to apply.


Compensation


  • For US-based candidates (Cambridge or San Francisco), we expect the base salary for this role to fall between $228,000 and $358,000 USD per year, along with bonus potential and generous early equity. The final offer will reflect your unique background, expertise, and impact.

  • For UK-based candidates, compensation will be determined separately and will be aligned with local market benchmarks and internal leveling.


We’re All In


Lila Sciences is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.


Information you provide during your application process will be handled in accordance with our Candidate Privacy Policy.


A Note to Agencies


Lila Sciences does not accept unsolicited resumes from any source other than candidates. The submission of unsolicited resumes by recruitment or staffing agencies to Lila Sciences or its employees is strictly prohibited unless contacted directly by Lila Sciences' internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Lila Sciences, and Lila Sciences will not owe any referral or other fees with respect thereto.

Other facts

Tech stack
Python, Machine Learning, Technical Safety Strategy, Risk Evaluation, AI Safety Frameworks, Communication, Red-Teaming, Scientific Research, Model Capabilities, Automated Labs, Safeguards, Evaluations, Cyber Risks, Biosecurity, Computational Biology

About Lila Sciences

Lila Sciences is the world’s first scientific superintelligence platform and autonomous lab for life, chemistry, and materials science. We are building the foundation to apply AI to every aspect of the scientific method, enabling scientists to bring forth solutions in human health and sustainability at a pace and scale never experienced before.

Team size: 201-500 employees
Industry: Technology, Information and Internet

What you'll do

  • Build and implement a safety strategy tailored to Lila's goals and deployment strategies, including developing evaluations for scientific risks and initial proof-of-concept safeguards.


Frequently Asked Questions

What does Lila Sciences pay for a Scientist/Sr. Scientist, AI Safety?

Lila Sciences offers a competitive compensation package for the Scientist/Sr. Scientist, AI Safety role. The salary range is USD 228k - 358k per year. Apply through Clera to learn more about the full compensation details.

What does a Scientist/Sr. Scientist, AI Safety do at Lila Sciences?

As a Scientist/Sr. Scientist, AI Safety at Lila Sciences, you will build and implement a safety strategy tailored to Lila's goals and deployment strategies, including developing evaluations for scientific risks and initial proof-of-concept safeguards.

Why join Lila Sciences as a Scientist/Sr. Scientist, AI Safety?

Lila Sciences is a leading Technology, Information and Internet company. The Scientist/Sr. Scientist, AI Safety role offers competitive compensation.

Is the Scientist/Sr. Scientist, AI Safety position at Lila Sciences remote?

The Scientist/Sr. Scientist, AI Safety position at Lila Sciences is based in San Francisco (CA, United States), Cambridge (MA, United States), or London (UK). Contact the company through Clera for specific work arrangement details.

How do I apply for the Scientist/Sr. Scientist, AI Safety position at Lila Sciences?

You can apply for the Scientist/Sr. Scientist, AI Safety position at Lila Sciences directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process. You can also learn more about Lila Sciences on their website.