About Tamarind Bio
We enable any scientist to access AI-powered drug discovery. Thousands of scientists from large pharma companies, top biotechs, and academic institutions use Tamarind to design protein drugs, improve industrial enzymes, and create cutting-edge molecules that weren't feasible until now. New AI models are quickly eclipsing physics-based tools in computational drug discovery, yet scientists often struggle to fine-tune, deploy, and scale these models, leaving breakthroughs on the table. Tamarind provides a simple interface to the vast array of tools being released daily.
About the role
We're looking for two AI/LLM Software Engineers to lead the development of AI-enabled workflows across our platform. You'll be responsible for expanding our copilot for computational biology, improving its reliability and capabilities. You'll develop complex, scalable workflows for planning and executing pipelines, data analysis, and simulation for biotech and large pharma researchers, working closely with the founders and researchers to tailor your solutions to customer needs.

You should thrive in a fast-paced startup environment where you'll wear multiple hats, learn new technologies quickly, and help solve novel technical challenges. We value engineering judgment, problem-solving ability, and the capacity to build systems that can evolve with our growing needs.
Tech stack:
Python, React, AWS (EC2, S3, DynamoDB), Docker, CUDA, Conda, TensorFlow/PyTorch; notebooks; bash/Slurm; APIs & web apps.
Week in the Life
Benchmark agents on protein design or molecule generation tasks
Build and own end-to-end scalable AI systems, support hundreds of tenants, and operate at scale in production
Integrate new LLMs into the platform and measure improvements
Front-end development using React/Next.js
Participate in AI strategy discussions directly with the founders
Onsite expectation
Team currently onsite in SF ~6 days/week and often works late (startup pace). This may evolve as the team grows.
Ideal Qualifications
Experience building and deploying AI/LLM-powered systems in production
Proficiency in Python and modern ML frameworks (PyTorch, TensorFlow, HuggingFace)
Strong grasp of prompt engineering, fine-tuning, and evaluation of LLMs
Familiarity with distributed systems, APIs, and cloud infrastructure (AWS preferred)
Comfort working across the stack — from model integration to frontend visualization
Located in or willing to relocate to the San Francisco Bay Area
Pluses
Experience with DevOps/MLOps tools (Docker, Kubernetes, Conda)
Technology
Our technology sits at the intersection of DevOps, MLOps, and computational biology. We deal with problems ranging from scaling ML inference on AWS across hundreds of GPUs to dissecting PDB files with Biopython. We deploy a wide range of open-source ML models for customers, navigating between Docker containers, Colab notebooks, bash scripts, Slurm jobs, and more.
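To give a concrete flavor of the "dissecting PDB files" side of the work, here is a minimal sketch of parsing the fixed-column ATOM records of the PDB format in plain Python (the team uses Biopython for this; the inline two-residue fragment and the `chain_residues` helper are illustrative, not part of the platform):

```python
from io import StringIO

# Hypothetical two-residue PDB fragment, inlined so the example is self-contained.
PDB_TEXT = """\
ATOM      1  N   GLY A   1      11.104  13.207   2.100  1.00 20.00           N
ATOM      2  CA  GLY A   1      12.560  13.300   2.000  1.00 20.00           C
ATOM      3  N   ALA A   2      13.000  14.000   2.500  1.00 20.00           N
END
"""

def chain_residues(pdb_file):
    """Map chain ID -> ordered residue names, using the PDB fixed-column layout."""
    chains = {}
    seen = set()
    for line in pdb_file:
        if not line.startswith(("ATOM", "HETATM")):
            continue
        chain_id = line[21]              # column 22: chain identifier
        res_name = line[17:20].strip()   # columns 18-20: residue name
        res_seq = line[22:26].strip()    # columns 23-26: residue sequence number
        key = (chain_id, res_seq)
        if key not in seen:              # count each residue once, not per atom
            seen.add(key)
            chains.setdefault(chain_id, []).append(res_name)
    return chains

print(chain_residues(StringIO(PDB_TEXT)))  # {'A': ['GLY', 'ALA']}
```

In practice Biopython's `Bio.PDB.PDBParser` handles the many edge cases (alternate locations, insertion codes, multiple models) that a hand-rolled parser like this ignores.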
Our Interview Process
We keep our process focused, transparent, and designed to give both sides a clear sense of fit.
1. Recruiter Screen (15–30 minutes) — virtual via Google Meet. Meet with our recruiter to dive into your background, interests, and what you're looking for next. We'll also walk you through the company, team, and role.
2. Technical Interview (90 minutes) — virtual via Google Meet. Co-Founder Interview (30 minutes): a conversation with Deniz Kavi (CEO & Co-founder) about product, collaboration, and how you approach building in an early-stage environment. Technical Deep Dive (60 minutes): a live coding and problem-solving session with Sherry Liu (CTO & Co-founder) or a member of our engineering team.
3. Onsite (1 day) — San Francisco. Spend a day with us working on a mini project and meeting the team. For candidates outside the Bay Area, we'll cover travel and lodging expenses.
