Clera - Your AI talent agent

This position is no longer available

Mem0

Applied AI Engineer

Full-time • On-site • San Francisco • $150k - $180k + 0.1% - 0.3% equity

Summary

Location

San Francisco

Salary

$150k - $180k

Equity

0.1% - 0.3%

Type

full-time

Workplace

On-site

Experience

2-5 years

Company links

Website • LinkedIn


This job listing has been removed by the employer and is no longer accepting applications.


About this role

Role Summary:

Own the 0→1. You’ll turn vague customer use cases into working proofs-of-concept that showcase what Mem0 can do. This means rapid full-stack prototyping, stitching together AI tools, and aggressively experimenting with memory retrieval approaches until the use case works end-to-end. You’ll partner closely with Research and Backend, communicate trade-offs clearly, and hand off winning prototypes that can be hardened for production.

What You'll Do:

  • Build POCs for real use cases: Stand up end-to-end demos (UI + APIs + data) that integrate Mem0 in the customer’s flow.

  • Experiment with memory retrieval: Try different embeddings, indexing, hybrid search, re-ranking, chunking/windowing, prompts, and caching to hit task-level quality and latency targets.

  • Prototype with Research: Implement paper ideas and new techniques from scratch, compare baselines, and keep what wins.

  • Create eval harnesses: Define small gold sets and lightweight metrics to judge POC success; instrument demos with basic telemetry.

  • Integrate AI tooling: Combine LLMs, vector DBs, Mem0 SDKs/APIs, and third-party services into coherent workflows.

  • Collaborate tightly: Work with Backend on clean contracts and data models; with Research on hypotheses; share learnings and next steps.

  • Package & handoff: Write concise docs, scripts, and templates so Engineering can productionize quickly.
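The "hybrid search, re-ranking" experimentation above can be sketched with reciprocal rank fusion (RRF), one common baseline for combining a vector-search ranking with a keyword-search ranking. This is an illustrative sketch, not Mem0 code; the document IDs and both hit lists are made up:

```python
# Hypothetical sketch: reciprocal rank fusion (RRF) to merge two ranked
# lists of document IDs into one hybrid ranking. All IDs are illustrative.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists; the constant k damps the weight of top ranks."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_c", "doc_b"]   # e.g. from a vector DB query
keyword_hits = ["doc_b", "doc_a", "doc_d"]  # e.g. from BM25 / keyword search

fused = rrf_fuse([vector_hits, keyword_hits])
print(fused[0])  # doc_a: ranked highly in both lists
```

Documents that appear high in both lists accumulate the most score, which is why RRF is a cheap first experiment before heavier re-ranking models.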

Minimum Qualifications

  • Full-stack fluency: Next.js/React on the front end and Python backends (FastAPI/Django/Flask) or Node where needed.

  • Strong Python and TypeScript/JavaScript; comfortable building APIs, wiring data models, and deploying quick demos.

  • Hands-on with the LLM/RAG stack: embeddings, vector databases, retrieval strategies, prompt engineering.

  • Track record of rapid prototyping: moving from idea → demo in days, not months; clear documentation of results and trade-offs.

  • Ability to design small, meaningful evaluations for a use case (quality + latency) and iterate based on evidence.

  • Excellent communication with Research and Backend; crisp specs, readable code, and honest status updates.
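The "small, meaningful evaluations" qualification above can be sketched minimally: a tiny gold set of query-to-expected-document pairs, scored for hit@k quality and latency. Everything here (the gold set, the stub `retrieve`, the doc IDs) is an illustrative stand-in, not a Mem0 API:

```python
import time

# Hypothetical sketch of a tiny eval harness: a small gold set mapping
# queries to the doc ID that should be retrieved, scored for hit@k and
# latency. `retrieve` is a stand-in for the pipeline under test.

GOLD = {
    "reset my password": "doc_auth",
    "refund a duplicate charge": "doc_billing",
}

def retrieve(query: str, k: int = 3) -> list[str]:
    # Stub retriever; a real POC would query the vector DB / memory layer.
    index = {"password": "doc_auth", "charge": "doc_billing"}
    hits = [doc for word, doc in index.items() if word in query]
    return (hits + ["doc_misc"])[:k]

def evaluate(k: int = 3) -> tuple[float, float]:
    hits, latencies = 0, []
    for query, expected in GOLD.items():
        start = time.perf_counter()
        results = retrieve(query, k)
        latencies.append(time.perf_counter() - start)
        hits += expected in results
    return hits / len(GOLD), max(latencies)

hit_rate, worst_latency = evaluate()
print(f"hit@3={hit_rate:.2f}, worst latency={worst_latency * 1000:.2f}ms")
```

Even a gold set of ten to twenty queries is usually enough to tell whether a retrieval change helped, and the same harness doubles as basic telemetry when wired into a demo.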

Nice to Have:

  • Model serving/fine-tuning experience (vLLM, LoRA/PEFT) and lightweight batch/async pipelines.

  • Deployments on Vercel/serverless, Docker, basic k8s familiarity; CI for demo apps.

  • Data visualization and UX polish for compelling demos.

  • Prior Forward-Deployed/Solutions/Prototyping role turning customer needs into working software.

About Mem0

We're building the memory layer for AI agents. Think long-term memory that enables AI to remember conversations, learn from interactions, and build context over time. We're already powering millions of AI interactions. We are backed by top-tier investors and are well capitalized.

Our Culture

  • Office-first collaboration - We're an in-person team in San Francisco. Hallway chats, impromptu whiteboard sessions, and shared meals spark ideas that remote calls can't.

  • Velocity with craftsmanship - We build for the long term, not just shipping features. We move fast but never sacrifice reliability or thoughtful design - every system needs to be fast, reliable, and elegant.

  • Extreme ownership - Everyone at Mem0 is a builder-owner. If you spot a problem or opportunity, you have the agency to fix it. Titles are light; impact is heavy.

  • High bar, high trust - We hire for talent and potential, then give people room to run. Code is reviewed, ideas are challenged, and wins are celebrated—always with respect and curiosity.

  • Data-driven, not ego-driven - The best solution wins, whether it comes from a founder or an engineer who joined yesterday. We let results and metrics guide our decisions.

About Mem0

Mem0 is a universal, self‑improving memory layer for LLM applications, powering personalised AI experiences that cut costs and enhance user delight.


Frequently Asked Questions

What does Mem0 pay for an Applied AI Engineer?

Mem0 offers a competitive compensation package for the Applied AI Engineer role. The salary range is USD 150k - 180k per year, plus 0.1% - 0.3% equity. Apply through Clera to learn more about the full compensation details.

What does an Applied AI Engineer do at Mem0?

The Applied AI Engineer role at Mem0 centers on owning the 0→1: turning vague customer use cases into working proofs-of-concept that showcase what Mem0 can do.

Is the Applied AI Engineer position at Mem0 remote?

The Applied AI Engineer position at Mem0 is based in San Francisco, United States, and is on-site. Contact the company through Clera for specific work arrangement details.

How do I apply for the Applied AI Engineer position at Mem0?

You can apply for the Applied AI Engineer position at Mem0 directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process.
© 2026 Clera Labs, Inc.