
We are looking for a Full Stack Engineer obsessed with the intersection of Generative AI and Entertainment. This role is designed as a sprint to identify a true technical partner: you will own entire features from day one and build the core technical infrastructure that condenses a six-month overseas studio workflow into a few hours.
The Character Consistency Pipeline
Fine-tuning FLUX LoRA models on comic IP reference images to maintain exact character DNA — facial features, costume details, art style — across hundreds of generated frames. This is the hardest problem we are solving and the most important thing you will build.
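For candidates new to the technique: LoRA keeps the base model's weights frozen and learns only a low-rank correction. A minimal sketch of the core idea in plain NumPy (layer sizes and hyperparameter values here are illustrative, not taken from any real FLUX configuration):

```python
import numpy as np

# Minimal LoRA sketch: a frozen weight W gets a trainable low-rank
# update B @ A, so the effective weight is W + (alpha / r) * B @ A.
# r and alpha are the standard LoRA rank/scaling hyperparameters.

class LoRALinear:
    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))      # frozen base weight
        self.A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
        self.B = np.zeros((d_out, r))                # trainable up-projection, init 0
        self.scale = alpha / r

    def forward(self, x):
        # Base path plus low-rank correction. Because B starts at zero,
        # the adapter is a no-op at init and the output matches the
        # frozen model exactly -- training only ever learns the delta.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=64, d_out=64)
x = np.ones((1, 64))
base = x @ layer.W.T
assert np.allclose(layer.forward(x), base)  # adapter is identity at init
```

In practice this update is injected into the attention layers of the diffusion transformer (e.g. via the Hugging Face peft/diffusers tooling) rather than hand-rolled, but the low-rank structure is what makes per-IP character fine-tunes cheap to train and store.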
The Agentic AI Pipeline
Architecting the backend workflows that ingest raw manga pages. You will use computer vision to extract panel boundaries, lighting, and style, routing them through visual diffusion models while maintaining strict frame-by-frame character consistency.
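As a flavor of the ingestion step: manga panels are typically separated by white gutters, so a toy boundary extractor can project pixel intensity along one axis and split on near-white runs. This is only a sketch (a production pipeline would use contour detection, e.g. OpenCV); the threshold and page layout below are made up for illustration:

```python
import numpy as np

def panel_rows(page, white=0.95):
    """Return (start, end) row spans of non-gutter content.

    page: 2D array of grayscale intensities in [0, 1].
    Rows whose mean intensity is >= `white` are treated as gutters.
    """
    is_gutter = page.mean(axis=1) >= white
    spans, start = [], None
    for i, g in enumerate(is_gutter):
        if not g and start is None:
            start = i                  # entering a panel
        elif g and start is not None:
            spans.append((start, i))   # leaving a panel
            start = None
    if start is not None:
        spans.append((start, len(is_gutter)))
    return spans

# Synthetic page: two dark panels separated by a white gutter.
page = np.ones((30, 20))
page[2:12] = 0.2    # panel 1
page[18:28] = 0.2   # panel 2
assert panel_rows(page) == [(2, 12), (18, 28)]
```

The same projection applied column-wise within each row span yields individual panel bounding boxes, which then feed the diffusion stage.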
The IP Memory Bank
Optimizing the vector search algorithms and memory architectures that allow the AI to "remember" a specific character's DNA across hundreds of generated video frames.
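The retrieval core of such a memory bank can be sketched in a few lines: store one normalized reference embedding per character and return the nearest match for a new frame embedding by cosine similarity. Class and variable names here are hypothetical, and the embedding dimension is arbitrary:

```python
import numpy as np

# Hypothetical "IP memory bank": normalized character embeddings,
# queried by cosine similarity (dot product of unit vectors).

class MemoryBank:
    def __init__(self):
        self.names, self.vecs = [], []

    def add(self, name, vec):
        self.names.append(name)
        self.vecs.append(vec / np.linalg.norm(vec))

    def query(self, vec, top_k=1):
        q = vec / np.linalg.norm(vec)
        sims = np.stack(self.vecs) @ q              # cosine similarities
        idx = np.argsort(sims)[::-1][:top_k]        # best matches first
        return [(self.names[i], float(sims[i])) for i in idx]

rng = np.random.default_rng(0)
bank = MemoryBank()
hero = rng.normal(size=128)
villain = rng.normal(size=128)
bank.add("hero", hero)
bank.add("villain", villain)

# A noisy view of the hero (e.g. a new generated frame) should
# still retrieve "hero" with high similarity.
name, sim = bank.query(hero + 0.1 * rng.normal(size=128))[0]
assert name == "hero"
```

At production scale the brute-force dot product would be replaced by an approximate nearest-neighbor index, but the contract stays the same: embed the frame, look up the character, condition generation on the match.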
The Studio Dashboard
Building the high-performance Next.js frontend used by enterprise publishers to review storyboards, adjust compositions, and export broadcast-quality motion.
Hands-on experience fine-tuning diffusion models — FLUX, Stable Diffusion, or SDXL — not as a user but as a builder
Practical experience with LoRA, DreamBooth, or IP-Adapter for character consistency across generated images
Working knowledge of the Hugging Face diffusers library at the code level
Experience building or integrating image-to-video generation pipelines — Luma, Kling, Wan, RunwayML, or equivalent
You have done something generative — not just built on top of GPT or Claude, but actually trained, fine-tuned, or modified a generative model yourself. You understand the difference between inference and training. You have a GitHub profile that shows it.