EDB-IPP Project: Advancing GPU Optimization for Large Language Models

Summary

Location

Singapore

Type

Full-time


About this role

Job Description:

Rakuten Asia, in partnership with the Economic Development Board (EDB) through the Industrial Postgraduate Programme (IPP), is seeking new PhD students. We are looking for individuals with a robust understanding of deep learning, machine learning, and natural language processing to contribute to our innovative research projects.

Essential requirements include proven hands-on expertise and strong engineering skills, particularly in developing and training PyTorch models.

IPP Programme Benefits
Candidates successfully selected for this programme will receive full sponsorship for their postgraduate studies and will be hired by Rakuten Asia upon successful completion.

Collaboration Model

The collaboration will include joint PhD student supervision, shared access to computational resources for large-scale model compression experiments, and regular research exchanges. Output will include high-impact publications, open-source tools, and demonstrable prototypes of efficient AI.

Project Outline

Introduction

Rakuten is committed to advancing the frontier of AI infrastructure, with a strong focus on optimizing large-scale GPU clusters for training and serving Large Language Models (LLMs). As models grow in size and complexity—ranging from dense architectures to mixture-of-experts (MoE)—achieving efficiency across training, inference, and deployment has become increasingly critical. Our GPU Optimization department combines deep system expertise and significant computational assets, and we are seeking strategic collaborations with leading universities to jointly tackle these challenges.

Proposed Research Areas

We propose collaborative research in the following areas, with flexibility to refine topics based on mutual expertise:

  • Efficient Scheduling for Sparse & Dense LLMs

Design token-aware, load-balanced scheduling algorithms for MoE and hybrid LLM workloads that reduce inter-GPU communication and optimize heterogeneous cluster utilization.
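To make the load-balancing goal concrete, the sketch below greedily assigns per-expert token loads to GPUs so that the most-loaded GPU receives as few tokens as the heuristic allows. All names and token counts are illustrative, not part of any Rakuten system; a real scheduler would also account for communication cost and expert placement.

```python
import heapq

def balance_experts(expert_token_counts, num_gpus):
    """Greedy longest-processing-time assignment of MoE experts to GPUs.

    expert_token_counts: tokens routed to each expert in the current step.
    Returns a list mapping expert index -> GPU index.
    """
    # Min-heap of (current token load, gpu index).
    heap = [(0, g) for g in range(num_gpus)]
    heapq.heapify(heap)
    assignment = [None] * len(expert_token_counts)
    # Place the heaviest experts first (classic LPT scheduling heuristic).
    for expert in sorted(range(len(expert_token_counts)),
                         key=lambda e: -expert_token_counts[e]):
        load, gpu = heapq.heappop(heap)
        assignment[expert] = gpu
        heapq.heappush(heap, (load + expert_token_counts[expert], gpu))
    return assignment

counts = [900, 300, 250, 250, 150, 100]   # tokens per expert (made up)
assign = balance_experts(counts, num_gpus=2)
loads = [sum(c for c, g in zip(counts, assign) if g == gpu) for gpu in range(2)]
```

On this example the heuristic yields per-GPU loads of 1000 and 950 tokens, close to the ideal even split of 975.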

  • Efficient Inference for State Space Models

Develop high-throughput, low-latency inference techniques for state space models, leveraging their linear-time properties to outperform traditional attention mechanisms in long-context scenarios.
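To illustrate the linear-time property referenced above: a discretized state space layer can process a length-L sequence in a single O(L) recurrence with O(1) state, whereas full self-attention compares every pair of positions in O(L^2). A minimal scalar sketch in plain Python, with made-up parameters:

```python
def ssm_scan(inputs, a=0.9, b=1.0, c=1.0):
    """Scalar state space recurrence: x_t = a*x_{t-1} + b*u_t, y_t = c*x_t.

    One pass over the sequence gives O(L) time and O(1) state, which is
    what makes SSMs attractive for long-context inference.
    """
    x, outputs = 0.0, []
    for u in inputs:
        x = a * x + b * u
        outputs.append(c * x)
    return outputs

ys = ssm_scan([1.0, 0.0, 0.0, 0.0])  # impulse response decays as a**t
```

Real SSMs use vector states and learned, input-dependent parameters, but the constant-memory sequential scan is the same.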

  • Memory-Aware Training & Serving

Explore advanced quantization, memory-efficient checkpointing, offloading strategies, and dynamic memory management techniques to support training and inference of ultra-large models.
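As one concrete instance of the quantization techniques named above, symmetric round-to-nearest int8 quantization stores weights as 8-bit integers plus a single scale per tensor, cutting memory roughly 4x versus float32. A minimal per-tensor sketch in plain Python (values are illustrative):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ≈ q * scale, q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)   # reconstruction error bounded by scale / 2
```

Production schemes typically quantize per channel or per group and may learn the scales, but the round-and-rescale core is the same.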

  • Scalable Parallelism for LLMs

Investigate hybrid parallelism (data, model, pipeline, expert) and communication-reduction strategies tailored for scaling LLMs across thousands of GPUs.
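A tiny illustration of how hybrid parallelism decomposes a cluster: with data, pipeline, and tensor (model) parallel degrees, each global GPU rank maps to one coordinate in a 3-D grid. The decomposition order below is an assumption for illustration only, not a statement about any particular framework:

```python
def rank_to_coords(rank, dp, pp, tp):
    """Map a global rank to (data, pipeline, tensor) coordinates.

    Assumes tensor-parallel ranks are innermost (adjacent ranks share a
    pipeline stage), then pipeline, then data parallel -- one common layout
    that keeps the most communication-heavy group on the fastest links.
    """
    assert 0 <= rank < dp * pp * tp
    t = rank % tp
    p = (rank // tp) % pp
    d = rank // (tp * pp)
    return d, p, t

# 8 GPUs split as 2-way data x 2-way pipeline x 2-way tensor parallelism.
coords = [rank_to_coords(r, dp=2, pp=2, tp=2) for r in range(8)]
```

Ranks 0 and 1 land in the same tensor-parallel group, so their frequent all-reduces stay within one node under this layout.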

  • Hardware-Aware Optimization

Develop compiler, kernel, and data layout optimizations that fully exploit features of modern GPU architectures, improving throughput for both dense and sparse model operations.
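The core idea behind one such optimization, tiling for data locality, can be previewed even in pure Python: the blocked matrix multiply below touches memory in small tiles so each tile stays "hot" while it is reused, the same principle GPU kernels apply with shared memory. This is purely illustrative; a real kernel would be written in CUDA or generated by a compiler.

```python
def blocked_matmul(A, B, tile=2):
    """Multiply square matrices A @ B using tile x tile blocks.

    Iterating over blocks reuses each loaded tile of A and B before
    moving on -- the essence of shared-memory tiling in GPU kernels.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):
        for k0 in range(0, n, tile):
            for j0 in range(0, n, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for k in range(k0, min(k0 + tile, n)):
                        aik = A[i][k]
                        for j in range(j0, min(j0 + tile, n)):
                            C[i][j] += aik * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = blocked_matmul(A, B)   # [[19.0, 22.0], [43.0, 50.0]]
```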

  • High-Throughput, Low-Latency Inference

Create optimized model serving strategies using speculative decoding, continuous batching, expert routing, and adaptive computation for production-grade LLM applications.
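To make one of these serving techniques concrete: continuous (in-flight) batching admits a waiting request the moment any sequence in the batch finishes, instead of draining the whole batch first. A toy scheduler-level simulation in plain Python (request lengths and batch size are made up):

```python
from collections import deque

def continuous_batching_steps(request_lengths, max_batch):
    """Simulate decode steps under continuous batching.

    Each step decodes one token for every active request; finished
    requests are replaced immediately from the queue, so batch slots
    never sit idle while work remains.
    """
    queue = deque(request_lengths)
    active, steps = [], 0
    while queue or active:
        while queue and len(active) < max_batch:
            active.append(queue.popleft())   # admit mid-flight
        active = [r - 1 for r in active if r - 1 > 0]
        steps += 1
    return steps

steps = continuous_batching_steps([3, 1, 1, 1], max_batch=2)
```

On this trace continuous batching finishes in 3 decode steps, while static batching in fixed waves of 2 would need 4 (a 3-step wave for [3, 1], then a 1-step wave for [1, 1]).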

Rakuten is an equal opportunities employer and welcomes applications regardless of sex, marital status, ethnic origin, sexual orientation, religious belief or age.

Other facts

Tech stack
Deep Learning, Machine Learning, Natural Language Processing, PyTorch, GPU Optimization, Model Compression, Scheduling Algorithms, Inference Techniques, Quantization, Memory Management, Parallelism, Compiler Optimization, Kernel Optimization, Data Layout Optimization, Model Serving Strategies, AI Infrastructure

About Rakuten

Rakuten Group, Inc. (TSE: 4755) is a global technology leader in services that empower individuals, communities, businesses and society. Founded in Tokyo in 1997 as an online marketplace, Rakuten has expanded to offer services in e-commerce, fintech, digital content and communications to 2 billion members around the world. The Rakuten Group has more than 30,000 employees, and operations in 30 countries and regions. For more information visit https://global.rakuten.com/corp/.

Team size: 10,001+ employees
Industry: Software Development

What you'll do

  • Contribute to innovative research projects focused on GPU optimization for large language models. Collaborate with universities and participate in joint PhD student supervision and research exchanges.


Frequently Asked Questions

What will I do on the EDB-IPP Project: Advancing GPU Optimization for Large Language Models at Rakuten?

As a PhD student on the EDB-IPP Project: Advancing GPU Optimization for Large Language Models at Rakuten, you will contribute to innovative research projects focused on GPU optimization for large language models, collaborate with universities, and participate in joint PhD student supervision and research exchanges.

Why join Rakuten through the EDB-IPP Project: Advancing GPU Optimization for Large Language Models?

Rakuten is a global technology leader whose services span e-commerce, fintech, digital content and communications, and the programme offers fully sponsored postgraduate studies with employment at Rakuten Asia upon successful completion.

Is the EDB-IPP Project: Advancing GPU Optimization for Large Language Models position at Rakuten remote?

The EDB-IPP Project: Advancing GPU Optimization for Large Language Models position at Rakuten is based in Singapore. Contact the company through Clera for specific work arrangement details.

How do I apply for the EDB-IPP Project: Advancing GPU Optimization for Large Language Models position at Rakuten?

You can apply for the EDB-IPP Project: Advancing GPU Optimization for Large Language Models position at Rakuten directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process. You can also learn more about Rakuten on their website.