Data Platform Engineer – Commodities Technology
Full-time · Bengaluru

About this role

About Us

Founded in 1989, Millennium is a global alternative investment management firm. Millennium pursues a diverse array of investment strategies across industry sectors, asset classes and geographies. The firm’s primary investment areas are Fundamental Equity, Equity Arbitrage, Fixed Income, Commodities and Quantitative Strategies. We solve hard and interesting problems at the intersection of computer science, finance, and mathematics, and we focus on innovating and rapidly applying those innovations to real-world scenarios. This enables engineers to work on interesting problems, learn quickly and have a deep impact on the firm and the business.

Within Millennium, the Commodities Technology team builds the data and analytics platforms that power our commodities investment strategies. We aggregate and process large volumes of fundamental and alternative data – including weather, supply/demand indicators, storage and transportation data – to provide our Portfolio Managers with a differentiated information edge.

The Role

We are seeking a Data Platform Engineer to help build the next-generation data platform (CFP) for the Commodities business.

In this role, you will design and implement the core platform infrastructure, APIs, and event‑driven services that power ingestion, transformation, cataloging, and consumption of commodities data. You will work across Python, SQL and modern cloud services to build resilient pipelines, orchestration frameworks, and system management tools with a strong focus on reliability, observability, and performance.

You will work closely with engineering teams in the US, Europe, and Singapore as well as with our commodities modelling and research teams in Bangalore to deliver a scalable platform that can support rapid experimentation and production workloads.

Key Responsibilities

  • Platform Engineering: Design and build the foundational data platform components, including event handling, system management tools, and query‑optimized storage for large‑scale commodities datasets.

  • Data Pipelines & Orchestration: Implement and maintain robust batch and streaming pipelines using Python, SQL, Airflow, and Kafka to ingest and transform data from multiple internal and external sources.

  • Cloud Infrastructure: Develop and manage cloud‑native infrastructure on AWS (S3, SQS, RDS, Terraform), ensuring security, scalability, and cost efficiency.

  • API & Services Development: Build and maintain FastAPI‑based services and APIs for data access, metadata, and platform operations, enabling self‑service consumption by downstream users.

  • Performance & Reliability: Optimize queries, workflows, and resource usage to deliver low‑latency data access and high platform uptime; introduce monitoring, alerting, and automated testing (PyTest, CI/CD).

  • Collaboration & Best Practices: Partner with quantitative researchers, data scientists, and other engineers to understand requirements, translate them into platform capabilities, and promote best practices in code quality, DevOps, and documentation.
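To give a flavour of the pipeline work described above, here is a minimal, stdlib-only sketch of the ingest → transform → load pattern. It is purely illustrative: a real pipeline on this team would run under Airflow with Kafka and cloud storage, and the feed, column names, and table are invented for this example.

```python
# Hypothetical sketch of a batch ingest -> transform -> load step.
# Stdlib only; in production this logic would live inside an Airflow task.
import csv
import io
import sqlite3

# Invented raw feed standing in for an external commodities data source.
RAW_CSV = """region,storage_twh,report_date
EU,55.2,2024-01-07
US,61.8,2024-01-07
"""

def ingest(raw: str) -> list:
    """Parse a raw CSV feed into dict records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(records: list) -> list:
    """Normalize types and drop malformed rows."""
    out = []
    for r in records:
        try:
            out.append((r["region"], float(r["storage_twh"]), r["report_date"]))
        except (KeyError, ValueError):
            # A real pipeline would route bad rows to a dead-letter queue.
            continue
    return out

def load(rows: list, conn: sqlite3.Connection) -> int:
    """Write normalized rows to query-optimized storage; return row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS storage (region TEXT, twh REAL, report_date TEXT)"
    )
    conn.executemany("INSERT INTO storage VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM storage").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(ingest(RAW_CSV)), conn)
print(loaded)  # → 2
```

Separating the three stages into pure functions keeps each one independently testable, which is what makes the monitoring and automated-testing goals above tractable.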

Required Qualifications

  • Experience: 4–8 years of software/data engineering experience, preferably building or operating data platforms or large‑scale data pipelines.

  • Programming: Strong proficiency in Python with solid software engineering practices (testing, code review, CI/CD).

  • Data & SQL: Hands‑on experience with SQL and relational databases (Snowflake, Postgres, or similar); understanding of data modelling and query optimization.

  • Orchestration & Streaming: Practical experience with Airflow (or similar workflow orchestration tools) and message/streaming systems such as Kafka.

  • Cloud & Infrastructure as Code: Experience with AWS services (S3, SQS, RDS) and infrastructure‑as‑code tools such as Terraform.

  • APIs & Services: Experience building RESTful services, ideally with FastAPI or a similar Python web framework.

  • DevOps: Familiarity with Git‑based workflows and CI/CD tooling (e.g., GitHub Actions) and automated testing frameworks (PyTest).

  • Soft Skills: Strong communication skills, ability to work in a distributed team, and a pragmatic, ownership‑driven mindset.
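The testing practice listed above can be illustrated with a short PyTest-style sketch. PyTest collects plain `test_*` functions and runs bare `assert` statements; the function under test here is invented for illustration.

```python
# Hypothetical sketch of PyTest-style testing (the rolling_mean function
# is invented for this example, not part of any real platform code).

def rolling_mean(values: list, window: int) -> list:
    """Trailing rolling mean; shorter leading windows use what is available."""
    if window < 1:
        raise ValueError("window must be >= 1")
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1) : i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def test_rolling_mean_basic():
    assert rolling_mean([2.0, 4.0, 6.0], 2) == [2.0, 3.0, 5.0]

def test_rolling_mean_rejects_bad_window():
    try:
        rolling_mean([1.0], 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

# Under PyTest these run automatically; called directly here for illustration.
test_rolling_mean_basic()
test_rolling_mean_rejects_bad_window()
print("ok")  # → ok
```

In a CI/CD setup such as GitHub Actions, `pytest` would run these on every push before changes reach the platform.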

Preferred Qualifications

  • Experience with columnar/analytic data formats and engines (e.g., Iceberg, ClickHouse, Parquet).

  • Exposure to monitoring/observability stacks (Prometheus, Grafana, OpenTelemetry, etc.).

  • Prior experience in financial markets or commodities data environments.

  • Experience working in high‑impact, globally distributed engineering teams.

Other facts

Tech stack
Python, SQL, Data Engineering, Cloud Infrastructure, AWS, Terraform, APIs, FastAPI, Data Pipelines, Orchestration, Kafka, DevOps, CI/CD, Monitoring, Data Modeling, Query Optimization

About Millennium

Millennium is a global, diversified alternative investment firm, founded in 1989, which manages $83.5 billion in assets. Defined by evolution, innovation and focus, Millennium's mission is to deliver high-quality returns for our investors.

Millennium seeks to empower talented professionals with the sophisticated expertise, resources and technology to pursue a diverse range of investment strategies across industry sectors, asset classes and geographies.

See our community guidelines at: mlp.com/guidelines

Read our disclosures at: https://www.mlp.com/disclosures/

Team size: 5,001-10,000 employees
Industry: Investment Management
Founding Year: 1989

Frequently Asked Questions

What does a Data Platform Engineer – Commodities Technology do at Millennium?

As a Data Platform Engineer – Commodities Technology at Millennium, you will design and implement the core platform infrastructure, APIs, and event-driven services for commodities data. Responsibilities include building data pipelines, managing cloud infrastructure, and optimizing performance and reliability.

Why join Millennium as a Data Platform Engineer – Commodities Technology?

Millennium is a global alternative investment management firm founded in 1989, managing $83.5 billion in assets, with engineering teams across the US, Europe, Singapore, and India.

Is the Data Platform Engineer – Commodities Technology position at Millennium remote?

The Data Platform Engineer – Commodities Technology position at Millennium is based in Bengaluru, India. Contact the company through Clera for specific work arrangement details.

How do I apply for the Data Platform Engineer – Commodities Technology position at Millennium?

You can apply for the Data Platform Engineer – Commodities Technology position at Millennium directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process. You can also learn more about Millennium on their website.