Are You Ready to Make It Happen at Mondelēz International?
Join our Mission to Lead the Future of Snacking. Make It Uniquely Yours.
Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.
How you will contribute
A key aspect of the MDLZ Data Hub Google BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for different data drivers (> 6 months vs. 0-6 months), ensuring consistent input to o9.
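To make this concrete, below is a minimal sketch of the kind of standardization step such a pipeline might perform, assuming the google-cloud-bigquery Python client. The project, dataset, table, and column names (raw_pos_inbound, o9_global_pos, and so on) are hypothetical placeholders, not the actual MDLZ or o9 schema.

# A minimal sketch, not the actual MDLZ pipeline: map a market-specific
# inbound PoS feed onto a standardized global layout in BigQuery.
# All table and column names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

standardize_sql = """
INSERT INTO `my_project.data_hub.o9_global_pos` (customer_id, sku, week_start, units)
SELECT
  CAST(cust_code AS STRING)          AS customer_id,
  UPPER(TRIM(material_number))       AS sku,
  DATE_TRUNC(txn_date, WEEK(MONDAY)) AS week_start,
  SUM(qty)                           AS units
FROM `my_project.data_hub.raw_pos_inbound`
WHERE txn_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 6 MONTH)  -- the 0-6 month driver
GROUP BY 1, 2, 3
"""

client.query(standardize_sql).result()  # blocks until the job completes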
What you will bring
A desire to drive your future and accelerate your career. You will bring experience and knowledge in:
6+ years of overall industry experience, including a minimum of 6-8 years of experience building and deploying large-scale data processing pipelines in a production environment
Focus on excellence: Has practical experience with data-driven approaches; is familiar with the application of data security strategy and with well-known data engineering tools and platforms
Technical depth and breadth: Able to build and operate data pipelines and data storage; has worked on big data architecture within distributed systems; is familiar with infrastructure definition and automation in this context; is aware of technologies adjacent to those they have worked with and can speak to the alternative tech choices to those made on their projects.
Implementation and automation of internal data extraction from SAP BW / HANA
Implementation and automation of external data extraction from openly available internet data sources via APIs
Data cleaning, curation, and enrichment using Alteryx, SQL, Python, R, PySpark, and SparkR
Preparing consolidated data marts for use by data scientists and managing SQL databases
Exposing data via Alteryx and SQL databases for consumption in Tableau
Maintaining and updating data documentation
Collaboration and workflow using a version control system (e.g., GitHub)
Learning ability: Is self-reflective, has a hunger to improve, has a keen interest in driving their own learning, and applies theoretical knowledge to practice
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Data engineering concepts: Experience working with data lakes, data warehouses, and data marts; has implemented ETL/ELT and slowly changing dimension (SCD) concepts.
ETL or data integration tools: Experience with Talend is highly desirable.
Analytics: Fluent in SQL and PL/SQL; has used analytics tools such as BigQuery for data analytics
Cloud experience: Experienced with GCP services such as Cloud Functions, Cloud Run, Dataflow, Dataproc, and BigQuery
Data sources: Experience working with structured data sources such as SAP BW, flat files, and RDBMS, and with semi-structured sources such as PDF, JSON, and XML
Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
Data Processing: Experience working with data processing platforms such as Dataflow or Databricks.
Orchestration: Experience orchestrating and scheduling data pipelines using tools such as Airflow or Alteryx (a minimal sketch follows this list)
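For illustration, here is a minimal Airflow sketch of the kind of orchestration described above. The DAG id, schedule, and task callables are hypothetical placeholders rather than actual MDLZ pipelines.

# A minimal orchestration sketch, assuming Airflow 2.x; all names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_sap_bw():
    """Pull the latest inbound extract from SAP BW (stubbed here)."""
    ...

def load_to_bigquery():
    """Standardize and load the extract into BigQuery (stubbed here)."""
    ...

with DAG(
    dag_id="data_hub_daily_load",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",          # nightly, ahead of business hours
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_sap_bw", python_callable=extract_sap_bw)
    load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)

    extract >> load  # load runs only after extraction succeeds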
Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Mondelēz International, Inc. (NASDAQ: MDLZ) is an American multinational confectionery, food, and beverage company based in Illinois, which employs approximately 80,000 individuals around the world.
Our Purpose
Our purpose is to empower people to snack right. We will lead the future of snacking around the world by offering the right snack, for the right moment, made the right way.
Our Brands
We’re leading the future of snacking with iconic brands such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum.
Our People
Our 80,000+ colleagues around the world are key to the success of our business. Our Values and Leadership Commitments of Love our Consumers and Brands, Grow Every Day, and Do What's Right shape our culture – what we believe in, stand for, and what guides our actions and decisions. Great people and great brands. That’s who we are.
Our Strategies
We are uniquely positioned to lead the future of snacking with strong leadership in our categories, an unparalleled portfolio of global and local brands and a solid footprint in fast-growing markets. Aimed at delivering sustainable growth, our strategic plan is centered around three strategic priorities:
• Growth: accelerate consumer-centric growth
• Execution: drive operational excellence
• Culture: build a winning growth culture
Take the next step in your career journey