Job Duties:

ETL Engineering & Data Pipeline Development
- Design, build, and maintain Azure-based ETL pipelines (e.g., Data Factory, Databricks, Data Lake) to ingest, clean, transform, and aggregate compensation-related datasets across multiple regions.
- Ensure repeatability, monitoring, orchestration, and error handling for all ingestion and transformation workflows.
- Contribute to the creation of a master stitched data file to replace Varicent's current data-assembly functions.
- Build, configure, and maintain a rules engine (ODM, Drools, or similar) to externalize business logic previously embedded in code.
- Implement versioning, governance, and validation mechanisms for all logic used in compensation calculations.
- Ensure rule changes can be managed safely, reducing risk in high-stakes compensation scenarios.
- Develop optimized, scalable physical data models aligned to business logic and downstream needs.
- Build reusable, parameterized data pipelines and frameworks supporting long-term extensibility.
- Work closely with business analysts, data analysts, architects, and product owners across NA and Europe.
- Participate in data discovery sessions, helping interpret and validate logic, rules, and data patterns.
- Support three Scrum teams delivering compensation modernization, ensuring clarity on transformations and dependencies.
- Collaborate with QA, data quality testers, and governance teams to enforce validation standards.

Quality, Performance & Reliability
- Implement data quality checks, profiling, reconciliation, and alerting across ingestion and transformation pipelines.
- Engineer performance-optimized pipelines capable of processing large, complex datasets multiple times per month.
- Ensure compliance with audit, traceability, and business continuity expectations for compensation data.

- 5-8+ years of experience as a Data Engineer with strong hands-on expertise in Azure (Data Factory, Databricks, Data Lake Storage, SQL; Synapse preferred).
- Proven ability to build production-grade ETL/ELT pipelines supporting complex, multi-regional business processes.
- Experience designing or implementing rules engines (Drools, ODM, or similar).
- Strong SQL skills and experience with data modeling, data orchestration, and pipeline optimization.
- Experience working in Agile Scrum teams and collaborating across global regions (U.S. and India preferred).
- Ability to partner closely with analysts and business stakeholders to translate rules into technical solutions.

Minimum Skills Required: SQL, Python, Azure Data Factory, Databricks, Azure Synapse
NTT DATA – a part of NTT Group – is an IT and business services company headquartered in Tokyo. We help clients transform through consulting, industry solutions, business process services, digital and IT modernization, and managed services, enabling them – and society – to move confidently into the digital future. We are committed to our clients' long-term success and combine global reach with local client attention to serve them in more than 50 countries around the globe.
Take the next step in your career journey