Job Title
Senior Ab Initio Developer – Data Engineering & Analytics (AI Integration & BI Enablement)
Location
Bangalore, Pune (Hybrid)
Role Summary
We are seeking a seasoned Senior Ab Initio Developer to design, build, and optimize enterprise-grade data pipelines and metadata-driven frameworks on the Ab Initio platform (GDE, Co>Operating System, EME, Metadata Hub, Conduct>It). You will work hand in hand with the AI/ML and BI/Reporting teams to deliver trusted, well-governed, and performant data for analytics, machine learning, and executive dashboards. You will lead integration patterns between Ab Initio and modern BI tools, automate data quality controls, and establish standards for observability, lineage, and governance.
Key Responsibilities
Ab Initio Engineering & Platform Excellence
- Design and implement ETL/ELT pipelines using Ab Initio GDE, Co>Operating System, Conduct>It, and EME, with a focus on scalability, reliability, and maintainability.
- Develop reusable components, graphs, and plans; enforce coding standards, modularization, and metadata-driven designs.
- Configure and maintain scheduling, parameterization, error handling, restartability, and performance tuning (parallelism, partitioning, vectorization).
Data Quality, Governance & Lineage
- Implement DQ rules, profiling, reconciliation checks, and threshold-based alerts within Ab Initio (e.g., Data Quality/Metadata Hub).
- Integrate with data catalog/lineage tools; ensure EME version control, audit trails, and regulatory compliance (PII handling, encryption, masking).
Integration with AI & BI
- Partner with AI engineers and data scientists to provision feature-ready datasets (batch/near-real-time), standardize feature stores, and streamline MLOps handoffs.
- Establish integration patterns between Ab Initio and BI tools (e.g., Power BI, Tableau, Qlik, Looker) via warehouse/semantic layers, materialized views, and data marts.
- Optimize data refresh SLAs and semantic consistency for downstream BI; design CDC/streaming ingestion (Kafka, CDC tools) where applicable.
Architecture & Cloud/Data Platforms
- Contribute to data architecture decisions: warehouse/lakehouse design, star/snowflake schemas, conformed dimensions, and SCD strategies.
- Orchestrate data movement across on-prem and cloud platforms (e.g., Azure, AWS, GCP) while maintaining cost, performance, and security posture.
- Evaluate and implement APIs, microservices, or data product contracts for cross-team consumption.
Stakeholder Management & Delivery
- Work with product owners, solution architects, and business analysts to refine requirements and translate them into technical designs.
- Lead code reviews, mentor junior engineers, and drive DevOps practices (CI/CD pipelines for Ab Initio artifacts, automated testing).
- Produce high-quality documentation (design specs, runbooks, lineage maps, SLAs) and contribute to operational readiness.
Required Skills & Experience
- 8–12+ years of hands-on experience with Ab Initio (GDE, Co>Operating System, EME, Metadata Hub, Conduct>It).
- Deep expertise in ETL performance tuning, parallelism, graph optimization, error handling, and restartability.
- Strong SQL skills and data modeling (dimensional/star/snowflake, SCD, surrogate keys, CDC).
- Experience integrating Ab Initio outputs with BI tools (e.g., Power BI/Tableau/Qlik), data warehouses (e.g., Azure Synapse, Snowflake, BigQuery, Redshift), and lakehouses (e.g., Delta Lake).
- Solid understanding of data quality, lineage, metadata management, and governance controls.
- Familiarity with cloud services (Azure/AWS/GCP), storage formats (Parquet/ORC/Avro), and security (KMS, encryption, RBAC).
- Exposure to AI/ML data needs: feature engineering workflows, dataset versioning, reproducibility, and MLOps handoff patterns.
- Experience with scripting (Unix shell, Python) and DevOps/CI-CD (Git, pipelines, artifacts).
- Strong communication and stakeholder skills; ability to lead design reviews and mentor.
Nice-to-Have (Preferred)
- Experience with Kafka, Debezium, or other streaming/CDC tools.
- Hands-on with feature stores (e.g., Feast) or ML pipelines (Azure ML, SageMaker).
- Knowledge of data catalog tools (e.g., Purview, Collibra, Alation) and lineage integration.
- Familiarity with policy-as-code and data contracts for governed integrations.
- Exposure to containerization (Docker/Kubernetes) for ancillary services.
At Zensar, we’re “experience-led everything”. We are committed to conceptualizing, designing, engineering, marketing, and managing digital solutions and experiences for over 130 leading enterprises. We are a company driven by a bold purpose: Together, we shape experiences for better futures. Whether for our clients, our people, or the world around us, this belief powers everything we do. At the heart of our culture is ONE with Client - a set of four core values that reflect who we are and how we work: One Zensar, Nurturing, Empowering, and Client Focus.
Part of the $4.8 billion RPG Group, we’re a community of 10,000+ innovators across 30+ global locations, including Milpitas, Seattle, Princeton, Cape Town, London, Zurich, Singapore, and Mexico City. Explore Life at Zensar and join us to Grow. Own. Achieve. Learn. and become the best version of yourself.
We believe the best work happens when individuality is celebrated, growth is encouraged, and well-being is prioritized. We are an equal employment opportunity (EEO) and affirmative action employer, committed to creating an inclusive workplace. All qualified applicants will be considered without regard to race, creed, color, ancestry, religion, sex, national origin, citizenship, age, sexual orientation, gender identity, disability, marital status, family medical leave status, or protected veteran status.