Sr. Data Engineer/Tech Lead
Full-time · Bangalore South

Summary

Location: Bangalore South
Type: Full-time

About this role

At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We’re looking for people who are determined to make life better for people around the world.

As a Senior Data Engineer, you will:

  • Demonstrate expert skills in ETL/ELT, data integration, MLOps, and SQL, as well as intermediate to advanced skills in Python, PySpark, AI/ML, and data visualization
  • Review, optimize, and document data pipelines, mappings, cleansing logic, and visual designs, and mentor data/visualization engineers across a variety of tools and platforms
  • Break down moderately complex problems and implement solutions for increased business impact
  • Support other team members, help them succeed, and actively share learnings with the team
  • Drive and enforce team process improvements, bringing others along in understanding the benefits and tradeoffs
  • Actively promote new and innovative ideas across multiple teams and capabilities

Key Responsibilities

Hands-On Development (75%)

  • Build and maintain scalable data platforms and infrastructure on AWS
  • Implement end-to-end data pipelines for batch and real-time data processing
  • Build robust ETL/ELT workflows to ingest, transform, and load data from diverse sources
  • Implement data lake/lakehouse architectures using AWS S3, Glue, Athena, and Lake Formation
  • Design and optimize data warehouse solutions (Redshift, Snowflake) for analytics and reporting
  • Establish data quality frameworks and automated monitoring systems
  • Write production-quality Python code for data processing, transformation, and automation
  • Build scalable data pipelines using Apache Airflow, AWS Step Functions, or similar orchestration tools (see the sketch after this list)
  • Develop streaming data solutions using Kinesis, Kafka, or AWS MSK
  • Optimize SQL queries and database performance for large-scale datasets
  • Implement data validation, cleansing, and quality checks
  • Build APIs and microservices for data access and integration
  • Create monitoring, alerting, and observability solutions for data pipelines
  • Debug and resolve data pipeline failures and performance bottlenecks
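
For illustration only, here is a minimal sketch of the kind of daily batch pipeline these responsibilities describe, written against Apache Airflow's TaskFlow API (Airflow 2.x). The DAG, task logic, and data are hypothetical placeholders rather than Lilly systems; in production, the transform and load steps would typically hand off to Glue, EMR, or a warehouse load rather than run in-process.

```python
# Illustrative sketch only: a minimal daily batch ETL DAG using Airflow's
# TaskFlow API. Sources, transforms, and targets are placeholders.
import pendulum
from airflow.decorators import dag, task


@dag(
    schedule="@daily",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
    tags=["example"],
)
def daily_orders_etl():
    @task
    def extract() -> list[dict]:
        # In practice: read from S3, an API, or a source database.
        return [{"order_id": 1, "amount": 120.5}, {"order_id": 2, "amount": -3.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Basic cleansing/quality check; a real pipeline might push this to Glue or Spark.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for a Redshift/Snowflake load or a partitioned Parquet write to S3.
        print(f"loading {len(rows)} valid rows")

    load(transform(extract()))


daily_orders_etl()
```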

Technical Leadership & Collaboration (25%)

  • Mentor junior and mid-level data engineers through code reviews and technical guidance
  • Establish best practices for data engineering, testing, and deployment
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements
  • Work with ML engineers to build data pipelines supporting machine learning workflows
  • Partner with platform/infrastructure teams on cloud architecture and cost optimization
  • Lead technical design discussions and architectural reviews
  • Document data architectures, pipelines, and processes
  • Evangelize data engineering best practices across the organization

Required Qualifications

Technical Expertise

  • 10+ years of professional experience in data engineering or related roles
  • Expert-level proficiency in Python for data engineering:
    • Data processing libraries: Pandas, PySpark, Dask, Polars
    • API development: FastAPI, Flask
    • Testing: Pytest, unittest
  • Strong AWS expertise with hands-on experience in:
    • Data Storage: S3, RDS/Aurora, DynamoDB, Redshift
    • Data Processing: Glue (ETL jobs, crawlers, Data Catalog), EMR, Athena
    • Streaming: Kinesis (Data Streams, Firehose, Analytics), MSK (Managed Kafka)
    • Orchestration: Step Functions, EventBridge, Lambda
    • Analytics: QuickSight, Athena, Redshift Spectrum
    • Data Lake: Lake Formation, Glue Data Catalog
    • Infrastructure: CloudFormation, CDK, IAM, VPC, CloudWatch
  • Workflow Orchestration:
    • Apache Airflow (strong preference)
  • Big Data Technologies:
    • Apache Spark (PySpark) for distributed data processing (see the sketch after this list)
    • Experience with EMR, Databricks, or similar platforms
    • Understanding of distributed computing concepts
    • Parquet, Avro, ORC file formats
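
As a rough, hypothetical illustration of the Spark and Parquet items above, the following PySpark sketch reads raw CSV from S3, applies a basic cleansing step, and writes date-partitioned Parquet to a curated bucket. The bucket paths and column names are invented for the example.

```python
# Illustrative sketch only: batch cleansing with PySpark, writing partitioned
# Parquet. Paths and columns are hypothetical examples, not real datasets.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleanse").getOrCreate()

raw = (
    spark.read.option("header", "true")
    .csv("s3://example-raw-bucket/orders/")  # hypothetical source location
)

clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
    .withColumn("order_date", F.to_date("order_ts"))
)

(
    clean.write.mode("overwrite")
    .partitionBy("order_date")  # partition layout keeps Athena/Glue scans cheap
    .parquet("s3://example-curated-bucket/orders/")  # hypothetical target location
)
```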

Architecture & Design

  • Solid understanding and implementation knowledge of data modeling (dimensional modeling, star/snowflake schemas); see the sketch after this list
  • Experience with both batch and streaming data processing patterns
  • Knowledge of data lake, data warehouse, and lakehouse architectures
  • Understanding of data partitioning, bucketing, and optimization strategies
  • Expertise in designing for data quality, lineage, and governance
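
A minimal sketch of the dimensional-modeling point above, assuming a metastore-backed lakehouse: business keys in a staging table are resolved to dimension surrogate keys to build a star-schema fact table in PySpark. All table and column names are hypothetical.

```python
# Illustrative sketch only: building a star-schema fact table by resolving
# business keys to surrogate keys. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fact-sales-build").getOrCreate()

orders = spark.table("staging.orders")                 # hypothetical staging table
dim_customer = spark.table("warehouse.dim_customer")   # hypothetical dimension
dim_date = spark.table("warehouse.dim_date")

fact_sales = (
    orders
    .join(dim_customer, orders.customer_id == dim_customer.customer_bk, "left")
    .join(dim_date, F.to_date(orders.order_ts) == dim_date.calendar_date, "left")
    .select(
        dim_customer.customer_sk,   # surrogate keys link back to the dimensions
        dim_date.date_sk,
        orders.order_id,
        orders.amount.alias("sales_amount"),
    )
)

fact_sales.write.mode("append").saveAsTable("warehouse.fact_sales")
```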

DevOps & Best Practices

  • Strong experience with CI/CD pipelines for data engineering (GitHub Actions, GitLab CI, Jenkins)
  • Infrastructure as Code using Terraform, CloudFormation, or AWS CDK
  • Containerization with Docker; experience with ECS/Fargate/Kubernetes is a plus
  • Git version control and branching strategies
  • Monitoring and observability tools: CloudWatch, Grafana
  • Data pipeline testing strategies and frameworks (see the test sketch after this list)
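
As a small, hypothetical example of the pipeline-testing point above, a pytest-style unit test for a single transform step might look like the following; the transform function itself is an invented placeholder.

```python
# Illustrative sketch only: pytest-style tests for a hypothetical cleansing step.
import pandas as pd


def drop_invalid_amounts(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical transform: drop rows with missing or non-positive amounts."""
    return df[df["amount"].notna() & (df["amount"] > 0)].reset_index(drop=True)


def test_invalid_rows_are_removed():
    raw = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, None, -5.0]})
    cleaned = drop_invalid_amounts(raw)
    assert list(cleaned["order_id"]) == [1]


def test_schema_is_preserved():
    raw = pd.DataFrame({"order_id": [1], "amount": [10.0]})
    assert list(drop_invalid_amounts(raw).columns) == ["order_id", "amount"]
```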

Preferred Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or related field (or equivalent experience)
  • Experience in regulated industries (healthcare/pharma, finance, government) with compliance requirements
  • Hands-on experience with:
    • Additional AWS services: Glue DataBrew, AppFlow, Data Pipeline, Lambda, SageMaker
    • Streaming platforms: Apache Kafka, Confluent, AWS MSK
    • Data quality tools: Great Expectations, dbt, Monte Carlo, Bigeye
    • Data cataloging: AWS Glue Data Catalog, Alation, Collibra (see the sketch after this list)
    • Alternative clouds: GCP (BigQuery, Dataflow), Azure (Synapse, Data Factory)
    • Data orchestration: dbt for transformation workflows
  • Experience with clinical data, life sciences, or statistical computing domains (CDISC standards, clinical trials data)
  • Knowledge of data mesh or data fabric architectures
  • Experience building data platforms for ML/AI workloads
  • Familiarity with data governance and metadata management frameworks
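
As a brief, hypothetical taste of the data-cataloging item above, the sketch below lists tables registered in the AWS Glue Data Catalog via boto3. The database name is a placeholder, and credentials are assumed to come from the environment (for example, an IAM role).

```python
# Illustrative sketch only: listing Glue Data Catalog tables with boto3.
# The database name is a placeholder; credentials come from the environment.
import boto3

glue = boto3.client("glue")

response = glue.get_tables(DatabaseName="example_curated_db")
for table in response["TableList"]:
    print(table["Name"], table.get("TableType", ""))
```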

Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (https://careers.lilly.com/us/en/workplace-accommodation) for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.

Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.

#WeAreLilly

Other facts

Tech stack
ETL/ELT, Data Integration, MLOps, SQL, Python, PySpark, AI/ML, Data Visualization, AWS, Apache Airflow, Data Lake, Data Warehouse, Big Data Technologies, DevOps, Data Quality, Data Governance, Containerization

About Lilly

We're a medicine company turning science into healing to make life better for people around the world. It all started nearly 150 years ago with a clear vision from founder Colonel Eli Lilly: "Take what you find here and make it better and better." Harnessing the power of biotechnology, chemistry and genetic medicine, our scientists are urgently advancing science to solve some of the world's most significant health challenges.


Team size: 10,001+ employees
Industry: Pharmaceutical Manufacturing

What you'll do

  • The Senior Data Engineer will build and maintain scalable data platforms and infrastructure on AWS, implementing end-to-end data pipelines for batch and real-time data processing. They will also mentor junior engineers and establish best practices for data engineering.

Ready to join Lilly?

Take the next step in your career journey

Frequently Asked Questions

What does a Sr. Data Engineer/Tech Lead do at Lilly?

As a Sr. Data Engineer/Tech Lead at Lilly, you will build and maintain scalable data platforms and infrastructure on AWS, implement end-to-end data pipelines for batch and real-time data processing, mentor junior and mid-level data engineers, and establish best practices for data engineering, testing, and deployment.

Why join Lilly as a Sr. Data Engineer/Tech Lead?

Lilly is a leading pharmaceutical manufacturing company and a global healthcare leader headquartered in Indianapolis, Indiana, dedicated to making life better for people around the world.

Is the Sr. Data Engineer/Tech Lead position at Lilly remote?

The Sr. Data Engineer/Tech Lead position at Lilly is based in Bangalore South, Karnataka, India. Contact the company through Clera for specific work arrangement details.

How do I apply for the Sr. Data Engineer/Tech Lead position at Lilly?

You can apply for the Sr. Data Engineer/Tech Lead position at Lilly directly through Clera. Click the "Apply Now" button above to start your application. Clera's AI-powered platform will help match your profile with this opportunity and guide you through the application process. You can also learn more about Lilly on their website.