Job Title: Data Engineer
Location: Phoenix, AZ
Work Type: Hybrid (3 days onsite per week)
Employment Type: Contract (C2C)
Candidate Criteria:
- Only Female Candidates
- Experience: 6+ years
- Visa Status: OPT/CPT, GC EAD, H1B, and USC accepted
- Green Card Holders: Not eligible
About the Role:
We are seeking an experienced Data Engineer to join an Agile, SDLC-based team supporting large-scale data initiatives at our firm. The ideal candidate will be a hands-on individual contributor with strong data engineering, analytics, and ETL development experience.
Key Responsibilities:
- Work as an individual contributor in an Agile SDLC environment
- Design, develop, and maintain scalable ETL pipelines using Python and PySpark
- Analyze, transform, and manage large datasets across data warehouses and data lakes
- Develop complex SQL queries across multiple RDBMS platforms
- Integrate data across systems using REST, SOAP, ETL, and SSIS
- Build and support reports and dashboards using BI tools
- Collaborate with cross-functional teams to deliver high-quality data solutions
- Write efficient, reusable, and maintainable code for data processing pipelines
- Learn and adapt to new cloud-based tools and platforms as required
Required Skills:
- Strong Data Analytics background
- Proficiency in MySQL
- Strong hands-on experience with Python and PySpark
- Expertise in ETL, Big Data, and Data Warehousing concepts
- Expert-level knowledge of at least one object-oriented programming language, such as C++ or Java
- Advanced knowledge of SQL (SQL Server, DB2, Oracle)
- Experience with:
  - Stored procedures, triggers, DML packages, and materialized views
  - Data modeling (schemas, entity relationships)
  - BI tools: Tableau, Power BI, or Qlik
  - System integrations using REST, SOAP, ETL, and SSIS
  - Node.js and JSON
- Familiarity with Apache Airflow and DAG development
Preferred / Plus Skills:
- Experience with Apache Spark
- Exposure to cloud-based data platforms
- Willingness to upskill on emerging tools and technologies