Immediate Joiner | Seeking New Opportunities in Software Engineering | Java Developer | Python Developer | Ruby Developer | Data Engineer | Cloud Computing | Backend Developer | Open to Work
Skills: Java, J2EE, Spring Framework, Hibernate, JSP, Servlets, JDBC, RESTful APIs, Microservices, Maven, Git, Agile, Scrum, JavaScript, HTML, CSS, SQL, Relational Databases (MySQL, PostgreSQL, Oracle), NoSQL Databases (MongoDB), OOP, Design Patterns, TDD, CI/CD, Eclipse, IntelliJ IDEA, JUnit, RESTful Web Services, SOAP, JSON, XML, Front-end frameworks (Angular, React, Vue.js), Cloud platforms (AWS, Azure, Google Cloud), Docker, Kubernetes, DevOps, Jenkins, JIRA, Problem-solving, Analytical skills, Communication skills, Team player, Self-motivated, SDLC, Python, Postgres RDS, Unix, Hadoop (HBase, Hive, HDFS, MapReduce), Jupyter, Apache Spark, Scala, Kafka, Tomcat, jQuery, Apache CXF, JCE, SAML, SAX, Solr, Cryptography, Elasticsearch, PySpark, Big Data, Flink, Flume, Couchbase, YARN, Data Warehousing, ETL, Data Integration, Pig, Sqoop, Oozie, Zookeeper, Apache Airflow, Apache NiFi, Data Lakes, Data Pipelines, Batch Processing, Stream Processing, Data Governance, Data Quality, Machine Learning, Deep Learning, Mahout, TensorFlow, PyTorch, MXNet, Data Security, Impala, Drill, Kylin, Data Engineering, Data Migration, Data Architecture, Distributed Systems, Scalability, Performance Tuning, Apache Ranger, Apache Atlas, Hadoop Administration, Hadoop Ecosystem.
Walmart Global Tech · Contract
City University of Seattle · Part-time
Created a Java-based test automation framework for student reports with comprehensive logging, improving grading efficiency, and contributed to front-end development with JavaScript.
Java Programming: Collaborating closely with faculty and researchers on Java-based projects; participating in the design, development, debugging, and optimization of Java applications and systems.
Python Programming: Developing, debugging, and optimizing Python code for data analysis, scripting, and automation in faculty research projects.
Big Data Analysis: Working on projects involving large datasets and Big Data technologies; applying data cleansing, analysis, and visualization techniques to extract insights aligned with research objectives (a small data-cleansing sketch follows this list).
Technical Support: Assisting professors, researchers, and fellow students on Java, Python, and Big Data projects; troubleshooting issues and advising on best practices to keep projects moving.
Documentation: Maintaining well-organized documentation of Java projects, Python scripts, data analysis methodologies, and project progress for future reference and collaboration.
Collaboration: Working with interdisciplinary teams across the university to support a range of research initiatives, sharing Python and Big Data expertise.
Continual Learning: Staying current with developments in Python programming and Big Data technologies through relevant training sessions and workshops.
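A minimal sketch of the kind of data-cleansing and summary step described above; the file name and column names are hypothetical, not taken from the actual coursework or research data.

```python
import pandas as pd

# Hypothetical dataset of student report submissions.
df = pd.read_csv("student_reports.csv")

# Basic cleansing: drop duplicate rows and records missing a score.
df = df.drop_duplicates()
df = df.dropna(subset=["score"])

# Normalize free-text course codes before grouping.
df["course"] = df["course"].str.strip().str.upper()

# Simple insight: average score and submission count per course.
summary = (
    df.groupby("course")["score"]
      .agg(["mean", "count"])
      .rename(columns={"mean": "avg_score", "count": "submissions"})
)
print(summary.sort_values("avg_score", ascending=False))
```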
Monolith to microservices with Docker and Kubernetes: Converted a single, large monolithic application into a microservices architecture, using Docker and Kubernetes to facilitate the shift. Outcomes: greater modularity and flexibility during development and maintenance; individual microservices are simpler to update and maintain than the monolith was; hosting and maintenance costs fell by 20%.
Jenkins build pipeline: The application was previously built and deployed manually, taking about eight hours per cycle. Implemented an automated build pipeline in Jenkins, cutting the process to about one hour.
GDPR compliance strategy: The existing system stored some sensitive user data in plaintext, a security risk under the General Data Protection Regulation. Designed and implemented a compliance solution, likely based on encryption or other data-protection techniques (a minimal encryption sketch follows this list).
Backward-compatible REST API: Multiple versions of the REST API (1.4, 1.2, and 1.1) were specified for different clients, forcing clients to adjust whenever versions changed. Developed a single API that guarantees backward compatibility across version changes, so clients no longer have to change their integrations when the API is updated (a versioning sketch follows this list).
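The GDPR project is described only at a high level; as one plausible shape for it, here is a minimal sketch of encrypting a sensitive field before storage using the `cryptography` library's Fernet recipe. The field contents and key handling are illustrative assumptions, not the actual design.

```python
from cryptography.fernet import Fernet

# Illustrative only: in production the key would come from a secrets
# manager or KMS, never be generated ad hoc at startup.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a sensitive field (e.g. an email) before persisting it."""
    return fernet.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes) -> str:
    """Decrypt a stored field on an authorized read path."""
    return fernet.decrypt(token).decode("utf-8")

ciphertext = encrypt_field("user@example.com")
print(decrypt_field(ciphertext))  # -> user@example.com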
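The API itself isn't shown in the profile; as a hedged illustration of the "single API, consistent across versions" idea, here is a Flask sketch in which the legacy version prefixes (1.1, 1.2, 1.4) all route to one handler that only adapts the response shape. Endpoint, resource, and field names are hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def load_order(order_id: int) -> dict:
    """Canonical internal representation, regardless of API version."""
    return {"id": order_id, "status": "SHIPPED", "amount_cents": 1999}

# One handler serves every legacy version; the version segment only
# controls the response shape, so clients keep the URLs they already use.
@app.route("/api/v<version>/orders/<int:order_id>")
def get_order(version: str, order_id: int):
    order = load_order(order_id)
    if version in ("1.1", "1.2"):
        # Older clients expect a flat dollar amount.
        return jsonify({"id": order["id"],
                        "status": order["status"],
                        "amount": order["amount_cents"] / 100})
    # 1.4 and later get the canonical shape.
    return jsonify(order)

if __name__ == "__main__":
    app.run()
```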
Spark job for data masking on Hive warehouse data: PayPal stores roughly 10 petabytes of big data in a Hive cluster running an older Hive version (1.1) that does not support certain updates. Developed a PySpark job that identifies GDPR (General Data Protection Regulation) user data in tables and masks selected primary-key columns according to a configuration; the job processes 2 million records in about 30 minutes (a masking sketch follows this list).
Data pipeline using Spark, Hive, and Kafka: Designed and implemented a data pipeline to specific client requirements; data flows through the pipeline with transformation and processing steps as needed (a pipeline sketch follows this list).
Standalone Java application for Kafka data processing: Built a standalone Java application that listens to a Kafka topic; when XML data arrives, it parses the XML, extracts the data, and stores it in Hive tables. The application acts as a Kafka consumer, processing data in batches of 50,000 records (a consumer sketch follows this list).
REST service using Spring and Java for the PayPal Honey entity: Developed a RESTful web service with the Spring framework for the Honey entity, likely a component or module within the larger PayPal system, giving external systems and clients an interface to its functionality.
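A minimal PySpark sketch of the configuration-driven masking described above, assuming a hypothetical config that maps tables to their sensitive primary-key columns; SHA-256 hashing stands in for whatever masking function the real job applies.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("gdpr-masking")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical masking config: table -> primary-key columns to mask.
MASK_CONFIG = {
    "warehouse.users": ["user_id", "email"],
}

for table, columns in MASK_CONFIG.items():
    df = spark.table(table)
    # Replace each configured column with a one-way SHA-256 hash.
    for col in columns:
        df = df.withColumn(col, F.sha2(F.col(col).cast("string"), 256))
    # Write the masked data back as a new Hive table.
    df.write.mode("overwrite").saveAsTable(f"{table}_masked")

spark.stop()
```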
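The pipeline's actual topology isn't specified; here is one common shape for those three technologies: a Structured Streaming job that reads from Kafka, applies a transformation, and appends to a Hive-backed table. The topic, broker, table, and checkpoint names are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("kafka-to-hive-pipeline")
         .enableHiveSupport()
         .getOrCreate())

# Read the raw event stream from Kafka (names are hypothetical).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "client_events")
          .load())

# Kafka values arrive as bytes; decode them and add a processing
# timestamp as a stand-in for the client-specific transformations.
decoded = (events
           .select(F.col("value").cast("string").alias("payload"))
           .withColumn("ingested_at", F.current_timestamp()))

# Append each micro-batch to a Hive-managed table.
query = (decoded.writeStream
         .outputMode("append")
         .option("checkpointLocation", "/tmp/checkpoints/client_events")
         .toTable("warehouse.client_events"))

query.awaitTermination()
```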
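The consumer described above is a standalone Java application; as a hedged, language-consistent analogue, this sketch shows the same batch-consume-and-parse shape using `kafka-python` and the standard-library XML parser. The topic, brokers, and record schema are illustrative, and the Hive write is reduced to a stand-in.

```python
import xml.etree.ElementTree as ET
from kafka import KafkaConsumer

BATCH_SIZE = 50_000  # matches the batch size described above

consumer = KafkaConsumer(
    "xml-events",                      # hypothetical topic name
    bootstrap_servers=["broker:9092"],
    enable_auto_commit=False,
)

def parse_record(raw: bytes) -> dict:
    """Extract fields from one XML message (schema is illustrative)."""
    root = ET.fromstring(raw)
    return {"id": root.findtext("id"), "name": root.findtext("name")}

batch = []
for message in consumer:
    batch.append(parse_record(message.value))
    if len(batch) >= BATCH_SIZE:
        # The real job writes each batch to Hive; stand-in shown here.
        print(f"flushing {len(batch)} records to the warehouse")
        batch.clear()
        consumer.commit()
```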
Grade: 4 · Activities and societies: Volunteer at Enactus