Senior ML Engineer | Generative AI | Design, prototype, and ship AI-powered products that delight customers.
As a Senior Machine Learning Engineer, I specialize in building scalable data pipelines and deploying custom ML solutions to enhance robotics systems. My expertise includes designing end-to-end workflows for real-time video processing using Kubernetes, Metaflow, and PyTorch, as well as implementing advanced annotation tools and YOLO-based algorithms to achieve high-accuracy defect detection.
With over eight years of experience in AI/ML, I have a proven track record of delivering impactful, production-ready solutions. At unstructured.io, I spearheaded OCR initiatives, optimizing text extraction techniques to improve accuracy and streamline data workflows. My work reflects a commitment to leveraging cutting-edge AI technologies to solve complex technical challenges while driving innovation in robotics and intelligent automation.
This profile is based on publicly available information. Christine is not affiliated with or endorsed by Clera.

RIOS Intelligent Machines · Freelance
★ Developed scalable ML pipelines using Python, PyTorch, and Metaflow for on-premise deployments, enhancing processing efficiency.
★ Implemented YOLO-based frame filtering and annotation pipelines using FiftyOne and Encord, achieving up to 98% accuracy in defect detection with custom computer vision algorithms.
★ Developed PyTorch-based factory data loaders with sequence-aware batching and timestamp-based ordering, improving GPU utilization while preserving temporal integrity in training pipelines.
★ Developed scalable CV pipelines with Kubernetes, Metaflow, and Ubuntu/Linux, cutting video processing time by 30% in real-time robotics applications.
Skills: FiftyOne · Computer Vision · Machine Learning Algorithms · Roboflow · Data Engineering · Metaflow · Image Segmentation · Image Processing · Deep Learning · YOLO · Python (Programming Language) · Generative AI Tools · Generative Adversarial Networks (GANs) · Encord · MLflow · Industrial Robotics · Intelligent Agents · Data Pipelines · GPT-4 · Intelligence Systems · Product Development
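The sequence-aware batching idea behind the factory data loaders can be sketched in plain Python. This is a minimal illustration only: `Frame`, the per-camera grouping key, and the batch shape are assumptions, not the production loader.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Frame:
    """Toy stand-in for one captured factory-video frame."""
    camera_id: str
    timestamp: float
    data: bytes = b""


def sequence_aware_batches(frames: List[Frame], batch_size: int) -> List[List[Frame]]:
    """Group frames per camera, order each group by timestamp, and emit
    contiguous chunks, so no batch ever mixes cameras or breaks temporal order."""
    by_camera: Dict[str, List[Frame]] = {}
    for f in frames:
        by_camera.setdefault(f.camera_id, []).append(f)
    batches: List[List[Frame]] = []
    for cam_frames in by_camera.values():
        cam_frames.sort(key=lambda f: f.timestamp)   # timestamp-based ordering
        for i in range(0, len(cam_frames), batch_size):
            batches.append(cam_frames[i:i + batch_size])
    return batches
```

In a real PyTorch pipeline this logic would live in a `Sampler`/`Dataset`, so the GPU always receives dense, temporally coherent batches.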

Key Skills: OCR (Google ML Kit TextRecognition and Tesseract4Android for text recognition); language identification (Google ML Kit Language Identification).
★ [DOD Prototype] Designed and developed a Collection Requirements Management prototype for the Distributed Common Ground/Surface System – Marine Corps (DCGS-MC).
★ [DOD Prototype] Designed and developed Intelligence Requirements, a collaborative environment to create and provide context for intelligence requirements and Commander's Critical Information Requirements.
★ [DOD Prototype] Designed and developed Collections Requirements Management, enabling semi-automated management of intelligence collection so the collections manager can quickly validate returns from collection assets and distribute them to the customer.
★ [DOD Prototype] Designed and developed a Sensitive Site Exploitation and Language Application System for the Marine Corps.
★ Developed a system providing Optical Character Recognition (OCR) capability to ingest locally taken pictures or documents for use in Common Operational Picture (COP) updates and detailed reporting.
★ [DOD Prototype] Developed a system that translates OCR results or other ingested foreign-language material from French, Arabic, and Chinese into English.
Skills: Machine Vision · Machine Learning Algorithms · System Deployment · Artificial Intelligence (AI) · Image Processing · PyTorch · Amazon Relational Database Service (RDS) · Full-Stack Development · Software Solution Architecture · Generative AI Tools · Generative Adversarial Networks (GANs) · Machine Learning · Software Architectural Design · Fine Tuning · Transformers · Reinforcement Learning · Natural Language Processing (NLP) · Product Development

★ Spearheaded OCR-related initiatives, resulting in a 15% increase in text accuracy and a 10% reduction in missing text.
☆ Improved OCR accuracy by implementing image processing techniques, including scaling images to the right size, increasing contrast and density, and choosing optimized Page Segmentation Modes (PSMs) for Tesseract.
☆ Improved structured table extraction performance by leveraging OCR approaches, including combining OCR with pdfminer extraction, applying PaddleOCR to cropped table images, and implementing Tesseract OCR for full-page analysis.
★ Designed and implemented an advanced multi-agent system architecture using Pydantic AI and the Model Context Protocol (MCP) to leverage Large Language Models (LLMs).
☆ Developed a sophisticated multi-agent system by creating and integrating Pydantic AI graphs with specialized unstructured MCP servers.
☆ Implemented custom unstructured MCP servers that successfully interfaced with numerous existing unstructured APIs.
★ Led fine-tuning initiatives for layout detection models using 11,000+ technical PDFs, achieving a 10% increase in text accuracy and a 13% reduction in missing text detection.
★ Pioneered Vision Language Model (VLM) implementation that dramatically enhanced table structure recognition and image text extraction accuracy.
☆ Conducted comprehensive performance evaluation of 10+ leading VLM providers, including Anthropic (Claude 3.5/3.7), OpenAI (GPT-4o), and Google (Gemini 1.5/2.0 Flash), to identify optimal solutions for table structure recognition and image text extraction.
☆ Implemented sophisticated prompt optimization techniques for VLMs, resulting in substantial performance gains across critical document understanding metrics.
Skills: Machine Learning Algorithms · System Deployment · Artificial Intelligence (AI) · Prompt Engineering · Object Detection · PyTorch · Chatbot Development · Amazon Relational Database Service (RDS) · Optical Character Recognition (OCR) · Large Language Models (LLM) · Generative AI Tools · Generative Adversarial Networks (GANs) · Retrieval-Augmented Generation (RAG) · Machine Learning · Large Language Model Operations (LLMOps) · Generative AI · AI Prompting · Fine Tuning · Intelligent Agents · VLM · Data Pipelines · GPT-4 · LangChain · Intelligence Systems
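One of the Tesseract preprocessing steps mentioned above, contrast stretching, can be illustrated in plain Python. The nested-list grayscale image is a toy stand-in (a real pipeline would use PIL/OpenCV arrays); the PSM choice is then a Tesseract flag, shown in a comment.

```python
def stretch_contrast(gray, lo=0, hi=255):
    """Linearly rescale grayscale pixel values to span [lo, hi].

    Raising contrast this way often helps Tesseract's binarization find
    clean character edges before recognition. `gray` is a list of rows
    of 0-255 intensity values.
    """
    flat = [p for row in gray for p in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:                       # flat image: nothing to stretch
        return [[lo] * len(row) for row in gray]
    scale = (hi - lo) / (mx - mn)
    return [[round(lo + (p - mn) * scale) for p in row] for row in gray]


# After preprocessing, a Page Segmentation Mode suited to the layout is
# chosen on the Tesseract command line, e.g.:
#   tesseract page.png out --psm 6    # assume a single uniform block of text
```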

Speechlab uses AI to automate speech-to-speech translation for dubbing of professionally produced audio and video. Speechlab is backed by Andrew Ng's AI Fund. Key Skills: AWS Cloud Architecture, Speech Recognition, Machine Translation, Text-to-Speech, MERN stack.
★ Served as a key member in designing the system architecture based on Amazon Web Services.
★ Built core components of the API to access large language models.
★ Optimized the API and interfaces for performance, robustness, and ease of use.
★ Implemented tools and processes to monitor model behavior and performance using Sentry and Loggly.
★ Implemented thumbnail generation for uploaded video files.
★ Built utility backend services with AWS Lambda:
☆ Built a service that processes machine learning output and calls the backend API.
☆ Built a service, called by the backend API, that generates an SRT-formatted text file in an AWS S3 bucket.
☆ Built a service that sends notifications on new user registration.
★ Wrote APIs for content sharing:
☆ Direct content sharing by sending an invitation email containing an invitation link, using AWS SES.
☆ General content sharing using a public invitation link.
☆ Permission management for shared users.
★ Wrote APIs and backend services to support multi-language translation and multi-accent dubbing.
★ Designed data models, implemented backend logic, and wrote APIs for the paywall system by integrating the Paddle platform.
★ Implemented backend logic and APIs to keep transcribed language pronunciations for highlighted words/phrases in the translated languages and dubs.
★ Implemented backend logic and APIs to synchronize audio and text.
Skills: Google Cloud Platform (GCP) · Software as a Service (SaaS) · System Deployment · Amazon Web Services (AWS) · Artificial Intelligence (AI) · MongoDB · PyTorch · Cloud Computing · Amazon Relational Database Service (RDS) · Big Data · Generative AI · Software Architecture · JavaScript · Fine Tuning · Natural Language Processing (NLP)

Senior Software Architect
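The SRT-generation service boils down to formatting timestamped segments into the SubRip text layout. A minimal sketch follows; the segment tuple shape and function names are assumptions, and the real Lambda wrote the result to S3 rather than returning a string.

```python
def to_srt_timestamp(seconds: float) -> str:
    """Render seconds as the SRT 'HH:MM:SS,mmm' timestamp format."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def segments_to_srt(segments) -> str:
    """segments: iterable of (start_sec, end_sec, text) tuples.

    Emits numbered SubRip blocks: index line, time range line, text,
    with a blank line between blocks.
    """
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)
```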

RIOS Intelligent Machines · Freelance
Key Skills: MLOps, Machine Learning, Computer Vision, Edge Computing, Auto ML Pipeline, Model deployment automation, Data Engineering.
★ Built and deployed AI-powered end-to-end robotic workcells that integrate within existing workflows to help enterprises automate their entire factories, warehouses, or supply chain operations.
★ Designed production computer vision pipelines.
★ Designed, prototyped, trained, and delivered machine learning models to revolutionize manufacturing, supply chains, e-commerce, and other industrial applications.
★ Worked on data ETL (extract, transform, and load) tasks, including extracting image data from compressed formats and transforming data for ingestion into training pipelines.
★ Performed feature extraction, transformation, and standardization.
★ Performed metadata generation.
★ Performed data registration.
★ Collected appropriate data for model training and development; worked with data acquisition engineers.
★ Architected, prototyped, tested, and helped deploy modern ML models and algorithms.
★ Solved real-world robotics problems using ML and CV (computer vision) solutions.
★ Improved prototype performance by tuning models in the field through continual data acquisition.
★ Designed data pipelines and data management for custom/in-house datasets with state-of-the-art models.
★ Utilized an Auto ML pipeline for training customized object detection and recognition.
★ Developed APIs for interacting with backend storage infrastructure.
★ Architected and developed the data infrastructure for machine learning applications.
★ Worked on data ingestion pipelines (multi-cloud / multiple physical locations) for databases such as SQL, NoSQL, and cloud solutions (data typing and learning frameworks).
Skills: Kubernetes · Machine Vision · FiftyOne · Computer Vision · TensorFlow · Machine Learning Algorithms · System Deployment · Artificial Intelligence (AI) · Python · Data Engineering · Linux · Google Kubernetes Engine (GKE) · Metaflow · Extract, Transform, Load (ETL) · Object Detection · Image Segmentation · Image Processing · Deep Learning · Cloud Computing · NVIDIA cuDNN · Encord · Machine Learning · Industrial Robotics · Reinforcement Learning · Product Development · Robotic Process Automation (RPA)

★ Designed and built requisite services for Dominique, a digital teaching assistant built by Soul Machines for Bill O'Connor's Innovation class at Maryville University.
★ Participated in designing the entire conversation flow in Whimsical.
★ Designed a chatbot system architecture based on Google Cloud Platform and its services.
★ Built a fully customized chatbot interface to be embedded in the Canvas learning management system with React.js.
★ Built orchestration services to integrate chatbot models with Node.js and Express.js.
★ Built chatbot models that understand user intent, learn from it, respond intelligently, and perform actions if required, with an efficient learning mechanism built on RASA and Google Dialogflow.
★ Improved intent matching and entity extraction accuracy up to 98%.
★ Built a dialogue management model that predicts what action or response the chatbot should make based on the state of the conversation. Improved prediction accuracy up to 95%.
★ Built a custom action server that consumes a customized NLP model built with spaCy.
★ Created a multi-flow agent.
★ Built a webhook that fetches data from third-party APIs and a database with Google Cloud Functions.
Skills: Machine Learning Algorithms · Cloud Infrastructure · Artificial Intelligence (AI) · Data Science · Chatbot Development · Cloud Computing · Python · Natural Language Processing (NLP)
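Intent matching of the kind RASA and Dialogflow perform can be caricatured with simple token overlap. The intents and example utterances below are invented for illustration; a real system uses trained classifiers and embeddings rather than raw word counts.

```python
import re
from collections import Counter

# Hypothetical training examples per intent (illustrative only).
INTENT_EXAMPLES = {
    "ask_deadline": ["when is the assignment due", "what is the due date"],
    "ask_grade": ["what is my grade", "show my current grade"],
}


def tokens(text: str) -> Counter:
    """Lowercase word counts for a short utterance."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def classify_intent(utterance: str) -> str:
    """Pick the intent whose best example shares the most tokens with the
    user message (Counter & Counter keeps the minimum count per word)."""
    query = tokens(utterance)

    def best_overlap(intent: str) -> int:
        return max(sum((tokens(ex) & query).values())
                   for ex in INTENT_EXAMPLES[intent])

    return max(INTENT_EXAMPLES, key=best_overlap)
```

A production dialogue manager would also track conversation state to decide the next action, as the bullet on action prediction describes.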
★ Built a data pipeline to ingest data feeding utility and internal websites using Python scripts, AWS Kinesis, and Apache Spark, which improved the metrics of the data used by the websites.
★ Designed and deployed a data pipeline for data analysis using Python scripts, AWS Elasticsearch/Kibana, and S3, to extract, transform, and load data from application logs and present it on a dashboard.
★ Created visualizations using React.js, the D3.js library, and Python to collect metrics of ML models and descriptive data (e.g. showing orders and visitors on an interactive map, where changing the map updates the rest of the data), increasing user engagement by 10%.
★ Developed a set of utility and internal websites for the company, enabling end users (both internal and external) to get instant price estimates for shipping services such as FTL, LTL, and domestic shipments.
★ Developed unit and component tests using Jest and Enzyme, resulting in 95% test coverage.
★ Reduced average data transformation time for a data set of over 100,000 records from more than one second to less than 100 milliseconds by implementing client-side data transformations in the browser using JavaScript.
★ Supported dynamic forms by developing algorithms providing market data analysis for food and beverage companies making projections about product/factory capacity, cost, and several other factors.
★ Created dynamically populated data filters as dropdowns, checkbox groups, input fields, and sliders, used to evaluate and pivot market data analysis charts, resulting in a 60% increase in page views.
★ Used Chrome DevTools to investigate and fix front-end rendering performance issues and computationally intensive bottlenecks.
★ Deployed and managed containerized web applications using Docker and the Amazon ECS service, which reduced deployment time from 1 week to 2 days and cut development and maintenance costs by 50%.
Skills: Amazon Web Services (AWS) · Data Engineering · Extract, Transform, Load (ETL) · Data Science · Python (Programming Language) · Data Analysis · Big Data · Apache Kafka · Databases · Big Data Analytics
★ Created near-real-time, event-based ETL pipelines using Google Cloud Functions to fetch data from external APIs, for example PhoneBurner and LMS Canvas.
★ Enabled CI/CD using GitHub and GitHub Actions.
★ Created ETL pipelines in Fivetran to integrate data from multiple sources, e.g. SQL Server, Google Sheets, and Salesforce.
★ Enabled automatic scaling and handled schema evolution.
★ Used dbt models, seeds, and exposures to transform and document data in BigQuery.
★ Created data lineage using dbt.
★ Built dashboards using ThoughtSpot to provide reports on data marts.
★ Extracted audio calls from PhoneBurner, converted them to text using Cloud Speech-to-Text, and performed sentiment analysis using the Cloud Natural Language API.
★ Ingested assessment form data from Nice into BigQuery for analysis, enabling stakeholders to track rep call performance improvement.
★ Prepared a detailed data representation for PhoneBurner and the major entities involved.
★ Performed performance testing, including data sizes and time taken to fetch complete data for required routes.
★ Developed a data fetch strategy based on performance test results.
★ Tested the Cloud Run service for PhoneBurner and performed data integrity checks.
★ Provided assessment of potential roadmaps for cloud technologies with a focus on Synapse on-prem infrastructure and Measurement Enablement GCP infrastructure.
★ Tuned existing ETL and reporting models.
★ Reported to executive level to provide decision-making insights.
★ Built prototypes of various enhancements.
★ Created event-driven workflows using Google Cloud Functions and Google Pub/Sub.
★ Designed and built data marts and cubes for financial reporting and analytics.
★ Interacted with business analysts and marketing teams for requirement gathering and estimations.
Skills: Google Cloud Platform (GCP) · Machine Learning Algorithms · Cloud Infrastructure · Artificial Intelligence (AI) · Data Engineering · Linux · Extract, Transform, Load (ETL) · Google BigQuery · Data Science · Cloud Computing · Data Analysis · Transformers · Python
★ Developed utility and internal websites used to estimate shipping prices for FTL, LTL, and domestic shipments, serving both internal users (marketing and sales teams) and external users.
★ Fixed bugs to ensure smooth delivery and functioning of the applications.
★ Conducted testing and maintained code quality.
★ Collaborated with a team of 15 engineers to define, design, and ship new features and correct multiple bugs.
Skills: Amazon Web Services (AWS) · Linux · Python (Programming Language) · Software Solution Architecture · Software Architecture

Araya App is an Electronic Health Record (EHR) application for patient registration and patient management [HIPAA compliant].
★ Architected a cloud-based EHR software system (AWS EC2 infrastructure).
★ Architected the EHR's communication system, network, patient registration system, hospital registration, and other external services.
★ Architected, designed, and developed the application in accordance with HIPAA compliance requirements.
★ Supervised implementation and deployment of HIPAA regulatory compliance requirements such as authentication, auto log-off, audit & alerts, encryption, hosting & infrastructure, and authorization.
★ Supervised production environment setup involving various components of the application: AWS Virtual Private Cloud, ECR, ECS, EC2, and AWS RDS.
★ Managed a team of engineers to make necessary system improvements to satisfy physician and staff needs for improved services.
★ Hired and interviewed full-stack engineers, a DevOps engineer, and a quality engineer to support the development of the application.
★ Worked with the DevOps engineer on implementing a CI/CD pipeline in production using GitHub Actions.
★ Implemented role-based access control for doctors, nurses, and other staff to secure the application.
★ Implemented mechanisms to authenticate ePHI access: phone verification (Vonage) and email verification (SendGrid mail server) for multi-factor authentication.
★ Implemented encryption and decryption: AWS RDS encryption using KMS, and completed SSL certificate installation on the server for data transfers.
★ Implemented logs and audit controls: AWS RDS alarms, AWS EC2 monitor alarms, web server (NGINX) error alarms, EPM backend (Django) error alarms, and AWS CloudWatch.
★ Integrated CollaborateMD, eClaimStatus, and epowerdoc into the application.
★ Integrated a SonicWall firewall.
★ Good knowledge of HL7 and FHIR protocols.
Skills: U.S. Health Insurance Portability and Accountability Act (HIPAA) · Cloud Infrastructure · System Deployment · Amazon Web Services (AWS) · React.js · Google Kubernetes Engine (GKE) · MongoDB · Amazon Relational Database Service (RDS) · Software Solution Architecture · Software Architecture · Transformers · Agile Methodologies · Product Development
★ Built an NLP worker engine that extracts key phrases from thousands of voice calls daily using unsupervised machine learning techniques.
★ Converted voice calls to text using Google Speech-to-Text.
★ Implemented unsupervised keyphrase extraction algorithms including TopicRank, TextRank, YAKE, and AutoPhrase.
★ Extracted keyphrases from transcribed calls, which were then passed through ensembles and various pre- and post-processing techniques to aggregate top keyphrases.
★ Deployed the NLP worker on Google App Engine.
★ Improved the cloud memory store used to speed up the NLP worker.
★ Utilized Google Cloud AutoML and AutoML Tables for building keyphrase and call segmentation models on human-annotated data.
★ Automated model training, evaluation, serving, and re-training.
★ Automated sentiment analysis using the Google Natural Language API and VADER.
★ Fed the sentiments generated by these algorithms to BERT to train a custom sentiment analysis model.
★ Researched, prototyped (from research papers), built features for, and optimized (hyper-parameter tuning) state-of-the-art machine learning and deep learning techniques such as SVM, logistic regression, random forest regression, LSTM, and CNN, using scikit-learn, Keras, and TensorFlow on CPU/GPU environments for student text classification.
★ Utilized ELI5 and LIME for model interpretation.
★ Benchmarked final scores/sentiments against a logistic-regression-based text model.
★ Built dashboards for analyzing keyphrase data in Google Data Studio, connecting via BigQuery.
★ Performed pre-processing and data preparation for ML models via BigQuery using complex SQL transformations and joins.
★ Built REST APIs for serving ML models and integrated them with MongoDB for better performance and scalability.
Key achievement: Deployed call segmentation and sentiment analysis models using BERT.
Skills: Google Cloud Platform (GCP) · TensorFlow · Machine Learning Algorithms · Cloud Infrastructure · Artificial Intelligence (AI) · Data Engineering · Extract, Transform, Load (ETL) · Data Science · Deep Learning · Cloud Computing · Data Analysis · Machine Learning · Transformers · Python · Natural Language Processing (NLP)
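The TextRank-family extractors named above share one idea: words that co-occur with many distinct neighbours matter. A stdlib-only degree-centrality approximation of that scoring is sketched below; the stopword list and window size are arbitrary illustrative choices, and the full algorithms run PageRank-style iteration and phrase merging on top of this.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real extractors use full lists.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "for", "on"}


def keyword_scores(text: str, window: int = 3) -> Counter:
    """Score each word by how many distinct neighbours it co-occurs with
    inside a sliding window: a degree-centrality approximation of
    TextRank's graph ranking."""
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS]
    neighbours = {}
    for i, w in enumerate(words):
        for other in words[max(0, i - window): i + window + 1]:
            if other != w:
                neighbours.setdefault(w, set()).add(other)
    return Counter({w: len(ns) for w, ns in neighbours.items()})
```

In the production pipeline, scores from several such extractors were ensembled and post-processed to pick the top keyphrases per call.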

RKAnywhere connects trainers and athletes to train harder and achieve their goals via an iOS app, the web, and Alexa. Download "RKAnywhere" on the App Store: https://apps.apple.com/us/app/rkanywhere/id1471521980
★ Provided design specifications and architecture for new feature development.
★ Collaborated with product development teams to help define product and technical roadmaps.
★ Influenced different aspects of the development process such as API creation, design, and product.
Skills: Amazon Web Services (AWS) · Software Solution Architecture · Product Development
★ Worked productively with the product team to understand requirements and business specifications around portfolio management, analytics, and risk.
★ Effectively coded software changes and alterations based on specific design specifications.
★ Designed and developed automation frameworks for functional and regression testing using JavaScript, CoffeeScript, Java, Selenium, REST Assured, Maven, TestNG, JUnit, and Postman.
★ Developed and loaded test data into test environments.
★ Designed location intelligence products and other insurance products.
★ Extensive experience preparing test strategies, test plans, test scenarios, test cases, and test scripts based on user requirements and system requirements.
★ Extensively worked on the creation of data-driven, modular-driven, and Page Object Model frameworks.
Skills: Java · Python (Programming Language) · Software Architecture · JavaScript
Moody's Analytics · Internship
★ Built machine-learning-based models for data analysis and visualization on Spark and Zeppelin.
★ Resourcefully performed multiple statistical analyses on the company's datasets and, based on their results, proposed important changes to the data-processing pipeline.
★ Executed analyses using R, Python, and PySpark in Zeppelin.
Skills: Machine Learning Algorithms · Data Analysis · Project Management
★ Reduced the number of fraudulent activities by 80% by analyzing and reporting customer behavior patterns.
★ Designed and implemented programs to detect and prevent fraudulent activities, resulting in a 2% reduction in the number of fraud incidents.
★ Analyzed data by breaking it into separate parts to identify the underlying principles, reasons, and facts behind the information.
★ Implemented a scalable, low-latency architecture for a data processing pipeline, including input data validation, data cleaning and transformation, and a user interface for visualization. The architecture was developed using frameworks such as Kafka, HBase, HDFS, Flume, Spark, Spark Streaming, and Impala, among many others.
★ Taught students to build a Micromouse autonomous vehicle [Micromouse is a robotics project that involves building a small robotic car to autonomously solve a maze as quickly as possible].
Skills: Artificial Intelligence (AI) · Robotics
★ Assisted and tutored students with computer programming concepts in a course taught using Python, Scheme, Spark, and SQL.
★ Assisted a class of about 40 students in utilizing UNIX-based computers for CS61A assignments, projects, and labs.
Skills: Python (Programming Language) · MySQL · SQL
★ Worked on algorithms for automation of surgical subtasks in robot-assisted minimally invasive surgery.
★ Worked on automation of radiation treatment planning and delivery.
★ Created ROS objects for visualizing trajectories before executing them on the robot.
★ Wrote a utility for recording demonstrated videos and trajectories.
Skills: Linux
★ Prepared the lab for introductory chemistry courses.
★ Assisted with the chemistry supply inventory.
★ Assisted in the setup of major experiments.
★ Set up and conducted chemical and other experiments.
★ Learned to read and utilize chromatography, spectroscopy, and microscopy to test and analyze lab results.
★ Worked with other students and laboratory technicians on microbiological and chemical testing to accredit standards and methods.
★ Graded exams, quizzes, and tests.
★ Created worksheets and practice quizzes to help high school students prepare for their Math, Chemistry, and Physics exams.
★ Helped students improve their grades by 10–20 percent.
★ Assisted students in acquiring a better understanding of targeted weak areas within a subject, or of a subject as a whole.
★ Successfully brought three groups of grade-D students to grade-B levels in mock examinations.
Activities and societies: Cal Alumni Association, Cal Solar Vehicle Team, IEEE Micromouse, Upsilon Pi Epsilon, Taekwondo Club.
Apply Machine Learning to determine the content of the ATT&CK framework and the description of each digital vaccine in the dataset.
★ Built UIs (React.js) to visualize aggregation results of the machine learning model that classifies cyberattack techniques based on the MITRE ATT&CK framework.
★ Utilized Python and natural language processing libraries to develop a model that predicted the context of ATT&CK framework data with 94% accuracy, reducing false positives in vaccine misidentification by 50%.
★ Developed advanced machine learning models to detect threats, with 90% accuracy in predicting malicious activities, and automated the deployment of ML models on production servers.
Skills: Machine Learning Algorithms · Amazon Web Services (AWS) · Artificial Intelligence (AI) · Linux · Data Science · Cloud Computing · Data Analysis · Optical Character Recognition (OCR) · Python · Natural Language Processing (NLP) · Product Development
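Matching free-text descriptions against ATT&CK technique content is at heart a semantic-similarity problem. A bag-of-words cosine is the simplest possible caricature of that matching, sketched below; the deployed model presumably used trained NLP embeddings rather than raw token counts.

```python
import math
from collections import Counter


def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts under a bag-of-words model:
    dot product of word-count vectors divided by their norms."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

Ranking each vaccine description against every technique description by this score, then thresholding, is the shape of the false-positive reduction described above.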
★ Built an ETL pipeline to ingest data from Cloud SQL into BigQuery; the pipeline scales to terabytes of data and runs throughout the day.
★ Built dashboards for analyzing keyphrase data in Google Data Studio, connecting via BigQuery.
Skills: Machine Learning Algorithms · Data Engineering · Extract, Transform, Load (ETL) · Google BigQuery · Data Science · Data Analysis · Machine Learning · Python