<p style="margin-bottom:0cm;"><strong>Job Title:</strong> Databricks Solution Architect</p><p style="margin-bottom:0cm;"><strong>Experience:</strong> 10+ years</p><p style="margin-bottom:0cm;"><strong>About Us</strong></p><p style="margin-bottom:0cm;">At Codvo, we are committed to building scalable, future-ready data platforms that power business impact. We believe in a culture of innovation, collaboration, and growth, where engineers can experiment, learn, and thrive. Join us to be part of a team that solves complex data challenges with creativity and cutting-edge technology.</p><p style="margin-top:12.0pt;"><strong>Senior / Expert Databricks Solutions Architect — Job Description</strong></p><p style="margin-top:12.0pt;"><strong>About the role</strong></p><p style="margin-top:4.0pt;margin-right:0cm;margin-bottom:4.0pt; margin-left:0cm;">We are seeking a highly experienced Solutions Architect to lead the design and delivery of end-to-end data and analytics solutions on the Databricks platform. 
You will translate complex business needs into scalable, secure, and cost-efficient data lakehouse architectures, collaborate with cross-functional teams, and guide customers from concept through implementation and adoption.</p><p style="margin-top:12.0pt;"><strong>What you’ll do (Responsibilities)</strong></p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Engage with business stakeholders to understand goals, data sources, and analytics use cases; translate them into a holistic Databricks-based solution.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Design scalable data lakehouse architectures on Databricks (Delta Lake, Databricks SQL, Unity Catalog, Delta Live Tables) that support data ingestion, cleansing, modeling, governance, security, and analytics.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Lead technical architecture decisions and produce high-quality artifacts (reference architectures, solution blueprints, data models, data governance models, and integration plans).</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Architect data pipelines end-to-end (ingestion, transformation, storage, cataloging) with best practices for reliability, observability, and cost optimization.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Enable data science and ML workflows on Databricks (MLflow, feature store, notebooks, AutoML) and design end-to-end MLOps strategies.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Ensure data governance, security, and compliance (IAM, encryption, Unity Catalog, data masking, lineage, access controls).</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Collaborate closely with data engineers, data scientists, software engineers, and DevOps to 
deliver production-ready solutions; implement CI/CD for data and ML pipelines.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Lead customer-facing activities: workshops, solution demos, proofs of concept, and responses to RFPs/RFIs; provide strategic guidance on platform adoption and ROI.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Mentor and coach junior architects and engineers; develop training materials and run knowledge-sharing sessions.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Monitor performance, optimize SQL and Spark workloads, manage cluster configurations, and drive cost/performance improvements.</p><p style="margin-top:12.0pt;"><strong>What you’ll bring (Required qualifications)</strong></p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• 8+ years of experience in solutions/enterprise architecture or senior data engineering roles; 3+ years of hands-on experience with the Databricks platform and Spark-based architectures.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Deep expertise in Databricks components: Delta Lake, Unity Catalog, Databricks SQL, Delta Live Tables, notebooks, and orchestration patterns.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Strong cloud experience (AWS, Azure, or GCP) with data storage and compute services (e.g., S3/Blob, ADLS, GCS, Redshift, BigQuery, Synapse, EMR/Databricks on cloud).</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Proficiency in data integration and orchestration tools (e.g., Apache Airflow, dbt, Kafka, Spark Structured Streaming).</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Advanced SQL and programming skills (Python or Scala); ability to prototype and 
review data pipelines, models, and analytics solutions.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Excellent communication and stakeholder management skills; ability to present complex technical concepts to both technical and non-technical audiences.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Experience delivering large-scale data lakehouse migrations/transformations, performance tuning, and cost optimization.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Databricks certification(s) or equivalent demonstrable expertise; willingness to obtain relevant certifications if not already held.</p><p style="margin-top:12.0pt;"><strong>Preferred qualifications</strong></p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Experience with ML and MLOps on Databricks (MLflow, feature stores, model registry, CI/CD for ML).</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Domain expertise in industries such as financial services, healthcare, retail, or telecommunications.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Familiarity with data governance, privacy regulations, and security frameworks (e.g., GDPR, HIPAA, SOC 2).</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Familiarity with real-time data processing and streaming architectures.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Prior experience in pre-sales or solutioning for customers, including building compelling ROI stories and technical demos.</p><p style="margin-top:12.0pt;"><strong>About you (soft skills and capabilities)</strong></p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Strategic thinker with a hands-on mindset; 
comfortable operating at both business and technical levels.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Strong analytical, problem-solving, and decision-making capabilities.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Collaborative team player who can lead without authority and influence stakeholders.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Comfortable working in a fast-paced, client-facing environment with travel as needed.</p><p style="margin-top:12.0pt;"><strong>How to apply</strong></p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Submit your resume/CV and a brief cover letter outlining your Databricks projects and impact.</p><p style="margin-top:3.0pt;margin-right:0cm;margin-bottom:3.0pt; margin-left:36.0pt;">• Include links to relevant work (e.g., public case studies, GitHub repositories, or portfolio demos) if available.</p>