Data Engineer – Databricks
About the Role
We’re looking for a Data Engineer to design, build, and optimize data pipelines using Databricks. You’ll work with clients and internal teams to deliver scalable, efficient data solutions tailored to business needs.
Key Responsibilities
- Develop ETL/ELT pipelines with Databricks and Delta Lake (see the illustrative sketch after this list)
- Integrate and process data from diverse sources
- Collaborate with data scientists, architects, and analysts
- Optimize performance and manage Databricks clusters
- Build cloud-native solutions (Azure preferred, AWS/GCP also welcome)
- Implement data governance and quality best practices
- Automate workflows and maintain CI/CD pipelines
- Document architecture and processes
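To give a flavor of the day-to-day work, below is a minimal, illustrative sketch of the kind of ETL step this role involves: reading raw data with PySpark, applying light cleanup, and writing a Delta table. All paths, column names, and schema details are hypothetical placeholders, not an actual client pipeline.

```python
# Illustrative sketch only: a simple bronze-to-curated ETL step on Databricks.
# Paths and column names (orders, order_id, order_ts, amount) are hypothetical.
from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session is provided; this keeps the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

# Read raw CSV files from a hypothetical landing location.
raw = (
    spark.read.option("header", "true")
    .csv("/mnt/raw/orders/")
)

# Basic cleanup: type casting, deduplication, and filtering out invalid rows.
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

# Write the result as a Delta table at a hypothetical curated location.
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .save("/mnt/curated/orders/")
)
```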
What We’re Looking For
Required:
- 3+ years in data engineering with hands-on Databricks experience
- Proficiency with Databricks, Delta Lake, Spark, Python, and SQL
- Cloud experience (Azure preferred, AWS/GCP a plus)
- Strong problem-solving and communication skills
Preferred:
- Experience with MLflow and Power BI or Tableau
- Cloud/Data engineering certifications
- Familiarity with CI/CD and data automation tools
…