We are seeking highly skilled and motivated Senior Data Engineers with expertise in Databricks and Azure to join our team. As a Senior Data Engineer, you will be responsible for implementing domain-specific use cases within our medallion architecture lakehouse environment. Working with data that is already available in the Bronze layer, you will focus on transforming and refining data through the Silver layer and building high-quality data products and data marts in the Gold layer. You will work closely with business intelligence development teams, domain stakeholders, and analytics teams to ensure efficient data flow and enable data-driven decision-making across critical business domains.
Core Data Engineering Responsibilities:
Collaborate with the Product Owner, Business Analyst and other team members to understand domain-specific data requirements and design scalable data pipelines and architectures
Transform and enrich data from the Bronze layer (raw data) into the Silver layer by performing data quality checks, cleansing, validation, standardization, and modeling activities (see the illustrative sketch after this list)
Build and maintain data products and data marts in the Gold layer that are curated, business-ready, and optimized for analytics and reporting purposes
Develop efficient ETL/ELT workflows to extract, transform, and load data from various sources through the medallion architecture layers
Optimize and fine-tune data pipelines for performance, reliability, and scalability using PySpark and Databricks
Implement comprehensive data quality checks and monitoring to ensure data accuracy, consistency, and integrity across all layers
Develop Power BI reports and dashboards that leverage the Gold layer data products to deliver business insights
Work with BI developers and data analysts to provide them with the necessary data infrastructure and tools for analysis and reporting
Troubleshoot and resolve data-related issues, including performance bottlenecks and data inconsistencies
Stay up to date with the latest trends and technologies in data engineering and recommend improvements to existing systems and processes
Document data engineering processes, data flows, and system configurations
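For illustration only, here is a minimal PySpark sketch of the kind of Bronze-to-Silver refinement described above; the table and column names (bronze.sales_orders, order_id, order_amount, and so on) are hypothetical placeholders, not a description of the actual environment.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw records from a hypothetical Bronze table.
bronze_df = spark.read.table("bronze.sales_orders")

silver_df = (
    bronze_df
    # Basic quality checks: drop records missing a business key or amount.
    .filter(F.col("order_id").isNotNull() & F.col("order_amount").isNotNull())
    # Standardize types and trim free-text columns.
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("customer_name", F.trim(F.col("customer_name")))
    # De-duplicate on the business key.
    .dropDuplicates(["order_id"])
)

# Persist the cleansed data to the Silver layer as a Delta table.
(silver_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("silver.sales_orders"))

A comparable step would then aggregate Silver tables into curated, business-ready Gold data products.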
Domain-Specific Responsibilities:
Depending on your domain expertise, you will work in one of the following business areas:
Sales Domain:
Design and implement data solutions for Sales Controlling, Sales Reporting, pipeline analysis, and sales performance metrics
Strong understanding of sales concepts, KPIs, and reporting requirements is essential
Finance Domain:
Build data products for Financial Reporting, Profit & Loss (P&L), balance sheet analysis, and other financial analytics
Familiarity with financial terminology, accounting principles, and financial reporting standards is required
Project Business Domain:
Develop data solutions for project controlling and profitability analysis
Work with metrics such as margins, billable hours, invoiced hours, profitability/contribution margin (Deckungsbeitrag, DB), billability rates, and project performance tracking (see the illustrative sketch below)
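As a rough, hypothetical sketch of how such project KPIs could be derived in the Gold layer; the table and column names (silver.project_hours, billable_hours, and so on) are illustrative assumptions only.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical Silver table holding hours and cost bookings per project.
hours_df = spark.read.table("silver.project_hours")

project_kpis = (
    hours_df.groupBy("project_id")
    .agg(
        F.sum("billable_hours").alias("billable_hours"),
        F.sum("total_hours").alias("total_hours"),
        F.sum("invoiced_amount").alias("revenue"),
        F.sum("direct_cost").alias("direct_cost"),
    )
    # Billability rate = billable hours / total hours worked.
    .withColumn("billability_rate", F.col("billable_hours") / F.col("total_hours"))
    # Contribution margin (Deckungsbeitrag) = revenue minus direct costs.
    .withColumn("contribution_margin", F.col("revenue") - F.col("direct_cost"))
)

# Publish as a Gold-layer data product for reporting.
(project_kpis.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("gold.project_kpis"))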
We are looking for multiple engineers (1-2 per domain) who can bring both technical excellence and domain expertise to transform raw data into valuable business insights.
General Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field
5+ years of experience as a Data Engineer on Databricks and Azure, with a focus on designing and building data pipelines within a medallion architecture (Bronze, Silver, Gold layers)
Strong problem-solving and analytical skills
Effective communication in English and collaboration in agile environments
Experience working in global, multicultural teams
Technology Must Haves:
PySpark and SparkSQL: Advanced proficiency in PySpark and SparkSQL for large-scale data transformations and processing
Databricks: Hands-on experience with Databricks workspace, Delta Lake, workflows, and notebooks
Power BI: Proven experience developing interactive reports and dashboards in Power BI
Strong programming skills in SQL and Python (Scala knowledge is a plus)
Hands-on experience with Azure services such as Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), Azure SQL, and Synapse Analytics is an advantage
Proficiency in data modeling, database design, and SQL optimization
Solid understanding of data integration patterns, ETL/ELT best practices, cloud computing, and security principles
Domain-Specific Requirements:
You should have demonstrable experience in at least one of the following domains:
Sales: Experience working on Sales projects such as Sales Controlling, Sales Reporting, revenue analysis, or similar
Finance: Understanding of financial concepts and terminology, and experience implementing Financial Reporting solutions
Project Business: Knowledge of project controlling and its metrics, including margins, invoiced/billable hours, profitability, contribution margin (Deckungsbeitrag), and billability
Nice to Haves:
Experience with Unity Catalog for data governance, permissions management, and secure data sharing
Familiarity with big data frameworks like Hadoop, Spark, and Hive
Certifications in Databricks or Azure services
Experience with data streaming technologies such as Apache Kafka or Azure Event Hubs
General Conditions:
Workload: 100%
Working model: Remote
Start: January 2026
Duration: Long-term project with option to extend