This is a remote position.
Responsibilities:

Design, build, and monitor ETL pipelines using Azure and Spark technologies
Implement scalable, cloud-native data processing workflows using PySpark
Configure and operate core Azure services: Databricks, Azure Data Factory, Azure Data Lake Storage, and Azure Functions
Collaborate closely with analysts, data scientists, and software engineers to deliver robust data solutions
Translate business needs into reliable and secure data products
Ensure data quality, governance, and performance best practices across solutions
Requirements:

At least 4 years of professional experience in data engineering
Proven experience with large-scale data processing and transformation pipelines
Hands-on experience with Azure or another major cloud platform
Solid coding skills in Python (especially pandas/numpy) and SQL
Familiarity with Git workflows in a collaborative development setting
Fluency in Polish and English (minimum C1 level in English)
Nice to have:

Experience with Linux/Bash scripting
Familiarity with Docker or Kubernetes
Domain experience in retail, financial services, energy, or the public sector