We are seeking a highly skilled and motivated Senior Data Engineer with 5–7 years of experience in Big Data platforms and at least 2 years of hands-on experience designing and implementing data warehouse (DWH) solutions on Snowflake. The ideal candidate will have strong expertise in cloud-based data services (preferably Azure), distributed computing frameworks, and robust ELT/ETL development, with a solid understanding of modern data engineering practices in a DevOps environment.
Design, develop, and optimize data warehouse solutions on Snowflake
Migrate large-scale data from on-premises systems (preferably Big Data platforms) to the Snowflake cloud DWH
Build and maintain efficient ELT/ETL pipelines with a focus on performance tuning, robustness, and rapid issue resolution
Perform in-depth data analysis and provide optimized solutions for complex data challenges
Utilize Apache Spark, Hadoop, and other distributed computing frameworks as required
Work closely with stakeholders to define data integration strategies, standards, and best practices
Implement data modeling, data standardization, and advanced SQL querying for analytics and reporting
Collaborate in Agile teams (Scrum/Kanban), contributing to sprint planning, reviews, and retrospectives
Mentor junior data engineers and contribute to the growth and scalability of the data engineering practice
5–7 years of experience in data engineering, with at least 2 years on Snowflake
Strong proficiency in SQL and at least one programming language (Scala or Python)
Hands-on experience with Snowflake and Snowpark
Experience with Big Data tools: HDFS, YARN, Hive, Apache Spark
Proficiency in Azure data services (e.g., Data Factory, Blob Storage)
Familiarity with DevOps tools such as Jenkins, AWX, Control-M, and Git (GitHub)
Good knowledge of Linux and shell scripting
Experience working in Agile environments using tools like Jira