Description

Key Responsibilities:
- Design, develop, and implement data pipelines and ETL processes using Python together with Databricks and/or IICS (Informatica); an illustrative sketch follows this list.
- Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet business needs.
- Optimize existing data workflows for performance and scalability.
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution.
- Document data processes, workflows, and technical specifications.
- Stay updated with the latest industry trends and technologies related to data engineering and cloud services.
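By way of illustration, here is a minimal sketch of the kind of Databricks ETL pipeline this role involves. It is hypothetical, not part of the role's actual codebase: the table names, columns, and schema are assumptions for the example.

```python
# Illustrative only: a minimal extract-transform-load flow in PySpark.
# All table and column names (raw.orders, order_id, order_ts, etc.) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw records from a hypothetical source table.
raw = spark.read.table("raw.orders")

# Transform: deduplicate, filter out invalid rows, and derive a date column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write to a Delta Lake table, partitioned for downstream queries.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```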
Qualifications:
- Proven experience in Python programming and data manipulation.
- Intermediate knowledge of Databricks and its ecosystem (Spark, Delta Lake, etc.).
- Experience with IICS or similar cloud-based data integration tools.
- Familiarity with SQL and database management systems (e.g., PostgreSQL, MySQL, etc.).
- Understanding of data warehousing concepts and best practices.
- Solid problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Familiarity with Agile development methodologies.