Mandatory Skills:
• Big Data
• Python
• SQL
• Spark / PySpark
• AWS Cloud
Job Description:
We are seeking an experienced Senior Data Engineer to design, develop, and maintain scalable data solutions. The ideal candidate has expertise in Big Data, Python, SQL, Spark/PySpark, and AWS Cloud, and will work closely with stakeholders to optimize data architecture and workflows.
Roles & Responsibilities:
• Participate in all phases of the software development lifecycle, including requirement gathering, design, development, testing, deployment, and support.
• Develop scalable, efficient, and supportable data solutions to solve complex business problems.
• Analyze source and target system data and implement appropriate transformations.
• Design and implement product features in collaboration with business and technology stakeholders.
• Ensure high data quality by identifying, analyzing, and resolving data-related issues.
• Build, clean, and optimize data pipelines for ingestion and consumption.
• Support new data management projects and enhance existing data architecture.
• Implement automated workflows using scheduling tools such as Airflow.
• Utilize continuous integration, test-driven development, and production deployment frameworks.
• Review and contribute to code, test plans, and dataset implementations to ensure adherence to data engineering standards.
• Perform root cause analysis and troubleshooting for data-related issues.
Required Skills & Experience:
• 5+ years of experience in developing data and analytics solutions.
• Strong experience in building data lake solutions using AWS (S3, EMR, Hive, PySpark, Databricks).
• Proficiency in SQL and scripting languages like Python.
• Hands-on experience with GitHub and version control processes.
• Experience with workflow scheduling tools such as Airflow.
• Ability to work in an Agile environment and collaborate effectively with teams.
• Strong problem-solving and analytical skills.
• Excellent verbal and written communication skills.
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
*Important Note*: We are particularly looking for candidates with the following key technologies:
• PySpark and the Spark ecosystem
• AWS data services
• Azure, Azure Data Factory, and Azure Databricks
• Big Data pipeline development
Contact: elevhrconsultancy@gmail.com / 9926078636