Job Description – Senior Data Engineer
Experience: 4–6 Years · Employment Type: Full-time
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You'll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, and AWS or other cloud environments, along with a working knowledge of modern data and BI tools.
Key Responsibilities:
🔸 PostgreSQL & Data Modeling
- Design and optimize complex SQL queries, stored procedures, and indexes
- Perform performance tuning and query plan analysis (see the sketch below)
- Contribute to schema design and data normalization
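For illustration, a minimal sketch of query-plan analysis and index tuning from Python with psycopg2; the DSN, table, and column names are hypothetical placeholders, not a real schema:

```python
# Sketch only: inspect a query plan, then add an index if the planner
# falls back to a sequential scan. All identifiers below are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # placeholder DSN
with conn, conn.cursor() as cur:
    # EXPLAIN (ANALYZE, BUFFERS) shows the actual plan, timings, and I/O.
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT customer_id, count(*)
        FROM orders
        WHERE created_at >= now() - interval '30 days'
        GROUP BY customer_id
    """)
    for (plan_line,) in cur.fetchall():
        print(plan_line)

    # A targeted index often removes a sequential scan on the filter column.
    cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_created_at "
                "ON orders (created_at)")
conn.close()
```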
🔸 Data Migration & Transformation
- Migrate data from multiple sources to cloud or ODS platforms
- Design schema mapping and implement transformation logic (illustrated in the sketch below)
- Ensure consistency, integrity, and accuracy in migrated data
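A minimal sketch of the schema-mapping idea: declare source-to-target column mappings as data, apply them, and check row counts. The mapping and the sample row are illustrative, not an actual source schema:

```python
# Sketch only: map source columns to target names/types, then verify
# that no rows were lost. COLUMN_MAP and the sample row are made up.
COLUMN_MAP = {  # source column -> (target column, type caster)
    "cust_id": ("customer_id", int),
    "amt": ("amount", float),
    "ts": ("created_at", str),
}

def transform(row: dict) -> dict:
    # A KeyError here surfaces unexpected source schemas during migration.
    return {tgt: cast(row[src]) for src, (tgt, cast) in COLUMN_MAP.items()}

source_rows = [{"cust_id": "42", "amt": "19.90", "ts": "2024-01-01"}]
migrated = [transform(r) for r in source_rows]
assert len(migrated) == len(source_rows)  # basic consistency check
print(migrated)
```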
🔸 Python Scripting for Data Engineering
- Build automation scripts for data ingestion, cleansing, and transformation (see the sketch below)
- Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)
- Maintain reusable script modules for operational pipelines
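A minimal ingestion sketch covering the REST API, CSV, and Boto3 pieces together; the endpoint URL, bucket, and object key are hypothetical:

```python
# Sketch only: pull JSON records from a REST API, write them to an
# in-memory CSV, and land the file in S3. URL and bucket are placeholders.
import csv
import io

import boto3
import requests

API_URL = "https://api.example.com/v1/events"  # placeholder endpoint
BUCKET = "example-data-lake-raw"               # placeholder bucket

def ingest() -> None:
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    records = resp.json()  # assumed: a non-empty list of flat dicts

    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key="events/events.csv",
        Body=buf.getvalue().encode("utf-8"),
    )

if __name__ == "__main__":
    ingest()
```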
🔸 Data Orchestration with Apache Airflow
- Develop and manage DAGs for batch/stream workflows
- Implement retries, task dependencies, notifications, and failure handling (example below)
- Integrate Airflow with cloud services, data lakes, and data warehouses
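A minimal Airflow DAG sketch showing retries, an explicit dependency, and a failure callback; task bodies and the alerting hook are placeholders, and `schedule=` assumes Airflow 2.4+:

```python
# Sketch only: a two-task DAG with retries and a failure callback.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_failure(context):
    # Placeholder: wire this to Slack, PagerDuty, or email in practice.
    print(f"Task failed: {context['task_instance'].task_id}")

def extract():
    print("extracting...")  # placeholder task body

def load():
    print("loading...")  # placeholder task body

default_args = {
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_failure,
}

with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # explicit task dependency
```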
🔸 Cloud Platforms (AWS / Azure / GCP)
- Manage data storage (S3, GCS, Blob), compute services, and data pipelines
- Set up permissions, IAM roles, encryption, and logging for security (see the sketch below)
- Monitor and optimize cost and performance of cloud-based data operations
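A minimal Boto3 sketch of the security side on AWS: default bucket encryption plus server access logging. Both bucket names are hypothetical:

```python
# Sketch only: enforce default encryption and access logging on an S3
# bucket. Bucket names are placeholders.
import boto3

s3 = boto3.client("s3")
DATA_BUCKET = "example-data-lake-raw"  # placeholder
LOG_BUCKET = "example-audit-logs"      # placeholder

# Every new object gets SSE-S3 encryption by default.
s3.put_bucket_encryption(
    Bucket=DATA_BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Ship access logs to a separate audit bucket.
s3.put_bucket_logging(
    Bucket=DATA_BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": LOG_BUCKET,
            "TargetPrefix": f"{DATA_BUCKET}/",
        }
    },
)
```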
🔸 Data Marts & Analytics Layer
- Design and manage data marts using dimensional models
- Build star/snowflake schemas to support BI and self-serve analytics
- Enable incremental load strategies and partitioning (example below)
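A minimal incremental-load sketch: read the current high-water mark, then upsert only the delta. Table and column names are illustrative, and `order_id` is assumed to carry a unique constraint:

```python
# Sketch only: watermark-based incremental load with an upsert.
import psycopg2

conn = psycopg2.connect("dbname=warehouse user=etl")  # placeholder DSN
with conn, conn.cursor() as cur:
    # Last successfully loaded timestamp; NULL on the first run.
    cur.execute("SELECT max(loaded_at) FROM fact_sales")
    (watermark,) = cur.fetchone()

    # Upsert only rows newer than the watermark.
    cur.execute("""
        INSERT INTO fact_sales (order_id, customer_key, amount, loaded_at)
        SELECT order_id, customer_key, amount, updated_at
        FROM staging_sales
        WHERE updated_at > coalesce(%s, '-infinity'::timestamptz)
        ON CONFLICT (order_id) DO UPDATE
            SET amount = EXCLUDED.amount,
                loaded_at = EXCLUDED.loaded_at
    """, (watermark,))
conn.close()
```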
🔸 Modern Data Stack Integration
- Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
- Support modular pipeline design and metadata-driven frameworks (see the sketch below)
- Ensure high availability and scalability of the stack
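A minimal sketch of the metadata-driven idea: pipelines described as config entries, executed by one generic runner. Feed names, paths, and the loader function are illustrative:

```python
# Sketch only: one generic runner driven by per-feed metadata.
from typing import Callable

PIPELINES = [  # metadata describing each feed, not hard-coded logic
    {"name": "orders", "source": "s3://raw/orders/", "target": "mart.fact_orders"},
    {"name": "customers", "source": "s3://raw/customers/", "target": "mart.dim_customer"},
]

def run_pipeline(meta: dict, load: Callable[[str, str], None]) -> None:
    # The same runner serves every feed; behavior comes from the metadata.
    print(f"running {meta['name']}")
    load(meta["source"], meta["target"])

def demo_loader(source: str, target: str) -> None:
    print(f"  {source} -> {target}")  # stand-in for the real load step

for meta in PIPELINES:
    run_pipeline(meta, demo_loader)
```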
🔸 BI & Reporting Tools (Power BI / Superset / Supertech)
- Collaborate with BI teams to design datasets and optimize queries
- Support development of dashboards and reporting layers
- Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications:
- 4–6 years of hands-on experience in data engineering roles
- Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
- Advanced Python scripting skills for automation and ETL
- Proven experience with Apache Airflow (custom DAGs, error handling)
- Solid understanding of cloud architecture (especially AWS)
- Experience with data marts and dimensional data modeling
- Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
- Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
- Version control (Git) and CI/CD pipeline knowledge is a plus
- Excellent problem-solving and communication skills