Job Summary:

We are seeking a highly skilled Data Engineer to design, develop, and maintain robust data pipelines and architectures. The ideal candidate will transform raw, complex datasets into clean, structured, and scalable formats that enable analytics, reporting, and business intelligence across the organization. This role requires strong collaboration with data scientists, analysts, and cross-functional teams to ensure timely and accurate data availability and system performance.

Key Responsibilities:

- Design and implement scalable data pipelines to support real-time and batch processing.
- Develop and maintain ETL/ELT processes that move, clean, and organize data from multiple sources.
- Build and manage modern data architectures that support efficient storage, processing, and access.
- Collaborate with stakeholders to understand data needs and deliver reliable solutions.
- Perform data transformation, enrichment, validation, and normalization for analysis and reporting.
- Monitor and ensure the quality, integrity, and consistency of data across systems.
- Optimize workflows for performance, scalability, and cost-efficiency.
- Support cloud and on-premise data integrations, migrations, and automation initiatives.
- Document data flows, schemas, and infrastructure for operational and development purposes.
- Apply best practices in data governance, security, and compliance.

Required Skills & Qualifications:

- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 8+ years of proven experience in data engineering, ETL development, or data pipeline management.
- Proficiency with tools and technologies such as:
  - SQL, Python, Spark, Scala
  - ETL tools (e.g., Apache Airflow, Talend)
  - Cloud platforms (e.g., AWS, GCP, Azure)
  - Big Data tools (e.g., Hadoop, Hive, Kafka)
  - Data warehouses (e.g., Snowflake, Redshift, BigQuery)
- Strong understanding of data modeling, data architecture, and data lakes.
- Experience with CI/CD, version control, and working in agile environments.

Preferred Qualifications:

- Experience with data observability and monitoring tools.
- Knowledge of data cataloging and governance frameworks.
- AWS/GCP/Azure data certification is a plus.