Company Name:
Job Title: Data Engineer
Qualification: Any graduation
Experience: 6 to 9 years

Must Have Skills:
- Strong proficiency in SQL and experience with relational databases (PostgreSQL, MySQL, SQL Server, Oracle).
- Hands-on experience with ETL/ELT pipelines using tools like Airflow, ADF, Glue, or similar.
- Expertise in big data technologies such as Spark, Hadoop, Hive, or Kafka.
- Programming experience in Python/Scala for data processing and automation.
- Experience with cloud data platforms (AWS Redshift, Azure Synapse, GCP BigQuery, Snowflake, etc.).

Good to Have Skills:
- Knowledge of data modeling concepts (Star Schema, Snowflake Schema, Normalization).
- Experience with data warehousing and lakehouse architectures.
- Familiarity with DevOps/DataOps tools (Docker, Kubernetes, CI/CD, Git).
- Experience with stream-processing pipelines (Spark Streaming, Kafka Streams).
- Understanding of data quality, data governance, and metadata management frameworks.

Roles and Responsibilities:
- Design, build, and maintain scalable ETL/ELT data pipelines for batch and real-time processing.
- Develop and optimize data models, warehouse structures, and storage solutions.
- Integrate data from various internal and external data sources, ensuring reliability and accuracy.
- Collaborate with analysts, data scientists, and business teams to understand data needs.
- Monitor and troubleshoot production data workflows, ensuring high availability and performance.
- Implement data quality checks, validation processes, and governance practices.
- Automate data ingestion, transformation, and workflow orchestration using scripting and tools.
- Optimize big-data processes for performance, scalability, and cost efficiency.
- Ensure adherence to security, compliance, and privacy standards when handling data.
- Document data flow diagrams, technical designs, and pipeline specifications.

Location: Bangalore, Hyderabad, Chennai, Mumbai, Pune
CTC Range: 18 to 24 LPA
Notice Period: Immediate to 15 days
Shift Timings: General
Mode of Interview: Virtual
Mode of Work: Hybrid
Mode of Hire: Permanent
Note: