
Senior Data Engineer (AWS)

Evnek Technologies
Contract
On-site
Bangalore, Karnataka, India

Job Title: Senior Data Engineer (AWS)

Experience: 10 – 13 Years

Location: Pune / Bangalore / Hyderabad / Noida / Mumbai
Work Mode: Hybrid
Shift Timings: 11:00 AM – 8:00 PM
Notice Period: Immediate Joiners Only

Job Overview

We are seeking an experienced Senior Data Engineer with strong expertise in Python, SQL, PySpark, and AWS data services to support enterprise-scale data processing and analytics initiatives. The ideal candidate will have deep experience in distributed data systems, performance tuning, and cloud-based data platforms, and will work closely with cross-functional teams in a hybrid working model.

Key Responsibilities

  • Design, develop, and maintain scalable data processing pipelines using Python, PySpark, and Spark SQL.
  • Build and optimize distributed data processing workflows on AWS platforms.
  • Leverage AWS data services such as EMR, Glue, Lambda, and S3 for batch and real-time data processing.
  • Design and manage data storage solutions using RDS/MySQL, Redshift, and other AWS-native databases.
  • Implement effective data modeling, schema design, and schema evolution strategies.
  • Perform performance tuning and optimization of Spark jobs and SQL queries.
  • Monitor and troubleshoot data pipelines using AWS CloudWatch and logging frameworks.
  • Manage secrets and credentials securely using AWS Secrets Manager.
  • Collaborate with data architects, analysts, and stakeholders to translate business requirements into technical solutions.
  • Debug complex data issues and provide root cause analysis with long-term fixes.
  • Ensure data quality, reliability, and scalability across platforms.

Required Skills & Qualifications

  • 10–13 years of overall experience in Data Engineering.
  • Strong proficiency in Python and SQL.
  • Extensive hands-on experience with PySpark and Spark SQL.
  • Strong experience with AWS data services, including:
      • EMR
      • Glue
      • Lambda
      • S3
      • RDS / MySQL
      • Redshift
      • CloudWatch
      • Secrets Manager
  • Solid understanding of distributed computing concepts.
  • Strong experience in data modeling, schema handling, and performance tuning.
  • Excellent debugging, analytical, and problem-solving skills.
  • Ability to work effectively in a hybrid and collaborative environment.

         