
Senior Data Engineer - Multiple Positions

Koantek
Full-time
On-site
Hyderabad, Telangana, India
We are seeking an experienced Full Stack Data Engineer with 5–6 years of industry experience. The ideal candidate will have a proven track record of working on live projects, preferably within the manufacturing or energy sectors. He/she will play a key role in developing and maintaining scalable data solutions using Databricks and related technologies.

Key Responsibilities:
• Develop and deploy end-to-end data pipelines and solutions on Databricks, integrating with various data sources and systems.
• Collaborate with cross-functional teams to understand data and deliver effective BI solutions.
• Implement data ingestion, transformation, and processing workflows using Spark (PySpark/Scala), SQL, and Databricks notebooks.
• Develop and maintain data models and ETL/ELT processes, ensuring high performance, reliability, scalability, and data quality.
• Build and maintain APIs and data services to support analytics, reporting, and application integration.
• Ensure data quality, integrity, and security across all stages of the data lifecycle.
• Monitor, troubleshoot, and optimize pipeline performance in a cloud-based environment.
• Write clean, modular, and well-documented Python/Scala/SQL/PySpark code.
• Integrate data from various sources, including APIs, relational and non-relational databases, IoT devices, and external data providers.
• Ensure adherence to data governance, security, and compliance policies.
Required Skills and Experience:
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• 5–6 years of hands-on experience in data engineering, with a strong focus on Databricks and Apache Spark.
• Strong programming skills in Python/PySpark and/or Scala, with a deep understanding of Apache Spark.
• Experience with Azure Databricks.
• Strong SQL skills for data manipulation, analysis, and performance tuning.
• Strong understanding of data structures and algorithms, with the ability to apply them to optimize code and implement efficient solutions.
• Strong understanding of data architecture, data modeling, ETL/ELT processes, and data warehousing concepts.
• Experience building and maintaining ETL/ELT pipelines in production environments.
• Familiarity with Delta Lake, Unity Catalog, or similar technologies.
• Experience working with structured and unstructured data, including JSON, Parquet, Avro, and time-series data.
• Familiarity with CI/CD pipelines and tools such as Azure DevOps, version control (Git), and DevOps practices for data engineering.
• Excellent problem-solving skills, attention to detail, and the ability to work independently or as part of a team.
• Strong communication skills to interact with technical and non-technical stakeholders.


Requirements:
• Experience with Delta Lake and Databricks Workflows.
• Exposure to real-time data processing and streaming technologies (Kafka, Spark Streaming).
• Exposure to the data visualization tool Databricks Genie for data analysis and reporting.
• Knowledge of data governance, security, and compliance best practices.

