KEY RESPONSIBILITIES
• Participate in requirements gathering, database design, testing and production deployments.
• Analyze system/application requirements and design innovative solutions.
• Translate business requirements into technical specifications: data streams, data integrations,
data transformations, databases, data warehouses, and data validation rules.
• Design and develop SQL Server database objects (e.g., tables, views, stored procedures, indexes,
triggers, and constraints).
• Understand, design, and develop Apache Spark notebooks in Azure Databricks.
• Analyze and design data flows, data lineage mappings, and data models.
• Optimize, scale, and reduce the cost of analytics data platforms for multiple clients.
• Adhere to data management processes and capabilities.
• Enforce compliance with data governance and data security policies.
• Perform performance tuning and query optimization across all database objects.
• Perform unit and integration testing.
• Create technical documentation.
• Develop, assist, and train team members.
DESIRED PROFILE
• Graduate (BE/B.Tech) or Master's (ME/M.Tech/MS) in Computer Science or equivalent from a
premier institute (preferably NIT), with 6-8 years of experience.
• Strong implementation experience in:
o Azure Services: Azure Data Factory, Azure Databricks, Azure SQL, Azure Data Lake Storage (ADLS), Azure Key Vault
o Programming & Frameworks: Python, PySpark, Spark SQL
• Data Engineering Capabilities:
o Building and consuming REST APIs
o Multithreading and parallel processing in Databricks
o Hands-on experience with medium to large-scale data-warehousing projects on Databricks
o Robust error handling, exception management, and logging in Databricks
o Performance tuning and optimization across Spark jobs, SQL queries, and Databricks jobs
• Professional Competencies: Strong collaboration skills, effective problem-solving, and the ability
to learn quickly.