Job Description:
The primary function of the role is to deliver high-quality data engineering solutions to business and end users across Corporate Lending – either directly via self-service data products, or by working closely with the Analytics team to provide modelled data warehouses on which they can build reporting and analytics.
The candidate will be required to deliver across all stages of the data engineering process – data ingestion, transformation, data modelling and data warehousing – and to build self-service data products. The role is primarily focused on development and delivery in the Azure cloud.
The role works closely with our Architect, Engineering Lead, Analytics team, DevOps, DBAs, and upstream application teams.
Responsibilities:
- Work closely with end users and Data Analysts to understand the business and their data requirements
- Carry out ad hoc data analysis and ‘data wrangling’ using Synapse Analytics and Databricks
- Build dynamic, metadata-driven data ingestion patterns using Azure Data Factory and Databricks
- Build and maintain business-focused data products and data marts
- Build and maintain Azure Analysis Services databases and cubes
- Share support and operational duties within the wider engineering and data teams
- Work with the Architecture and Engineering teams to deliver projects and ensure that supporting code and infrastructure follow the best practices outlined by those teams.
- Help define test criteria to establish clear conditions for success and ensure alignment with business objectives.
- Manage user stories and acceptance criteria through to production and into day-to-day support
- Assist in the testing and validation of new requirements and processes to ensure they meet business needs
- Stay up to date with industry trends and best practices in data engineering
Core skills and knowledge:
- Excellent data analysis and exploration using T-SQL
- Proficiency in Python for ETL development and data wrangling, especially in Databricks.
- Experience writing automated tests for data pipelines
- Strong SQL programming (stored procedures, functions)
- Knowledge and experience of data warehouse modelling methodologies (Kimball, dimensional modelling, Data Vault 2.0)
- Experience in Azure – one or more of the following: Data Factory, Databricks, Synapse Analytics, ADLS Gen2
- Experience with Azure data products and Microsoft Fabric
- Experience using source control and Azure DevOps (ADO)
- Awareness of data governance tools and practices (e.g., Azure Purview)
- Understanding and experience of deployment pipelines
- Excellent analytical and problem-solving skills, with the ability to think critically and strategically.
- Strong communication and interpersonal skills, with the ability to engage and influence stakeholders at all levels.
- Always act with integrity and embrace the philosophy of treating our customers fairly
- Analytical, with the ability to arrive at solutions that fit current and future business processes
- Effective written and verbal communication
- Organisational skills: the ability to manage and co-ordinate their own workload effectively.
- Ownership and self-motivation
- Delivery focus
- Assertive, resilient and persistent
- Team oriented
- Deal well with pressure and remain highly effective when multi-tasking and juggling priorities
Any other attributes that would be helpful, but not essential, for the role:
- Deeper programming ability (C#, .NET Core)
- Ability to build ‘infrastructure-as-code’ deployment pipelines
- Any financial services and banking experience
Job Location: