Position Summary
The Fedcap Group (TFG) is seeking a transformational and highly strategic Senior Data Engineer to architect and lead the enterprise data warehouse and broader data capabilities. Reporting to the Head of Data and Analytics, this role is instrumental in enabling operational excellence, mission alignment, and scalable growth across TFG’s international network.
Goals of the Position (Hands-On Senior Data Engineering)
The Senior Data Engineer will:
Lead Development of dbt Models and Transformation Workflows
Write, test, and deploy dbt models and transformations with automated quality checks (see the model sketch after this list).
Build reusable macros and packages to accelerate pipeline delivery.
Tune Snowflake queries and warehouses to improve efficiency and reduce costs.
Build monitoring and alerting frameworks to detect performance or data quality issues.
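To make these expectations concrete, here is a minimal sketch of a dbt staging model with a built-in deduplication step. Every name in it (stg_referrals, the raw source, the _loaded_at column) is hypothetical; the uniqueness and not-null checks implied above would normally be declared in the model's YAML schema file and run via dbt test in CI.

    -- models/staging/stg_referrals.sql (hypothetical model name)
    -- Types, trims, and deduplicates a raw Bronze table into a staging view.
    {{ config(materialized='view') }}

    with source as (
        -- the 'raw' source and 'referrals' table are assumptions for this sketch
        select * from {{ source('raw', 'referrals') }}
    )

    select
        referral_id,
        trim(client_name)          as client_name,
        try_to_date(referral_date) as referral_date,
        lower(program_code)        as program_code
    from source
    qualify row_number() over (
        partition by referral_id
        order by _loaded_at desc   -- keep only the latest record per referral
    ) = 1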
Enable End-to-End Data Pipelines
Build ingestion (Snowpipe, Streams, Tasks) and transformation workflows that move data from raw (Bronze) to curated (Gold) layers.
Deliver pipelines that are automated, resilient, and production-ready (see the pipeline sketch below).
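As a rough illustration of that raw-to-curated movement using Snowflake's native primitives (all object, warehouse, and column names are invented for this sketch, and referrals_raw is assumed to hold a single VARIANT column named payload):

    -- Continuously load files from an external stage into Bronze.
    create pipe bronze.referrals_pipe
        auto_ingest = true
        integration = 'AZURE_EVENT_INT'  -- notification integration; setup omitted here
    as
        copy into bronze.referrals_raw
        from @bronze.azure_stage/referrals/
        file_format = (type = 'JSON');

    -- Capture new Bronze rows as they arrive.
    create stream bronze.referrals_stream on table bronze.referrals_raw;

    -- Promote new rows to the curated layer, but only when data is waiting.
    create task silver.promote_referrals
        warehouse = transform_wh
        schedule = '5 MINUTE'
        when system$stream_has_data('BRONZE.REFERRALS_STREAM')
    as
        insert into silver.referrals
        select
            payload:referral_id::string  as referral_id,
            payload:client_name::string  as client_name,
            payload:referral_date::date  as referral_date
        from bronze.referrals_stream;

    alter task silver.promote_referrals resume;  -- tasks are created suspended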
Directly Support Business & Analytics Teams
Partner with stakeholders to understand data needs and translate them into solutions.
Take ownership from requirements to delivery, ensuring solutions are deployed accurately and follow standard engineering frameworks.
Key Responsibilities
Collaborate with the Head of Data and Analytics to implement the enterprise Medallion Architecture (Bronze → Silver → Gold).
Design, build, and maintain data ingestion pipelines in Azure Data Factory (ADF) to move data from diverse sources into Azure Data Lake Storage Gen2 (Bronze).
Configure and manage secure integrations between Azure and Snowflake, including external stages, storage integrations, and automated ingestion patterns (Snowpipe, Streams, Tasks); see the integration sketch following this list.
Develop and optimize Snowflake data models (fact, dimension, staging tables) aligned to the Bronze–Silver–Gold architecture and business KPIs (see the star-schema sketch below).
Implement role-based access control (RBAC), data masking, and row/column-level security in Snowflake to ensure data privacy and compliance (see the policy sketch below).
Build and maintain a modular dbt framework, including models, macros, tests, and snapshots, to enforce data quality and accelerate transformations (see the macro sketch below).
Create and manage CI/CD pipelines for dbt using GitHub Actions or Azure DevOps, ensuring reliable deployments across environments.
Write and optimize complex SQL and Python scripts to automate workflows, monitor data pipelines, and troubleshoot production issues.
Implement data validation, quality checks, and monitoring frameworks to ensure freshness, accuracy, and reliability of data products (see the freshness-check sketch below).
Collaborate directly with BI, Analytics, and Data Science teams to deliver curated, business-ready datasets.
Take end-to-end ownership of assigned data engineering projects: requirements → design → build → deploy → support.
Document pipelines, transformations, and models to ensure reproducibility and team-wide adoption.
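The sketches below illustrate several of the responsibilities above; every schema, object name, and value in them is hypothetical rather than actual TFG configuration. First, the Azure-to-Snowflake integration: a storage integration grants Snowflake scoped access to ADLS Gen2, and an external stage sits on top of it.

    -- Storage integration with scoped access to the data lake.
    create storage integration azure_adls_int
        type = external_stage
        storage_provider = 'AZURE'
        enabled = true
        azure_tenant_id = '<tenant-id>'
        storage_allowed_locations = ('azure://tfgdatalake.blob.core.windows.net/bronze/');

    -- External stage over the Bronze container, usable by COPY and Snowpipe.
    create stage bronze.azure_stage
        url = 'azure://tfgdatalake.blob.core.windows.net/bronze/'
        storage_integration = azure_adls_int;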
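Next, a fragment of what a Gold-layer star schema could look like, with a surrogate-keyed dimension and a conformed fact table (names illustrative only):

    create table gold.dim_client (
        client_sk       number identity,  -- surrogate key
        client_id       string not null,  -- natural key from source
        client_name     string,
        effective_from  timestamp_ntz,    -- SCD Type 2 validity window
        effective_to    timestamp_ntz
    );

    create table gold.fct_referral (
        referral_id     string not null,
        client_sk       number,           -- joins to dim_client
        program_code    string,
        referral_date   date,
        loaded_at       timestamp_ntz default current_timestamp()
    );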
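Snowflake expresses masking and row-level security as attachable policies; this sketch assumes a governance schema and an entitlements mapping table that exist only for illustration.

    -- Mask client names for all but privileged roles.
    create masking policy governance.mask_pii as (val string)
        returns string ->
        case when current_role() in ('DATA_ADMIN') then val
             else '***MASKED***'
        end;

    alter table silver.referrals
        modify column client_name set masking policy governance.mask_pii;

    -- Limit rows to the programs a role is entitled to see.
    create row access policy governance.by_program as (prog_code string)
        returns boolean ->
        exists (
            select 1
            from governance.program_entitlements e
            where e.role_name = current_role()
              and e.program_code = prog_code
        );

    alter table silver.referrals
        add row access policy governance.by_program on (program_code);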
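The reusable-macro pattern is sketched here with a deliberately generic, dbt-docs-style example; the macro name and conversion are illustrations, not TFG logic.

    {% macro cents_to_dollars(column_name, precision=2) %}
        {#- Callable from any model to keep the conversion in one place -#}
        round({{ column_name }} / 100.0, {{ precision }})
    {% endmacro %}

    -- Usage inside a model:
    -- select {{ cents_to_dollars('amount_cents') }} as amount_dollars
    -- from {{ ref('stg_payments') }}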
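Finally, a simple freshness check of the kind a monitoring framework might schedule; the table, column, and 24-hour SLA are assumptions for the sketch.

    -- Returns a row only when the table is staler than the SLA,
    -- which an alerting job can treat as a failure signal.
    select
        'silver.referrals'                                    as table_name,
        max(loaded_at)                                        as last_loaded_at,
        datediff('hour', max(loaded_at), current_timestamp()) as hours_stale
    from silver.referrals
    having datediff('hour', max(loaded_at), current_timestamp()) > 24;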
Qualifications
Education & Certification
Professional Experience
Success Metrics (First 6–12 Months)