The Opportunity
Supplier.io is expanding our data engineering team and is seeking a Senior Data Engineer to play a critical role in scaling and modernizing our data platform. This role is ideal for a highly experienced, hands-on engineer who thrives in complex environments and enjoys mentoring others while collaborating across teams, geographies, and disciplines.
As a Senior Data Engineer, you will shape and execute our long-term data strategy, design resilient and scalable data architectures, and champion technical excellence across our data ecosystem. You will work closely with Product, DevOps, and Product Engineering teams to ensure our data systems support business growth and data-driven decision making.
To support Supplier.io's growth, we are investing heavily in cloud-native, data-driven technologies. This role will be instrumental in leveraging modern data services, optimizing costs, and ensuring our data platform is secure, reliable, and scalable.
What You Will Do
- Drive Data Architecture: Lead the implementation of scalable, fault-tolerant data infrastructure that powers our advanced data platform.
- Lead Technical Strategy: Partner with senior colleagues to define and execute a cohesive data strategy that aligns technical solutions with company objectives.
- Mentorship & Code Quality: Mentor mid-level and junior engineers, conduct code reviews, and enforce best practices for Python, SQL, and data modeling.
- Evaluate & Innovate: Proactively identify, evaluate, and integrate new technologies (such as SQL Mesh) to future-proof our stack.
- Cloud-Native Workflow: Establish and maintain CI/CD pipelines, branching strategies, and project workflows using Azure DevOps and Jira.
- Reliability & Governance: Champion data quality, security, and governance standards and ensure robust monitoring and alerting are in place.
- Documentation: Own the technical documentation strategy, ensuring high-level architecture diagrams and runbooks are kept current.
- Perform other duties as assigned.
What You Will Need to Succeed
- Bachelor's degree in Computer Science, Management Information Systems, Engineering, Data Science, or a related field.
- 7+ years of progressive experience in data engineering with demonstrated ownership of complex data systems in production environments. At least 2 years in a senior or lead capacity preferred.
- Advanced experience designing, building, and supporting Snowflake-based solutions at scale.
- Proven experience architecting and implementing large-scale data solutions (e.g., Azure, AWS, GCP, or multi-cloud environments).
- Strong expertise in data modeling, ETL/ELT processes, and modern data warehousing principles.
- Expert-level proficiency in Python and SQL with the ability to write reusable, modular, and efficient code.
- Strong experience with orchestration tools (e.g., Airflow) or transformation frameworks (e.g., SQL Mesh).
- Experience working in an agile development environment and collaborating through ticketing systems such as Jira and Azure DevOps.
- Ability to communicate technical concepts clearly to technical and non-technical teams and influence decision-making.
- Strong problem-solving skills with the ability to troubleshoot and resolve ambiguous, high-impact issues.
- A results-oriented mindset with a demonstrated history of driving process improvements and technical excellence.
- Ability to work independently while also serving as a trusted technical partner and mentor to others.
- Ability to take vague requirements and turn them into technical roadmaps.
We do not accept unsolicited resumes from recruitment/search firms.