Madhive is the leading independent, fully customizable operating system that helps local media professionals build profitable, differentiated, and efficient businesses. Madhive empowers sales teams to extend their reach into streaming and connects local advertisers with the communities they serve. Madhive’s platform provides the unique ability to reach local audiences at national scale, with premium supply partnerships and end-to-end tools for planning, targeting, and measuring full-funnel campaign outcomes. Powering campaigns for over 30,000 small and medium businesses per day, Madhive is driving the evolution of local media.
We are seeking a Senior Data Engineer to join our growing data engineering team. In this role, you’ll play a crucial part in scaling the infrastructure that powers our universal pixel and transforming high-volume event data into reliable, actionable signals. You’ll work closely with cross-functional teams to build robust, scalable pipelines and models, while helping ensure high data quality across our platform.
What You’ll Do:
Design and implement scalable data pipelines for ingesting, processing, and transforming large volumes of universal pixel and event data
Build and maintain real-time and batch workflows using tools like Kafka, Airflow, and BigQuery
Collaborate with engineers and product teams to ensure event data is captured accurately through our JavaScript-based universal pixel
Own and optimize ELT processes to support reporting, analytics, and machine learning use cases
Develop and maintain data models to support internal stakeholders and platform features
Monitor pipeline health, implement anomaly detection, and maintain high data quality standards
Contribute to the evolution of our cloud data infrastructure (built on GCP)
Document data pipelines, models, and operational workflows for transparency and team knowledge sharing
Promote and enforce best practices in data engineering, observability, and data governance
Who You Are:
5+ years of experience in software or data engineering, with a focus on building scalable data infrastructure
Strong experience with data pipelining and modeling, including tools like Apache Airflow, Databricks, Snowflake, and dbt
In-depth knowledge of streaming technologies such as Apache Kafka
Skilled in designing and maintaining ELT/ETL workflows using modern tooling
Proficient in SQL and comfortable working with both relational and NoSQL databases (e.g., Postgres, Bigtable, Spanner)
Experience working with cloud platforms, ideally GCP
Familiarity with JavaScript and front-end tracking concepts, especially in non-browser environments like CTV
Strong problem-solving and debugging skills, especially with distributed systems and large-scale event data
Excellent collaboration and communication skills
Bonus: experience in adtech, martech, or CTV attribution
The approximate compensation range for this position is $145,000-$210,000. The actual offer, reflecting the total compensation package and benefits, will be determined by a number of factors, including the applicant's experience, knowledge, skills, and abilities, as well as internal equity among our team.
#LI-Remote
We are Madhive
Madhive is a dynamic, diverse, innovative, and friendly place to work. We embrace our differences and believe they fuel our creativity. We come from varied backgrounds and think that’s important. Whether it’s taking ideas from previous lives and applying them in different ways or creating something completely new, we are all trail-blazing team players who think big and want to make an impact.
We are committed to cultivating a culture of inclusion and collaboration. We welcome diversity in education, culture, opinions, race, ethnicity, gender identity, veteran status, religion, disability, sexual orientation, and beliefs.
Please be advised that we will NOT be using third-party recruiting agencies for this search.