
Senior Data Engineer

Premise
Full-time
Remote
Mexico




Mission

Every consumer on earth purchases in one of three places: online, big-box retail, or mom-and-pop shops. Paradoxically, the largest commercial channel by far is the humblest: traditional trade. Yet these shops remain fragmented, offline, and opaque.

Native is the first intelligence-grade system built to penetrate this opacity. Each analog store is digitized into a dynamic graph where noise is filtered into low-latency signals, transforming the antiquated offline world into advanced digital intelligence. Commercial leaders gain the precision to see what others cannot, store by store, and a decisive advantage to win the market.

Join the ground floor of the only platform engineered to decode the analog economy into operational dominance.

In this role, you will join a high-impact team responsible for building and scaling Native’s data platform, the system that powers our global data collection pipeline, real-time ingestion, and the delivery of insights to our customers. You will design and operate the data workflows that move, transform, and model data across GCP services, ensuring that information flows reliably from our collection technologies into the products and analytics our clients depend on. Your work will sit at the center of Native’s business, driving how data is captured, validated, processed, and ultimately delivered across our platform.

Native thinks differently about market research, an industry undergoing fundamental change right now. We’re building technology around new data collection methods, geolocation, and in-the-moment CX that will modernize how (and how deeply) companies understand their customers. If you’re looking to impact the future of global insights and analytics, let’s talk.


Talent Values

  • High Leverage: Consistent ability to attain the productive capacity of 5-10 people through grit, raw talent, and sheer force of will.
  • High Agency: Relentless sense of ownership in the outcome, regardless of circumstance, acting decisively to shape the environment rather than being shaped by it.
  • Curiosity With Discipline: An evidence-seeking, measurement-driven mindset that doesn’t succumb to analysis paralysis, and a penchant for experimentation.
  • Intellectual Honesty: Certain enough to act, humble enough to keep learning.



Role

  • Design, build, and maintain scalable ETL pipelines and DAG-orchestrated workflows using GCP-native tools (e.g., Cloud Composer/Airflow, Dataflow, BigQuery) to support both operational and analytical needs (a minimal sketch of such a pipeline follows this list)
  • Develop and evolve data models and schemas that enable clean, reliable, and well-structured downstream consumption
  • Partner with analytics, product, and engineering teams to understand data requirements and deliver accessible, high-quality datasets
  • Own the end-to-end lifecycle of data workflows, including deployment, monitoring, alerting, and troubleshooting in production environments
  • Ensure data quality, consistency, and integrity through validation frameworks, monitoring, and adherence to engineering best practices (testing, version control, documentation)
  • Optimize data processing for performance, scalability, and cost efficiency across GCP services
  • Contribute to the evolution of the data platform architecture to ensure maintainability, scalability, and long-term reliability
  • Stay current with emerging technologies and best practices in data engineering, workflow orchestration, and cloud data platforms
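
To make the DAG-orchestrated workflows described above concrete, here is a minimal sketch of a Cloud Composer/Airflow pipeline that stages raw collection files from GCS into BigQuery and then transforms them into a modeled table. It assumes Airflow 2.4+ with the Google provider package installed; the DAG id, bucket, project, dataset, and table names are all hypothetical placeholders, not Native's actual pipeline.

    # Minimal sketch of a daily GCS -> BigQuery ETL DAG.
    # Assumes Airflow 2.4+ and apache-airflow-providers-google.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
    from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

    default_args = {
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
    }

    with DAG(
        dag_id="daily_store_ingest",       # hypothetical pipeline name
        schedule="@daily",
        start_date=datetime(2024, 1, 1),
        catchup=False,
        default_args=default_args,
    ) as dag:
        # Load the day's raw collection files from GCS into a staging table.
        load_raw = GCSToBigQueryOperator(
            task_id="load_raw_to_staging",
            bucket="example-collection-bucket",          # hypothetical bucket
            source_objects=["raw/{{ ds }}/*.json"],      # templated by run date
            source_format="NEWLINE_DELIMITED_JSON",
            destination_project_dataset_table="analytics.staging_store_events",
            write_disposition="WRITE_TRUNCATE",
        )

        # Transform staged rows into the modeled table downstream consumers query.
        transform = BigQueryInsertJobOperator(
            task_id="transform_to_model",
            configuration={
                "query": {
                    "query": """
                        SELECT store_id, event_ts,
                               SAFE_CAST(amount AS NUMERIC) AS amount
                        FROM `analytics.staging_store_events`
                        WHERE event_ts IS NOT NULL
                    """,
                    "destinationTable": {
                        "projectId": "example-project",
                        "datasetId": "analytics",
                        "tableId": "store_events",
                    },
                    "writeDisposition": "WRITE_APPEND",
                    "useLegacySql": False,
                },
            },
        )

        load_raw >> transform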


Qualifications

  • 5+ years of experience in data engineering or similar roles
  • Strong proficiency in SQL and Python
  • Hands-on experience with GCP data infrastructure (BigQuery, Cloud Composer/Airflow, Dataflow, Cloud Storage, Pub/Sub)
  • Experience designing and orchestrating DAG-based ETL/ELT pipelines
  • Solid understanding of data modeling and data warehousing concepts
  • Experience designing and optimizing data movement across GCP services (e.g., BigQuery ⇄ GCS, Pub/Sub pipelines, Dataflow transformations); a sketch of one such pattern follows this list
  • Experience deploying, monitoring, and troubleshooting production data pipelines
  • Familiarity with CI/CD, testing, and version control best practices in data ecosystems
  • Ability to partner cross-functionally with analytics, product, and engineering teams
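
As a rough illustration of the BigQuery ⇄ GCS data movement mentioned above, here is a minimal sketch using the google-cloud-bigquery Python client to export a modeled table to GCS, a common handoff point for downstream Dataflow jobs or external consumers. The project, dataset, table, and bucket names are hypothetical placeholders.

    # Minimal sketch of a BigQuery -> GCS extract job.
    # Assumes the google-cloud-bigquery client library and ambient GCP credentials.
    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")  # hypothetical project

    # Export to compressed newline-delimited JSON; the wildcard shards the output.
    job_config = bigquery.ExtractJobConfig(
        destination_format=bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON,
        compression=bigquery.Compression.GZIP,
    )

    extract_job = client.extract_table(
        "example-project.analytics.store_events",             # source table
        "gs://example-export-bucket/store_events/*.json.gz",  # sharded output URIs
        job_config=job_config,
    )
    extract_job.result()  # block until the export job completes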



Apply now