Staff Data Engineer

Logicbroker
Full-time
Remote
Logicbroker is the Agentic Commerce Orchestration Engine helping enterprise retailers, brands, suppliers, and distributors connect and grow. Our Intelligent Commerce Network powers over $10B in GMV for global leaders like Samsung, Walgreens, and Home Depot by automating the entire process from discovery to doorstep and stock to dock. We make products discoverable, shoppable, fulfillable, and returnable so our clients can grow faster, delight customers, and run smarter operations.   
 
Job Summary: 
As a Staff Data Engineer, you will set the technical vision and architecture for Logicbroker’s data infrastructure. You will build and optimize large-scale data pipelines, event streams, and data services that power analytics, AI-driven insights, and mission-critical commerce operations. You’ll lead initiatives to modernize our data platform, mentor engineers, and collaborate with cross-functional teams (Engineering, SRE, Data Science, Product) to deliver scalable, secure, and high-performance data systems. 
 
What You'll Do: 
  • Define the strategy for our data pipelines, ETL/ELT workflows, and event streaming architectures to ensure scalability and reliability. 
  • Design and implement real-time and batch data processing systems using frameworks like Apache Spark, Flink, or Beam. 
  • Lead efforts to evolve our data lakehouse and warehouse solutions (e.g., Databricks, Redshift, BigQuery, or Delta Lake). 
  • Establish best practices for data modeling, schema evolution, partitioning, and query optimization. 
  • Champion data observability and quality, building automated validation, monitoring, and anomaly detection systems. 
  • Build data ingestion services that handle high-volume e-commerce events (orders, inventory, shipments) in real time. 
  • Implement and optimize data APIs and self-service data access for analytics, machine learning, and reporting teams. 
  • Develop robust streaming pipelines (Kafka, Kinesis) and integrate with event-driven architectures across the platform. 
  • Improve data tooling for developers: local testing environments, automated schema checks, and metadata management. 
  • Partner with Product and Data Science teams to translate business needs into data products and pipelines. 
  • Work closely with SRE to ensure data services meet strict SLAs, SLOs, and resilience goals. 
  • Collaborate with stakeholders to deliver actionable insights, enabling real-time dashboards, operational metrics, and predictive analytics. 
  • Represent data engineering in strategic planning sessions and technical deep-dives with leadership. 
  • Mentor senior and mid-level engineers, guiding data architecture decisions and career development. 
  • Advocate for data-driven engineering culture, setting standards for testing, documentation, and secure data handling. 
  • Contribute to hiring efforts, ensuring we build a world-class data engineering team. 
What We Need: 
  • Bachelor’s degree in Computer Science, Data Engineering, or related field. 
  • 10+ years of experience in software/data engineering, with at least 2 years in a senior or staff-level leadership capacity. 
  • Deep expertise in cloud-native data engineering and distributed data processing. 
  • Proficiency in Python, Java, Scala, Go, or other backend/data-focused languages. 
  • Strong experience with streaming data frameworks (Kafka, Kinesis, or Pulsar) and data workflow orchestration (Airflow, Dagster, Prefect). 
  • Advanced knowledge of SQL and NoSQL databases (Postgres, DynamoDB, MongoDB, Cassandra). 
  • Proven ability to design and optimize large-scale data pipelines (batch and real-time). 
  • Familiarity with containerization and orchestration (Docker, Kubernetes). 
  • Expertise with at least one major cloud platform (AWS, GCP, or Azure) and its data services. 
  • Experience with CI/CD pipelines, infrastructure-as-code, and automated testing in data environments. 
  • Experience with lakehouse technologies (Delta Lake, Iceberg) and modern data warehouses (Snowflake, BigQuery). 
  • Exposure to AI/ML data pipelines, feature stores, and model serving. 
  • Strong knowledge of data security, compliance, and governance (e.g., SOC 2, GDPR). 
  • Familiarity with observability and lineage tools (OpenLineage, Monte Carlo, or Datadog). 

Why Logicbroker:
  • Mission-Driven Culture: Be part of a company transforming digital commerce through innovation and agility—your work directly shapes how global brands connect with customers. 
  • Collaborative, No-Ego Environment: We believe the best ideas win, not the loudest voices. You’ll work alongside teammates who challenge and support each other. 
  • Hybrid Flexibility with High-Performance Energy: Whether remote or in-office, we foster autonomy and accountability—because we trust you to own your success. 
  • Leadership That Listens: Our executives are not just accessible—they’re invested in your growth, open to your ideas, and committed to building a company where people thrive. 
  • Celebrated Wins, Shared Learnings: From team offsites to Slack shoutouts, we celebrate progress and learn from setbacks together.