
Lead Data Engineer - Payments Technology

JPMorganChase
Full-time
On-site
Chicago, Illinois, United States
$128,250 - $175,000 USD yearly
Description

Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.

As a Lead Data Engineer at JPMorgan Chase within the Commercial & Investment Bank's Payments Technology team, you play a crucial role in an agile environment, focusing on improving, developing, and implementing data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your expertise and contributions have a substantial impact on the business, as you utilize your deep technical knowledge and problem-solving skills to address a wide range of challenges across various data pipelines, architectures, and consumer needs.

Job responsibilities

  • Designs and delivers trusted data collection, storage, access, and analytics platform solutions in a secure, stable, and scalable way
  • Defines database back-up, recovery, and archiving strategy
  • Generates advanced data models for one or more teams using firmwide tooling, linear algebra, and statistical and geometrical algorithms
  • Delivers data pipeline/architecture solutions that can be leveraged across multiple businesses/domains
  • Influences peer leaders and senior stakeholders across the business, product, and data technology teams
  • Provides recommendations and insight on data management and governance procedures and intricacies applicable to the acquisition, maintenance, validation, and utilization of data
  • Identifies opportunities for process improvements and operational efficiencies
  • Creates data models for complex applications and integrations while being accountable for ensuring design constraints are met by data engineering standards and software code development
  • Oversees the design, development, and implementation of data solutions using Databricks and AWS Glue
  • Ensures the scalability, reliability, and performance of data pipelines and infrastructure

Required qualifications, capabilities, and skills

  • Formal training or certification in data engineering concepts and 5+ years of applied experience
  • Ability to guide and coach teams on approaches to achieving goals aligned with strategic initiatives
  • Practical SQL and NoSQL experience
  • Advanced understanding of database back-up, recovery, and archiving strategies
  • Expert knowledge of AI/ML models 
  • Experience presenting and delivering data visualizations
  • Proficient in automation and continuous delivery methods including all aspects of the Software Development Life Cycle
  • Expert knowledge of a combination of PySpark, Databricks, AI/ML, Snowflake, Redshift, data lakes, data products, cloud-based big data technologies, and metadata handling
  • Advanced understanding of agile methodologies and practices such as CI/CD, application resiliency, and security


Preferred qualifications, capabilities, and skills

  • Applicable certifications are preferred