Sr. Data Engineer
Location: Richmond, VA
Department: Engineering
Reports to: Data Team Manager
About Fulcrum Collaborations
Fulcrum Collaborations is a Salesforce ISV Partner that develops MCIM (www.mcim24x7.com), a cutting-edge mission-critical information management platform for data centers. Our platform integrates with various enterprise systems, including CMMS, ERP, service desk, and building automation systems, to provide data-driven insights and operational intelligence.
We are looking for a Senior Data Engineer with DevOps expertise to help build and deliver data products that power analytics, reporting, and AI-driven decision-making. You will be responsible for designing scalable data pipelines, automating deployments, building and deploying REST APIs, and optimizing our data infrastructure.
Key Responsibilities
Data Engineering & Data Products Development
- Design, build, and optimize scalable ETL/ELT pipelines to support data analytics.
- Design, develop, and maintain RESTful APIs using Flask, Django, or similar Python frameworks.
- Implement authentication, authorization, and security best practices for API access.
- Develop and manage data lakes, data warehouses, and real-time data processing using tools like Snowflake and AWS services.
- Ensure data integrity, security, and governance across Fulcrum's data platform.
- Work with structured and unstructured data from sources such as CMMS and ERP systems.
- Collaborate with data analysts, product managers, and software engineers to develop data products that provide actionable insights to customers.
- Create and maintain clear technical documentation for data pipelines, including data lineage, transformation logic, and dependencies.
- Develop API documentation using industry-standard tools (e.g., Swagger/OpenAPI), including endpoint specs, request/response examples, and error handling.
- Document data models, schemas, and data dictionaries to enable self-service discovery.
DevOps & Automation for Data Infrastructure
- Build and maintain CI/CD pipelines for data pipelines, infrastructure deployments, and analytics models using GitHub Actions, Docker, Heroku, AWS ECR/ECS, or GitLab CI.
- Deploy and manage containerized data processing tools.
- Implement observability, logging, and monitoring for data workflows.
- Optimize cloud infrastructure (AWS) to balance performance and cost efficiency.
Security, Compliance, & Best Practices
- Implement best practices for data security, encryption, and access control.
- Ensure compliance with GDPR, SOC2, HIPAA, and other regulatory requirements.
- Define and document data pipeline standards, testing frameworks, and automation best practices.
Qualifications
Must-Have:
- 5+ years of experience in Data Engineering with DevOps expertise.
- Strong proficiency in Python, SQL, and data pipeline orchestration tools.
- Hands-on experience with CI/CD pipelines and containerization.
- Hands-on experience with REST API development using FastAPI, Flask, Django, or Golang.
- Expertise in cloud platforms (AWS, Azure, or GCP) for data storage and processing.
- Experience with Infrastructure-as-Code.
- Familiarity with data pipeline monitoring and observability tools.
Nice-to-Have:
- Experience integrating data from CMMS, ERP, and IoT systems.
- Understanding of ML Ops and AI model deployment.
- Knowledge of data privacy and compliance standards.
Why Join Fulcrum Collaborations?
- Work on high-impact data products that transform data center operations.
- Leverage the Salesforce ecosystem to build next-gen data solutions.
- Competitive salary, benefits, and flexible remote work options.