At Parsons, you can imagine a career where you thrive, work with exceptional people, and be yourself. Guided by our leadership vision of valuing people, embracing agility, and fostering growth, we cultivate an innovative culture that empowers you to achieve your full potential. Unleash your talent and redefine what’s possible.
Job Description:
Parsons is looking for an amazingly talented Data Engineer to join our team!
In this role, you will support the design, implementation, and sustainment of the data plane for a large-scale distributed system spanning microservices, event processing, search/analytics, enterprise reporting, and operational telemetry.
You will work across both cloud-native and constrained edge-style deployments, helping ensure data is collected, transformed, moved, searched, and visualized reliably even in challenging operating environments.
This role is not a traditional database administrator position. Instead, it is focused on data engineering across distributed microservices, event pipelines, search/analytics platforms, enterprise data integration, and resilient/manual data movement workflows.
What You'll Be Doing:
Designing, developing, and maintaining the data plane for a distributed microservices ecosystem consisting of:
approximately 25 core Java microservices
approximately 25 ancillary integration microservices connecting to external systems
Supporting data flows across a modern service platform built around:
MongoDB for operational data storage
Elasticsearch / ELK for analytics, search, cognitive/semantic search use cases, and telemetry analysis
RabbitMQ for event-driven processing and message distribution
service orchestration and security components such as Consul, Nomad, and Vault
Engineering data solutions that can operate across a wide range of deployment models:
very small, constrained environments (e.g., a few bare-metal small-form-factor servers)
larger hybrid/cloud-native environments that can scale to hundreds of nodes
Building and optimizing data pipelines that support:
operational application data flows
telemetry ingestion (hot/warm/cold paths)
dashboard/reporting data feeds
enterprise data integration across inventory, asset, and financial systems
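To illustrate the hot/warm/cold telemetry concept above, here is a minimal JDK-only sketch of age-based tier routing. The tier names come from the pipeline description; the class name and thresholds are hypothetical, not Parsons' actual values.

```java
import java.time.Duration;

/**
 * Illustrative sketch only: routing telemetry records into hot/warm/cold
 * tiers by record age. Thresholds here are hypothetical examples.
 */
public class TelemetryTiering {
    public enum Tier { HOT, WARM, COLD }

    /** Classify a record by its age at ingestion time. */
    public static Tier tierFor(Duration age) {
        if (age.compareTo(Duration.ofHours(24)) <= 0) {
            return Tier.HOT;   // recent data: indexed for fast search
        }
        if (age.compareTo(Duration.ofDays(30)) <= 0) {
            return Tier.WARM;  // older data: cheaper, slower storage
        }
        return Tier.COLD;      // archival tier
    }
}
```

In practice this kind of routing is usually delegated to the search platform's lifecycle tooling rather than hand-rolled, but the decision logic is the same.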
Developing resilient mechanisms for manual or semi-manual data extraction, transfer, and re-ingestion when automated connectivity is degraded or unavailable due to technical, physical, or geopolitical disruptions
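One building block of the manual data-movement workflows described above is integrity verification: fingerprinting a payload before transfer (e.g., via removable media) so the receiving side can confirm it arrived intact before re-ingestion. A minimal sketch using only the JDK, with hypothetical class and method names:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/**
 * Illustrative sketch only: fingerprint a payload before a manual transfer
 * so the receiving side can verify integrity before re-ingestion.
 */
public class TransferManifest {

    /** Returns the SHA-256 digest of the payload as lowercase hex. */
    public static String sha256Hex(byte[] payload) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(payload)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            // Every conforming JDK ships SHA-256, so this cannot occur.
            throw new IllegalStateException(e);
        }
    }

    /** Receiving side: recompute the digest and compare to the manifest entry. */
    public static boolean verify(byte[] payload, String expectedHex) {
        return sha256Hex(payload).equals(expectedHex);
    }
}
```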
Supporting data migration activities between legacy and modernized platforms:
legacy platform built with Java/Grails using PostgreSQL
current distributed platform using MongoDB
working within existing migration processes and improving tooling, reliability, and observability where appropriate
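A recurring step in the PostgreSQL-to-MongoDB migrations above is reshaping relational rows into documents, typically by embedding one-to-many child rows in the parent instead of relying on a join. A minimal sketch; the "asset"/"tags" names are hypothetical examples, not the actual legacy schema:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch only: fold a parent row and its one-to-many child
 * rows into a single embedded document for a document store.
 */
public class RowToDocument {

    /** Embed child rows in the parent instead of relying on a join. */
    public static Map<String, Object> toDocument(Map<String, Object> parentRow,
                                                 List<Map<String, Object>> childRows) {
        Map<String, Object> doc = new LinkedHashMap<>(parentRow);
        doc.put("tags", childRows); // hypothetical embedded-array field
        return doc;
    }
}
```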
Building and maintaining integrations that feed Power BI dashboards, Power Automate workflows, and downstream data science/analytics use cases
Enabling operational visibility through telemetry engineering:
ingesting and transforming logs, events, and metrics
supporting observability pipelines based on ELK and current/transitioning monitoring tooling
helping shape future-state consolidation toward a more ELK-centric observability model
Collaborating closely with software engineers, platform engineers, SRE/operations, data scientists, and enterprise stakeholders to operationalize data and deliver reliable insights
Documenting data models, interfaces, schemas, transformations, operational procedures, and recovery workflows
What Required Skills You'll Bring:
Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, Information Systems, or a related technical field; four additional years of experience may substitute for a degree

10+ years of software and/or data engineering experience
Strong experience in data engineering for distributed applications and microservices-based systems
Hands-on experience with:
Java-based service ecosystems
MongoDB
Elasticsearch / ELK
RabbitMQ or similar message/event platforms
Good understanding of distributed systems concepts including:
eventual consistency
retries and idempotency
partition tolerance and degraded/offline operation
data reconciliation and replay
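To make the retries-and-idempotency item above concrete: when a broker such as RabbitMQ redelivers a message after a retry, applying it a second time must not change state. Tracking already-seen message IDs is one common approach. A minimal sketch; the names and in-memory store are hypothetical simplifications of what would be a durable store in production:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative sketch only: an idempotent event consumer that ignores
 * duplicate deliveries by tracking message IDs.
 */
public class IdempotentConsumer {
    private final Set<String> seenMessageIds = new HashSet<>();
    private final Map<String, Integer> counters = new HashMap<>();

    /** Applies the event once; returns false for a duplicate delivery. */
    public boolean apply(String messageId, String key, int delta) {
        if (!seenMessageIds.add(messageId)) {
            return false; // already applied; safe to acknowledge and drop
        }
        counters.merge(key, delta, Integer::sum);
        return true;
    }

    public int get(String key) {
        return counters.getOrDefault(key, 0);
    }
}
```

The same dedup-before-apply pattern underpins the reconciliation and replay work listed above: replaying a stream is only safe when every consumer is idempotent.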
Experience developing and supporting data pipelines across both operational systems and analytics/reporting environments
Experience designing robust data movement mechanisms, including fallback/manual transfer workflows for disrupted environments
Experience with Power BI and/or Power Automate integration pipelines, including data shaping and feed preparation for dashboards and workflow automation
Experience working with telemetry/log/metric pipelines and operational dashboarding concepts
Ability to write clean technical documentation for schemas, data flows, and operating procedures
Strong oral and written communication skills and the ability to work cooperatively and effectively as a team member
Domestic or international travel may be required.
What Desired Skills You'll Bring:
Master’s degree in Data Engineering, Computer Science, Analytics, or related field
Experience with service platform tooling such as Consul, Nomad, and Vault
Familiarity with legacy-to-modern data transition patterns, including relational to document-oriented migration approaches
Experience supporting systems deployed in constrained environments with intermittent connectivity and variable infrastructure quality
Experience with PostgreSQL and Grails/Java legacy systems in migration or sustainment contexts
Experience with search engineering, semantic search, or cognitive search implementations using Elasticsearch
Experience building data feeds that support operational dashboards, executive reporting, and data science workflows
Familiarity with observability tooling transitions and telemetry normalization across multiple sources
Ability to identify data architecture improvements, develop prototypes, and help build the case for operational enhancements
Experience training users or technical teams as new data capabilities move into production
Demonstrated success working across software, platform, analytics, and operations teams in complex technical environments
Security Clearance Requirement:
An active Top Secret/SCI security clearance is required for this position.

This position is part of our Federal Solutions team. The Federal Solutions segment delivers resources to our US government customers that ensure the success of missions around the globe. Our intelligent employees drive the state of the art as they provide services and solutions in the areas of defense, security, intelligence, infrastructure, and the environment. We promote a culture of excellence and close-knit teams that take pride in delivering, protecting, and sustaining our nation's most critical assets, from Earth to cyberspace. Throughout the company, our people are anticipating what's next to deliver the solutions our customers need now.

Salary Range: $148,300.00 - $266,900.00

We value our employees and want them to take care of their overall wellbeing, which is why we offer best-in-class benefits such as medical, dental, vision, paid time off, Employee Stock Ownership Plan (ESOP), 401(k), life insurance, flexible work schedules, and holidays to fit your busy lifestyle!

Parsons is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, veteran status, or any other protected status.

We truly invest in and care about our employees' wellbeing and provide endless growth opportunities, as the sky is the limit, so aim for the stars! Imagine next and join the Parsons quest. APPLY TODAY!

Parsons is aware of fraudulent recruitment practices. To learn more about recruitment fraud and how to report it, please refer to https://www.parsons.com/fraudulent-recruitment/.