
Job Title: Data Engineer
Position Type: Full-Time, Remote
Working Hours: U.S. client business hours (with flexibility for pipeline monitoring and data refresh cycles)

About the Role:
Our client is seeking a Data Engineer to design, build, and maintain reliable data pipelines and infrastructure that deliver clean, accessible, and actionable data. This role requires strong software engineering fundamentals, experience with modern data stacks, and an eye for quality and scalability. The Data Engineer ensures data flows seamlessly from source systems to warehouses and BI tools, powering decision-making across the business.

Responsibilities:
Pipeline Development:

  • Build and maintain ETL/ELT pipelines using Python, SQL, or Scala.
  • Orchestrate workflows with Airflow, Prefect, Dagster, or Luigi.
  • Ingest structured and unstructured data from APIs, SaaS platforms, relational databases, and streaming sources.
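
The ETL/ELT flow described above can be sketched end to end. The following is a minimal, illustrative example only, using an in-memory list as a stand-in for an API extract and `sqlite3` as a stand-in warehouse; the table and field names are hypothetical:

```python
import sqlite3

# Hypothetical raw records, standing in for an API or SaaS extract.
RAW_ORDERS = [
    {"order_id": "1001", "amount": "49.99", "region": "us-east"},
    {"order_id": "1002", "amount": "19.50", "region": "US-WEST"},
    {"order_id": "1003", "amount": None,    "region": "us-east"},  # bad row
]

def extract():
    """Extract step: in practice this would call an API or read a source system."""
    return RAW_ORDERS

def transform(rows):
    """Transform step: drop invalid rows, normalize types and casing."""
    clean = []
    for row in rows:
        if row["amount"] is None:
            continue  # a real pipeline would route this to a dead-letter table
        clean.append((int(row["order_id"]), float(row["amount"]), row["region"].lower()))
    return clean

def load(rows, conn):
    """Load step: idempotent upsert so reruns don't duplicate data."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*), ROUND(SUM(amount), 2) FROM orders").fetchone())
# (2, 69.49)
```

In an orchestrator such as Airflow, each of the three functions would typically become its own task so failures can be retried independently.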

Data Warehousing:

  • Manage data warehouses (Snowflake, BigQuery, Redshift).
  • Design schemas (star/snowflake) optimized for analytics.
  • Implement partitioning, clustering, and query performance tuning.
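
To make the star-schema bullet concrete, here is a minimal sketch with `sqlite3` standing in for the warehouse; the dimension and fact tables are hypothetical, but the shape (one fact table joined to dimension tables, aggregated by dimension attributes) is the pattern being described:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A minimal star schema: one fact table surrounded by dimension tables.
cur.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, segment TEXT);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, iso_date TEXT, month TEXT);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    amount       REAL
);
""")
cur.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                [(1, "Acme", "enterprise"), (2, "Globex", "smb")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(20250101, "2025-01-01", "2025-01"), (20250102, "2025-01-02", "2025-01")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 20250101, 500.0), (2, 2, 20250102, 120.0), (3, 1, 20250102, 300.0)])

# The typical analytics query: aggregate the fact, group by dimension attributes.
rows = cur.execute("""
    SELECT c.segment, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_key = c.customer_key
    JOIN dim_date d     ON f.date_key = d.date_key
    GROUP BY c.segment, d.month
    ORDER BY c.segment
""").fetchall()
print(rows)  # [('enterprise', '2025-01', 800.0), ('smb', '2025-01', 120.0)]
```

In Snowflake, BigQuery, or Redshift the same design would additionally carry partitioning/clustering keys (e.g. on `date_key`) to prune scans.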

Data Quality & Governance:

  • Implement validation checks, anomaly detection, and logging for data integrity.
  • Enforce naming conventions, lineage tracking, and documentation (dbt, Great Expectations).
  • Maintain compliance with GDPR, HIPAA, or industry-specific regulations.
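
The validation-check bullet can be illustrated with a hand-rolled sketch (hypothetical fields and rules). Tools like Great Expectations express the same idea declaratively as "expectations"; this is only the underlying pattern of per-row predicates plus logging:

```python
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(message)s")
log = logging.getLogger("dq")

# Hypothetical batch pulled from a staging table.
batch = [
    {"user_id": 1, "email": "a@example.com", "age": 34},
    {"user_id": 2, "email": None,            "age": 29},
    {"user_id": 3, "email": "c@example.com", "age": -5},
]

# Each check is a (name, per-row predicate) pair.
CHECKS = [
    ("email_not_null", lambda r: r["email"] is not None),
    ("age_in_range",   lambda r: 0 <= r["age"] <= 130),
]

def validate(rows):
    """Return rows passing all checks; log every failure for auditability."""
    passed = []
    for row in rows:
        failures = [name for name, pred in CHECKS if not pred(row)]
        if failures:
            log.warning("row %s failed checks: %s", row["user_id"], failures)
        else:
            passed.append(row)
    return passed

good = validate(batch)
print([r["user_id"] for r in good])  # [1]
```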

Streaming & Real-Time Data:

  • Develop and monitor streaming pipelines with Kafka, Kinesis, or Pub/Sub.
  • Ensure low-latency ingestion for time-sensitive use cases.
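
The core of most streaming jobs is windowed aggregation over an unbounded event sequence. A minimal sketch with a simulated stream (with Kafka, Kinesis, or Pub/Sub the list below would be a consumer poll loop; the sensor events are hypothetical):

```python
from collections import defaultdict

# Simulated event stream; in production this would come from a consumer loop.
events = [
    {"ts": 0,  "sensor": "a", "value": 1.0},
    {"ts": 12, "sensor": "a", "value": 3.0},
    {"ts": 14, "sensor": "b", "value": 2.0},
    {"ts": 27, "sensor": "a", "value": 5.0},
]

WINDOW = 10  # seconds per tumbling window

def tumbling_sums(stream):
    """Aggregate values per (window_start, sensor) key."""
    windows = defaultdict(float)
    for event in stream:
        window_start = (event["ts"] // WINDOW) * WINDOW
        windows[(window_start, event["sensor"])] += event["value"]
    return dict(windows)

print(tumbling_sums(events))
# {(0, 'a'): 1.0, (10, 'a'): 3.0, (10, 'b'): 2.0, (20, 'a'): 5.0}
```

Real pipelines add watermarking and late-event handling on top of this; the windowing logic itself is unchanged.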

Collaboration:

  • Partner with analysts and data scientists to provide curated, reliable datasets.
  • Support BI teams in building dashboards (Tableau, Looker, Power BI).
  • Document data models and pipelines for knowledge transfer.

Infrastructure & DevOps:

  • Containerize data services with Docker and orchestrate in Kubernetes.
  • Automate deployments via CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI).
  • Manage cloud infrastructure using Terraform or CloudFormation.

What Makes You a Perfect Fit:

  • Passion for clean, reliable, and scalable data.
  • Strong problem-solving skills and a debugging mindset.
  • Balance of software engineering rigor and data intuition.
  • Collaborative communicator who thrives in cross-functional environments.

Required Experience & Skills (Minimum):

  • 3+ years in data engineering or back-end development.
  • Strong Python and SQL skills.
  • Experience with at least one major data warehouse (Snowflake, Redshift, BigQuery).
  • Familiarity with pipeline orchestration tools (Airflow, Prefect).

Ideal Experience & Skills:

  • Experience with dbt for transformations and data modeling.
  • Streaming data experience (Kafka, Kinesis, Pub/Sub).
  • Cloud-native data platforms (AWS Glue, GCP Dataflow, Azure Data Factory).
  • Background in regulated industries (healthcare, finance) with strict compliance.

What Does a Typical Day Look Like?
A Data Engineer’s day revolves around keeping pipelines running, improving reliability, and enabling teams with high-quality data. You will:

  • Check pipeline health in Airflow/Prefect and resolve any failed jobs.
  • Ingest new data sources, writing connectors for APIs or SaaS platforms.
  • Optimize SQL queries and warehouse performance to reduce costs and latency.
  • Collaborate with analysts/data scientists to deliver clean datasets for dashboards and models.
  • Implement validation checks to prevent downstream reporting issues.
  • Document and monitor pipelines so they’re reproducible, scalable, and audit-ready.

In essence: you ensure the business has accurate, timely, and trustworthy data powering every decision.
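
One of the daily tasks above, query optimization, can be checked mechanically with the database's query planner. A sketch using `sqlite3` as a stand-in (warehouses like Snowflake and BigQuery expose analogous `EXPLAIN` output); the table, data, and index name are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany("INSERT INTO events (region, amount) VALUES (?, ?)",
                 [("us-east" if i % 2 else "us-west", float(i)) for i in range(1000)])

QUERY = "SELECT SUM(amount) FROM events WHERE region = 'us-east'"

def plan(conn, query):
    """Return the planner's description of how the query will execute."""
    return " | ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query))

print(plan(conn, QUERY))  # before: a full table scan of events
conn.execute("CREATE INDEX idx_events_region ON events (region)")
print(plan(conn, QUERY))  # after: a search using idx_events_region
```

The same workflow applies in a warehouse: inspect the plan, then add partitioning, clustering, or an index so the engine scans less data.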

Key Metrics for Success (KPIs):

  • Pipeline uptime ≥ 99%.
  • Data freshness within agreed SLAs (hourly, daily, weekly).
  • Zero critical data quality errors reaching BI/analytics.
  • Cost-optimized queries and warehouse performance.
  • Positive feedback from data consumers (analysts, scientists, leadership).
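
The freshness KPI is typically enforced by a small monitoring check. A sketch with hypothetical per-table SLAs and load timestamps:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness SLAs per table (hourly, daily, ...).
SLAS = {
    "orders":    timedelta(hours=1),
    "customers": timedelta(days=1),
}

def stale_tables(last_loaded, now):
    """Return tables whose latest successful load breaches its SLA."""
    return sorted(t for t, sla in SLAS.items() if now - last_loaded[t] > sla)

now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
last_loaded = {
    "orders":    now - timedelta(hours=3),  # breaches its 1-hour SLA
    "customers": now - timedelta(hours=6),  # within its daily SLA
}
print(stale_tables(last_loaded, now))  # ['orders']
```

In practice this check would run on a schedule and page the on-call engineer when the list is non-empty.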

Interview Process:

  • Initial Phone Screen
  • Video Interview with Pavago Recruiter
  • Technical Task (e.g., build a small ETL pipeline or optimize a SQL query)
  • Client Interview with Engineering/Data Team
  • Offer & Background Verification

Average Salary Estimate:

$125,000 / year (est.)
Min: $100,000
Max: $150,000



Pavago - Thinking Globally to Grow Locally 🌍 Welcome to Pavago, where the world is your talent pool. We believe in a borderless future where businesses can harness the best of international expertise without breaking the bank. 🌟 Why choose Pav...

Employment Type: Full-Time, Remote
Date Posted: March 29, 2026