Software Engineer — ETL & Data

We're building the company that will de-risk the largest infrastructure build-out in history.

When people finance GPU clusters, the datacenters housing them, and the infrastructure powering them, they need "offtake": someone who has signed a contract to lease the cluster for a period of time before it's even built.

Financing a GPU cluster is inherently risky, since margins are thin and volumes are huge. Lenders don't want to take on the risk that cluster developers can't repay their loans, and cluster developers really don't want to risk not selling their clusters. As a result, risk is offloaded to the customer using fixed-price long-term contracts.

If you don't mitigate this customer risk, there's a bubble. This isn't SaaS anymore: application-layer companies sign multi-year contracts for compute and inference, but sell to customers on monthly subscriptions. If you mess up a purchase, it's game over: a minor shift in your revenue growth rate might mean the difference between profit and bankruptcy. But what if companies could exit their contracts by selling them back to the market?

Otherwise, as AI scales, compute only becomes available to folks who can effectively take on that risk. A 2-person startup in a San Francisco Victorian can't realistically sign a 5-year take-or-pay contract on a $100m supercomputer. But they may be able to buy the month of liquidity that someone else sold back.

So that's what we make: a liquid market for GPU offtake.

About the Tooling Team


We are a small team focused on making SFCompute engineering faster, more observable, and more reliable. Our work spans data infrastructure, developer experience, pre-production environments, and AI tooling — but the common thread isn't any specific domain. It's that we find the problems nobody else owns and make them solved problems.

Everyone on this team wears many hats. You'll work across the stack, collaborate with all parts of engineering, and regularly take on problems that don't fit neatly into a job description. If you want a narrow scope and a clear ticket queue, this team isn't it. If you want to have a large, legible impact on a small team building serious infrastructure, read on.

The Role

We're looking for a data-focused engineer to own and evolve our internal data infrastructure. You'll take over a lightweight but powerful OLTP-to-OLAP data pipeline and use it to define, instrument, and monitor the KPIs that matter most across the company.

If you've built data pipelines professionally — whether under the title of Data Engineer, Analytics Engineer, or Software Engineer — that's the background we're looking for. This isn't a "build dashboards and wait for requests" role. You'll work closely with engineering, operations, and leadership to shape what we measure and why, turning raw trading and infrastructure data into clear signals that drive decisions.

What You'll Do

  • Own and extend our OLTP-to-OLAP data infrastructure

  • Define and maintain company-wide and team-level KPIs: revenue, utilization, reliability, fulfillment rate, and more

  • Build and iterate on dashboards that surface actionable insight, not just data

  • Partner with engineers to instrument new product features from the start

  • Investigate anomalies, debug data quality issues, and improve pipeline reliability

  • Help establish data conventions and best practices as we scale
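As a sketch of the KPI work described above (every name and field here is hypothetical, not SFCompute's actual schema), a fulfillment-rate metric over GPU-hour orders might look like:

```python
from dataclasses import dataclass

@dataclass
class Order:
    """Hypothetical order record: GPU-hours a customer asked for vs. got."""
    gpu_hours_requested: float
    gpu_hours_delivered: float

def fulfillment_rate(orders: list[Order]) -> float:
    """Fraction of requested GPU-hours actually delivered."""
    requested = sum(o.gpu_hours_requested for o in orders)
    delivered = sum(o.gpu_hours_delivered for o in orders)
    return delivered / requested if requested else 0.0

orders = [Order(100, 100), Order(200, 150), Order(50, 50)]
print(f"fulfillment rate: {fulfillment_rate(orders):.2%}")  # 300/350 ≈ 85.71%
```

The point of defining the metric in one place like this is that dashboards, alerts, and leadership reviews all agree on what "fulfillment" means.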

What We're Looking For

  • Strong SQL and data modeling skills; you can write a complex analytical query without a framework

  • Experience with ETL pipelines and columnar stores (DuckDB, ClickHouse, BigQuery, or similar)

  • A bias toward simple, legible solutions over elaborate architectures

  • Ability to drive ambiguous problems to clear outcomes; you can decide what to measure, not just how

  • Nice to have: experience with Rill or similar BI tooling; familiarity with marketplace or infrastructure business models
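To make the OLTP-to-OLAP shape concrete, here is a minimal extract-transform-load sketch. It uses stdlib `sqlite3` as a stand-in for both stores (in production the analytical side would be a columnar store like DuckDB or ClickHouse, as listed above), and all table and column names are invented for illustration:

```python
import sqlite3

# Stand-in OLTP store: a real pipeline would read from the production
# transactional database. Schema here is hypothetical.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE trades (cluster TEXT, gpu_hours REAL, price REAL)")
oltp.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("h100-a", 100, 2.10), ("h100-a", 50, 2.25), ("h100-b", 200, 1.95)],
)

# Extract: pull raw rows out of the transactional store.
rows = oltp.execute("SELECT cluster, gpu_hours, price FROM trades").fetchall()

# Transform: aggregate per-trade rows into per-cluster revenue.
revenue: dict[str, float] = {}
for cluster, hours, price in rows:
    revenue[cluster] = revenue.get(cluster, 0.0) + hours * price

# Load: write the aggregate into the analytics store, ready for dashboards.
olap = sqlite3.connect(":memory:")
olap.execute("CREATE TABLE cluster_revenue (cluster TEXT, revenue REAL)")
olap.executemany("INSERT INTO cluster_revenue VALUES (?, ?)", revenue.items())

for cluster, rev in olap.execute(
    "SELECT cluster, revenue FROM cluster_revenue ORDER BY cluster"
):
    print(cluster, round(rev, 2))
```

The "simple, legible solutions" bias mentioned above is the point here: a pipeline this small is easy to debug when a number on a dashboard looks wrong.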

Why This Role

The data you'll work with is genuinely unusual: GPU procurement volumes, fulfillment rates, trading engine output, capacity utilization across bare-metal clusters. Nobody has standardized how to measure this stuff yet. The metrics you define will show up in weekly leadership reviews, inform how we price and allocate capacity, and tell us whether the business is working. You'll have direct access to leadership and the actual decision-making process.

Benefits

Generous equity grant

Team members are offered a competitive salary along with equity in the company

Visa Sponsorships

Yes, we sponsor visas and work permits

Retirement matching

We match 401(k) plans up to 4%

Medical, dental & vision

We offer competitive medical, dental, and vision insurance for employees and dependents and cover 100% of premiums

Time off

We offer unlimited paid time off as well as 10+ observed holidays

Parental leave

We offer biological, adoptive, and foster parents paid time off to spend quality time with family

Daily lunch

We cover lunch daily for employees

Unlimited office book budget

You can buy as many books for the office as you want

The San Francisco Compute Company is committed to maintaining a workplace free from discrimination and harassment.

We make employment decisions based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, belief, national origin, social or ethical origin, age, physical, mental, or sensory disability, sexual orientation, gender identity or expression, marital status, civil union or domestic partnership status, past or present military service, HIV status, family medical history or genetic information, family or parental status including pregnancy, or any other status protected by law.

We welcome the opportunity to consider qualified applicants with prior arrest or conviction records. Our commitment to diversity includes hiring talented individuals regardless of their criminal history, in accordance with local, state, and federal laws, including San Francisco’s Fair Chance Ordinance and California’s ban-the-box laws.

If you require reasonable accommodation for any reason, please reach out to us at hiring@sfcompute.com.

Average salary estimate

$185,000 / year (min $150,000, max $220,000)


Employment type: Full-time, onsite

Date posted: April 17, 2026