Physical Intelligence builds general-purpose AI for the physical world. Training our models requires orchestrating thousands of accelerators across a heterogeneous fleet of GPU and TPU clusters — spanning different hardware generations, cloud providers, and cluster topologies.
Today, researchers often need to know which cluster to target, what resources are available, and how to configure their jobs accordingly. That doesn't scale. We need a scheduling and compute layer that makes the right placement decision automatically — routing jobs to the best cluster based on availability, hardware fit, cost, and priority — so researchers can focus entirely on the science.
This role owns that problem end-to-end: the scheduling systems, the placement logic, the cluster management layer, and the operational tooling that keeps it all running.
This is not cloud DevOps. It's not about standing up clusters and walking away. It's a systems role for people who care about intelligent resource allocation, utilization, fault tolerance, and making large-scale distributed training seamless.
The ML Infrastructure team supports and accelerates PI’s core modeling efforts by building the systems that make large-scale training reliable, reproducible, and fast. You will work closely with ML Infra (training systems), data platform, and research teams to ensure compute scheduling is never the bottleneck.
- Own Intelligent Job Scheduling and Placement: Design and build multi-tenant scheduling systems that automatically place training jobs on the best available cluster based on hardware requirements, topology, availability, cost, and priority. Support fair resource sharing across teams and projects with quota management, priority tiers, and preemption policies. Abstract away cluster differences so researchers submit jobs without needing to know where they will land.
- Scale Multi-cluster Orchestration: Build the control plane that manages the job lifecycle across diverse clusters (mixed GPU/TPU, multi-generation hardware, on-prem/cloud) and enables seamless job migration, failover, and re-scheduling.
- Optimize Accelerator Utilization and Efficiency: Monitor and optimize GPU/TPU utilization across the entire fleet. Implement priority, preemption, queueing, and fairness policies that balance research velocity with cost efficiency.
- Ensure Scaling and Stability: Implement fault detection, automatic recovery, and resilience for long-running multi-node training jobs. Manage health checking, node management, and scaling to thousands of accelerators.
- Support Inference and Robot Deployment: Extend scheduling and orchestration to inference workloads, including deploying models to edge devices on physical robots.
- Enhance Observability and Developer Experience: Build the dashboards, alerting, SLOs, and debugging tools necessary for researchers to understand job status and for the team to ensure high scheduling quality and cluster reliability.
We’re intentionally flexible on exact background, but strong candidates usually have:
- Strong software engineering fundamentals
- Experience building or operating job scheduling / resource management systems at scale
- Experience with large-scale compute clusters (GPU and/or TPU)
- Familiarity with schedulers and orchestration systems (SLURM, Kubernetes, GKE, K3S, or internal equivalents)
- Comfort reasoning about resource allocation, bin-packing, priority scheduling, and multi-tenancy
- Understanding of how ML training workloads behave — long-running, multi-node, sensitive to stragglers, topology-dependent
- A bias toward owning systems end-to-end, from design to operation
- Enthusiasm for working closely with researchers and unblocking fast-moving projects
Nice to Have
- Experience building multi-cluster or federated scheduling systems
- Experience with TPU infrastructure (GCP TPU slices, Multislice, GKE)
- Background in cluster resource managers (Borg, YARN, Mesos, or custom schedulers)
- Linux systems engineering, networking, and infrastructure-as-code
- NCCL/collective communication and topology-aware placement
- Experience with capacity planning and cloud cost optimization at scale
- Familiarity with JAX, PyTorch, or similar ML frameworks at the runtime/systems level
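The priority, preemption, and queueing policies mentioned above can be sketched with a simple priority queue. The job names and sizes below are hypothetical, and real schedulers add fairness, quotas, and preemption on top of this:

```python
import heapq

def schedule(jobs, capacity):
    """Admit jobs in priority order until accelerator capacity is
    exhausted; jobs that don't fit stay queued.

    jobs: list of (name, priority, chips) tuples; higher priority wins.
    """
    # heapq is a min-heap, so negate priority to pop highest-priority
    # jobs first, using submission order as the tie-breaker.
    heap = [(-prio, i, name, chips) for i, (name, prio, chips) in enumerate(jobs)]
    heapq.heapify(heap)
    admitted, queued = [], []
    free = capacity
    while heap:
        _, _, name, chips = heapq.heappop(heap)
        if chips <= free:
            admitted.append(name)
            free -= chips
        else:
            queued.append(name)
    return admitted, queued
```

A fair-share scheduler would additionally weight each queue entry by its team's recent usage against quota, rather than by raw priority alone.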
In this role you will help scale and optimize our training systems and core model code. You’ll own critical infrastructure for large-scale training, from managing GPU/TPU compute and job orchestration to building reusable and efficient JAX training pipelines. You’ll work closely with researchers and model engineers to translate ideas into experiments—and those experiments into production training runs.
This is a hands-on, high-leverage role at the intersection of ML, software engineering, and scalable infrastructure.
The Team
The ML Infrastructure team supports and accelerates PI’s core modeling efforts by building the systems that make large-scale training reliable, reproducible, and fast. The team works closely with research, data, and platform engineers to ensure models can scale from prototype to production-grade training runs.
In This Role You Will
- Own training/inference infrastructure: Design, implement, and maintain systems for large-scale model training, including scheduling, job management, checkpointing, and metrics/logging.
- Scale distributed training: Work with researchers to scale JAX-based training across TPU and GPU clusters with minimal friction.
- Optimize performance: Profile and improve memory usage, device utilization, throughput, and distributed synchronization.
- Enable rapid iteration: Build abstractions for launching, monitoring, debugging, and reproducing experiments.
- Manage compute resources: Ensure efficient allocation and utilization of cloud-based GPU/TPU compute while controlling cost.
- Partner with researchers: Translate research needs into infra capabilities and guide best practices for training at scale.
- Contribute to core training code: Evolve JAX model and training code to support new architectures, modalities, and evaluation metrics.
What We Hope You’ll Bring
- Strong software engineering fundamentals and experience building ML training infrastructure or internal platforms.
- Hands-on large-scale training experience in JAX (preferred) or PyTorch.
- Familiarity with distributed training, multi-host setups, data loaders, and evaluation pipelines.
- Experience managing training workloads on cluster and cloud platforms (e.g., SLURM, Kubernetes, GCP TPU/GKE, AWS).
- Ability to debug and optimize performance bottlenecks across the training stack.
- Strong cross-functional communication and ownership mindset.
Bonus Points If You Have
- Deep ML systems background (e.g., training compilers, runtime optimization, custom kernels).
- Experience operating close to hardware (GPU/TPU performance tuning).
- Background in robotics, multimodal models, or large-scale foundation models.
- Experience designing abstractions that balance researcher flexibility with system reliability.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.