At Replicate, we believe AI shouldn't be exclusive to tech giants — it should be accessible to every software developer. Our goal is straightforward: build the best platform for creating, deploying, and running machine learning models. As an Infrastructure Engineer on the Platform team, you'll play a key role in making generative AI available to everyone.
The Platform team at Replicate oversees the entire lifecycle of models, from packaging and deployment to serving, scaling, and monitoring. You'll be developing the infrastructure that supports thousands of models and powers millions of predictions daily. This is a chance to build something truly innovative, where each decision you make has a tangible impact and allows your creativity to shine.
What you'll be doing:
- Designing and building our deployment and model-serving platform.
- Building technology to operate the latest advancements in the ML and AI space.
- Designing systems to maximize the utilization and reliability of our Kubernetes clusters and GPUs, including multi-regional traffic shifting and failover capabilities.
- Owning and optimizing fair and reliable task allocation and queuing across a diverse set of customers with heterogeneous workloads.
- Working with our Models team to speed up model inference through techniques like caching, weights management, machine configurations, and runtime optimizations in Python and PyTorch.
- Working with technologies such as:
  - Python, Go, and Node.js
  - Kubernetes and Terraform
  - Redis, Google BigQuery, and PostgreSQL

We're looking for the right person, not just someone who checks boxes, but it's likely you have…
- Experience building platforms at scale.
- Worked in complex systems with many moving parts; you have opinions on monoliths vs. services.
- Designed and implemented developer-friendly APIs to enable scalable and reliable integration.
- Hands-on experience setting up and operating Kubernetes.
- A passion for building tools that empower developers.
- Strong communication and collaboration skills, with the ability to understand customer needs and distill complex topics into clear, actionable insights.
- At least 3 years of full-time software engineering experience.

These aren't hard requirements, but we definitely want to talk with you if…
- You have worked on machine learning platform teams in the past.
- You have experience working with or on teams that have put ML/AI into production, even though this role does not entail building ML models directly.
- You have some exposure to serving generative AI features, where GPUs are costly commodities and workloads can take significant time to finish.

You'll be working from our beautiful office in the Mission, San Francisco, for this role. We want to build a strong in-person culture for the people who are there. We want you to be there because you want to be, not because we have to drag you in.
Salary: $200k - $280k USD
Apply now
Name: Required
Email: Required
Phone number:
City:
Country:
Resume: If you haven't got a resume, a LinkedIn profile, GitHub profile, or some plain text is fine too.
LinkedIn profile:
Can you work from our office in San Francisco at least 3 days a week? Required
Yes / I'm willing to relocate / No
Can you legally work in the United States? Required
Yes / No
Do you have at least 3 years of full-time software engineering experience? Required
Yes / No
Have you worked on building platforms? Required
Do you have experience working on teams that have built and shipped machine learning models? Required (The experience itself isn't required, but we'd love to know if you have it!)