About the Team

Come help us build the world's most reliable on-demand logistics engine for delivery! We're bringing on experienced engineers to help us further develop the 24x7 global infrastructure that powers DoorDash's three-sided marketplace of consumers, merchants, and dashers.
About the Data Ingestion Team

The Data Ingestion team at DoorDash is at the forefront of managing the seamless movement of trillions of telemetry and transaction data points from diverse sources to our data lakehouse in real time. By integrating this data with our online systems, we empower multiple business lines, drive critical machine learning models, and fuel fast-paced experimentation. Our team leverages cutting-edge open-source technologies such as Apache Spark, Flink, Kafka, Airflow, Delta Lake, and Iceberg to build and maintain a scalable, high-quality data ingestion framework. As a key player in this innovative and dynamic team, you will help evolve our systems to support DoorDash's expanding international footprint and ensure the highest standards of reliability and flexibility. This hybrid role requires you to be located in the Bay Area or Seattle.
You're excited about this opportunity because you will…

- High Impact: Contribute to powering multiple business lines with high-quality, low-latency data directly integrated into online systems, driving billions in revenue.
- Cutting-Edge Technology: Work with advanced open-source technologies such as Apache Spark, Flink, Kafka, Airflow, Delta Lake, and Iceberg.
- Scalability: Play a crucial role in evolving our systems to accommodate a 10x increase in scale, supporting DoorDash's expanding international footprint.
- Innovation and Excellence: Be part of a team that drives innovation and maintains high standards of reliability and flexibility in our data infrastructure.
- Cross-Functional Collaboration: Work closely with teams in Analytics, Product, and Engineering to ensure stakeholder satisfaction with the data platform's roadmap.
- Career Growth: Join a dynamic, growing company where your contributions are recognized and valued, with excellent visibility and opportunities for professional development.

We're excited about you because…

- B.S., M.S., or Ph.D. in Computer Science or equivalent
- 2+ years of experience applying CS fundamentals, with proficiency in at least one of Scala, Java, or Python
- You are located in, or willing to relocate to, the Bay Area or Seattle
- Prior technical experience with Big Data solutions: you've built meaningful pieces of data infrastructure. Bonus points if you've worked on open-source big data processing frameworks using technologies like Spark, Airflow, Kafka, Flink, Iceberg, or Delta Lake
- Experience improving the efficiency, scalability, and stability of data platforms