About the Team

DoorDash is a data-driven organization and relies on timely, accurate, and reliable data to drive many business and product decisions. The Data Platform team owns all the infrastructure necessary to run an operationally efficient analytical data stack. The Core Data portion of this includes data ingestion (batch and real-time), data compute and transformation, data storage (warehouse, data lake, OLAP, etc.), and querying infrastructure, as well as data compliance, quality, and governance. Adjacent areas of major focus are Machine Learning Infrastructure and workflow, the Experimentation Platform, Knowledge Graphs, and various Data Science and Analytics tooling.
About the Role

As a Senior Staff Engineer, you will be responsible for the organization's most critical long-term technical roadmap. You will use your skills and experience to guide engineers and management leadership toward the right technical choices, mentor several senior engineers, and hold a high bar of technical competency. You will report to the Senior Director of Engineering, Data Platform, as part of our Data organization in Engineering.
You're excited about this opportunity because you will…
- Build data-intensive solutions used by DoorDash engineers, data scientists, analysts, and business users across the company.
- Drive and deliver the ongoing product vision of data products, thinking in terms of building data products rather than systems.
- Drive the engineering vision, strategy, and execution for an organization consisting of multiple teams and sub-teams.
- Lead as a technologist: mentor and guide a fast-growing organization in setting the right architectural patterns, handling build-vs-buy decisions, working with vendors in the data solutions space, and making judicious investments in the right areas while anticipating what the company will need a few years down the road.
- Pursue quick wins while planning for long-term strategy and engineering excellence.
- Break down large systems into manageable, sustainable components that can be iterated on.
- Strive for continuous improvement of data architecture and development processes.
- Collaborate cross-functionally with stakeholders, external partners, and peer data leaders.
- Roll up your sleeves to get down to the lowest level of detail.

We're excited about you because…
- You have extensive experience building and operating scalable, fault-tolerant, distributed systems for large-scale, data-intensive applications.
- You have experience building data products and abstractions on top of infrastructure (such as a metrics server, real-time job processing, self-serve platforms for forecasting, experimentation frameworks, or deriving data-driven insights).
- You have experience with a range of large-scale data systems: data processing, complex/high-volume real-time insights, data quality and reliability frameworks, cost efficiency, etc.
The following areas are representative of the breadth of data technologies with which familiarity would be valuable:
- Backend service and tool development in Golang, Kotlin, and/or Python
- AWS
- Kubernetes, Docker, Kops, Helm
- Apache Kafka, Apache Flink, Apache Spark, Trino (Presto), Apache Airflow/Dagster, Apache Superset, AWS S3, Snowflake, Amplitude, CDP (e.g. Segment.com)