What is this role?

We are looking for a Director of Data Engineering to lead our growing data team. They will manage Cirkul's data pipeline architecture and optimize data flow and collection. They should be advanced in Python and SQL and able to work with modern technologies in a fast-paced, startup-like environment. The Director of Data Engineering will support our data initiatives and ensure an optimal data delivery architecture that is consistent across ongoing projects. They must be a self-starter, comfortable supporting the data needs of multiple teams, systems, and products. The right candidate should be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.
What does an average day look like?

- Driving Results: Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Python, SQL (with dbt for transformation), commercial SaaS and OSS offerings, and the AWS suite of technologies. Create and maintain optimal data pipeline architecture. Keep team members on task and head off scope creep.
- Taking Ownership: Manage requirements gathering and compile business problem/solution definitions. Manage the infrastructure and configuration of the data platform in the cloud, ensuring health, reliability, and uptime.
- Making Decisions: Work with management to identify projects that can deliver value. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Cultivating Relationships: Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Instilling Trust: Ensure successful delivery of business value by communicating directly with stakeholders.
- Customer Focus: Onboard retail partners, standing up automated data ingestion pipelines and ensuring data integrity.

What background should you have?
- 8+ years of experience in a Data Engineer role at a fast-paced organization
- 8+ years of experience with at least one cloud provider, such as AWS, Azure, or GCP
- 4+ years of experience leading a data engineering organization
- Familiarity with AWS services (S3, EC2, RDS, EKS, MSK, Lambda) is preferred
- Advanced working knowledge of Python
- Exceptional fluency with SQL; you conquered the JOIN Venn diagram long ago and have moved on to explaining cost-based optimization to your peers on the engineering team
- Experience with modern columnar data warehouses such as Snowflake, Redshift, or BigQuery
- Experience with orchestration tools such as Airflow, Dagster, or Prefect; with big data tools such as Kafka and Spark; and with dbt
- Experience ingesting, processing, and visualizing data sources of varying types, both structured/relational and unstructured
- Experience developing, managing, and manipulating large, complex datasets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong project management and organizational skills
- Experience supporting and working with cross-functional teams in a dynamic environment