Data is essential for all our clients' decision-making needs, whether it's measuring franchisee advertising effectiveness, helping key stakeholders understand business KPIs, or building new products in emerging markets. This data is deeply valuable and gives us insight into how we can keep improving our service for our team, advertisers, and partners. Comma8 is seeking a highly motivated Data Engineer with a strong technical background and a passion for diving deep into Big Data to develop state-of-the-art data solutions. "About Us" sections always feel like a forced dating bio, so we'll cut right to it: we solve complex problems for some of the biggest brands in fit-tech. We've collected the expertise needed to help them find and amaze their customers with top-notch digital fitness experiences.
And we work with some unbiasedly awesome people, who believe in deep diving on new challenges, personal growth, and having a life. Since we know you'll deliver, we're a Results-Only Workplace Environment, where we offer as many vacation, sick and mental-health days as you need to get the job done.
With projects ranging from high-impact, cutting-edge solutions to complete strategic overhauls, we can promise you one thing: your work will have a meaningful impact on an active and thriving customer base.
Benefits
Though we have an office in Irvine, CA and co-working spaces in Los Angeles, we've become a remote-first company through the pandemic. When it's safe to do so, we'd love to meet in person and host retreats. For that reason, candidates in Southern California and the Pacific time zone are preferred, but we're happy to consider any candidate in the US.
We're a Results-Only Workplace Environment (ROWE), which allows you as many vacation, sick, and mental-health days as you need, as long as you are meeting your performance goals, assigned tasks, and projects.
Other benefits include: health, dental, vision, and life insurance; a 401(k) plan; various fitness and wellness benefits (group classes, free training, products, etc.); continued-education courses; and conference attendance.

The responsibilities and job scope discussed here may change as necessitated by business demands.
Responsibilities
- Lead the design and growth of our products and data warehouses around our clients' analytics
- Design and develop scalable data warehousing solutions, building ETL pipelines in Big Data environments (cloud, on-prem, hybrid)
- Manage the transformation of large daily batch data volumes in the cloud using Apache Spark, EMR, and Glue, ensuring streamlined processing and cost savings
- Construct and maintain high-throughput streaming data pipelines using technologies like Kinesis, Spark Streaming, and Elasticsearch, while minimizing response lag
- Automate and orchestrate complex data workflows using Python, Apache Airflow, and Step Functions to eliminate bottlenecks in data pipelines
- Mentor and guide a team, providing technical expertise in SQL query execution, data manipulation, data visualization, and performance optimization
- Develop, test, and deploy scalable reverse ETL solutions using API Gateway, Python (Flask), and Lambda, achieving near-zero latency and high scalability
- Help architect data solutions and frameworks, and define data models for the underlying data warehouse and data lakes
- Collaborate with key stakeholders to map, implement, and deliver successful data solutions
- Maintain detailed documentation of your work and changes to support data quality and data governance
- Ensure high operational efficiency and quality of your solutions to meet SLAs and support our commitment to our clients
- Be an active participant and advocate of agile/scrum practice to ensure health and process improvements for your team

Qualifications
- 3-5 years of data engineering experience developing large data pipelines
- Strong SQL skills and the ability to create queries to extract data and build performant datasets
- Hands-on experience with data integration tools (e.g., Apache Spark, Apache Kafka)
- Hands-on experience with cloud-based data services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow)
- Experience with version control systems (e.g., Git) and collaborative development practices
- Strong programming skills in Python
- Experience with at least one major MPP or cloud database technology (Snowflake, Redshift, BigQuery)
- Solid experience with data integration toolsets (e.g., Airflow) and with writing and maintaining data pipelines
- Strong grounding in data modeling techniques and data warehousing standard methodologies and practices
- Familiarity with Scrum and Agile methodologies
- A problem solver with strong attention to detail and excellent analytical and communication skills
- Nice to have: experience with cloud technologies like AWS (S3, EMR, EC2)
$120,000 - $150,000 a year