Technical Skills Required:
- Expert proficiency in Python, C#, and T-SQL
- Strong experience with Databricks and Apache Spark (especially Spark SQL)
- Proficient in SQL Server and ASP.NET Core
- Experience with Apache Kafka
- Skilled in query optimization and SQL performance tuning
- Familiarity with data warehousing concepts and dimensional modeling
- Knowledge of containerization technologies (e.g., Docker, Kubernetes)
- Experience with cloud platforms (preferably Azure)
- Proficiency in Git and CI/CD pipelines
- Familiarity with data governance and security best practices
- Understanding of RESTful API design principles
- Experience with data visualization tools (e.g., Power BI, Tableau)

Responsibilities:
1. Design, build, and maintain scalable data jobs and workflows in Databricks using Spark SQL and Python
2. Migrate and optimize SQL stored procedures from T-SQL to Databricks
3. Develop and maintain RESTful backend APIs using C# and ASP.NET Core
4. Implement advanced data transformations, aggregations, and analytical workloads using Spark SQL
5. Collaborate with data scientists and analysts to deliver optimized data solutions
6. Implement and maintain data governance practices, ensuring data quality, integrity, and security
7. Optimize Databricks jobs and SQL queries for performance and cost-effectiveness
8. Automate data workflows using Python scripting and orchestration tools
9. Design and implement data pipelines for real-time and batch processing
10. Contribute to the design and maintenance of data warehouses and data lakes
11. Participate in code reviews and mentor junior team members
12. Stay current with emerging data engineering technologies and best practices
13. Troubleshoot and resolve complex data-related issues
14. Develop documentation for data processes, APIs, and architectures
15. Collaborate with cross-functional teams to integrate data solutions into broader applications and services

This role requires a blend of technical expertise in data engineering, full-stack development skills, and a strong understanding of data architectures. The ideal candidate will be able to bridge the gap between traditional SQL environments and modern big data platforms, while also contributing to the full stack of data-driven applications.

Nice to Have for all roles:
· Understanding of health insurance, employee benefits, and HCM technology
· Familiarity with healthcare regulations (e.g., HIPAA, ACA)
· Knowledge of EDI (Electronic Data Interchange) standards and processes
· Experience with enrollment systems and benefits administration workflows
· Familiarity with retirement plans and wealth management concepts
· Experience working with large datasets typical in the health and benefits industries
· Knowledge of plan administration processes and systems
· Familiarity with LIMRA standards and other industry-specific protocols
· Awareness of regulatory compliance requirements in the health and benefits sector
· Experience working directly with business stakeholders to gather and refine requirements
· Familiarity with insurance industry terminology and concepts
· Familiarity with 401(k) plans and retirement benefits