At MedScout, our mission is to empower MedTech commercial teams with the data, insights, and tools they need to deliver life-changing medical innovations to the patients who need them most. We're creating a best-in-class revenue acceleration platform that unites the latest medical claims intelligence with an intuitive user experience built specifically for sales professionals at medical device and diagnostic companies.
We just closed a $15M Series A, and we're ready to bring new members onto our Engineering team. As a Data Engineer, you'll help us build and optimize the data infrastructure that processes billions of healthcare claims, turning complex data into actionable insights that drive business decisions. You'll work alongside our talented engineering team to evolve and scale our data architecture, using modern technologies like Databricks and Elasticsearch, while having significant opportunities to grow technically and drive business impact.
How will you help us build this company?
You will design, implement, and maintain scalable data pipelines that process large volumes of healthcare claims data using Databricks, Python, and PySpark, ensuring high data quality and optimizing performance for downstream analytics.
You will develop processes to integrate multiple data sources, including healthcare claims databases, into a unified data model that powers MedScout's sales enablement platform.
You will work with Product, Customer Success, and Sales leaders to understand what our customers want to achieve with our platform, and use those insights to inform and validate your design and implementation decisions.
You will collaborate with data scientists and analysts to implement data transformations that efficiently support advanced analytics, market insights, and predictive modeling capabilities for the platform.
You will troubleshoot and resolve complex data pipeline issues, optimize system performance, and contribute to the continuous improvement of MedScout's data infrastructure and engineering practices.
You will optimize workloads and cluster configurations to reduce compute costs while maintaining performance, including implementing auto-scaling policies, right-sizing clusters, and monitoring resource utilization patterns.
What does an ideal background look like?
You have 3+ years of experience building, maintaining, and operating data pipelines in a modern data warehouse such as Databricks, Snowflake, or Amazon Redshift.
You feel confident using Python and PySpark.
You have a good understanding of data modeling and schema design, particularly in contexts involving complex relationships and high-volume data processing.
You're an expert with data quality frameworks, including automated testing, validation, and monitoring of data pipelines.
You have familiarity with modern software development practices including version control (Git), CI/CD, and infrastructure as code.
You are able to work effectively with cross-functional teams, translating business requirements into technical specifications and communicating complex technical concepts to non-technical stakeholders.
Are we a fit for each other?
At our stage, we believe how you operate is more important than what you'll do day-to-day. As an early team member, you'll need strong alignment with the following core values.
Effort On Our Inputs: We prepare diligently, leave it all on the "field", and move on quickly. Focusing on good habits and work ethic, not individual outcomes, ultimately creates a winning culture and a successful company.
Earn Trust: We keep our commitments to our customers, partners, and each other. We listen attentively, speak candidly, and treat others respectfully. We strive to demonstrate empathy, inclusion, and intellectual honesty.
Intelligence Drives Operations: We learn continuously and have the humility to quickly recognize when our assumptions are wrong so we can readjust accordingly.
Hire And Develop The Best: Good players like playing on good teams. We look to raise the bar with every hire and promotion. We work hard to identify and develop high potential.
Take Decisive Action: The only sure path to continuous improvement is a hypothesis-driven approach with a bias for speed of experimentation.
What is the interview process?
An introductory call with the VP of Engineering.
Technical Review with members of the data team.
A walkthrough of a product scenario with our Head of Product and Data Lead.
Culture interviews with both the engineering team and other cross-functional team members.
What can you expect from us?
Fully covered healthcare and a great vision, dental, and 401(k) package.
You will feel heard. You will hear, "Yes, let's do that!" and then have the opportunity to execute your ideas successfully.
Remote-first culture and quarterly on-sites with the rest of the MedScout team.
We stay in nice hotels and eat well when we travel for work. No one feels like a badass walking into a Quality Inn.
A generous budget for learning and development, plus any tools you feel would make you more effective.
MedScout embraces diversity and equal opportunity in a serious way. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our work will be.
We will ensure that individuals with disabilities who need reasonable accommodation are provided with it. We want you to be able to participate in the job application or interview process, perform essential job functions, and receive other benefits and privileges of employment. If you require an accommodation, please let us know!