Sr Worldwide Specialist Solutions Architect, Research - GenAI, Training & Inference

Details of the offer

Sr Worldwide Specialist Solutions Architect, Research - GenAI, Training & Inference
Job ID: 2834813 | Amazon Web Services, Inc.
Do you want to help define the future of Go to Market (GTM) at AWS using generative AI (GenAI)?

AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector.

Within SMGS, you will be part of the core worldwide GenAI Applied Computational Machine Learning team, responsible for defining, building, and deploying targeted strategies to accelerate customer adoption of our services and solutions across industry verticals.

You will work directly with the most important customers (across segments) in the GenAI model training and inference space, helping them adopt and scale large-scale workloads (e.g., foundation models) on AWS. You will conduct model performance evaluations and optimizations, and work directly with engineering and product teams to optimize the ML stack for efficiency and scale. You will conduct external and internal evangelism, and develop demos and proofs of concept.

Key job responsibilities
You will help develop the industry's best cloud-based solutions to grow the GenAI business. Working closely with our engineering teams, you will help enable new capabilities for our customers to develop and deploy GenAI workloads on AWS. You will enable the AWS technical community, solutions architects, and sales teams with customer-centric value propositions and demos covering end-to-end GenAI on the AWS cloud.

You will possess a technical and business background that enables you to drive an engagement and interact at the highest levels with startups, enterprises, and AWS partners. You will have the technical depth and business experience to easily articulate the potential and challenges of GenAI models and applications to engineering teams and C-level executives. This requires deep familiarity across the stack: compute infrastructure (Amazon EC2, Lustre), ML frameworks (PyTorch, JAX), orchestration layers (Kubernetes, Slurm), parallel computing (NCCL, MPI), MLOps, as well as target use cases in the cloud.

You will drive the development of the GTM plan for building and scaling GenAI on AWS, interact with customers directly to understand their business problems, and help them with defining and implementing scalable GenAI solutions to solve them (often via proof-of-concepts). You will also work closely with account teams, research scientists, and product teams to drive model implementations and new solutions.

You should be passionate about helping companies/partners understand best practices for operating on AWS. An ideal candidate will be adept at interacting, communicating and partnering with other teams within AWS such as product teams, solutions architecture, sales, marketing, business development, and professional services, as well as representing your team to executive management. You will have a natural appetite to learn, optimize and build new technologies and techniques. You will also look for patterns and trends that can be broadly applied across an industry segment or a set of customers that can help accelerate innovation.

This is an opportunity to be at the forefront of technological transformations, as a key technical leader. Additionally, you will work with the AWS ML and EC2 product teams to shape product vision and prioritize features for AI/ML Frameworks and applications. A keen sense of ownership, drive, and being scrappy is a must.

About the team
The Foundation Models (fka Training & Inference) team is highly specialized in computational workloads, performance evaluations, and optimization. We work with foundation model builders and large-scale training customers, diving deep into the ML stack, including the hardware (GPUs, custom silicon), the operating system and kernel, communication libraries (NCCL, MPI), frameworks (PyTorch, NeMo, JAX), and models (Llama, Nemotron, and others). We also work with containers (Docker, Enroot), orchestrators (EKS), and schedulers (Slurm).


BASIC QUALIFICATIONS
- Bachelor's degree in computer science, engineering, mathematics, or equivalent
- 8+ years of experience in specific technology domain areas (e.g., software development, cloud computing, systems engineering, infrastructure, security, networking, data & analytics)
- 3+ years of experience in the design, implementation, or consulting of applications and infrastructure
- 5+ years building or optimizing computational applications for large-scale HPC systems (e.g., physics-based simulations) to take advantage of high-performance networking (e.g., Amazon EFA, InfiniBand, RoCE), distributed parallel filesystems (e.g., Lustre, BeeGFS, GPFS), and accelerators (e.g., GPUs, custom silicon)
- Understanding of deep learning training and inference workloads and their requirements for high-performance compute, network, and storage
PREFERRED QUALIFICATIONS
- 5+ years of experience in infrastructure architecture, database architecture, and networking
- Experience working with end user or developer communities
- 8+ years building or optimizing computational applications for large-scale HPC systems (e.g., physics-based simulations) to take advantage of high-performance networking (e.g., Amazon EFA, InfiniBand, RoCE), distributed parallel filesystems (e.g., Lustre, BeeGFS, GPFS), and accelerators (e.g., GPUs, custom silicon)
- Professional experience with deep learning training and inference workloads and their requirements for high-performance compute, network, and storage
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.



Nominal Salary: To be agreed

Source: Jobleads
