About Pathway
Pathway is an enabler for Live AI, allowing organizations to run contextualized ML models connected to ever-changing enterprise data. In addition to being an infrastructure provider delivering an AI framework, we are working to advance the state of the art.
This is an R&D position in attention-based models. Pathway is VC-funded, with some amazing business angels (such as Lukasz Kaiser, co-inventor of Transformers). Pathway's CTO has co-authored papers with Geoff Hinton and Yoshua Bengio. We have just raised a $10M+ seed round, with exciting developments ahead. The management team includes growth leaders who have scaled companies to multiple exits and built online communities reaching millions of users.
Our client portfolio is focused on mobility, IoT data, logs, and transactions, and also includes larger actors such as NATO and national postal services. We have a vibrant community centered around our developer frameworks, with almost 10,000 stars on GitHub.
Our offices are in Menlo Park, CA, as well as Paris, France and Wroclaw, Poland.
The Opportunity
We are currently searching for one or two R&D Engineers with a strong track record in machine learning research.
This is an extremely ambitious foundational project. A flexible GPU budget is allocated to this specific project, guaranteed to be at least in the seven-figure range.
You Will
- Perform (distributed) model training.
- Help improve and adapt model architectures based on experiment results.
- Design new tasks and experiments.
- Optionally: oversee the activities of team members involved in data preparation.
The results of your work will play a crucial role in the success of the project.

You Are Expected to Meet at Least One of the Following Criteria
- You have published at least one paper at NeurIPS, ICLR, or ICML, where you were the lead author or made significant conceptual and code contributions.
- You have significantly contributed to an LLM training effort that became newsworthy (topped a Hugging Face benchmark, produced a best-in-class model, etc.), preferably using multiple GPUs.
- You have spent at least 6 months working in a leading machine learning research center (e.g., Google Brain / DeepMind, Apple, Meta, Anthropic, Nvidia, MILA).
- You were an ICPC World Finalist, or an IOI, IMO, or IPhO medalist in high school.
You Are
- A deep learning researcher with a track record in language models and/or RL (candidates with a vision or robotics ML background are also welcome to apply).
- Interested in improving foundational architectures and creating new benchmarks.
- Experienced at hands-on experiments and model training (PyTorch, JAX, or TensorFlow).
- Equipped with a good understanding of GPU architecture, memory design, and communication.
- Equipped with a good understanding of graph algorithms.
- Familiar with model monitoring, git, build systems, and CI/CD.
- Respectful of others.
- Fluent in English.
Bonus Points
- Knowledge of approaches used in distributed training.
- Familiarity with Triton.
- A successful track record in algorithms and data science contests.
- A code portfolio you can show.

Why You Should Apply
- Join an intellectually stimulating work environment.
- Be a pioneer: you get to work on a new type of "Live AI" challenge around long sequences and changing data.
- Be part of an early-stage AI startup that believes in impactful research and foundational changes.

Type of contract: Full-time, permanent.
Preferable joining date: January 2025. The positions are open until filled, so please apply immediately.
Compensation: six-figure annual salary based on profile and location, plus an Employee Stock Option Plan.
Location: Remote work, with the possibility to work or meet with other team members in one of our offices: Menlo Park, CA; Paris, France; or Wroclaw, Poland. As a general rule, permanent residence in the EU, UK, US, or Canada will be required.

If you meet our broad requirements but are missing some experience, don't hesitate to reach out to us.