Machine Learning Compiler Engineer

Details of the offer

Machine intelligence will soon take over humanity's role in knowledge-keeping and creation. What started in the mid-1990s as the gradual off-loading of knowledge and decision-making to search engines will be rapidly replaced by vast neural networks - with all knowledge compressed into their artificial neurons. Unlike organic life, machine intelligence, built within silicon, needs protocols to coordinate and grow. And, like nature, these protocols should be open, permissionless, and neutral. Starting with compute hardware, the Gensyn protocol networks together the core resources required for machine intelligence to flourish alongside human intelligence.
As a Compiler Engineer at Gensyn, your responsibilities would see you:
- Lowering deep learning graphs - from common frameworks (PyTorch, TensorFlow, Keras, etc.) down to an IR for training and inference - with a particular focus on ensuring reproducibility.
- Writing novel algorithms - for transforming intermediate representations of compute graphs between different operator representations.
- Owning two of the following compiler areas:
  - Front-end - handle the handshaking of common deep learning frameworks with Gensyn's IR for internal use, and write transformation passes in ONNX to alter the IR for middle-end consumption (see the sketch after the Work Culture section below).
  - Middle-end - write compiler passes for training-based compute graphs, integrate reproducible deep learning kernels into the code generation stage, and debug compilation passes and transformations as you go.
  - Back-end - lower IR from the middle-end to GPU target machine code.

Must have:

- Compiler knowledge - base-level understanding of a traditional compiler (LLVM, GCC) and of the graph traversals required for writing code for such a compiler.
- Solid software engineering skills - practicing software engineer who has significantly contributed to and shipped production code.
- Understanding of parallel programming - specifically as it pertains to GPUs.
- Ability to operate on:
  - High-level IR/Clang/LLVM up to middle-end optimisation; and/or
  - Low-level IR/LLVM targets/target-specific optimisations - particularly GPU-specific optimisations.
- Highly self-motivated, with excellent verbal and written communication skills.
- Comfortable working in an applied research environment with extremely high autonomy.

Should have:

- Architecture understanding - full understanding of a computer architecture specialised for training NN graphs (Intel Xeon CPUs, GPUs, TPUs, custom accelerators).
- Compilation understanding - strong understanding of compilation for one or more high-performance computer architectures (CPU, GPU, custom accelerator, or a heterogeneous system of all such components).
- Proven technical foundation - in CPU and GPU architectures, numeric libraries, and modular software design.
- Deep learning understanding - both recent architecture trends and the fundamentals of how training works, plus experience with machine learning frameworks and their internals (e.g. PyTorch, TensorFlow, scikit-learn).
- Exposure to deep learning compiler frameworks - e.g. TVM, MLIR, Tensor Comprehensions, Triton, JAX.
- Kernel experience - experience writing and optimising highly performant GPU kernels.

Nice to have:

- Rust experience - systems-level programming experience in Rust.
- Open-source contributions to existing compilers/frameworks, with a strong preference for ML compilers/frameworks.

Benefits:

- Competitive salary + share of equity and token pool.
- Fully remote work - we hire between the West Coast (PT) and Central Europe (CET) time zones.
- Relocation assistance - available for those who would like to relocate after being hired (anywhere from PT through CET time zones).
- 4x all-expenses-paid company retreats around the world, per year.
- Whatever equipment you need.
- Paid sick leave.
- Private health, vision, and dental insurance - including spouse/dependents.

Work Culture:

- Focus on small teams - misalignment and politics scale super-linearly with team size. Small protocol teams rival much larger traditional teams.
- Reject waste - guard the company's time, rather than wasting it in meetings without clear purpose/focus, or bikeshedding.
- Give direct feedback to everyone immediately, rather than avoiding unpopularity, expecting things to improve naturally, or trading short-term pain for extreme long-term pain.
- Embrace an extreme learning rate, rather than assuming limits to your ability/knowledge.
- No quit - push to the final outcome, despite any barriers.
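
For illustration, here is a minimal sketch of the kind of ONNX-level transformation pass described in the front-end responsibilities above. It is not Gensyn's code: the pass, the toy graph, and all names are hypothetical, and it assumes only the public onnx Python package. The sketch folds Dropout nodes out of a graph and rewires their consumers - the same walk-the-graph-and-rewrite shape that front-end and middle-end passes take.

# A minimal sketch, assuming only the public onnx Python API (onnx, onnx.helper).
# Illustrative only - not Gensyn's internal IR or tooling.
import onnx
from onnx import helper, TensorProto


def fold_dropout(model: onnx.ModelProto) -> onnx.ModelProto:
    """Remove Dropout nodes (simple, non-chained case) and rewire their consumers."""
    graph = model.graph
    # Map each Dropout's data output to the tensor feeding that Dropout.
    replace = {n.output[0]: n.input[0] for n in graph.node if n.op_type == "Dropout"}
    # Rewire every consumer of a folded output to the original producer.
    for node in graph.node:
        if node.op_type == "Dropout":
            continue
        for i, name in enumerate(node.input):
            if name in replace:
                node.input[i] = replace[name]
    # Delete the Dropout nodes themselves (reverse order keeps indices valid).
    for idx in reversed([i for i, n in enumerate(graph.node) if n.op_type == "Dropout"]):
        del graph.node[idx]
    onnx.checker.check_model(model)
    return model


if __name__ == "__main__":
    # Hypothetical toy graph: MatMul -> Dropout -> Relu.
    X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])
    W = helper.make_tensor_value_info("W", TensorProto.FLOAT, [4, 4])
    Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])
    nodes = [
        helper.make_node("MatMul", ["X", "W"], ["mm"]),
        helper.make_node("Dropout", ["mm"], ["drop"]),
        helper.make_node("Relu", ["drop"], ["Y"]),
    ]
    graph = helper.make_graph(nodes, "toy", [X, W], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
    model = fold_dropout(model)
    print([n.op_type for n in model.graph.node])  # ['MatMul', 'Relu']

Rewiring consumers before deleting nodes keeps the graph valid at every step; a production pass would also need to handle chained Dropouts, Dropout outputs that are graph outputs, and the optional mask output.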