Sr. ML Compiler Engineer, AWS Neuron, Annapurna Labs

Job Location

Toronto - Canada

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium.

The Product: The AWS Machine Learning accelerators (Inferentia/Trainium) offer unparalleled ML inference and training performance. They are enabled through a state-of-the-art software stack - the AWS Neuron Software Development Kit (SDK). The SDK comprises an ML compiler, runtime, and application framework, which integrate seamlessly into popular ML frameworks like PyTorch. AWS Neuron, running on Inferentia and Trainium, is trusted and used by leading customers such as Snap, Autodesk, and Amazon Alexa.
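As a rough illustration of what that framework integration looks like from a user's point of view, the sketch below uses the publicly documented torch-neuronx trace API to compile a small PyTorch model ahead of time for a Neuron device. The toy model, shapes, and file name are placeholders; this is a minimal example under those assumptions, not a description of the team's internal tooling.

import torch
import torch_neuronx  # PyTorch integration layer of the AWS Neuron SDK

# Placeholder model standing in for a real inference workload.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.GELU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.rand(1, 128)

# torch_neuronx.trace runs the Neuron compiler on the traced graph and
# returns a TorchScript module that executes on Inferentia/Trainium.
neuron_model = torch_neuronx.trace(model, example_input)

# The compiled artifact can be saved and reloaded later with torch.jit.load.
torch.jit.save(neuron_model, "model_neuron.pt")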

The Team: Annapurna Labs was a startup company acquired by AWS in 2015 and is now fully integrated. If AWS is an infrastructure company, then think of Annapurna Labs as the infrastructure provider of AWS. Our org covers multiple disciplines, including silicon engineering, hardware design and verification, software, and operations. AWS Nitro, ENA, EFA, Graviton, F1 EC2 instances, AWS Neuron, the Inferentia and Trainium ML accelerators, and scalable NVMe storage are some of the products we have delivered over the last few years.

Within this ecosystem, the Neuron Compiler team is developing a deep learning compiler stack that takes state-of-the-art LLM, vision, and multi-modal models created in frameworks such as TensorFlow, PyTorch, and JAX, and makes them run performantly on our accelerators. The team comprises some of the brightest minds in the engineering, research, and product communities, focused on the ambitious goal of creating a toolchain that will provide a quantum leap in performance.

The Neuron team is hiring systems and compiler engineers to solve our customers' toughest problems. Specifically, the performance team in Toronto is focused on analysis and optimization of the system-level performance of machine learning models on AWS ML accelerators. The team conducts in-depth profiling and works across multiple layers of the technology stack - from frameworks and compilers to runtime and collectives - to meet and exceed customer requirements while maintaining a competitive edge in the market. As part of the Neuron Compiler organization, the team not only identifies and implements performance optimizations but also works to crystallize these improvements into the compiler, automating the optimizations for broader customer benefit.

This is an opportunity to work on innovative products at the intersection of machine learning, high-performance computing, and distributed architectures. You will architect and implement business-critical features, publish innovative research, and mentor a brilliant team of experienced engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We're inventing. We're experimenting. It is a unique learning culture. The team works closely with customers on their model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.

Explore the product and our history!

Job responsibilities
Our performance engineers collaborate across compiler, runtime, and framework teams to optimize machine learning workloads for our global customer base. Working at the intersection of machine learning, high-performance computing, and distributed systems, you'll bring a passion for performance analysis, distributed systems, and machine learning. In this role, you will:

- Analyze and optimize the system-level performance of machine learning models across the entire technology stack, from frameworks to runtime
- Conduct detailed performance analysis and profiling of ML workloads, identifying and resolving bottlenecks in large-scale ML systems
- Work directly with customers to enable and optimize their ML models on AWS accelerators, understanding their specific requirements and use cases
- Design and implement compiler optimizations, transforming manual performance improvements into automated compiler passes
- Collaborate across teams to develop innovative optimization techniques that enhance the AWS Neuron SDK's performance capabilities
- Work in a startup-like development environment where you're always working on the most important stuff

A day in the life
As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects. You'll also:

Build high-impact solutions to deliver to our large customer base.

Participate in design discussions and code reviews, and communicate with internal and external stakeholders.

Work cross-functionally to help drive business decisions with your technical input.

Work in a startup-like development environment where you're always working on the most important stuff.

About the team
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough but kind code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise, so you feel empowered to take on more complex tasks in the future.

Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.

About AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.


Basic qualifications
- 5 years of non-internship professional software development experience
- 5 years of programming experience with at least one software programming language
- 5 years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience as a mentor, tech lead, or leader of an engineering team

Preferred qualifications
- 5 years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent
- Experience in compiler design for CPUs, GPUs, vector engines, or ML accelerators
- Experience with system-level performance analysis and optimization
- Experience with LLVM and/or MLIR
- Experience with the following technologies: PyTorch, OpenXLA, StableHLO, JAX, TVM, and deep learning models and algorithms

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.


Required Experience:

Senior IC

Employment Type

Full-Time

Department / Functional Area

Software Development
