Shape the Future of AI Accelerators at AWS Neuron
Join the elite team behind AWS Neuron, the software stack powering AWS's next-generation AI accelerators, Inferentia and Trainium. As a Senior Software Engineer on our Machine Learning Applications team, you'll be at the forefront of deploying and optimizing some of the world's most sophisticated AI models at unprecedented scale.
What You'll Impact:
Pioneer distributed inference solutions for industry-leading LLMs such as GPT, Llama, and Qwen
Optimize breakthrough language and vision generative AI models
Collaborate directly with silicon architects and compiler teams to push the boundaries of AI acceleration
Drive performance benchmarking and tuning that directly impacts millions of inference calls globally
Key job responsibilities
You will drive the evolution of distributed AI at AWS Neuron.
You'll develop the bridge between ML frameworks, including PyTorch and JAX, and AI hardware. This isn't just about optimization; it's about revolutionizing how AI models run at scale.
Technical Impact You'll Drive:
Spearhead distributed inference architecture for PyTorch and JAX using XLA
Engineer breakthrough performance optimizations for AWS Trainium and Inferentia
Develop ML tools to enhance LLM accuracy and efficiency
Transform complex tensor operations into highly optimized hardware implementations
Pioneer benchmarking methodologies that shape next-gen AI accelerator design
What Makes This Role Unique:
Direct influence on AWS's AI infrastructure used by thousands of ML applications
Full-stack optimization from high-level frameworks to hardware-specific primitives
Creation of tools and frameworks that define industry standards for ML deployment
Collaboration with both open-source ML communities and hardware architecture teams
Your Technical Arsenal Should Include:
Deep expertise in Python and ML framework internals
Strong understanding of distributed systems and ML optimization
Passion for performance tuning and system architecture
A day in the life
Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to lifelong happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.
Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
About the team
At AWS Neuron, we're revolutionizing how the world's most sophisticated AI models run at scale through Amazon's next-generation AI accelerators. Operating at the unique intersection of ML frameworks and custom silicon, our team drives innovation from silicon architecture to production software deployment.
We pioneer distributed inference solutions for PyTorch and JAX using XLA, optimize industry-leading LLMs like GPT and Llama, and collaborate directly with silicon architects to influence the future of AI hardware. Our systems handle millions of inference calls daily, and our optimizations directly impact thousands of AWS customers running critical AI workloads.
We're focused on pushing the boundaries of large language model optimization, distributed inference architecture, and hardware-specific performance tuning. Our deep technical experts transform complex ML challenges into elegant, scalable solutions that define how AI workloads run in production.
- 3+ years of experience with computer science fundamentals (object-oriented design, data structures, algorithm design, problem solving, and complexity analysis)
- 3+ years of programming experience using Python or C++, and PyTorch
- Experience with AI acceleration techniques: quantization, parallelism, model compression, batching, KV caching, and vLLM serving
- Experience with accuracy debugging and tooling, and performance benchmarking of AI accelerators
- Fundamentals of machine learning and deep learning models, including their architectures and training and inference lifecycles, along with work experience in optimizing model execution
- Bachelor's degree in computer science or equivalent
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit
for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $129,300/year in our lowest geographic market up to $223,600/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit
This position will remain posted until filled. Applicants should apply via our internal or external career site.