AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators. This role is for a senior software engineer on the Machine Learning Inference Applications team. The role is responsible for the development and performance optimization of core building blocks of LLM inference: Attention, MLP, Quantization, Speculative Decoding, Mixture of Experts, etc.
The team works side by side with chip architects, compiler engineers, and runtime engineers to deliver performance and accuracy on Neuron devices across a range of models such as Llama 3.3 70B, Llama 3.1 405B, DBRX, Mixtral, and so on.
Key job responsibilities
Responsibilities of this role include adapting the latest research in LLM optimization to Neuron chips to extract the best performance from both open-source and internally developed models. Working across teams and organizations is key.
About the team
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough but kind code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise, so you feel empowered to take on more complex tasks in the future.
- 3 years of non-internship professional software development experience
- 2 years of non-internship experience in design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience programming with at least one software programming language
- Fundamentals of machine learning models, including their architecture and training and inference lifecycles, along with work experience on optimizations for improving model performance
- 3 years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent
- Hands-on experience with PyTorch or JAX, preferably involving developing and deploying LLMs in production on GPUs, Neuron, TPUs, or other AI acceleration hardware
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.