The Sensory Inference team at AGI is a group of innovative developers working on groundbreaking multimodal inference solutions that revolutionize how AI systems perceive and interact with the world. We push the limits of inference performance to provide the best possible experience for our users across a wide range of applications and devices. We are looking for talented, passionate, and dedicated Inference Engineers to join our team and build innovative, mission-critical, high-volume production systems that will shape the future of AI. You will have an enormous opportunity to make an impact on the design, architecture, and implementation of cutting-edge technologies used every day, potentially by people you know. This role offers the exciting chance to work in a highly technical domain at the boundary between fundamental AI research and production engineering, spanning inference-efficiency techniques such as quantization, speculative decoding, and long context.
Key job responsibilities
Develop high-performance inference software for a diverse set of neural models, typically in C/C++
Design, prototype, and evaluate new inference engines and optimization techniques
Participate in deep-dive analysis and profiling of production code
Optimize inference performance across various platforms (on-device, cloud-based, CPU, GPU, proprietary ASICs)
Collaborate closely with research scientists to bring next-generation neural models to life
Partner with internal and external hardware teams to maximize platform utilization
Work in an Agile environment to deliver high-quality software against aggressive schedules
Hold a high bar for technical excellence within the team and across the organization
3 years of non-internship professional software development experience
2 years of non-internship experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
Experience programming with at least one software programming language
Bachelor's degree in Computer Science, Computer Engineering, or a related field
Strong C/C++ programming skills
Solid understanding of deep learning architectures (CNNs, RNNs, Transformers, etc.)
3 years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
Experience with inference frameworks such as PyTorch, TensorFlow, ONNX Runtime, TensorRT, etc.
Proficiency in performance optimization for CPU, GPU, or AI hardware
Proficiency in kernel programming for accelerated hardware using programming models such as (but not limited to) CUDA, OpenMP, OpenCL, Vulkan, and Metal
Experience with latency-sensitive optimizations and real-time inference
Understanding of resource constraints on mobile/edge hardware
Knowledge of model compression techniques (quantization, pruning, distillation, etc.)
Experience with LLM efficiency techniques such as speculative decoding and long context
Strong communication skills and the ability to work in a collaborative environment
Passion for solving complex problems and driving innovation in AI technology
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit
for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.