Are you passionate about bringing state-of-the-art AI technology to edge devices? Join Amazon's Device OS organization, where we're revolutionizing how AI powers everyday devices. We're seeking exceptional Senior Software Development Engineers who can help us push the boundaries of edge computing and artificial intelligence. This is your chance to shape the future of AI-enabled devices that impact millions of customers worldwide.
Key job responsibilities
As a Sr. Software Development Engineer, you will conceive, design, and deliver innovative features for Amazon devices. You will be responsible for architecting and developing software solutions for new confidential products.
- Investigate and prototype edge computing solutions with AI integration capabilities
- Work in an Agile/Scrum environment to deliver high-quality software
- Establish architectural principles and mentor team members
- Design and implement unified inference execution frameworks for diverse hardware platforms
- Develop backend integrations for various SoC vendors and AI accelerators
- Optimize model deployment workflows and performance for resource-constrained environments
- Collaborate with cross-functional teams to deliver scalable edge AI solutions
About the team
The Device OS Team builds Edge AI/ML frameworks that enable partners to launch applications, services, and devices customers love. We develop unified inference platforms that connect advanced AI models with diverse hardware ecosystems across multiple SoC platforms. Our infrastructure and tooling empower developers to deploy, optimize, and monitor high-performance AI models on edge devices.
- 8 years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
- Experience in embedded development in C/C++
- 5 years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- 5 years of experience in AI/ML systems and deployment (e.g., model deployment, model evaluation, data processing, debugging, fine-tuning)
- Strong understanding of deep learning architectures, with an emphasis on transformer-based LLMs
- Proven experience working with edge hardware platforms (Qualcomm, MediaTek, NVIDIA Jetson, Apple Neural Engine, etc.)
- Experience with edge AI inference frameworks (ExecuTorch, TensorFlow Lite, ONNX Runtime, etc.) and compiler stacks
- Experience with GPU programming (e.g., CUDA, OpenCL, Metal, Vulkan Compute)
- Experience with ML frameworks (e.g., PyTorch, JAX, TensorFlow)
- Experience deploying and tuning LLMs using techniques such as LoRA, QLoRA, and instruction tuning
- Strong coding skills in C/C++ and Python to implement custom optimized kernels
- Performance optimization for inference speed, memory utilization, and bandwidth usage
- Master's degree in machine learning or equivalent
- Deep knowledge of SoC architecture, hardware acceleration, vectorization, and memory hierarchies
- Experience deploying LLMs or advanced ML models on resource-constrained devices
- Strong knowledge in the following areas:
  - Generative AI concepts such as embeddings, RAG, semantic search, and transformer-based LLMs
  - MCP workflows and the agentic ecosystem
  - Vector databases (e.g., FAISS, Annoy, HNSW) and data pipelines
- Contributions to open-source inference frameworks or compiler toolchains
- Strong cross-functional leadership with the ability to influence product roadmaps and technical strategy across organizations
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit
for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.