In this role you'll be at the forefront of architecting and building our next-generation distributed ML infrastructure, tackling the complex challenge of orchestrating massive network models across server clusters to power Apple Intelligence at unprecedented scale. The work involves designing sophisticated parallelization strategies that split models across many GPUs, optimizing every layer of the stack, from low-level memory access patterns to high-level distributed algorithms, to achieve maximum hardware utilization while minimizing latency for real-time user experiences. You'll work at the intersection of cutting-edge ML systems and hardware acceleration, collaborating directly with silicon architects to influence future GPU designs based on your deep understanding of inference workload characteristics, while simultaneously building the production systems that will serve billions of requests. This is a hands-on technical leadership position where you'll not only architect these systems but also dive deep into performance profiling, implement novel optimization techniques, and solve unprecedented scaling challenges as you help define the future of AI experiences delivered through Apple's secure cloud infrastructure.
- Strong knowledge of GPU programming (CUDA, ROCm) and high-performance computing
- Must have excellent systems programming skills in C/C++; Python is a plus
- Deep understanding of distributed systems and parallel computing architectures
- Experience with inter-node communication technologies (InfiniBand, RDMA, NCCL) in the context of ML training/inference
- Understanding of how tensor frameworks (PyTorch, JAX, TensorFlow) are used in distributed training/inference
- Technical BS/MS degree
- Familiarity with the model development lifecycle, from trained model to large-scale production inference deployment
- Proven track record in ML infrastructure at scale