Systems Performance Engineer
Austin, TX - USA
Job Summary
Our vision is to transform how the world uses information to enrich life for all.
Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate, and advance faster than ever.
The engineer will work with senior engineers and researchers on AI training and inference systems, with a strong focus on LLM execution engines, data and KV-cache management, and multi-tier memory hierarchies across modern datacenter platforms. The role centers on end-to-end performance characterization and optimization of large-scale AI workloads, spanning single-node GPUs to rack-scale inference deployments.
Responsibilities include systems software development, workload engineering, performance analysis, and memory-centric optimization for LLM training, serving, and agentic AI frameworks. The work emphasizes real customer inference and training workloads, emerging memory technologies (HBM, LP/DRAM, CXL, NVMe, remote memory fabrics), and the economics and token-level efficiency of large-scale inference systems.
This role combines hands-on engineering with applied systems research, directly influencing next-generation AI platforms and memory-driven system architectures.
Key Responsibilities
- Build, develop, and improve systems software tools for profiling, tracing, and analyzing LLM training and inference workloads
- Design and evaluate KV-cache and state-management strategies for LLM serving, including reuse, eviction, compression, tiering, and lifecycle management
- Build and extend benchmarking, simulation, and emulation frameworks for AI inference and training across heterogeneous memory tiers
- Develop and evaluate data placement, migration, and prefetching algorithms across HBM, LP/DRAM, CXL memory pools, NVMe, and remote memory systems
- Characterize and optimize LLM execution engines (prefill/decode), including attention behavior, batching strategies, and token-level performance
- Analyze rack-scale and cluster-scale inference deployments, focusing on throughput, latency, utilization, cost, and token economics
- Develop workloads that reflect real customer AI systems, including LLM serving, agentic pipelines, retrieval-augmented generation, multimodal inference, and long-context workloads
- Instrument and analyze performance across GPUs, CPUs, memory subsystems, interconnects, and storage, identifying end-to-end bottlenecks
- Evaluate system interactions across OS and runtime layers, containerized deployments, and distributed inference stacks
- Automate performance measurement, experimentation, and analysis workflows to improve repeatability and scale
- Summarize findings into clear methodologies, internal reports, and technical presentations for engineering and leadership audiences
- Collaborate across engineering, architecture, and research teams, and with external academic and industry partners
- Provide actionable feedback to product, architecture, and platform teams to influence future AI systems and memory designs
Required Qualifications
- Bachelor's or Master's degree, or equivalent experience, in Computer Science, Electrical Engineering, or a related field
- Strong foundation in operating systems, memory systems, parallel computing, or distributed systems
- Proficiency in systems programming and analysis using C/C++ and Python
- Experience working in Linux environments, including debugging, profiling, and automation
- Solid understanding of modern server architectures, including GPUs, CPUs, cache hierarchies, NUMA, and memory subsystems
- Experience analyzing performance data and reasoning about system-level behavior
- Strong written and verbal communication skills
- Ability to work independently on scoped problems and collaboratively on larger system efforts
Preferred Qualifications
- Experience with LLM training and inference systems, including execution runtimes and serving frameworks
- Hands-on experience with KV-cache management, long-context execution, or stateful inference workloads
- Familiarity with GPU architectures and AI accelerators, including memory and interconnect behavior
- Experience with multi-tier memory systems, including HBM, LP/DRAM, CXL-attached memory, NVMe, and remote/disaggregated memory
- Experience profiling and optimizing AI inference pipelines, including batching, scheduling, and latency-sensitive workloads
- Familiarity with agentic AI frameworks, multi-agent systems, or workflow-based inference pipelines
- Experience with distributed AI systems, rack-scale deployments, or cluster-level performance analysis
- Exposure to memory or system simulators (e.g., gem5, Ramulator) or analytical performance modeling
- Familiarity with containers, orchestration, and AI infrastructure stacks
As a world leader in the semiconductor industry, Micron is dedicated to your personal wellbeing and professional growth. Micron benefits are designed to help you stay well, provide peace of mind, and help you prepare for the future. We offer a choice of medical, dental, and vision plans in all locations, enabling team members to select the plans that best meet their family healthcare needs and budget. Micron also provides benefit programs that help protect your income if you are unable to work due to illness or injury, as well as paid family leave. Additionally, Micron benefits include a robust paid time-off program and paid holidays. For additional information regarding the benefit programs available, please see the Benefits Guide.
Micron is proud to be an equal opportunity workplace and is an affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, national origin, citizenship status, disability, protected veteran status, gender identity, or any other factor protected by applicable federal, state, or local laws.
US Sites Only: To request assistance with the application process and/or for reasonable accommodations, please contact Micron's People Organization.
Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards.
Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
Required Experience:
IC
About Company
Explore Micron Technology, a leader in semiconductors with a broad range of performance-enhancing memory and storage solutions.