About the Job
The Red Hat Performance and Scale Engineering team is seeking a Senior Performance Engineer to join our PSAP (Performance and Scale for AI Platforms) team. In this role you will drive the performance and scalability of distributed inference for Large Language Models (LLMs) as part of the Red Hat AI Inference Server (RHAIIS) open-source project. You will be responsible for characterizing, modeling, and understanding performance deltas to ensure industry-leading throughput, latency, and cost-efficiency of AI workloads. This includes using tools like vLLM, GuideLLM, and PyTorch Profiler. This is a dynamic role for a seasoned engineer with a growth mindset who handles and adapts to rapid change, has a strong commitment to open-source values, and is willing to learn and apply new technologies. You will be joining a vibrant open source culture and helping promote performance and innovation on this Red Hat engineering team.
The broader mission of the Performance and Scale team is to establish performance and scale leadership of the Red Hat product and cloud services portfolio. The scope includes component-level, system, and solution analysis and targeted enhancements. The team collaborates with engineering, product management, product marketing, and customer support, as well as Red Hat's hardware and software ecosystem partners.
At Red Hat, our commitment to open source innovation extends beyond our products - it's embedded in how we work and grow. Red Hatters embrace change, especially in our fast-moving technological landscape, and have a strong growth mindset. That's why we encourage our teams to proactively, thoughtfully, and ethically use AI to simplify their workflows, cut complexity, and boost efficiency. This empowers our associates to focus on higher-impact work, creating smarter, more innovative solutions that solve our customers' most pressing challenges.
What you'll do
Define and track key performance indicators (KPIs) and service level objectives (SLOs) for large-scale LLM inference services
Formulate and execute performance benchmarks using vLLM, GuideLLM, PyTorch Profiler, and other related tools to characterize performance, drive improvements, and detect issues through data analysis and visualization.
Develop and maintain tools, scripts, and automated solutions that streamline performance benchmarking and AI model profiling tasks.
Collaborate closely with cross-functional engineering teams to identify and address critical performance bottlenecks within the architecture and inference stacks.
Partner with DevOps to integrate performance gates into GitHub Actions and RHAIIS pipelines.
Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling.
Triage field and customer escalations related to performance; distill findings into upstream issues and product backlog items.
Publish results, recommendations, and best practices through internal reports, presentations, external blogs, technical papers, and official documentation.
Represent the team at internal and external conferences, presenting key findings and strategies.
What you'll have
5 years of experience in performance engineering or systems-level software design.
Hands-on experience with operating systems, distributed systems, or system-level performance tooling.
Understanding of AI and LLM fundamentals.
Fluency in Python (data & ML) and strong Bash/Linux skills.
Knowledge of performance benchmarking and profiling for LLMs.
Exceptional communication skills, with the ability to translate raw performance data into customer value and executive narratives.
Commitment to open-source values.
The following is considered a plus
Master's or PhD in Computer Science, AI, or a related field.
History of upstream contributions and community leadership.
Experience publishing blogs or technical papers.
Hands-on experience with any of the following: Kubernetes, OpenShift, RHAIIS, RHEL AI.
Familiarity with performance observability stacks such as perf/eBPF tools, Nsight Systems, and PyTorch Profiler, among others.
Hands-on experience with modern LLM inference server stacks (e.g., vLLM, TensorRT-LLM, TGI, Triton Inference Server).
The salary range for this position is $133,650.00 - $220,680.00. Actual offer will be based on your qualifications.
Pay Transparency
Red Hat determines compensation based on several factors including, but not limited to, job location, experience, applicable skills and training, external market value, and internal pay equity. Annual salary is one component of Red Hat's compensation package. This position may also be eligible for bonus, commission, and/or equity. For positions with Remote-US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience.
About Red Hat
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40 countries, our associates work flexibly across work environments, from in-office to office-flex to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
Benefits
Comprehensive medical, dental, and vision coverage
Flexible Spending Account - healthcare and dependent care
Health Savings Account - high deductible medical plan
Retirement 401(k) with employer match
Paid time off and holidays
Paid parental leave plans for all new parents
Leave benefits including disability, paid family medical leave, and paid military leave
Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!
Note: These benefits are only applicable to full-time, permanent associates at Red Hat located in the United States.
Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.
Required Experience:
Senior IC