Senior/Staff Software Engineer (CUDA Expert)

Employer Active

1 Vacancy
Job Location

Durham - UK

Monthly Salary

Not Disclosed

Job Description

About Nu

Nu is the world's largest digital banking platform outside of Asia, serving over 105 million customers across Brazil, Mexico, and Colombia. The company has been leading an industry transformation by leveraging data and proprietary technology to develop innovative products and services. Guided by its mission to fight complexity and empower people, Nu caters to customers' complete financial journey, promoting financial access and advancement with responsible lending and transparency. The company is powered by an efficient and scalable business model that combines low cost to serve with growing returns. Nu's impact has been recognized in multiple awards, including Time 100 Companies, Fast Company's Most Innovative Companies, and Forbes' World's Best Banks.

About the role

At Nubank, one of our engineering principles is Leverage Through Platforms. We believe that platforms are an efficient way of solving complex concerns shared across different products and teams.
The AI Infrastructure Squad within the AI Core BU builds and scales the foundational cloud, data, and AI infrastructure that powers machine learning workloads across the organization. We focus on performance, reliability, and scalability in AI systems, working on everything from training infrastructure to low-latency inference.


As a Software Engineer in the AI Core BU, we expect you to demonstrate:

  • Deep experience with GPU programming (CUDA, Triton, or OpenCL), with a focus on performance optimization for deep learning workloads.
  • Strong understanding of large language model architectures (e.g., Transformer variants) and experience profiling and tuning their performance.
  • Familiarity with memory management, kernel fusion, quantization, tensor parallelism, and GPU-accelerated inference.
  • Experience with PyTorch internals or custom kernel development for AI workloads.
  • Hands-on knowledge of low-level optimizations in training and inference pipelines, such as FlashAttention, fused ops, and mixed-precision computation.
  • Proficiency in Python and C
  • Familiarity with inference acceleration frameworks like TensorRT, DeepSpeed, vLLM, or ONNX Runtime.
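To illustrate one of the techniques named above: quantization maps floating-point weights to low-precision integers to shrink memory footprint and speed up inference. The sketch below shows symmetric per-tensor int8 quantization in pure Python; it is a conceptual sketch only (real pipelines use framework-level tooling such as TensorRT or PyTorch quantization), and the function names are illustrative.

```python
# Illustrative sketch: symmetric per-tensor int8 quantization.
# Pure Python, no GPU required; function names are hypothetical.

def quantize_int8(values):
    """Map floats to int8 codes using a symmetric per-tensor scale."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax else 1.0
    # Round to the nearest code and clamp to the int8 symmetric range.
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from int8 codes."""
    return [x * scale for x in q]

weights = [0.02, -1.5, 0.73, 3.0, -0.004]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, approx))
```

The largest-magnitude weight maps exactly to code 127, and every element's round-trip error stays within half a step, which is the trade-off quantization makes for reduced memory traffic.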
Project Experience:
  • Demonstrated experience profiling and debugging GPU performance bottlenecks in LLM training or inference pipelines.
  • Has optimized large-scale ML workloads for throughput, latency, or cost, especially in production or research environments.
  • Experience contributing to or implementing custom GPU kernels for high-impact components (e.g., attention, normalization, or activation layers).
  • Proven ability to work across research and engineering teams to bridge model design and system performance.
  • Has designed infrastructure that scales across hundreds or thousands of GPUs in cloud or on-prem clusters.
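Kernel fusion, mentioned above alongside custom kernels and fused ops, combines multiple operations into a single pass so the intermediate result never hits global memory. The pure-Python analogy below is a conceptual sketch only: on a GPU the fused version would be a single kernel launch, while the unfused version writes an intermediate tensor and reads it back.

```python
# Illustrative sketch of the idea behind kernel fusion (bias-add + ReLU).
# On a GPU, the unfused version costs an extra round trip to global
# memory for the intermediate tensor; fusion does it in one pass.

def bias_relu_unfused(x, bias):
    tmp = [a + b for a, b in zip(x, bias)]  # intermediate fully materialized
    return [max(0.0, t) for t in tmp]       # second pass over the data

def bias_relu_fused(x, bias):
    # One pass: each element is read, transformed, and written once.
    return [max(0.0, a + b) for a, b in zip(x, bias)]

x = [1.0, -2.0, 0.5]
bias = [0.5, 0.5, -1.0]
assert bias_relu_fused(x, bias) == bias_relu_unfused(x, bias)
```

The two versions are numerically identical; the win from fusion is entirely in memory traffic and launch overhead, which is why elementwise epilogues (bias, activation, residual-add) are common fusion targets.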

We're looking for individuals who are passionate about pushing the boundaries of LLM inference and training performance. In this role, you'll work in a fast-paced environment, helping to design and scale cutting-edge AI infrastructure. You'll think like an owner, balancing engineering rigor with practical constraints to deliver impactful systems that support our most ambitious AI workloads.

You'll collaborate closely with other engineers, share performance learnings across the team, and mentor others as we continuously evolve our platform. We value curiosity and a self-driven mindset: you'll be encouraged to stay up to date with the latest in AI performance research, GPU architecture advancements, and open-source tooling.

What we have to offer

  • High-Impact, Cross-Functional Work: Collaborate with researchers, ML engineers, and infrastructure teams to design systems that support training and inference for the company's most critical AI models.
  • Cutting-Edge GPU & LLM Optimization: Tackle core performance challenges in LLM serving and training. Dive deep into GPU internals, custom kernels, and distributed execution.
  • Greenfield & Production-Scale Systems: Build new foundational components (e.g., custom ops, inference runtimes) and improve large-scale infrastructure already powering production AI workloads.
  • Ownership & Growth: Influence architecture, mentor others, and lead technical initiatives with autonomy and visibility.
  • Engineering-Driven Culture: Work in a team that values deep technical work, collaboration, and pragmatic innovation at the edge of AI systems performance.


Our Benefits

  • Remote work, with quarterly trips to São Paulo to build relationships with coworkers.
  • Top Tier Medical Insurance
  • Top Tier Dental and Vision Insurance
  • 20 days of time off, 14 company holidays, and a culture that emphasizes work-life balance.
  • Life Insurance and AD&D
  • Extended maternity and paternity leaves
  • Nucleo - Our learning platform of courses
  • NuLanguage - Our language learning program
  • NuCare - Our mental health and wellness assistance program
  • 401(k)
  • Savings Plans - Health Savings Account and Flexible Spending Account


    #LI-Remote

Required Experience:

Staff IC

Employment Type

Full-Time
