Research Engineer, Large Scale Pre-Training Performance


Job Location

London - UK

Monthly Salary

Not Disclosed


Vacancy

1 Vacancy

Job Description

Snapshot

We are seeking a Research Engineer to define, drive, and critically contribute to the next generation of state-of-the-art ML models on TPU. As part of the Pre-Training team, you will co-design the model and implement critical components across model architecture, ML frameworks, custom kernels, and platform to deliver frontier models with maximum efficiency.

About Us

Artificial intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

We're looking for a Research Engineer to redefine efficient training of frontier LLMs at massive scale. This role offers an opportunity to influence the design of frontier LLMs and to drive the effort to ensure efficient training and inference.

Key responsibilities:

  • Owning Pre-Training efficiency and optimising the performance of the latest models on Google's fleet of hardware accelerators throughout the entire LLM research, training, and deployment lifecycle.
  • Guiding model design to ensure inference efficiency.
  • Greatly improving the performance of LLMs on hardware accelerators by optimising at all levels, including developing custom kernels when necessary.
  • Collaborating with the compiler, framework, and platform teams to ensure efficient training at industry-largest scale.
  • Profiling models to identify performance bottlenecks and opportunities for optimisation.
  • Developing low-level custom kernels for maximum performance of the most critical operators.
  • Collaborating with research teams by enabling new critical operators ahead of their availability in frameworks and compilers.

About You

To set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:

  • A proven track record of critical contributions to the distributed training of LLMs at 1e25-FLOPs scale on modern GPU/TPU clusters
  • Experience programming hardware accelerators (GPUs/TPUs) via ML frameworks (e.g. JAX, PyTorch) and low-level programming models (e.g. CUDA, OpenCL)
  • Experience leveraging custom kernels and compiler infrastructure to improve performance on hardware
  • Experience with Python and neural network training (publications, open-source projects, relevant work experience, etc.)

At Google DeepMind, we value diversity of experience, knowledge, backgrounds, and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic, or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding), or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Employment Type

Full Time

