- Develop models, tools, metrics, and datasets for assessing and evaluating the safety of generative models over the model deployment lifecycle
- Develop methods, models, and tools to interpret and explain failures in language and diffusion models
- Build and maintain human annotation and red teaming pipelines to assess the quality and risk of various Apple products
- Prototype, implement, and evaluate new ML models and algorithms for red teaming LLMs
Strong engineering skills and experience writing production-quality code in Python, Swift, or other programming languages
Background in generative models, natural language processing, LLMs, or diffusion models
Experience with failure analysis, quality engineering, or robustness analysis for AI/ML-based features
Experience working with crowd-based annotations and human evaluations
Experience working on explainability and interpretation of AI/ML models
Ability to work with highly sensitive content, including exposure to offensive and controversial material
BS, MS, or PhD in Computer Science, Machine Learning, or related fields, or an equivalent qualification acquired through other avenues
Proven track record of contributing to diverse teams in a collaborative environment