Design and develop machine learning and LLM-based solutions for ML model and system evaluation use cases such as:
- Automatic large-scale data generation
- Automatic UI and non-UI test evaluation
- Run evaluation jobs at scale
- Build and optimize LLM judges
- Intelligent log summarization and anomaly detection
- Fine-tune or prompt-engineer foundation models (e.g., Apple GPT, Claude) for evaluation-specific applications
- Collaborate with QA teams to integrate models into testing frameworks
- Continuously evaluate and improve model performance through A/B testing, human feedback loops, and retraining
- Monitor advances in LLMs and NLP and propose innovative applications within the ML evaluation domain
- 3 years of proven experience in machine learning, including hands-on work with LLMs
- Strong programming skills in Python and experience with ML/NLP libraries
- Experience building or fine-tuning LLMs for software engineering tasks
- Understanding of prompt engineering and retrieval-augmented generation (RAG)
- Experience developing LLM-based automated evaluation frameworks
- Excellent knowledge of software testing methodologies and practices
- Experience with Swift/XCTest/XCUITest is preferred
- Ability to thrive in a collaborative working environment, within your team and beyond
- Ability to triage problems, prioritize accordingly, and propose resolutions
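To illustrate the kind of LLM-judge evaluation work described above, here is a minimal sketch of a judge-based scoring loop. The judge call is stubbed with a trivial keyword heuristic so the sketch runs without an API key; the prompt template, scoring rubric, and `call_judge` function are all hypothetical, not part of this posting.

```python
# Minimal sketch of an LLM-as-judge evaluation loop (all names hypothetical).
# In practice, call_judge would invoke a foundation model with the rubric
# prompt; here it is stubbed with a simple heuristic so the sketch is runnable.

JUDGE_PROMPT = (
    "You are an impartial judge. Rate the candidate answer against the "
    "reference on a 1-5 scale and reply with only the number.\n"
    "Reference: {reference}\nCandidate: {candidate}"
)

def call_judge(prompt: str) -> int:
    """Stand-in for a foundation-model call (assumption, not a real API)."""
    # Heuristic stub: score 5 if the candidate contains the reference, else 1.
    ref = prompt.split("Reference: ")[1].split("\nCandidate: ")[0]
    cand = prompt.split("Candidate: ")[1]
    return 5 if ref.strip().lower() in cand.strip().lower() else 1

def evaluate(pairs: list[tuple[str, str]]) -> float:
    """Run the judge over (reference, candidate) pairs; return the mean score."""
    scores = [
        call_judge(JUDGE_PROMPT.format(reference=ref, candidate=cand))
        for ref, cand in pairs
    ]
    return sum(scores) / len(scores)

pairs = [("Paris", "The capital is Paris."), ("4", "The answer is 5.")]
print(evaluate(pairs))  # mean of [5, 1] -> 3.0
```

A production version would replace the stub with a real model call, log per-item scores for anomaly detection, and compare judge output against human labels in an A/B or feedback loop.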