In this role you will architect and build core evaluation frameworks, systems, and tools that empower applied research and accelerate AI development. Your work will help researchers and developers close the gap between prototypes and production-grade capabilities, giving them a stable foundation to build on and enabling them to iterate more quickly and produce more durable work. The tools you develop will serve as the primary delivery mechanism for evaluation research innovations, distributing them to partner teams across the organization. This will involve deep collaboration with researchers, developers, engineers, and other users to understand their needs and ensure that solutions are easy to adopt and scalable. This is an opportunity to help build foundational systems that translate cutting-edge methods and proven best practices into reliable, production-ready tools for development teams across ASE and partner organizations.
Demonstrated mastery in engineering robust, maintainable, and operable software systems
Deep expertise in Python, with demonstrated excellence in designing high-quality, extensible APIs and SDKs
Deep experience (5 years) in platform engineering or adjacent roles, with a proven track record of designing, building, and scaling internal or developer-facing platforms
Demonstrated track record of driving adoption of developer tools, with experience gathering user feedback and iterating on developer experience
Experience with the ML development lifecycle and ML platform development, including an understanding of model training, evaluation, and deployment workflows
Understanding of standard AI stack components, such as retrieval systems (vector databases, hybrid search, etc.), model serving platforms, and LLM application frameworks
Proven ability to collaborate with cross-functional teams, including researchers, engineers, product owners, and leadership
Experience designing platforms or frameworks that shorten the path from prototype to production
Familiarity with distributed data processing (e.g., Spark, Dask, PySpark) and large-scale compute for AI workloads
Experience with inference optimization frameworks (e.g., vLLM, TensorRT-LLM)
Experience with LLM orchestration frameworks (e.g., LangChain, LangGraph)
Contributions to or deep engagement with open-source developer tooling or ML frameworks
Experience mentoring other engineers and raising a team's engineering standards