Research Engineer, Agentic AI Evals

HUD


Job Location:

San Francisco, CA - USA

Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1 Vacancy

Department:

Engineering

Job Summary

About HUD

HUD (YC W25) is developing agentic evals for Computer Use Agents (CUAs) that browse the web. Our CUA Evals framework is the first comprehensive evaluation tool for CUAs.

Our Mission: People don't actually know if AI agents are working. To make AI agents work in the real world, we need detailed evals for a huge range of tasks.

We're backed by Y Combinator and work closely with frontier AI labs to provide agent evaluation infrastructure at scale.

About the role

We're looking for a research engineer to help build out task configs and environments for evaluation datasets on HUD's CUA evaluation framework.

Responsibilities

  • Build out environments for HUD's CUA evaluation datasets, including evals for safety red-teaming, general business tasks, long-horizon agentic tasks, etc.

  • Deliver custom CUA datasets and evaluation pipelines requested by clients

  • Contribute to improving the HUD evaluation harness, depending on your interests, skills, and current organizational priorities. (Optional but highly valued!)

Experience

Technical Skills

  • Proficiency in Python, Docker, and Linux environments

  • React experience for frontend development

  • Production-level software development experience preferred

  • Strong technical aptitude and demonstrated problem-solving ability

Strong candidates may have:

  • Startup experience at early-stage technology companies, with the ability to work independently in fast-paced environments

  • Strong communication skills for remote collaboration across time zones

  • Familiarity with current AI tools and LLM capabilities

  • Understanding of safety and alignment considerations in AI systems

  • Evidence of rapid learning and adaptability in technical environments (e.g. programming competitions)

  • Hands-on experience with, or contributions to, LLM evaluation frameworks (EleutherAI Inspect or similar)

  • Built custom evaluation pipelines or datasets

  • Worked with agentic or multimodal AI evaluation systems

We prioritize technical aptitude and learning potential over years of experience. Motivated candidates are encouraged to apply even if they don't meet all criteria.

Representative projects:

  • Creating and solving challenging competitive programming problem-sets

  • Curating large, high-quality datasets, especially for research and evaluation of multimodal AI agents

  • Designing complex, functional full-stack applications. Bonus points if they have users/adopters.

We prioritize contributions that show both quality and quantity, such as building out large, high-quality datasets. Imagine making about 10 small puzzles in mock web environments per day.

Team & Company Details

  • Team Size: 15 people currently, mostly full-time and in-person, with some remote.

  • Our team: Includes 4 international Olympiad medallists (IOI, ILO, IPhO), serial AI startup founders, and researchers with publications at ICLR, NeurIPS, and elsewhere.

  • Company stage: We have raised $2 million in seed funding, with very strong demand and revenue growth beyond that. We are scaling profitably and fast to meet demand.

Logistics

  • Employment: Full-time preferred, but willing to consider part-time/internship arrangements for exceptional candidates.

  • Location: Fully remote-friendly. We already have several full-time, 100% remote hires, but if you're in the San Francisco Bay Area or Singapore, we have an office you can work from. We prefer applicants who can attend meetings in Pacific Time (UTC−7/−8) or China/Singapore Time (UTC+8).

  • Visa Sponsorship: We provide relocation and visa support for strong full-time candidates in the USA or Singapore. For part-time/contract/internship arrangements, we'll work fully remote (which makes things simpler anyway).

  • Timeline: Applications are rolling. The process involves 1 initial call, 1 five-hour take-home assignment, and 1 paid week-long work trial before a final offer.

Due to high volume, we may not actively respond to every application, but feel free to contact us at or elsewhere if we missed your application!



