AI Engineer Responsible AI


Job Location:

Seattle, WA - USA

Salary: Not Disclosed
Posted on: 6 hours ago
Vacancies: 1 Vacancy

Job Summary

About Job

Role: AI Engineer - Responsible AI

Location: Seattle, WA / Palo Alto, CA / Remote

Type: Full-time

Build the Future of Safe and Responsible AI

Are you an experienced AI engineer advancing the frontiers of AI safety, LLM jailbreak detection and defense, and agentic AI, with publications and production deployments to show for it? Join us to translate pioneering research into robust, scalable security systems and trustworthy LLM platforms that resist adversarial and behavioral exploits at enterprise scale.

The Mission

We're tackling cutting-edge AI safety problems across adversarial robustness, jailbreak defense, agentic workflows, and human-in-the-loop risk modeling. As an AI Engineer, you'll own high-impact projects from research conception through production deployment, directly contributing to our platform's security guarantees while building scalable, maintainable infrastructure.

What You'll Do

Advance AI Safety: Design, implement, and evaluate attack and defense strategies for LLM jailbreaks (prompt injection, obfuscation, narrative red-teaming) and deploy them as production-grade services.

Build Scalable Safety Infrastructure: Architect and deploy distributed safety evaluation pipelines handling millions of requests, with real-time monitoring, alerting, and incident response capabilities.

Large-Scale Data Engineering: Design ETL pipelines for processing terabytes of safety-related data (attack patterns, behavioral logs, model outputs); build data lakes and feature stores for safety ML systems.

Evaluate AI Behavior: Analyze and simulate human-AI interaction patterns at scale to uncover behavioral vulnerabilities, social engineering risks, and over-defensive vs. permissive response tradeoffs.

Agentic AI Security: Build production workflows for multi-agent safety (agent self-checks, regulatory compliance, defense chains) spanning perception, reasoning, and action.

MLOps & Model Deployment: Deploy safety models to production using containerized microservices, implement CI/CD pipelines for model updates, and manage model versioning and A/B testing infrastructure.

Benchmark & Harden LLMs: Create reproducible, automated evaluation protocols for safety, over-defensiveness, and adversarial resilience across diverse models, with continuous integration.

Example Problems You Might Tackle

Production Red-Teaming Platform: Build and operate automated red-teaming infrastructure that continuously probes advanced LLMs (GPT-4o, GPT-5, LLaMA, Mistral, Gemma) at scale, with dashboards and alerting.

Real-Time Defense Systems: Implement context-aware, multi-turn attack detection and guardrail mechanisms as low-latency services handling 10K requests per second.

Agent Self-Regulation at Scale: Develop agentic architectures for autonomous self-check and self-correct behavior, with distributed orchestration and fault tolerance.

Safety Data Platform: Design and build data infrastructure for collecting, storing, and analyzing petabyte-scale safety telemetry with streaming analytics.

Minimum Qualifications

Master's degree in CS/EE/ML/Security or a related field (Ph.D. preferred)

2 years of industry experience in applied ML/AI research or ML engineering

Track record of publications in AI safety, NLP robustness, or adversarial ML (ACL, NeurIPS, ICML, EMNLP, IEEE S&P, etc.), or equivalent applied research impact

Strong Python and PyTorch/JAX skills, with experience deploying ML models to production

Demonstrated experience in at least one of: LLM jailbreak attacks/defense, agentic AI safety, adversarial ML, or human-AI interaction vulnerabilities

Experience with containerization (Docker, Kubernetes) and cloud platforms (AWS, GCP, or Azure)

Proven ability to take research from concept to code to production deployment, with rigorous testing and monitoring

Preferred Qualifications

Experience in adversarial prompt engineering and jailbreak detection (narrative, obfuscated, and sequential attacks)

Prior work on multi-agent architectures or robust defense strategies for LLMs in production environments

Experience with large-scale data processing frameworks (Spark, Flink, Kafka) and data warehousing

MLOps expertise: model serving (Triton, TensorRT, vLLM), experiment tracking (W&B, MLflow), and CI/CD for ML

Infrastructure as Code experience (Terraform, Pulumi) and DevOps best practices

Experience with distributed computing frameworks (Ray, Dask) for scalable training and evaluation

Familiarity with observability stacks (Prometheus, Grafana, DataDog) and incident management

First-author publications, a strong GitHub profile, or significant open-source contributions

Our Stack

Modeling: PyTorch/JAX, Hugging Face, vLLM, Mistral, LLaMA, OpenAI APIs

Safety: Red-teaming frameworks, LLM benchmarking (SODE, ART, HarmBench), human behavior simulation

Infrastructure: Kubernetes, Docker, Terraform, AWS/GCP, Ray, Spark

MLOps: Triton Inference Server, Weights & Biases, MLflow, Airflow, ArgoCD

Data: PostgreSQL, Redis, Kafka, Snowflake/BigQuery, dbt

Observability: Prometheus, Grafana, DataDog, PagerDuty

What Success Looks Like

Production systems that measurably improve safety KPIs: adversarial robustness, over-defensiveness rates, and incident response latency

Publishable research outcomes (with company approval) demonstrating novel contributions to AI safety

Well-documented, tested, and maintainable code with comprehensive CI/CD and monitoring

Infrastructure that scales reliably and enables the broader team to iterate quickly on safety research

Why this Company

Real Impact: Your research ships directly, securing our core features and AI infrastructure at scale

Research to Production: Bridge the gap between cutting-edge research and production systems

Mentorship: Collaborate with Principal Architects and senior researchers in AI safety and adversarial ML

Velocity & Rigor: Balance high-quality research with mission-critical product focus.
