AI Security Engineer

Not Interested
Bookmark
Report This Job

profile Job Location:

New York City, NY - USA

profile Monthly Salary: Not Disclosed
Posted on: 12 hours ago
Vacancies: 1 Vacancy

Job Summary

Title: AI Security Engineer

Location: Remote across USA

Department: AI Security Engineering

Reports To: Head of Security Engineering

Role Overview

The AI Security Engineer designs evaluates and implements secure architectures for Large Language Model (LLM) and Agentic AI ecosystems across the enterprise. This includes securing platforms like ChatGPT Enterprise Claude Enterprise Gemini Enterprise Google AI/Vertex AI/LM Notebooks Azure OpenAI Azure AI Foundry and Model Context Protocol (MCP) environments. The role ensures robust data protection model governance runtime security alignment and compliance bridging security architecture AI engineering legal compliance and risk governance.

Key Responsibilities

AI Security Engineering & Design

  • Engineer secure environments for enterprise LLM platforms (ChatGPT Claude Gemini Azure OpenAI).
  • Design zero-trust architectures for AI ecosystems including MCP servers/clients and agentic workflows.
  • Secure LLM model lifecycle: training fine-tuning evaluation deployment inference endpoints.
  • Define agent-to-agent (A2A) trust boundaries cryptographic trust chains message integrity controls.
  • Establish guardrails for Retrieval-Augmented Generation (RAG) tool use plugins function calling enterprise embeddings contextual memory.
  • Implement runtime sandboxing prompt firewalling data path isolation interaction filtering.

AI Risk Governance & Compliance

  • Apply frameworks: NIST AI RMF MAESTRO OWASP Top 10 for LLM & Agentic AI MITRE ATLAS ISO/IEC 23894 & 42001 Google SAIF Microsoft Responsible AI Standard.
  • Establish model governance evaluation criteria audit logs chain-of-thought protection policy configuration.

AI Security Threat Modeling & Controls

  • Conduct threat modeling using: LLM-specific Agentic AI Self-Propagation & Tool Abuse RAG Architecture Security A2A Trust Exploitation MCP Supply-Chain & Man-in-the-Middle models.
  • Define adversarial defenses: prompt injection mitigation jailbreak prevention indirect prompt poisoning model exfiltration protection data poisoning countermeasures model inversion & membership inference prevention.

Platform Security

  • Design secure Azure OpenAI & Azure AI Foundry deployments: private endpoints VNet isolation mTLS/encryption model filtering enterprise data security.
  • Secure Gemini Enterprise & Google LM Notebooks: VPC Service Controls IAM conditional access DLP context filtering confidential computing.

Agentic AI & MCP Security

  • Govern MCP tools input/output sanitization policy-guarded capability authorization.
  • Define secure orchestration and oversight for multi-agent LLM systems: autonomy limits escalation rules tool use governance.

Model Training Security & Supply Chain Integrity

  • Implement Secure MLOps: dataset lineage provenance quality checks differential privacy secure gradient computation adversarial training signed/documented model artifacts.
  • Secure confidential training data prevent leakage to public models.

AI Monitoring & Incident Response

  • Enable runtime protection anomaly detection exploit signal monitoring.
  • Build AI-specific incident playbooks: hallucination incidents governance policy drift unauthorized agent actions emergent harmful behavior.

Required Technical Skills

6 10 years in cybersecurity including 2 years in AI/ML security or LLM platform engineering.

Core AI Security Expertise

  • Deep understanding of generative AI security: LLM jailbreak defense guardrails engineering AI alignment content filtering advanced prompt-level security.
  • Knowledge of LLM tool ecosystems (functions plugins RAG).

Enterprise AI Platforms

  • Security configurations for ChatGPT Enterprise Claude Enterprise Gemini Enterprise Google LM Notebooks OpenAI on Azure Azure AI Foundry.

Cybersecurity & Cloud Architecture

  • Zero-trust architectures KMS/HSM/secrets management API/function calling security encryption controls network/IAM/private routing DSPM CASB CSPM AIRS tools.

Programming & Tooling (Preferred)

  • Python TypeScript/ Terraform/IaC for secure AI deployments.
  • Agentic AI frameworks: LangChain LangGraph OpenAI Agents CrewAI AutoGen. ADK

AI Security Tooling Hands-On Skills & Experience

AI Runtime Security & Agent Guardrails

  • OpenAI Security Capabilities Anthropic Claude Admin APIs Google SAIF Controls Vertex AI Guardrails Azure AI Foundry Governance.
  • Content Filtering/Toxicity Classifiers: OpenAI Risk Filters Perspective API Azure Content Safety.
  • Prompt Firewalls/Guardrails Engines: Prompt Armor Guardrails AI Prompt Shield NeMo Guardrails.
  • AI Agent Monitoring: Protect AI Lakera Robust Intelligence CalypsoAI.

LLM Supply Chain Security / Secure MLOps

  • Model artifact signing/integrity: Sigstore in-toto SLSA compliance.
  • Dataset provenance: BastionML Cleanlab Alectio.
  • Adversarial Training/Validation: IBM ART CleverHans TextAttack ShieldGemma.
  • Model Watermarking/Exfiltration Prevention: Watermark-LM RIME DeepMind SynthID.
  • Pipeline enforcement: Kubeflow Azure ML Vertex AI Pipelines MLflow.

Agentic AI Security & MCP Ecosystem

  • MCP secure configuration tooling policy enforcement signed client tools.
  • Secure tool API integration capability authorization dynamic context redaction scope-limited tool exposure.
  • Agentic AI orchestration hardening: LangGraph OpenAI Agents AutoGen Studio CrewAI.
  • A2A Trust Models: mTLS token-based capability scoping replay-attack defense real-time behavior anomaly analytics.

Cloud Platform AI Security Tooling

  • Azure: Microsoft Purview Defender for AI Synapse secure RAG vector DB controls.
  • Google Cloud: VPC-SC Confidential Space/Computing DLP API IAM ReBAC.

Threat Intel & Offensive Security Tools for LLM

  • LLM Pentesting: PentestGPT LLM-Guard Azure AI Red Team Tools.
  • Prompt Injection Scanners: PIA picoGPT Security Test Kit.
  • Model behavior fuzzing: GARAK.
  • Membership inference/property leakage evaluation: PrivacyRaven
Title: AI Security Engineer Location: Remote across USA Department: AI Security Engineering Reports To: Head of Security Engineering Role Overview The AI Security Engineer designs evaluates and implements secure architectures for Large Language Model (LLM) and Agentic AI ecosystems across...
View more view more

Key Skills

  • Splunk
  • IDS
  • Network security
  • Computer Networking
  • Identity & Access Management
  • PKI
  • PCI
  • NIST Standards
  • Security System Experience
  • Information Security
  • Encryption
  • Siem