Overview
StoneX is strengthening its cybersecurity defenses in a rapidly evolving 2026 threat landscape where adversaries are increasingly using AI to evade traditional detection. Our Information Security team is building next-generation detection capabilities that go beyond static rules and signatures, leveraging autonomous AI agents and targeted use cases to detect, investigate, and respond to sophisticated threats in real time.
We are expanding our Detection Engineering team with a dedicated focus on AI agent development and security use case engineering. This specialized group designs, prototypes, and productionizes intelligent AI agents that automate complex detection workflows, enrich alerts with contextual intelligence, and continuously adapt to emerging attack patterns.
We are seeking a motivated AI Engineer to join the Detection Engineering team, with a primary emphasis on AI agent and use case development. In this hands-on, entry-level role, you will work directly with senior detection engineers to identify high-impact security use cases and build production-ready AI agents that improve threat detection accuracy, accelerate alert triage and investigation, reduce false positives, and enable proactive defense.
Collaborating closely with senior detection engineers, threat hunters, security analysts, incident responders, and SOC teams, you will gain immediate exposure to cutting-edge agentic AI applied to real-world cybersecurity challenges. This is an outstanding opportunity for an early-career professional to develop expertise in LLM-powered agents, agent orchestration frameworks, and security-specific use cases while making tangible contributions to protecting the organization.
Responsibilities
- Collaborate with detection engineers and security stakeholders to identify, prioritize, and document high-value AI agent use cases for threat detection, alert enrichment, and automated investigation and response (e.g., autonomous triage agents, threat-hunting agents, log-analysis agents).
- Design, develop, and iterate on production-grade AI agents using LLM frameworks to handle multi-step reasoning, tool integration, and decision-making in security workflows.
- Build and optimize retrieval-augmented generation (RAG) agents that combine internal threat intelligence, MITRE ATT&CK mappings, and external knowledge sources for contextual threat analysis.
- Support the end-to-end agent development lifecycle: prompt engineering, tool creation (e.g., querying the SIEM, enriching with threat intel), memory management, evaluation, and safety guardrails.
- Integrate AI agents into existing detection pipelines, SIEM/XDR platforms, and security orchestration tools to enable real-time alerting, automated playbooks, and human-in-the-loop workflows.
- Assist with data preparation, feature engineering, and model/agent fine-tuning using security telemetry, logs, and labeled datasets to improve detection performance and reduce false positives.
- Participate in agent evaluation, monitoring (drift, performance, cost), versioning, and continuous improvement to ensure agents remain effective against evolving threats.
- Contribute to documentation of agent architectures, use cases, decision logic, and operational runbooks to support team adoption and knowledge transfer.
- Stay current with advancements in agentic AI, cybersecurity TTPs (including AI-augmented attacks), and best practices through guided projects and cross-team collaboration.
Qualifications
- Bachelor's degree (or equivalent) in Computer Science, Cybersecurity, Data Science, Engineering, or a related technical field.
- 0–2 years of relevant experience (internships, academic projects, personal/agent projects, or bootcamps in AI/ML or cybersecurity are highly valued).
- Solid understanding of core machine learning and LLM concepts, including prompt engineering, RAG, tool-calling, and evaluation of agent performance.
- Familiarity with version control (Git) and working with large-scale security or log data.
- Demonstrated problem-solving ability, attention to detail, and enthusiasm for building practical AI solutions in a security context.
- Excellent communication and collaboration skills for working with both technical and non-technical security stakeholders.
Preferred (but not required)
- Prior exposure to building or prototyping AI agents in any domain (security automation or general applications).
- Familiarity with cybersecurity tools and concepts (SIEM, XDR/EDR, MITRE ATT&CK, threat intelligence platforms).
- Experience with cloud security services (AWS, Azure Sentinel, GCP) or vector databases for agent memory/retrieval.
- Understanding of MLOps/agent-ops practices, including monitoring, evaluation frameworks, and deployment (Docker basics).
- Personal projects, CTFs, or open-source contributions showcasing agent development, RAG systems, or applied AI in security/anomaly detection.
Required Experience:
Staff IC