As a Senior Product Manager on Fusion AI Agent Studio, you will lead a defined product area end-to-end, from discovery and requirements through roadmap delivery and launch. You'll translate powerful agent platform capabilities into simple, repeatable experiences that help teams build, test, deploy, and operate AI agents reliably at enterprise scale. You'll partner closely with Engineering, Design, Applied AI, and Fusion application teams to improve usability, predictability, governance, and operational visibility.
Responsibilities:
Own a product area end-to-end: define scope, roadmap, and milestones aligned to Fusion AI strategy and measurable customer value (adoption, time-to-value, reliability, governance).
Translate customer workflows into platform capabilities: build reusable primitives and patterns that work across multiple agents and teams (not one-off implementations).
Ship guided power experiences: deliver fast setup flows with sensible defaults while enabling advanced controls for extensibility, repeatability, and scale.
Design governed autonomy: define user controls, constraints, policies, tool permissions, safe fallback behaviors, and review checkpoints where needed.
Partner on evaluation and reliability: work with Applied AI to define quality signals, regression testing approach, reliability targets, and launch criteria.
Operationalize for enterprise: define instrumentation and success metrics (activation, reuse, runtime health, escalations/defects); improve observability and debugging.
Drive execution: write crisp PRDs, user stories, and acceptance criteria; align stakeholders; manage dependencies; deliver iterative releases.
Minimum Qualifications:
5+ years of total experience, including 3–5 years in Product Management building and shipping SaaS and/or platform capabilities.
1 year shipping GenAI/LLM features in production (customer-facing or internal platform services) with clear outcomes (adoption, time saved, quality, reliability).
1 year of hands-on experience building agentic workflows: multi-step orchestration with tool-calling, state/memory, routing, fallbacks, and human-in-the-loop checkpoints.
Proven ability to execute on complex cross-functional products/services from discovery and PRD through build, launch, and iteration, partnering effectively with Engineering, UX, and Applied AI/DS teams.
Strong product judgment and execution: can turn ambiguity into a scoped plan, make trade-offs, and deliver simple UX on top of complex systems (builders, admin consoles, configuration-heavy experiences).
Practical understanding of AI reliability and governance: evaluation/regression testing, monitoring/telemetry, guardrails/policies, and safe behavior in edge cases.
Excellent written communication: crisp PRDs, user stories, acceptance criteria, and decision docs that align stakeholders and unblock execution.
Preferred Qualifications:
Experience with agentic platforms such as LangChain, LangGraph, LlamaIndex, Vertex AI, or similar products.
Experience designing and running agent evaluation programs: building test suites (golden sets), defining quality/reliability metrics, and using eval results to prevent regressions and improve agent performance.
Experience integrating multiple LLMs into products and services.
Experience building platform primitives: APIs/SDKs, integrations/tool binding, templates, reusable components, and extensibility patterns.
Exposure to enterprise SaaS domains (HCM/ERP/SCM/CX) and large-scale deployment realities.
Career Level - IC3
Required Experience:
Senior IC
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity.