Senior Cloud AI Software Engineer - RNT AI Cloud Engineering


Job Location:

New York City, NY - USA

Monthly Salary: Not Disclosed
Posted on: 3 hours ago
Vacancies: 1 Vacancy

Job Summary

Cloud AI Software Engineer

QE Agentic AI - RNT Test Hub Platform on AWS - Amazon Bedrock Agentic Workflows

LOCATION

New York, NY (Hybrid)

FUNCTION

Cloud Engineering / Agentic AI / QE

PLATFORM

AWS / Amazon Bedrock

Role Summary

We are seeking a Cloud Software Engineer with quality engineering and Agentic AI experience to build, deploy, and operate the QE RNT Test Hub platform on AWS. This is primarily a software development and cloud engineering role: the engineer will own the design and delivery of full-stack applications, including React frontends, FastAPI backends, and relational databases, all deployed and operated on AWS.

A key and differentiating dimension of this role is the design and integration of Agentic AI capabilities powered by Amazon Bedrock. The engineer will build intelligent, autonomous agents that augment the Test Hub platform: accelerating test generation, automating quality analysis, enabling AI-driven decision workflows, and reducing manual effort across the QE lifecycle.

Role Breakdown

75% - Cloud Application Development, AWS Platform Engineering & Agentic AI Integration

25% - Quality Engineering: Test Automation, API Validation & Platform Support

Primary Responsibilities - Cloud Development & Agentic AI (75%)
Full-Stack Application Development
  • Design and build React-based frontend interfaces for the RNT Test Hub portal, dashboards, admin tooling, and AI-powered QE workflows
  • Develop RESTful backend services using Python (FastAPI) to power Test Hub workflows and Agentic AI agent orchestration
  • Design and manage relational database schemas using Aurora PostgreSQL or RDS; write optimized SQL for test result storage, reporting, and trend analysis
  • Build reusable API layers, service integrations, and internal SDKs for Test Hub consumers and AI agent tool interfaces
  • Implement authentication, authorization, role-based access control, and secure configuration patterns
  • Deliver features end-to-end from design through deployment with clean, documented, production-grade code
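The role-based access control mentioned above can be as simple as a permission map consulted by backend request handlers. A minimal sketch, assuming hypothetical role and permission names (none are specified in the posting):

```python
# Minimal role-based access control sketch for a Test Hub backend.
# Role names and permission strings are illustrative assumptions.
ROLE_PERMISSIONS = {
    "qe_admin": {"runs:read", "runs:write", "agents:invoke", "config:write"},
    "qe_engineer": {"runs:read", "runs:write", "agents:invoke"},
    "viewer": {"runs:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

In a FastAPI service this check would typically live in a dependency that reads the caller's role from a validated JWT before the route handler runs.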

Agentic AI Development - Amazon Bedrock
  • Design and build Agentic AI workflows using Amazon Bedrock, leveraging foundation models such as Claude (Anthropic) and Titan to power intelligent QE automation
  • Develop multi-agent systems using the Amazon Bedrock Agents framework, including agent definitions, action groups, knowledge bases, and tool invocations
  • Build AI agents that autonomously perform QE tasks such as test scenario generation from acceptance criteria, API contract analysis and drift detection, data quality assessment, test failure triage and root-cause summarization, and regression impact analysis from code changes
  • Integrate Bedrock agents with internal tools and APIs through Lambda-backed action groups, enabling agents to query databases, trigger test runs, read Jira tickets, and interact with GitHub
  • Design Bedrock Knowledge Bases using S3-backed vector stores to give agents contextual awareness of test history, platform documentation, and QE standards
  • Implement prompt engineering, system prompt design, and chain-of-thought patterns to optimize agent reasoning accuracy and output reliability
  • Build human-in-the-loop approval workflows for high-impact agent actions using Step Functions and EventBridge
  • Instrument agent executions with CloudWatch logging, trace capture, and performance metrics to support observability and continuous improvement
  • Evaluate and iterate on agent output quality using structured test harnesses, ensuring agents produce reliable, actionable, and explainable results
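A Lambda-backed action group like those above is an ordinary handler that receives the agent's tool call and returns a structured result. A sketch, with the database lookup stubbed out and the event/response field names following the Bedrock Agents action-group contract (verify against current AWS documentation before relying on them):

```python
import json

def lambda_handler(event, context):
    """Bedrock agent action group: report historical failure counts
    for a test suite. The lookup is stubbed; a real handler would
    query Aurora PostgreSQL. Event/response field names follow the
    Bedrock Agents action-group contract -- treat as assumptions and
    verify against current AWS docs."""
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    suite = params.get("suite_name", "unknown")

    # Stubbed result; replace with a parameterized SQL query in production.
    result = {"suite": suite, "failures_last_30d": 7}

    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": event.get("apiPath"),
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {"body": json.dumps(result)}
            },
        },
    }
```

The agent's action group schema (an OpenAPI fragment) tells the model what parameters exist; the handler only needs to translate the call into a query and serialize the answer back.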

AWS Platform Engineering
  • Architect, deploy, and maintain the RNT Test Hub on AWS using ECS/Fargate, ECR, ALB, VPC, IAM, S3, CloudWatch, Secrets Manager, Parameter Store, and Bedrock
  • Containerize applications and services using Docker; manage image builds, versioning, and ECR lifecycle policies
  • Design and implement CI/CD pipelines using GitHub Actions, Jenkins, or AWS CodePipeline for automated build, test, and deployment workflows
  • Configure infrastructure-as-code using Terraform or AWS CloudFormation for repeatable, environment-consistent deployments
  • Implement environment promotion patterns across development, QA, UAT, and production
  • Set up CloudWatch dashboards, log groups, alarms, and metrics for platform and agent observability
  • Manage IAM roles, policies, and permission boundaries for both platform services and Bedrock agent execution roles
  • Integrate AWS Step Functions, Lambda, and EventBridge for orchestration of both platform workflows and agentic execution pipelines
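The Step Functions orchestration above pairs naturally with the human-in-the-loop approval requirement: the `waitForTaskToken` integration pauses the workflow until a reviewer responds. A sketch of such a state machine, expressed as an Amazon States Language definition built in Python (the Lambda ARNs are placeholders, not real resources):

```python
import json

# Illustrative ASL definition for a human-in-the-loop agent action:
# the agent proposes an action, a reviewer approves via task token,
# then the action executes. ARNs/function names are placeholders.
APPROVAL_WORKFLOW = {
    "StartAt": "ProposeAgentAction",
    "States": {
        "ProposeAgentAction": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:propose-action",
            "Next": "AwaitHumanApproval",
        },
        "AwaitHumanApproval": {
            # waitForTaskToken pauses until SendTaskSuccess/SendTaskFailure
            # is called with the token delivered to the approver.
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "notify-approver",
                "Payload": {"proposal.$": "$", "token.$": "$$.Task.Token"},
            },
            "Next": "ExecuteAgentAction",
        },
        "ExecuteAgentAction": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:execute-action",
            "End": True,
        },
    },
}

definition_json = json.dumps(APPROVAL_WORKFLOW)
```

In practice this JSON would be passed to `states:CreateStateMachine` (or templated through Terraform/CloudFormation) rather than hand-maintained.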

Platform Integrations and Tooling
  • Integrate the Test Hub with Jira APIs for defect linking, test case tracking, and AI-assisted sprint analysis
  • Connect with GitHub for pull request status, code change tracking, and AI-triggered test execution
  • Build or enhance test result dashboards with AI-generated summaries, historical trend reporting, environment health views, and failure analysis
  • Develop test artifact storage and retrieval patterns using S3 for logs, screenshots, reports, and agent execution outputs
  • Support onboarding of new QE teams and automation suites onto the Test Hub platform
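The S3 artifact storage pattern above usually comes down to a deterministic key convention so that dashboards, agents, and humans can all locate a run's outputs. A sketch, where the prefix layout (team/suite/date/run) is an illustrative convention rather than one specified in the posting:

```python
from datetime import datetime, timezone

def artifact_key(team: str, suite: str, run_id: str, filename: str) -> str:
    """Build a deterministic S3 key for a test artifact.
    The team/suite/date/run prefix layout is an assumption;
    date partitioning keeps lifecycle rules and queries simple."""
    day = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    return f"test-artifacts/{team}/{suite}/{day}/{run_id}/{filename}"
```

Date-partitioned prefixes make it easy to apply S3 lifecycle policies (e.g. expire raw logs after 90 days) without touching summary reports stored under a different prefix.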

Secondary Responsibilities - Quality Engineering (25%)
Test Automation Support
  • Develop and maintain automated API test suites for RNT Test Hub backend services using Pytest, REST-assured, or Postman/Newman
  • Build Playwright-based UI tests for critical Test Hub workflows and regression coverage
  • Integrate automated smoke, sanity, and regression suites into CI/CD pipelines for continuous validation
  • Write SQL-based data validation checks to verify backend data integrity, test result accuracy, and reporting correctness
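A SQL data-integrity check of the kind described above can be a single anti-join query. A self-contained sketch using an in-memory SQLite database as a stand-in for Aurora; the table and column names are hypothetical:

```python
import sqlite3

# Illustrative data-integrity check: flag test results whose parent
# run record is missing. Schema and table names are assumptions.
def orphaned_results(conn):
    return [row[0] for row in conn.execute(
        """
        SELECT r.id FROM test_results r
        LEFT JOIN test_runs t ON r.run_id = t.id
        WHERE t.id IS NULL
        """
    )]

# Demo fixture: run 1 exists; result 11 points at a missing run.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_runs (id INTEGER PRIMARY KEY);
    CREATE TABLE test_results (id INTEGER PRIMARY KEY, run_id INTEGER);
    INSERT INTO test_runs VALUES (1);
    INSERT INTO test_results VALUES (10, 1), (11, 999);
""")
```

The same query shape (LEFT JOIN ... WHERE right-side IS NULL) extends to other checks such as runs without results or reports referencing deleted suites.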

QE Platform Reliability
  • Improve test execution stability by addressing flaky tests, strengthening assertions, and hardening test data setup and teardown
  • Instrument test execution with structured logging, retry logic, and failure categorization to improve triage speed
  • Support troubleshooting of failed test runs using CloudWatch logs, execution traces, and API response data
  • Help define and document test standards, patterns, and onboarding guides for teams using the Test Hub
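The retry-with-categorization pattern above distinguishes transient failures (worth retrying) from permanent ones (worth failing fast). A minimal sketch; the category names, retryable exception set, and backoff are assumptions:

```python
import time

# Exceptions treated as transient is an assumption; tune per platform.
TRANSIENT = (TimeoutError, ConnectionError)

def run_with_retry(step, attempts=3, delay=0.0):
    """Run `step`; retry transient failures, surface permanent ones.
    Returns a small triage record instead of raising, so callers can
    log status, error, and attempt count uniformly."""
    for attempt in range(1, attempts + 1):
        try:
            return {"status": "passed", "value": step(), "attempts": attempt}
        except TRANSIENT as exc:
            if attempt == attempts:
                return {"status": "flaky-exhausted", "error": str(exc),
                        "attempts": attempt}
            time.sleep(delay)
        except Exception as exc:
            return {"status": "failed", "error": str(exc),
                    "attempts": attempt}
```

Emitting the record as structured JSON into CloudWatch lets a dashboard (or an agent) aggregate flakiness by test without parsing free-text logs.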

Required Skills
Cloud and Application Development
  • 7 years of hands-on AWS experience with services including ECS/Fargate, ECR, S3, CloudWatch, IAM, VPC, Secrets Manager, Lambda, Step Functions, and ALB
  • Strong full-stack development experience: React or a similar frontend framework, Python (FastAPI/Flask) backends, and PostgreSQL or Aurora relational databases
  • Proficiency in Docker: writing Dockerfiles, building images, and managing container deployments on ECS
  • Experience building and maintaining CI/CD pipelines with GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline
  • Solid understanding of REST API design, HTTP, JSON, authentication patterns (JWT, OAuth2), and API versioning
  • Experience with infrastructure-as-code using Terraform or AWS CloudFormation
  • Strong Git practices: branching strategies, pull requests, code reviews, and release management
  • Working knowledge of SQL, schema design, and relational database operations

Agentic AI and Amazon Bedrock
  • Hands-on experience with Amazon Bedrock, including model invocation, Bedrock Agents, Knowledge Bases, and action group development
  • Experience building agentic or LLM-powered workflows: multi-step reasoning, tool use, retrieval-augmented generation (RAG), and autonomous task execution
  • Proficiency in Python for AI application development: prompt construction, response parsing, agent orchestration, and LLM API integration
  • Understanding of prompt engineering principles, system prompt design, and techniques to improve model output reliability and accuracy
  • Experience integrating LLM agents with external tools and APIs via Lambda, REST endpoints, or MCP-style tool interfaces
  • Familiarity with vector databases, embedding models, and semantic search for knowledge base construction

Quality Engineering
  • Familiarity with test automation frameworks: Playwright, Pytest, REST-assured, or Postman/Newman
  • Understanding of QE concepts: regression, smoke, API contract testing, data validation, and test data management
  • Ability to write and maintain automated tests as part of standard development delivery
  • Experience reading and triaging test failures using logs, traces, and backend data

Preferred Skills
  • Experience with Amazon Bedrock Guardrails, model evaluation, or agent tracing and debugging
  • Experience with LangChain, LlamaIndex, or other agent orchestration frameworks
  • Familiarity with MCP (Model Context Protocol) for structured tool and API integration with AI agents
  • Experience with Snowflake, Redshift, or other cloud data warehouses as agent data sources
  • Familiarity with observability tools such as Dynatrace, Splunk, Grafana, or OpenSearch
  • Experience with Jira APIs or Atlassian tooling integrations
  • Experience in banking, financial services, compliance, or enterprise platform engineering
  • Experience building internal developer portals, QE platforms, or shared AI-powered tooling services

Responsibilities by Area

React Frontend - Build and maintain Test Hub portal dashboards and AI-powered QE interfaces

Backend Services - Develop FastAPI services powering Test Hub workflows and agent orchestration

Agentic AI - Design and build Bedrock-powered agents for test generation, triage, and QE analysis

Database - Design Aurora/PostgreSQL schemas; own test result storage and reporting queries

AWS Infrastructure - Deploy and operate the platform on ECS, S3, CloudWatch, IAM, Lambda, and Bedrock

CI/CD Pipelines - Automate build, test, and deployment workflows via GitHub Actions or CodePipeline

IaC - Manage infrastructure with Terraform or CloudFormation for repeatable deployments

Integrations - Connect Test Hub with Jira, GitHub, Bedrock Knowledge Bases, and reporting tools

Test Automation - Build API and UI test suites; integrate into CI/CD for continuous validation

Platform Support - Onboard QE teams, document standards, and evolve Test Hub AI capabilities

Example Day-to-Day Activities
  • Design a Bedrock agent that reads a Jira story, extracts acceptance criteria, and generates a structured test scenario set
  • Build a Lambda-backed action group that allows a Bedrock agent to query Aurora for historical test failure patterns
  • Ship a new React dashboard panel showing AI-generated test run summaries and recommended actions
  • Develop a FastAPI endpoint that invokes a Bedrock agent to analyze an OpenAPI spec and flag contract drift
  • Write a Terraform module to provision a Bedrock Knowledge Base backed by an S3 test artifact store
  • Push a Docker image to ECR and deploy an updated Test Hub service through the CI/CD pipeline
  • Investigate a CloudWatch alarm on a Bedrock agent invocation failure and trace the execution path
  • Write a Pytest suite to validate a new backend endpoint and integrate it into the GitHub Actions pipeline
  • Pair with a QE lead to design an AI-assisted triage workflow that auto-categorizes test failures by root cause
  • Review a pull request adding a new agent action group and provide feedback on tool schema design and error handling
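The AI-assisted triage workflow in the activities above typically starts with a cheap rule-based first pass before an LLM agent summarizes root cause. A sketch; the patterns and category names are illustrative assumptions:

```python
import re

# First-pass failure categorization before agent-based root-cause
# summarization. Regexes and categories are illustrative assumptions;
# real rules would be tuned against the platform's own failure logs.
TRIAGE_RULES = [
    (re.compile(r"timed? ?out|read timeout", re.I), "infrastructure/timeout"),
    (re.compile(r"assert(ion)? ?(error|failed)", re.I), "assertion-failure"),
    (re.compile(r"5\d\d|internal server error", re.I), "backend-error"),
    (re.compile(r"element not found|locator", re.I), "ui-locator"),
]

def categorize_failure(log_line: str) -> str:
    """Return the first matching category, or 'uncategorized'."""
    for pattern, category in TRIAGE_RULES:
        if pattern.search(log_line):
            return category
    return "uncategorized"
```

Only the `uncategorized` bucket (and sampled spot checks of the rest) would then be routed to the Bedrock agent, keeping LLM cost and latency proportional to genuinely ambiguous failures.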

Success Measures

The person in this role will be successful when they:

  • Deliver reliable, production-grade features to the RNT Test Hub on a consistent cadence
  • Ship Agentic AI capabilities that measurably reduce manual QE effort and accelerate test cycle time
  • Maintain a stable, observable, and well-instrumented AWS platform with a low incident rate
  • Reduce manual deployment effort through automated CI/CD pipelines and infrastructure-as-code
  • Enable QE teams to leverage AI-powered workflows for test generation, triage, and analysis
  • Improve test result visibility and reporting for QE leads and engineering leadership
  • Build quality into deliverables through integrated automated testing and clear documentation

Ideal Candidate Profile

The ideal candidate is a cloud-native software engineer who builds and ships full-stack applications on AWS, has hands-on experience developing Agentic AI workflows with Amazon Bedrock, and brings enough QE fluency to build quality into the platform from the ground up. They are comfortable owning infrastructure, CI/CD, application code, and AI agent design end-to-end.

This person is excited about the intersection of software engineering, cloud platform work, and applied AI. They understand that Agentic AI is not a prototype exercise: it requires the same rigor as production software, including reliable orchestration, observability, testability, and clear human oversight. QE experience is a genuine differentiator for this role, not a checkbox.

Keywords

Amazon Bedrock, Agentic AI, AI Agents, LLM, RAG, Retrieval-Augmented Generation, Prompt Engineering, Knowledge Base, Action Groups, Claude, AWS, ECS, Fargate, ECR, S3, CloudWatch, IAM, Secrets Manager, Step Functions, Lambda, VPC, React, FastAPI, Express, PostgreSQL, Aurora, Docker, Terraform, CloudFormation, GitHub Actions, CI/CD, Python, TypeScript, JavaScript, REST API, Playwright, Pytest, Postman, Newman, REST-assured, SQL, Test Automation, QE Platform, Test Hub, Cloud Engineer, Full-Stack, DevOps, MCP, LangChain
