Backend Engineer (AI & Infrastructure)
Full-time · Engineering · 1–3 Years Experience · Onsite/Hybrid
About the Role

We are looking for a curious and hands-on Backend Engineer who sits at the intersection of robust systems engineering and applied AI. You will design and build scalable backend services, architect AI-powered agent workflows, and own the infrastructure that keeps them running reliably in production. This is an early-career role with high ownership, ideal for someone who loves going deep on low-level design as much as they love experimenting with LLMs and vector databases.
Backend Engineering

- Design and build scalable REST and gRPC APIs using Python or Go
- Apply sound low-level design: SOLID principles, design patterns, and clean architecture
- Work with relational (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases
- Write efficient, testable, and well-documented code with async/concurrent programming
- Participate in code reviews and contribute to engineering best practices
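As a small sketch of the async/concurrent style this role calls for, the snippet below runs two independent I/O-bound calls concurrently with `asyncio.gather`. The service functions and latencies are hypothetical stand-ins for real database or downstream-service calls:

```python
import asyncio

async def fetch_profile(user_id: int) -> dict:
    # Stand-in for a real database or downstream-service call.
    await asyncio.sleep(0.01)
    return {"user_id": user_id, "name": f"user-{user_id}"}

async def fetch_orders(user_id: int) -> list:
    await asyncio.sleep(0.01)
    return [{"order_id": 1, "user_id": user_id}]

async def get_dashboard(user_id: int) -> dict:
    # Run independent I/O-bound calls concurrently instead of sequentially,
    # so total latency is roughly the slowest call, not the sum.
    profile, orders = await asyncio.gather(
        fetch_profile(user_id), fetch_orders(user_id)
    )
    return {"profile": profile, "orders": orders}

if __name__ == "__main__":
    print(asyncio.run(get_dashboard(42)))
```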
Distributed Systems & Real-Time Communication

- Build event-driven architectures using Apache Kafka for high-throughput data pipelines
- Implement caching and pub/sub messaging with Redis
- Develop real-time features using WebSockets and Server-Sent Events (SSE)
- Work with message queues such as RabbitMQ or AWS SQS
- Apply rate limiting, throttling, and backpressure strategies in production systems
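To illustrate the kind of rate-limiting work involved, here is a minimal token-bucket limiter. It is an illustrative sketch, not production-grade (a real deployment would typically use Redis or an API gateway for distributed limits); the injectable `clock` parameter is there to make it testable:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter (illustrative, not production-grade)."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.clock = clock          # injectable for deterministic tests
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Requests beyond the burst capacity are rejected (or, with backpressure, queued or shed) until tokens refill.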
AI Agents & Large Language Models

- Integrate with LLM APIs (OpenAI, Anthropic Claude, Google Gemini) for production use cases
- Build autonomous agent workflows using patterns like ReAct, Plan-and-Execute, and multi-agent hierarchies
- Design and manage tool/function calling, context window budgeting, and memory strategies
- Develop Retrieval-Augmented Generation (RAG) pipelines end-to-end
- Work with agent frameworks such as LangChain or LlamaIndex
- Apply prompt engineering techniques for reliability and consistency
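The core of a tool-calling agent workflow can be sketched in a few lines. The "model" below is a hard-coded stub (the tool name, scripted plan, and messages are all hypothetical); in production the `stub_model` call would be a real LLM API call whose response is parsed into either a tool invocation or a final answer:

```python
def get_weather(city: str) -> str:
    return f"18C and cloudy in {city}"   # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

def stub_model(messages: list) -> dict:
    # A real implementation would send `messages` to an LLM and parse its
    # response. This stub scripts one tool call followed by a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "get_weather", "args": {"city": "Pune"}}
    tool_output = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"type": "final", "content": f"Weather report: {tool_output}"}

def run_agent(user_query: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):       # cap steps to avoid runaway loops
        action = stub_model(messages)
        if action["type"] == "final":
            return action["content"]
        # Dispatch the requested tool and feed its result back to the model.
        result = TOOLS[action["name"]](**action["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent exceeded step budget")
```

The step budget is the kind of guardrail that keeps an autonomous loop from burning tokens indefinitely.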
Vector Databases & Semantic Retrieval

- Hands-on experience with vector databases: Pinecone, Weaviate, Qdrant, or ChromaDB
- Understand embedding models and how to select, generate, and index them effectively
- Implement chunking, indexing, and hybrid search strategies (BM25 + dense retrieval)
- Design retrieval pipelines optimised for accuracy, latency, and cost
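Chunking is the first step of most such pipelines. Below is a minimal overlapping word-window chunker; the sizes are illustrative assumptions (real pipelines tune chunk size and overlap against the embedding model's context window and the corpus):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list:
    """Split text into overlapping word-window chunks for indexing.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap          # stride between chunk starts
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break                        # last window already covers the tail
    return chunks
```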
Infrastructure & Cloud

- Containerise services with Docker and understand the basics of Kubernetes orchestration
- Hands-on experience with at least one major cloud provider: AWS, GCP, or Azure
- Work with serverless compute (Lambda, Cloud Functions) and managed services
- Exposure to Infrastructure as Code tools such as Terraform or AWS CDK
CI/CD & Developer Experience

- Build and maintain CI/CD pipelines using GitHub Actions or GitLab CI
- Manage Docker image builds, container registries, and automated deployments
- Apply deployment strategies: blue-green deployments, canary releases, and rollback procedures
- Handle secrets and environment configuration management securely
- Write automated tests (unit, integration, end-to-end) as part of the pipeline
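A minimal GitHub Actions workflow of the kind described above might look like the following sketch. The job names, Python version, and image tag are placeholders, and a real pipeline would add registry login, caching, and deployment steps:

```yaml
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest   # unit and integration tests gate the build

  build-image:
    needs: test      # only build the image if tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:${{ github.sha }} .
```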
Nice to Have

- Experience with observability tooling (Datadog, Grafana, OpenTelemetry), especially for LLM latency and token-cost monitoring
- Familiarity with guardrails, output validation, and safety layers for AI agents
- Exposure to fine-tuning or LoRA workflows on open-source models
- Knowledge of streaming architectures: chunked responses and token streaming to the frontend
- API security best practices: OAuth 2.0, JWT, RBAC
- Open-source contributions or personal AI projects on GitHub
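Token streaming to a frontend over SSE mostly comes down to wire formatting. This generator sketches it; the `[DONE]` sentinel is an OpenAI-style convention (an assumption here, not a requirement of the SSE spec), and in production `tokens` would be the chunked LLM response:

```python
def sse_events(tokens):
    """Format a stream of model tokens as Server-Sent Events.

    Each event is a `data: <payload>` line followed by a blank line,
    per the SSE wire format. A final `[DONE]` sentinel tells the
    client to stop listening.
    """
    for token in tokens:
        yield f"data: {token}\n\n"
    yield "data: [DONE]\n\n"

# A web framework would send this generator as a `text/event-stream`
# response so the browser's EventSource API can consume it.
```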
What We Value

- Ownership mindset: you take a problem from an ambiguous brief to production without hand-holding
- Clear written communication, especially for async and distributed teams
- Opinionated about code quality and design, but pragmatic about trade-offs
- Stays current with the fast-moving AI ecosystem without getting distracted by hype
- Curiosity and a habit of building things to learn: side projects, experiments, and open source count
Experience range: 1–3 years. We care more about depth and curiosity than years on a résumé. A strong GitHub profile or portfolio with real-world projects will weigh heavily in our evaluation.
Required Skills:
Backend Engineering, scalable REST APIs, scalable gRPC APIs, Python, Go, low-level design patterns, SOLID principles, design patterns, clean architecture, relational databases, PostgreSQL, MySQL, NoSQL databases, MongoDB, DynamoDB, async programming, concurrent programming, code reviews, engineering best practices, Distributed Systems, Real-Time Communication, event-driven architectures, Apache Kafka, caching, pub/sub messaging, Redis, real-time features, WebSockets, Server-Sent Events (SSE), message queues, RabbitMQ, AWS SQS, rate limiting, throttling, backpressure strategies, AI Agents, Large Language Models, LLM APIs, OpenAI, Anthropic Claude, Google Gemini, autonomous agent workflows, ReAct, Plan-and-Execute, multi-agent hierarchies, tool/function calling, context window budgeting, memory strategies, Retrieval-Augmented Generation (RAG) pipelines, agent frameworks, LangChain, LlamaIndex, prompt engineering, Vector Databases, Pinecone, Weaviate, Qdrant, ChromaDB, embedding models, chunking, indexing, hybrid search strategies, BM25, dense retrieval, retrieval pipelines, Infrastructure, Cloud, Docker, Kubernetes, AWS, GCP, Azure, serverless compute, Lambda, Cloud Functions, managed services, Infrastructure as Code, Terraform, AWS CDK, CI/CD pipelines, GitHub Actions, GitLab CI, Docker image builds, container registries, automated deployments, deployment strategies, blue-green deployments, canary releases, rollback procedures, secrets management, environment configuration, automated tests, unit tests, integration tests, end-to-end tests, observability tooling, Datadog, Grafana, OpenTelemetry, guardrails, output validation, safety layers, fine-tuning, LoRA workflows, streaming architectures, chunked responses, token streaming, API security, OAuth 2.0, JWT, RBAC, open-source contributions, GitHub, ownership mindset, written communication, async communication, distributed teams, code quality, pragmatic trade-offs, AI ecosystem knowledge, curiosity, side projects, experiments, open source projects