Senior Backend Engineer (LLM / AI Experience)
Hybrid · Tech Team · Full-time
Barcelona, Spain
In a few words
- Build and scale backend systems powering conversational AI & digital humans
- Hands-on senior role balancing architecture and production code
- Work deeply with LLMs, RAG pipelines, and real-time systems
- Barcelona (hybrid) or remote in Europe · €50k–€65k
Why this role is exciting: You'll shape the backend foundations of a cutting-edge digital human platform, where your architectural and performance decisions directly impact real-time AI experiences used by enterprise customers.
About UNITH
At UNITH, we're transforming customer journeys with conversational AI. Listed on the ASX, we create lifelike digital humans using cutting-edge synthetic facial movement, voice engineering, and conversational design.
Our digital humans speak 60 languages with 600 voices, redefining how businesses interact with customers worldwide.
The Role
We're looking for a Senior Backend Engineer with recent LLM product experience who combines strong architectural thinking with hands-on development.
You'll work closely with the Head of Engineering, playing a key role in technical decisions while remaining deeply involved in day-to-day coding, feature delivery, and system optimization. You'll be a senior technical voice on the team: someone who designs systems and builds them.
What You'll Do
Architecture & System Operations (50%)
- Actively participate in architectural decisions with the Head of Engineering
- Collaborate with the Cloud Infrastructure Engineer on platform architecture, observability (monitoring, logging, alerting), and deployment strategies
- Design and optimize systems: performance profiling, database queries, caching, and resource usage
- Own production operations: troubleshooting, incident response, and on-call
- Provide technical guidance through code reviews, design discussions, and best practices
- Collaborate on real-time streaming architecture with the Video Synthesis Engineer
Feature Development & Implementation (50%)
- Implement backend features and services in production-grade Python and Go
- Build conversation features: state management, history, and intelligence improvements
- Implement multi-document knowledge bases using AWS Bedrock
- Integrate LLM APIs (OpenAI, AWS Bedrock) and build RAG pipelines
- Develop APIs and service integrations (gRPC, REST)
- Work on core backend services: orchestration, caching, and platform APIs
- Own testing, CI/CD pipelines, and deployment automation
Tech Stack
- Python (FastAPI) and Go (gRPC services)
- AWS (S3, EC2, Lambda, Bedrock, managed services)
- Docker, Kubernetes, RabbitMQ, Redis
- LLM APIs (OpenAI, AWS Bedrock)
What We're Looking For
Must-Have
- 5 years of backend engineering experience with distributed systems, microservices, and real-time architectures (WebSocket, gRPC, event-driven)
- Experience building and deploying complex, highly performant Python applications
- 2 years building LLM-powered products in production, including hands-on experience with LLM APIs (OpenAI, Anthropic, AWS Bedrock) and RAG systems
- Comfortable balancing architecture design with hands-on implementation
- Strong AWS experience with a focus on performance optimization, observability, and production operations
- Proven ability to optimize production systems (latency, throughput, technical debt)
- Excellent collaboration skills across backend, infrastructure, and AI/ML teams
Bonus Points
- Golang experience
- Experience with video streaming, media processing, or conversational AI platforms
- Data engineering or ML model serving infrastructure experience
What Success Looks Like
First 6 months
- Knowledge transfer completed and ownership of critical backend services established
- Multi-document knowledge base and conversation features live in production
- Active contributor to architecture discussions with measurable performance improvements
- Production systems well-monitored with improved observability
First 12 months
- Core backend services and RAG pipeline running reliably in production
- Platform-wide performance optimizations delivered (Q1–Q3 targets met)
- Backend engineers unblocked and supported through your technical guidance
- Recognized as the go-to expert for backend LLM implementation
What We Offer
Compensation & Flexibility
- Salary: €50k–€65k, depending on experience
- Hybrid work in Barcelona or remote options within Europe
Impact & Growth
- Ownership of critical backend services used daily by enterprise customers
- Hands-on technical work alongside architectural responsibility
- Deep technical challenges across LLMs, RAG pipelines, real-time systems, and scalability
- High-impact role in a small senior team (12 people)
- Close collaboration with engineering, AI research, and infrastructure teams
- Opportunity to build expertise in the fast-evolving digital humans domain
Additional Perks
- Office in the center of Barcelona
- Work from anywhere
- Lunch compensation when in the office
- Private health insurance with Alan
- Travel allowance (for team members living more than 10 km from the office)
- Flexible benefits (tax-free under Spanish legislation)
- ClassPass discount
How to Apply:
Submit:
Your CV highlighting ML production experience
A short motivation (3–5 sentences) covering:
- Backend systems you've built and maintained
- LLM features you've implemented in production
- A performance optimization project you worked on
- Your experience with RAG or knowledge bases
- Why digital humans excite you
Apply via the Easy Apply button or reach out directly; creativity is welcome.
Recruitment Process
1. Intro call with Joyce (30 min)
2. Interview with Head of Engineering & Product Manager (90 min)
3. Meeting with backend, video synthesis, and/or infrastructure engineers (90 min)
4. Exercise (short, relevant implementation task)
5. Reference check
Timeline: 2–3 weeks from application to offer
Ready to make digital humans faster, better, and more reliable?
Apply now
Required Experience:
Senior IC