Who We Are:
At Cross River, we're building the financial infrastructure that powers global innovation. With our cutting-edge suite of embedded payments, cards, and lending solutions, we enable millions of businesses and consumers to transact seamlessly and securely.
With 900 employees worldwide and an R&D center of over 160 employees in Jerusalem, we're reshaping how financial technology is developed and delivered.
The Role:
We are looking for a backend engineer who can design, build, and operate highly reliable services on AWS that enable generative-AI capabilities across our products and internal workflows.
You will create scalable APIs, data pipelines, and serverless architectures that integrate large-language-model (LLM) services such as Amazon Bedrock, OpenAI, and open-source models, enabling teams to safely and efficiently leverage generative AI.
Who You Are:
- You have experience building Retrieval-Augmented Generation (RAG) systems or knowledge-base chatbots.
- You're hands-on with vector databases such as Pinecone, Chroma, or pgvector on Postgres/Aurora.
- You hold an AWS certification (Developer, Solutions Architect, or Machine Learning Specialty).
- You have experience with observability tooling (Datadog, New Relic) and cost-optimization strategies for AI workloads.
- You have a background in microservices, domain-driven design, or event-sourcing patterns.
What You'll Actually Be Doing:
- Design and implement REST/GraphQL APIs to serve generative-AI features such as chat, summarization, and content generation.
- Build and maintain AWS-native architectures using Lambda, API Gateway, ECS/Fargate, DynamoDB, S3, and Step Functions.
- Integrate and orchestrate LLM services (Amazon Bedrock, OpenAI, self-hosted models) and vector databases (Amazon Aurora pgvector, Pinecone, Chroma) to power Retrieval-Augmented Generation (RAG) pipelines.
- Create secure, observable, and cost-efficient infrastructure as code (CDK/Terraform) and automate CI/CD with GitHub Actions or AWS CodePipeline.
- Implement monitoring, tracing, and logging (CloudWatch, X-Ray, OpenTelemetry) to track latency, cost, and output quality of AI endpoints.
- Collaborate with ML engineers, product managers, and frontend teams in agile sprints; participate in design reviews and knowledge-sharing sessions.
- Establish best practices for prompt engineering, model evaluation, and data governance to ensure responsible AI usage.
Why You'll Love Working Here:
- Hybrid model: 3 days a week in the office - a must
- 1,000 net monthly wellness benefit, from therapy to Pilates to your kids' art class
- Full Keren Hishtalmut, private health & dental insurance
- Donation matching, volunteering days, team outings, and mentorship programs
- A mission-driven culture that values ownership, trust, and meaningful impact
Next Step:
Hit Apply!
Requirements:
What You Bring to the Table:
- Availability to work some US hours
- Proficiency in Hebrew and English, both written and verbal, sufficient for achieving consensus and success in a remote and largely asynchronous work environment - a must
- 4 years of professional experience building production services, with hands-on AWS experience including Lambda, API Gateway, DynamoDB, and at least one container service (ECS, EKS, or Fargate).
- Experience integrating third-party or cloud-native LLM services (e.g., Amazon Bedrock, OpenAI API) into production systems.
- Strong understanding of RESTful design, GraphQL fundamentals, and event-driven architectures (SNS/SQS, EventBridge).
- Proficiency with infrastructure-as-code (AWS CDK, Terraform, or CloudFormation) and CI/CD pipelines.
- Familiarity with secure coding, authentication/authorization patterns (Cognito, OAuth), and data privacy best practices for AI workloads.
Technical Environment:
- Languages: TypeScript, JavaScript, SQL
- Frameworks & Libraries: Fastify, Apollo Server, LangChain.js, AWS SDK v3
- Datastores: DynamoDB, Aurora (Postgres with pgvector), Redis, S3
- Infra & DevOps: AWS Lambda, API Gateway, ECS/Fargate, Step Functions, CDK, Terraform, Docker, GitHub Actions
- AI Stack: Amazon Bedrock, OpenAI API, Hugging Face Inference Endpoints, Pinecone, Chroma