We are seeking a skilled and innovative AI Principal Java Software Engineer experienced in working with Generative AI (GenAI) models such as Large Language Models (LLMs) and integrating these solutions into business applications. This role combines software engineering responsibilities with deep knowledge of LLM APIs and cloud infrastructure, focused on building modern AI-enhanced business applications.
Key Responsibilities
- Drive the technical architecture across the domain with a focus on modernization, scalability, and AI integration.
- Lead the design and implementation of microservices and cloud-native systems.
- Guide the transition from legacy systems to modern distributed systems.
- Collaborate with senior stakeholders (EMs, Staff and Principal Engineers, Directors) to align on technology direction.
- Champion engineering excellence, fostering a culture of autonomy, accountability, and quality.
- Provide mentorship and leadership across engineering teams.
Model Integration & API Development
- Integrate LLMs and other GenAI models into web applications through efficient API design and implementation.
- Build and optimize API endpoints enabling seamless real-time communication between front-end applications and back-end AI services.
- Design and develop secure, scalable, and high-performing Java-based microservices for AI model deployment.
Back-End Development & AI Pipelines
- Develop robust back-end systems in Java to support the deployment, scalability, and ongoing maintenance of GenAI models.
- Build and maintain data pipelines, including preprocessing input data and post-processing model outputs for application use.
- Implement best practices for handling sensitive data and maintaining high model performance.
Infrastructure & Deployment
- Use Kubernetes and Docker for containerization and orchestration to ensure scalable deployment of AI applications.
- Implement CI/CD pipelines for automated testing and delivery of code changes.
- Maintain scalable and secure cloud infrastructure using platforms such as Google Cloud Platform or Azure for model training, storage, and deployment.
LLM and GenAI Ecosystem Expertise
- Utilize vector databases (e.g. Pinecone, Weaviate, Faiss) for embedding management and similarity search.
- Work with frameworks supporting model development and deployment, including Hugging Face, LangChain, and OpenAI ecosystem tools.
- Optimize and fine-tune LLMs based on specific application needs.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (minimum).
- 7 years of relevant experience, ideally with a focus on AI model integration.
- Proficiency in Java for back-end development.
- Strong knowledge of GenAI/LLMs, including model selection, tuning, and embedding strategies.
- Experience developing APIs enabling communication between front-end applications and AI systems.
- Working knowledge of Docker and Kubernetes.
- Familiarity with cloud platforms (AWS, GCP, Azure) for scalable AI deployment.
- Experience with vector databases and their integration with LLM-driven applications.
- Familiarity with SQL and NoSQL databases, as well as caching solutions (e.g. Redis).
- Experience with CI/CD pipelines, Git, and DevOps practices.
- Excellent command of English and Polish.
Preferred Qualifications
- Knowledge of streaming architectures for real-time data processing (e.g. Apache Kafka).
- Familiarity with serverless architectures (e.g. AWS Lambda) for scalable AI features.
- Prior experience with ML frameworks such as TensorFlow, PyTorch, or ONNX.
- Strong understanding of data privacy and security in AI applications.
Soft Skills
- Strong problem-solving skills, working effectively both independently and as part of a team.
- Excellent communication skills with the ability to translate technical requirements into actionable development tasks.
- Proactive approach to staying current with evolving AI technologies and frameworks.
Additional Information:
Why Join InPost
- The option to work from the office or 100% remotely.
- Opportunity to work in a diverse, international, and cross-functional environment alongside leading experts.
- A fulfilling career with a range of employee benefits and investment in training opportunities for your development.
- Involvement in technology monitoring and technology choices.
- Your impact will be visible instantly, and you will be making a difference in our users' lives.
- Participation in building a new Centre of Excellence at InPost.
Remote Work:
Yes
Employment Type:
Full-time