We're looking for a technically strong Product Manager to join our PhariaAI Inference Team and help shape the future of our advanced inference platform. This role is ideal for someone with product management experience, a deep understanding of AI infrastructure, and a strong grasp of the performance and economics of large language model deployment.
Shape the product strategy and roadmap for our inference platform in close collaboration with engineering and research, aligning OKRs with business goals and user needs
Provide clarity on goals and constraints, enabling the team to explore and deliver the best solutions
Work closely with engineering to prioritize and deliver high-impact features, ensuring a fast, reliable, and scalable inference stack
Define clear, actionable requirements and success criteria that balance technical feasibility with user and business impact
Continuously learn from real-world usage, incorporating performance metrics, user feedback, and experimentation results into iterative improvements
Stay informed about the latest in inference technologies, optimization techniques, and the broader LLM landscape to inform product direction
Partner with customer-facing teams to articulate the value and differentiation of our inference capabilities in a fast-moving, competitive environment
Experience in product management for software products, ideally with exposure to developer tools, AI/ML systems, or technical platforms
Familiarity with modern product discovery and agile delivery practices
Strong technical curiosity, fluency, and willingness to learn about AI inference technologies
Strong communication skills, especially when distilling technical complexity for non-technical audiences
Strong analytical skills to evaluate market trends and competitive offerings
A customer-obsessed mindset and the ability to deeply understand user needs, even when those users are internal AI teams
Ability to thrive in a fast-paced environment and manage multiple priorities
Basic understanding of inference optimization techniques such as quantization, LoRA adapters, function calling, structured outputs, and batch processing
Familiarity with the economics of LLM inference, including GPU utilization, token economics, and performance trade-offs
Exposure to inference engines such as vLLM, SGLang, TGI, or similar technologies
Experience with retrieval-augmented generation (RAG) pipelines, embeddings, and multimodal systems
Understanding of the challenges in long-context handling and advanced sampling methods
Access to a variety of fitness & wellness offerings via Wellhub
Mental health support through nilo.health
Substantially subsidized company pension plan for your future security
Subsidized Germanywide transportation ticket
Budget for additional technical equipment
Regular team events to stay connected
Flexible working hours for better work-life balance
Required Experience:
Senior IC
Full-Time