We are looking for someone who is ready for the next step in their career and is excited by the idea of solving problems and designing best-in-class solutions. However, they also need to be aware of the practicalities of making a difference in the real world: whilst we love innovative, advanced solutions, we also believe that sometimes a simple solution can have the most impact.
Our AI Engineer is someone who feels most comfortable solving problems, answering questions and proposing solutions. We place a high value on the ability to communicate and translate complex analytical thinking into non-technical, commercially oriented concepts, and experience working on difficult projects and/or with demanding stakeholders is always appreciated.
What can you expect from the role?
Own tasks end-to-end and lead on project delivery and project governance
Manage AI Engineer(s)
Prepare and present data-driven solutions to stakeholders
Design, develop, deploy and maintain AI solutions
Use a variety of AI engineering tools and methods to deliver
Contribute to solution design
Support the development of the AI engineering team within Blend
Maintain in-depth knowledge of AI ecosystems and trends
Mentor junior colleagues
Contribute to proposal submissions and business development initiatives under the direction of the Leadership team
Qualifications:
Proven ability to design, develop, test, deploy, maintain and improve robust, scalable and reliable software systems following best practices.
Expertise in the Python programming language for both software development and AI/ML tasks.
Strong analytical and problem-solving skills, with the ability to debug complex software, infrastructure and AI integration issues.
Proficient in using version control systems (especially Git) and ML/LLMOps model versioning protocols.
Ability to analyse complex or ambiguous AI problems, break them down into smaller, manageable and independently evaluatable tasks, and think conceptually to design solutions in the rapidly evolving field of generative AI.
Experience working within a standard software development lifecycle (e.g. Agile/Scrum).
Skilled in designing and utilising scalable systems using cloud services (AWS, Azure, GCP), including compute, storage and ML/AI services (Azure preferred).
Experience designing and building scalable and reliable infrastructure to support AI inference workloads, including implementing APIs, microservices and orchestration layers.
Experience designing, building or working with event-driven architectures and relevant technologies (e.g. Kafka, RabbitMQ, cloud event services) for asynchronous processing and system integration.
Experience with containerisation (e.g. Docker) and orchestration tools (e.g. Kubernetes, Airflow, Kubeflow, Databricks Jobs).
Experience implementing CI/CD pipelines and, optionally, using IaC principles/tools for deploying and managing infrastructure and ML/LLM models.
Experience developing and deploying LLM-powered features into production systems, translating experimental outputs into robust services with clear APIs.
Familiarity with transformer model architectures and a practical understanding of LLM specifics such as context handling.
Experience designing, implementing and optimising prompt strategies (e.g. chaining, templates, dynamic inputs); practical understanding of output post-processing.
Experience integrating with third-party LLM providers: managing API usage, rate limits and token efficiency, and applying best practices for versioning, retries and failover.
Experience coordinating multi-step AI workflows, potentially involving multiple models or services, and optimising for latency and cost (sequential vs. parallel execution).
Must have hands-on experience implementing and automating MLOps/LLMOps practices, including model tracking, versioning, deployment, monitoring (latency, cost, throughput, reliability), logging and retraining workflows.
Must have worked extensively with MLOps/experiment tracking and operational tools (e.g. MLflow, Weights & Biases) and have a demonstrable track record.
Proven ability to monitor, evaluate and optimise AI/LLM solutions for performance (latency, throughput, reliability), accuracy and cost in production environments.
Additional Information:
Experience specifically with the Databricks MLOps platform.
In-depth experience fine-tuning classical LLM models.
Experience ensuring security and observability for AI services.
Contribution to relevant open-source projects.
Proven record of building agentic GenAI modules or systems.
Remote Work:
No
Employment Type:
Full-time