We are looking for someone who is ready for the next step in their career and is excited by the idea of solving problems and designing best-in-class solutions. However, they also need to be aware of the practicalities of making a difference in the real world: whilst we love innovative, advanced solutions, we also believe that sometimes a simple solution can have the most impact.
Our AI Engineer is someone who feels most comfortable solving problems, answering questions, and proposing solutions. We place a high value on the ability to communicate and translate complex analytical thinking into non-technical, commercially oriented concepts, and experience working on difficult projects and/or with demanding stakeholders is always appreciated.
What can you expect from the role?
Contribute to the design, development, deployment, and maintenance of AI solutions
Use a variety of AI Engineering tools and methods to deliver
Own parts of projects end-to-end
Contribute to solution design and proposal submissions
Support the development of the AI engineering team within Blend
Maintain in-depth knowledge of the AI ecosystem and trends
Mentor junior colleagues
Qualifications:
Contribute to the design, development, testing, deployment, maintenance, and improvement of robust, scalable, and reliable software systems, adhering to best practices.
Apply Python programming skills for both software development and AI/ML tasks.
Utilize analytical and problem-solving skills to debug complex software infrastructure and AI integration issues.
Proficiently use version control systems (especially Git) and ML/LLMOps model versioning protocols.
Assist in analysing complex or ambiguous AI problems, breaking them down into manageable tasks, and contributing to conceptual solution design within the rapidly evolving field of generative AI.
Work effectively within a standard software development lifecycle (e.g. Agile/Scrum).
Contribute to the design and utilization of scalable systems using cloud services (AWS, Azure, GCP), including compute, storage, and ML/AI services. (Preferred: Azure)
Participate in designing and building scalable and reliable infrastructure to support AI inference workloads, including implementing APIs, microservices, and orchestration layers.
Contribute to designing, building, or working with event-driven architectures and relevant technologies (e.g. Kafka, RabbitMQ, cloud event services) for asynchronous processing and system integration.
Experience with containerization (e.g. Docker) and orchestration tools (e.g. Kubernetes, Airflow, Kubeflow, Databricks Jobs, etc.).
Assist in implementing CI/CD pipelines and optionally using IaC principles/tools for deploying and managing infrastructure and ML/LLM models.
Contribute to developing and deploying LLM-powered features into production systems, translating experimental outputs into robust services with clear APIs.
Demonstrate familiarity with transformer model architectures and a practical understanding of LLM specifics like context handling.
Assist in designing, implementing, and optimising prompt strategies (e.g. chaining, templates, dynamic inputs); demonstrate a practical understanding of output post-processing.
Experience integrating with third-party LLM providers, managing API usage, rate limits, and token efficiency, and applying best practices for versioning, retries, and failover.
Contribute to coordinating multi-step AI workflows, potentially involving multiple models or services, and optimising for latency and cost (sequential vs. parallel execution).
Assist in monitoring, evaluating, and optimising AI/LLM solutions for performance (latency, throughput, reliability), accuracy, and cost in production environments.
Additional Information:
Experience specifically with the Databricks MLOps platform.
Familiarity with fine-tuning classical LLMs.
Experience ensuring security and observability for AI services.
Contribution to relevant open-source projects.
Familiarity with building agentic GenAI modules or systems.
Hands-on experience implementing and automating MLOps/LLMOps practices, including model tracking, versioning, deployment, monitoring (latency, cost, throughput, reliability), logging, and retraining workflows.
Experience working with MLOps/experiment tracking and operational tools (e.g. MLflow Weights & Biases).
Remote Work:
No
Employment Type:
Full-time