Position: MLOps Engineer
Company: WillWare Technologies
Location: Bangalore
Work Mode: WFO

Required Qualifications:
Orchestration: Deep experience with Valohai (preferred), Kubeflow, Airflow, or AWS SageMaker Pipelines.
Model Lifecycle: Expert-level knowledge of MLflow for tracking experiments and managing model registries.
Cloud Proficiency: Hands-on experience with both Azure and AWS ecosystems.
Coding: Strong proficiency in Python and shell scripting.
Containers: Docker and container orchestration.

Key Responsibilities:
MLOps as Code & Orchestration
- Design and implement MLOps as Code methodologies: pipelines, infrastructure, and configurations must be versioned, reproducible, and automated (GitOps).
- Manage and optimize deep learning orchestration platforms (specifically Valohai, or similar tools like Kubeflow/SageMaker Pipelines) to automate training, fine-tuning, and deployment workflows.
- Standardize execution environments using Docker and ensure reproducibility across local development and production environments.
Central Registry & Governance
- Own the Central Model Registry strategy using MLflow. Ensure strict versioning, lineage tracking, and stage transitions (Staging to Prod) for all models.
- Enforce governance policies for model artifacts, ensuring security and compliance across the model lifecycle.
Multi-Cloud Architecture (Azure & AWS)
- Operate in a hybrid cloud environment. You will leverage Azure (AI Foundry, OpenAI Service) and AWS (SageMaker, Bedrock, EC2/GPU instances) based on workload requirements.
- Design seamless integrations between cloud storage (S3/Blob), compute, and the orchestration layer.
- Create custom execution environments for specialized hardware (NVIDIA GPUs, TPUs).
CI/CD & Automation
- Build robust CI/CD pipelines (GitHub Actions/Azure DevOps) that trigger automatic training or deployment based on code or data changes.
- Automate the handoff of models from Data Scientists to production environments.