Key Responsibilities
- Design, develop, and optimize classical machine learning models (e.g., regression, classification, clustering, time-series forecasting, anomaly detection)
- Build and deploy deep learning models using frameworks such as TensorFlow or PyTorch for structured, unstructured, and multimodal data
- Fine-tune and evaluate language models (LLMs/SLMs) for tasks such as text classification, summarization, information extraction, and domain-specific reasoning
- Implement and maintain MLOps and LLMOps pipelines, including model training, versioning, CI/CD, deployment, rollback, and lifecycle management
- Develop model monitoring and observability solutions covering performance, drift detection, bias, latency, and cost metrics
- Apply AIOps concepts to automate detection, root cause analysis, and predictive insights using operational and telemetry data
- Collaborate with API Manager and platform teams to expose ML/AI capabilities as secure, scalable, and well-documented APIs
- Participate in data preparation and feature engineering, working closely with data engineering teams and feature stores
- Perform rigorous model validation, experimentation, and benchmarking, ensuring reliability and reproducibility
- Contribute to technical design documents, architecture reviews, and best-practice guidelines
- Mentor junior engineers/interns and contribute to raising overall data science and engineering standards within the team
- Stay up to date with advancements in machine learning, deep learning, and generative AI, and assess their applicability to business use cases
Person Specifications
- Bachelor's degree in IT/Computer Science, Data Science, Engineering, Mathematics, or a related field
- 0-3 years of hands-on experience in data science or machine learning engineering roles
- Strong experience with Python and common ML/DL libraries (scikit-learn, PyTorch, TensorFlow, NumPy, pandas)
- Proven experience developing and deploying production-grade ML models
- Hands-on experience with MLOps platforms and tools (e.g., MLflow, Kubeflow, SageMaker, Vertex AI, or equivalent)
- Practical exposure to LLMOps, including prompt engineering, fine-tuning, evaluation, and model serving
- Experience working with APIs, microservices, and integrating ML models into enterprise applications
- Solid understanding of data pipelines, feature engineering, and model lifecycle management
- Experience with cloud platforms (AWS, Azure, or GCP) and containerization (Docker, Kubernetes)
- Experience applying AIOps techniques in monitoring, observability, or IT/network operations contexts
- Knowledge of time-series analysis, anomaly detection, or large-scale telemetry data
- Familiarity with vector databases, RAG pipelines, and embedding models
- Exposure to API management platforms and security concepts (authentication, rate limiting, governance)
- Experience with CI/CD pipelines for ML and AI systems
- Prior experience in telecommunications, fintech, or large-scale enterprise environments
- Strong analytical and problem-solving skills with a pragmatic, engineering-first mindset
- Ability to communicate complex technical concepts clearly to both technical and non-technical stakeholders
- Comfortable working in cross-functional agile teams
- Self-driven, accountable, and capable of owning solutions end-to-end
- Passion for continuous learning and applying emerging AI technologies responsibly