This position will be contracted through Bosch's external vendor under a one-year agreement.
The Senior AI Engineer for this global workflow product plays a specialized role at the intersection of reverse engineering and forward-looking product development. Unlike standard AI roles, this position focuses heavily on unlocking decades of tribal knowledge and operational data hidden in legacy silos to power the next generation of AI-driven automation.
You will lead the technical strategy for extracting, mining, and transforming massive volumes of legacy data into high-quality training sets for AI models. Your mission is to evolve our global workflow product from a reactive IT tool into a proactive, AI-driven platform capable of predictive maintenance, anomaly detection, and automated decision-making.
Key Responsibilities
- Legacy Data Mining: Architect and implement advanced data mining techniques to extract valuable operational insights from structured and unstructured legacy systems (e.g., legacy SQL procedures, mainframe logs, siloed documentation).
- AI Product Evolution: Design and deploy production-ready ML/deep learning models that drive core product features such as intelligent workflow routing, natural language interfaces (LLMs), and predictive analytics.
- Feature Engineering for Workflows: Lead the discovery and creation of features from legacy data that specifically improve IT workflow efficiency, such as identifying bottleneck patterns or predicting system failures.
- Global Model Management: Oversee the deployment and lifecycle (MLOps) of models across global regions, ensuring low-latency performance and compliance with regional data residency and audit requirements (e.g., GDPR, SOC 2).
- Reverse Engineering via GenAI: Utilize generative AI and LLMs to automate the modernization of legacy codebases, turning outdated scripts into documented, cloud-native services.
- Cross-Functional Leadership: Mentor junior engineers and collaborate with Product Managers to translate legacy business logic into modern, AI-powered user experiences.
Qualifications:
Technical Skills & Experience
- Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related quantitative field.
- Experience: 8 years in software or data engineering, with at least 4 years specializing in AI/ML deployment in a global production environment.
- AI/ML Mastery: Expert proficiency in Python and frameworks such as PyTorch, TensorFlow, or Scikit-learn. Strong experience with LLM orchestration (e.g., LangChain, LangGraph).
- Data Mining & Engineering: Deep experience with ETL/ELT pipelines, NoSQL/graph databases, and tools like Apache Spark or Flink for large-scale data processing.
- Infrastructure: Hands-on experience with MLOps (MLflow, Airflow), containerization (Docker, Kubernetes), and global cloud platforms (AWS, Azure, or GCP).
Specialized Hybrid Skills
- Legacy System Knowledge: Ability to navigate and extract data from older relational databases or siloed enterprise architectures.
- Analytical Mindset: Proven track record of identifying high-value AI use cases from messy, real-world "dark data."
- Strategic Communication: Ability to justify the ROI of legacy data mining projects to senior leadership.
Additional Information:
Further details regarding benefits will be shared during the interview process.
Remote Work:
No
Employment Type:
Full-time