Salary Not Disclosed
1 Vacancy
Build and train deep learning models (e.g., YOLO, RT-DETR) for object detection and classification (see the first sketch after this list)
Fuse data from RGB, thermal, LiDAR/ToF, IMU, and encoder sensors for real-time perception (see the timestamp-alignment sketch after this list)
Implement advanced image processing (deconvolution, motion isolation, low-SNR detection)
Work closely with hardware teams to integrate and debug sensors over GigE Vision, USB3, SPI, and I²C
Develop embedded firmware (C/C++ or Rust) for microcontrollers and FPGAs in RTOS environments
Create scalable data pipelines for ingestion, labeling, and training
Optimize inference for deployment on edge platforms (GPU, FPGA)
Build internal tools for diagnostics, performance monitoring, and auto-retraining
Document your work and mentor other engineers on vision and embedded best practices
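For context on the first responsibility above, here is a minimal, hypothetical sketch of fine-tuning a YOLO-family detector and running single-image inference with the ultralytics package; the checkpoint, dataset config, and image paths are illustrative placeholders, not details from this posting.

```python
# Hypothetical sketch: fine-tune a small YOLO model and run inference
# with the ultralytics package. The weights file, dataset YAML, and
# image path below are placeholder assumptions.
from ultralytics import YOLO

def train_and_detect(data_cfg: str = "dataset.yaml", image: str = "frame.jpg"):
    model = YOLO("yolov8n.pt")                          # pretrained checkpoint as a starting point
    model.train(data=data_cfg, epochs=50, imgsz=640)    # fine-tune on custom classes
    results = model.predict(image, conf=0.25)           # single-image inference
    for r in results:
        for box in r.boxes:                             # each detected object
            print(int(box.cls), float(box.conf), box.xyxy.tolist())

if __name__ == "__main__":
    train_and_detect()
```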
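For the sensor-fusion responsibility, the sketch below shows nearest-timestamp association between two sensor streams, a common first step before fusing modalities; the data layout and skew tolerance are illustrative assumptions only.

```python
# Hypothetical sketch: pair RGB frame timestamps with the nearest LiDAR/IMU
# sample timestamps before fusion. Tolerance and data shapes are assumptions.
import bisect
from typing import List, Tuple

def pair_by_timestamp(frame_ts: List[float], lidar_ts: List[float],
                      max_skew: float = 0.02) -> List[Tuple[int, int]]:
    """Return (frame_idx, lidar_idx) pairs whose timestamps differ by <= max_skew seconds."""
    pairs = []
    for i, t in enumerate(frame_ts):
        j = bisect.bisect_left(lidar_ts, t)                       # closest candidates around t
        candidates = [k for k in (j - 1, j) if 0 <= k < len(lidar_ts)]
        if not candidates:
            continue
        best = min(candidates, key=lambda k: abs(lidar_ts[k] - t))
        if abs(lidar_ts[best] - t) <= max_skew:
            pairs.append((i, best))
    return pairs

print(pair_by_timestamp([0.00, 0.10, 0.20], [0.01, 0.12, 0.30]))  # [(0, 0), (1, 1)]
```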
Qualifications:
3-6 years of experience in computer vision, robotics perception, or sensor fusion
Strong programming skills in C++ and Python
Hands-on with TensorFlow, PyTorch, and real-time inference tools like TensorRT or OpenVINO
Experience with Docker and CI/CD in ML workflows
Background in multi-sensor calibration and synchronization
Degree in CS, EE, Robotics, or a similar field (PhD a plus)
Strong communication and cross-functional collaboration skills
Obsessive builder mindset: you thrive on solving hard technical challenges
Experience with edge-AI optimization (quantization, pruning); see the quantization sketch after this list
Familiarity with embedded GPU platforms or FPGAs
Background in safety-critical or defense systems
Knowledge of secure coding and cybersecurity best practices
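As a rough illustration of the edge-AI optimization item above, here is a minimal post-training dynamic quantization sketch in PyTorch; the toy model and int8 target are assumptions for illustration, not the team's actual pipeline.

```python
# Hypothetical sketch: post-training dynamic quantization with PyTorch,
# one of the simpler quantization techniques the posting alludes to.
# The model here is a toy stand-in, not an actual production network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Convert Linear layers to int8 weights; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10]) -- same interface, smaller footprint
```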
Remote Work:
No
Employment Type:
Full-time