Position: VLM Data Science Expert
Experience: 10 Years
Location: San Jose, CA or Waukesha, WI (Onsite)
Duration: 6 Months
Responsibilities:
- Design, train, and deploy efficient Vision-Language Models (e.g., VILA, Isaac Sim) for multimodal applications.
- The outstanding concern is that we don't yet have a candidate who has successfully implemented a video-based VLM in an autonomous use case (for example, industrial robotics or car navigation). There must be developers with this experience, since everyone working on autonomous robotics and self-driving cars is tackling this problem.
- Explore cost-effective methods such as knowledge distillation, modal-adaptive pruning, and LoRA fine-tuning to optimize training and inference (see the LoRA sketch after this list).
- Implement scalable pipelines for training/testing VLMs on cloud platforms (AWS SageMaker, Azure ML).
- Multimodal AI Solutions:
- Develop solutions that integrate vision and language capabilities for applications like image-text matching, visual question answering (VQA), and document data extraction.
- Leverage interleaved image-text datasets and advanced techniques (e.g., cross-attention layers) to enhance model performance.
- Healthcare Domain Expertise:
- Apply VLMs to healthcare-specific use cases such as medical imaging analysis, position detection, motion detection, and measurements.
- Ensure compliance with healthcare standards while handling sensitive data.
- Efficiency Optimization:
- Evaluate trade-offs between model size, performance, and cost using techniques like elastic visual encoders or lightweight architectures.
- Benchmark different VLMs (e.g., GPT-4V, Claude 3.5) for accuracy, speed, and cost-effectiveness on specific tasks (see the benchmarking sketch after this list).
- Collaboration & Leadership:
- Collaborate with cross-functional teams including engineers and domain experts to define project requirements.
- Mentor junior team members and provide technical leadership on complex projects.
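
As a reference for the LoRA fine-tuning item above, here is a minimal sketch of attaching LoRA adapters to an open vision-language model with Hugging Face PEFT. The checkpoint name, target modules, and hyperparameters are illustrative assumptions, not requirements of the role.

```python
# Hypothetical sketch: LoRA adapters on a vision-language model via Hugging Face PEFT.
# The model id and target modules below are placeholders chosen for illustration.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.float16)

# LoRA freezes the base weights and trains small low-rank adapter matrices,
# which is what makes fine-tuning cost-effective on modest hardware.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update
    lora_alpha=32,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Because only the adapter weights are trained, fine-tuning cost scales with the chosen rank rather than the full parameter count of the base model.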
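For the benchmarking item, the sketch below shows one way to compare candidate VLMs on the same evaluation set for accuracy and latency. It assumes each model is wrapped in a callable taking an image path and a question and returning a text answer; exact-match accuracy is used only for simplicity and would be replaced by task-appropriate metrics in practice.

```python
# Hypothetical benchmarking harness: runs each candidate VLM over the same items
# and records accuracy and mean latency. The model callables themselves are assumed.
import time
from statistics import mean

def benchmark(models, eval_items):
    """models: dict mapping display name -> callable(image_path, question) -> str.
    eval_items: list of dicts with 'image', 'question', and 'answer' keys."""
    results = {}
    for name, predict in models.items():
        latencies, correct = [], 0
        for item in eval_items:
            start = time.perf_counter()
            prediction = predict(item["image"], item["question"])
            latencies.append(time.perf_counter() - start)
            correct += prediction.strip().lower() == item["answer"].strip().lower()
        results[name] = {
            "accuracy": correct / len(eval_items),
            "mean_latency_s": mean(latencies),
        }
    return results

# Example usage (model callables and eval_items are assumed):
# results = benchmark({"model_a": model_a_predict, "model_b": model_b_predict}, eval_items)
```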
Educational Qualifications:
- Education: Master's or Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.