Description
The Mayo Clinic Platform AI team is seeking an experienced Senior Platform Engineer to join our innovative efforts in developing and implementing cutting-edge generative AI solutions. In this role, you will lead the design and development of state-of-the-art generative AI models, establish comprehensive safety guardrails for responsible AI deployment, and drive the creation of autonomous AI agents. You'll collaborate closely with a diverse team of data scientists, product managers, and engineers as we shape the future of AI applications while ensuring our systems remain safe, ethical, and scalable.
Key Responsibilities
- Generative AI Model Development: Architect, design, and implement advanced generative AI models and architectures that support varied departmental applications and cutting-edge research initiatives.
- GenAI Safety & Ethics: Develop comprehensive safety guardrails and ethical guidelines to ensure responsible AI development and deployment, incorporating best practices in AI alignment and security.
- Cross-Functional Collaboration: Partner with cross-functional teams to integrate AI solutions seamlessly within the Mayo Clinic Platform, translating business needs into robust technical implementations.
- Autonomous AI Agents: Lead the creation and optimization of intelligent AI agents designed for autonomous decision-making, leveraging techniques in prompt engineering and model fine-tuning.
- System Enhancement: Evaluate and enhance existing generative AI deployments across departmental applications, continually iterating to improve performance, safety, and scalability.
- Performance Optimization: Identify bottlenecks in AI/ML pipelines and propose solutions to improve system performance, efficiency, and scalability.
- Monitoring & Troubleshooting: Develop and maintain observability tools, including logging, monitoring, and alerting, to diagnose and resolve production issues.
- Documentation: Create and maintain technical documentation, including architectural diagrams, API specifications, and onboarding guides for internal and external stakeholders.
- Thought Leadership: Stay updated with the latest trends and advancements in federated learning, distributed computing, and machine learning frameworks to continually enhance the platform.
Qualifications
- Bachelor's degree in a relevant information technology field, or a minimum of 7 years of direct full-stack engineering with increasing complexity.
- 3-5 years working in diverse environments utilizing Agile principles of software development.
- Proven experience as a Full Stack Engineer with a strong emphasis on healthcare interoperability.
- Proficiency in Java and/or .NET for backend development, including API/service design and implementation.
- Expertise in JavaScript, with a focus on React and React frameworks (e.g., Next.js), for building responsive and intuitive frontend applications.
- Hands-on experience with Google Cloud Platform (GCP) (or equivalent) services and cloud-native application development.
- Familiarity with healthcare interoperability standards such as HL7, FHIR, and OMOP.
- Strong problem-solving skills and the ability to work in a collaborative, cross-functional team environment.
- Experience with DevOps practices, CI/CD pipelines, and containerization technologies (e.g., Docker, Kubernetes) is a plus.
- Knowledge of healthcare data security and compliance requirements, including HIPAA, is highly desirable.
- Excellent communication skills and the ability to convey complex technical concepts to non-technical stakeholders.
- A proactive and self-driven mindset, with a passion for staying up-to-date with emerging technologies and industry best practices.
- Experience with solutions integration/delivery in a healthcare setting.
Preferred Experience
- 2 years in a senior or lead capacity, ideally in a distributed systems, AI/ML, or large-scale data environment.
- Programming Skills: Strong proficiency in languages such as Python, Java, C, or Go, with demonstrated experience building production-grade services.
- Machine Learning Frameworks: Familiarity with common ML libraries and frameworks (e.g., TensorFlow, PyTorch), especially those supporting federated learning (e.g., TensorFlow Federated).
- Distributed Systems: Solid understanding of distributed computing principles, including concurrency, data partitioning, and scaling strategies.
- Cloud & DevOps: Hands-on experience with cloud platforms (AWS, Azure, or GCP) and container orchestration (Docker, Kubernetes). Familiarity with CI/CD pipelines and infrastructure-as-code tools.
- Security & Compliance: Working knowledge of data privacy and protection standards (GDPR, HIPAA, or similar), encryption, and secure data handling practices.
- LLM Fine-Tuning: Experience in fine-tuning large language models and advanced prompt engineering techniques.
- AI Alignment & Safety: Hands-on background in AI alignment strategies and the development of robust safety mechanisms.
- RAG Expertise: Familiarity with Retrieval-Augmented Generation (RAG) techniques to improve model responsiveness and efficiency.
- Production Deployment: Proven experience with AI model deployment and scaling in production environments, including container orchestration and cloud-based solutions.
- Multi-Modal AI: Understanding and experience working with multi-modal AI systems that integrate text, image, and other data types.
Required Experience:
Senior IC