We are seeking a highly senior, hands-on AI Data Infrastructure Engineer (potentially at a Lead level) to architect and own our institutional AI foundation. This is a specialized role at the intersection of Data Engineering and Software Engineering.
Unlike a traditional AI Developer, your focus will be on the infrastructure, tooling, and ecosystem that power AI, rather than on building individual end-user solutions. You will modernize our data environment to make it AI-ready and ensure our platform is robust, scalable, and cost-optimized to support the next generation of online education and healthcare simulations.
About the Client
Our customer is a leader in online education, dedicated to empowering professionals through innovative simulation and learning platforms. We are a certified great workplace, consistently ranked by Fortune as a top employer for Millennials and Women.
You will join the Data, AI & Automation (DAIA) team, a tight-knit, remote-first group of passionate experts driven by curiosity. We work in a fast-paced environment and value human-centric AI that solves real-world problems in the healthcare and allied health fields.
Responsibilities
- Platform Ownership: Set up, maintain, and own the core AI platform infrastructure, with a primary focus on Snowflake Cortex and its surrounding ecosystem.
- Infrastructure as Code & Tooling: Configure and maintain MCP (Model Context Protocol) servers and manage the integration of open-source packages (e.g., Goose).
- Cost & Performance Optimization: Actively manage Snowflake credits, token usage, and overall system performance to ensure a cost-effective and resilient environment.
- Data Architecture: Modernize and refine the high-level platform architecture, ensuring external datasets are seamlessly integrated and AI-ready.
- Observability: Implement and maintain high standards for system monitoring, observability, and reliability.
- Technical Leadership: Act as a self-starting, independent lead who can translate high-level infrastructure needs into functional, production-grade code.
Must-Have:
- Expert-level Snowflake: Extensive hands-on experience with Cortex, including setup, management, and cost optimization.
- Snowflake Suite: Deep, SME-level expertise in Snowpark and Streamlit.
- Programming: Advanced proficiency in Python and a strong background in Software Engineering.
- AI Infrastructure: Proven experience in MCP (Model Context Protocol) server development and configuration.
- Cloud & Data:
  - Deep understanding of data modeling, data architecture, and AWS environments (specifically AWS Bedrock).
  - Proficiency in core AWS infrastructure: S3 (data lakes), IAM (permissions/security), Lambda (serverless compute), and VPC/networking (secure cloud connectivity).
- Seniority: A minimum of 5 years of experience in data/infrastructure engineering, demonstrating the ability to work independently and interface directly with internal technical stakeholders.
Nice-to-Have:
- GenAI/RAG: Practical experience deploying Generative AI and Retrieval-Augmented Generation (RAG) systems in a production setting.
- Machine Learning: A background in ML engineering or MLOps (e.g., experience with AWS SageMaker).
- Open Source: Experience contributing to or managing open-source AI tooling such as Goose.
We offer*:
- Flexible working format: remote, office-based, or mixed
- A competitive salary and a good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team-building activities
- Other location-specific benefits
*not applicable for freelancers
Required Experience:
Senior IC