Data Engineer and Data Scientist
Hybrid role: 2 to 4 days per week in downtown Toronto.
Combination role of AI Data Scientist and Data Engineer.
The JD below is a mix of Data Engineer and Data Scientist must-haves.
What will you do
- Design, build, and scale GenAI-driven systems that power research digitization, banking workflows, global markets, and monetization pipelines
- Develop and productionize APIs and intelligent services backed by large language models (LLMs) and semantic search
- Build and manage Kubernetes-deployed MCP servers using FastAPI, supporting dynamic routing, prompt orchestration, and multi-source data access (see the sketch after this list)
- Implement high-performance Spark workloads on Databricks and Delta Lake to support structured and unstructured data flows
- Collaborate with platform teams, AI scientists, and business stakeholders to deliver context-aware, AI-integrated tools
- Drive CI/CD, automated testing, and infrastructure-as-code for scalable and secure releases
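For illustration only, a minimal sketch of the kind of async FastAPI service with prompt routing described above. All names here (the /query endpoint, route_prompt, TOOL_KEYWORDS) are assumptions made for this sketch, not part of any existing codebase, and a production system would typically use an LLM or embedding-based semantic matcher rather than keyword rules.

    # Illustrative sketch only: a minimal async FastAPI service that routes an
    # incoming prompt to one of several hypothetical downstream tools.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # Hypothetical keyword-based router; a real system would likely use an
    # embedding-based or LLM-based semantic matcher instead.
    TOOL_KEYWORDS = {
        "research": "research_digitization",
        "market": "global_markets",
        "payment": "banking_workflows",
    }

    class Query(BaseModel):
        prompt: str

    def route_prompt(prompt: str) -> str:
        """Pick a downstream tool name by simple keyword matching."""
        lowered = prompt.lower()
        for keyword, tool in TOOL_KEYWORDS.items():
            if keyword in lowered:
                return tool
        return "default_llm_answer"

    @app.post("/query")
    async def handle_query(query: Query) -> dict:
        # In a real deployment this would call the selected tool or LLM asynchronously.
        tool = route_prompt(query.prompt)
        return {"routed_to": tool, "prompt": query.prompt}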
Must Have
- Strong backend development skills in Python, FastAPI, and async programming
- Solid hands-on experience with Kubernetes, Docker, and API deployment at scale
- Deep understanding of Databricks, Delta Lake, PySpark, and distributed data workflows (see the sketch after this list)
- Proven experience building or integrating with LLM-based applications, including prompt routing or semantic matching
- Excellent debugging, profiling, and optimization skills in high-throughput environments
- Comfort working with cloud platforms, especially Azure
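For illustration only, a minimal sketch of the kind of PySpark-to-Delta-Lake workflow referenced above. The input path "/mnt/raw/events" and the table name "silver.events" are hypothetical; on Databricks the SparkSession is provided and Delta support is built in.

    # Illustrative sketch only: batch-load raw JSON events, clean them, and
    # append to a Delta table partitioned by processing date.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-to-delta").getOrCreate()

    # Read raw JSON events from object storage (hypothetical mount point).
    raw = spark.read.json("/mnt/raw/events")

    # Light transformation: keep well-formed records and stamp a processing date.
    clean = (
        raw.filter(F.col("event_id").isNotNull())
           .withColumn("processing_date", F.current_date())
    )

    # Append to a Delta table partitioned by processing date.
    (clean.write
          .format("delta")
          .mode("append")
          .partitionBy("processing_date")
          .saveAsTable("silver.events"))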
Nice to Have
- Familiarity with model orchestration frameworks (LangChain, LlamaIndex, or similar)
- Experience designing or contributing to MCP-style architectures (multi-modal, intent-aware, tool-executing systems)
- Working knowledge of MLflow, Airflow, or Snowflake
- Exposure to alternative data sources (web, satellite, social, geospatial) and their AI use cases
- Understanding of enterprise CI/CD, secrets management, and secure API gateways
Remote Work: No
Employment Type: Contract