We develop AI systems that help insurers better understand and manage risk so they can offer coverage tailored to the businesses they protect.
Our SaaS platform helps leading insurers make faster, more informed underwriting decisions using structured and unstructured data. With 75% of our team dedicated to R&D, we're on a mission to transform the way insurance operates.
We recently raised a 10M Series A round to scale our impact across Europe.
We're kicking off a major transformation of one of our core product modules into a fully agentic system, and we are evolving our data infrastructure to be more flexible and scalable in support of this shift.
Internship overview
Join our team to modernize critical analytics infrastructure that processes millions of application events and operational records daily. You'll migrate legacy analytics pipelines to a modern, scalable architecture using cutting-edge tools, while preparing for integration with our existing orchestration framework.
This internship offers hands-on experience with real production data pipelines and modern data engineering practices, and the opportunity to significantly impact our analytics capabilities. You'll work closely with our data team to deliver measurable performance improvements and lay the groundwork for our next-generation analytics platform.
You will engage directly with cross-functional teams (product, operations, customer success, sales) to understand real-world data challenges, then architect solutions that power our AI-driven business decisions. You'll have significant input into tool selection and architecture decisions, making this a truly collaborative learning experience.
Main responsibilities
Core projects
1. Pipeline modularization & refactoring
- Refactor existing analytics code into clean testable modules
- Implement separation of concerns for data extraction, transformation, and loading
- Develop comprehensive unit tests for all new modules
2. Next-gen implementation for data transformations
- Migrate existing complex Python ingestion code to a production-grade data pipeline
- Implement dbt-style transformation layers (staging, intermediate, marts)
- Optimize SQL queries for improved processing speed and better maintainability
3. Data quality & validation framework
- Build automated data validation comparing old vs. new pipeline outputs (a minimal sketch follows this list)
- Implement schema comparison, data distribution analysis, and row-level validation
- Create monitoring dashboards for data pipeline health
- Ensure zero data loss during the migration process
4. Performance optimization
- Benchmark current vs. new pipeline performance
- Optimize data processing to significantly reduce execution times
- Document performance improvements and bottleneck analysis
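To give a concrete flavour of project 3, here is a minimal, illustrative sketch in Python (using pandas) of old-vs.-new output validation. It is a hypothetical simplification, not our actual framework:

```python
import pandas as pd

def compare_outputs(old: pd.DataFrame, new: pd.DataFrame) -> dict:
    """Compare the same table as produced by the legacy and new pipelines.

    A hypothetical simplification: a real framework would add key-based
    row matching, tolerance thresholds, and alerting.
    """
    report = {}

    # Schema comparison: columns present in one output but not the other.
    report["missing_in_new"] = sorted(set(old.columns) - set(new.columns))
    report["extra_in_new"] = sorted(set(new.columns) - set(old.columns))

    # Row-level check: counts must match for a zero-data-loss migration.
    report["row_count_delta"] = len(new) - len(old)

    # Distribution analysis: mean drift on shared numeric columns.
    shared = [c for c in old.columns if c in new.columns]
    numeric = old[shared].select_dtypes("number").columns
    report["mean_drift"] = {c: float(new[c].mean() - old[c].mean()) for c in numeric}

    return report
```

Checks like these can run automatically after each backfill, so regressions surface before the legacy pipeline is switched off.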
Deliverables
- Modular Python codebase with comprehensive documentation (see the sketch after this list)
- SQL-based transformation layer and associated data cataloging
- Automated testing and validation framework
- Performance benchmarking reports
- Technical documentation and migration guides
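As an illustration of the first deliverable, one possible shape for a modular pipeline with a clean separation of concerns (a sketch only; the paths, column names, and aggregation logic are hypothetical):

```python
import pandas as pd

def extract(source: str) -> pd.DataFrame:
    """Extraction only: read raw events, no business logic."""
    return pd.read_parquet(source)

def transform(events: pd.DataFrame) -> pd.DataFrame:
    """Transformation only: a pure function, so it is easy to unit-test."""
    daily = (
        events
        .assign(day=pd.to_datetime(events["created_at"]).dt.date)  # hypothetical column
        .groupby(["day", "event_type"], as_index=False)  # hypothetical column
        .size()
    )
    return daily.rename(columns={"size": "event_count"})

def load(table: pd.DataFrame, destination: str) -> None:
    """Loading only: swappable for a warehouse writer later on."""
    table.to_parquet(destination, index=False)

def run(source: str, destination: str) -> None:
    """Orchestrate the three steps; each can be tested in isolation."""
    load(transform(extract(source)), destination)
```

Because transform is a pure function, a unit test can feed it a tiny in-memory frame and assert on the result without touching any storage.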
What we're looking for
- Final-year Master's or Engineering student with a focus on Data Engineering or a related field
- Prior Python experience, SQL knowledge, and an interest in data systems
- Experience with dataframe libraries (pandas, polars), dbt, or cloud services is a plus
- Fluency in both English and French is mandatory
We encourage all applications
Don't meet every single requirement? That's okay.
We know that the best candidates don't always fit a perfect checklist. If this role sparks your interest, we'd love to hear from you, even if you're not sure you meet every criterion.
We're looking for curious, motivated people who are eager to learn and make a real impact.
At Continuity, we value diversity in all its forms and are committed to fostering an inclusive, supportive workplace where everyone can thrive.
Interview process
First introductory call with a Data Engineering team member (20 minutes)
Case study and feedback with our Data Engineering team (1h)
Team-fit meeting with R&D team members (1h)
HR interview with Talent Manager (30 minutes)
Welcome to the team
Good to know
- Swile card (meal vouchers)
- 50% of travel expenses covered
- Central offices: Saint-Lazare, Paris 9e
- Remuneration: 1,400-1,600 euros/month
Why join us
- Work on cutting-edge, high-impact data problems with strong mentorship from senior engineers
- Gain experience with modern data engineering technologies
- Work with real business-critical data pipelines
- Join a dynamic, ambitious team with a strong technical culture and real-world impact