Looking for a Data Engineer who combines solid data-pipeline craftsmanship with a collaborative, product-oriented mindset. You will join the same agile product team as our analysts, working in rapid product-discovery cycles to deliver trustworthy data that drives access to life-changing therapies.
Translate business requirements into technical data solutions
Collaborate with analysts and product owners
Align data models and metrics to support single-source-of-truth analytics
Ensure data supports decision-making and self-service analytics
Lead whiteboard sessions and spike explorations
Participate in agile ceremonies (backlog refinement, sprint reviews, retrospectives)
Partner closely with cross-functional stakeholders (analysts, product owners, engineers)
Champion a culture of data craftsmanship
Implement pharma-compliance data governance practices
Maintain proper documentation and traceability of data flows
Strong communicator and facilitator
Proactive in continuous improvement
Attention to performance, cost efficiency, and security
Collaborative team mindset
Key responsibilities
Engineer reliable end-to-end data flows using Spark SQL and PySpark notebooks in Databricks (nice to have); an illustrative sketch follows this list.
Orchestrate and schedule batch & streaming jobs in Azure Data Factory or similar tools.
Develop and maintain dbt models (nice to have) to standardise transformations and documentation.
Implement data-cataloguing, lineage, and governance practices that meet pharma-compliance requirements.
Drive alignment with analysts to optimise queries, refine schemas, and keep metrics consistent across dashboards, so stakeholders have a single source of truth for decision-making.
Contribute to agile ceremonies (backlog refinement, sprint reviews, retrospectives) and actively champion a culture of data craftsmanship.
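To give a flavour of the first responsibility above, here is a minimal PySpark sketch of an end-to-end flow (ingest, transform, load). All paths, table names, and columns (claims, claim_id, amount) are hypothetical placeholders, not our actual schema; on Databricks the final write would typically target a Delta table in the Lakehouse.

```python
# Minimal end-to-end flow sketch in PySpark. Paths, columns, and the
# claims domain are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-flow-sketch").getOrCreate()

# Ingest: read raw claims files (hypothetical location and layout).
raw = spark.read.option("header", True).csv("/data/raw/claims/")

# Transform: type the columns, deduplicate, and drop unusable rows.
curated = (
    raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("claim_date", F.to_date("claim_date"))
       .dropDuplicates(["claim_id"])
       .filter(F.col("claim_id").isNotNull())
)

# Load: persist the curated table. On Databricks this would usually be
# .format("delta") plus a catalog registration; parquet keeps the sketch
# runnable on a plain local Spark install.
curated.write.mode("overwrite").parquet("/data/curated/claims")
```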
What you bring
Experience: 3–7 years in Data Engineering or Analytics Engineering, ideally in pharma, biotech, or another regulated, data-rich environment.
Core toolkit: Advanced SQL and Python. Solid experience in dimensional modelling.
Nice-to-have: Hands-on with Databricks notebooks/Lakehouse and dbt for transformation & testing.
Product mindset: Comfortable iterating fast, demoing early, and measuring impact.
Communication & teamwork: Able to explain trade-offs, write clear documentation, and collaborate closely with analysts, product managers, business teams, and other stakeholders.
Quality focus: Passion for clean, maintainable code, automated testing, and robust data governance.
Requirements
1. Data Engineering
Design, build, and maintain scalable ELT pipelines that ingest Patient Support Program, Specialty Pharmacy, Claims, CRM, Marketing, Medical, and Financial data.
Implement automated data-quality checks, monitoring, and alerting (an illustrative sketch follows this list).
2. Data Modeling
Develop canonical data models and dimensional marts in our upcoming Databricks Lakehouse to enable self-service analytics.
Apply best-practice naming, documentation, and version control.
3. Collaboration & Facilitation
Work side by side with analysts and product owners to translate business questions into robust technical solutions.
Lead whiteboard sessions and spike explorations during product discovery.
4. DevOps & Continuous Improvement
Configure CI/CD pipelines for data code, automate testing, and promote reusable patterns.
Keep an eye on performance, cost, and security, driving iterative enhancements.
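As a sketch of the automated data-quality checks in Requirement 1, the snippet below runs a few PySpark assertions over a curated table. The path, column names, and rules are illustrative assumptions; a real setup might delegate these checks to dbt tests or a dedicated data-quality framework, with alerting wired into the scheduler.

```python
# Sketch of automated data-quality checks in PySpark. The table path,
# columns, and rules are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()
claims = spark.read.parquet("/data/curated/claims")  # hypothetical path

# Rule 1: the business key must be unique and non-null.
duplicate_ids = claims.groupBy("claim_id").count().filter("count > 1").count()
null_ids = claims.filter(F.col("claim_id").isNull()).count()

# Rule 2: claim amounts must be non-negative.
negative_amounts = claims.filter(F.col("amount") < 0).count()

failures = {
    "duplicate_ids": duplicate_ids,
    "null_ids": null_ids,
    "negative_amounts": negative_amounts,
}
failed = {name: n for name, n in failures.items() if n > 0}
if failed:
    # In a scheduled job, raising here is the hook for monitoring/alerting.
    raise ValueError(f"Data-quality checks failed: {failed}")
print("All data-quality checks passed.")
```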