Roles and Responsibilities:
- Design and implement data architectures for Japan commercial pharma datasets
- Build scalable data integration, transformation, and governance pipelines using Informatica IDMC/IICS, Databricks, PySpark, and SQL
- Define and implement Data Vault / Lakehouse architectures ensuring flexibility, lineage, and governance
- Ensure all solutions adhere to Japan's data privacy, compliance, and regulatory frameworks
- Apply strong knowledge of Japan-specific datasets (IQVIA Japan, Veeva CRM, NHI price revisions, DPC hospital data, wholesaler sales, claims, Rx, patient data)
- Translate Japan commercial KPIs and reporting requirements into scalable data engineering and analytics solutions
- Partner with Japan business stakeholders on requirement validation and compliance alignment
- Provide end-to-end architecture oversight across the ingestion, integration, quality, governance, and analytics layers
- Guide teams on coding standards, best practices, and optimization in Python, PySpark, and SQL
- Mentor engineers and analysts in cloud adoption, DevOps, and automation
- Collaborate with BA/PM/QA teams to ensure requirement-to-delivery traceability
- Implement data quality, metadata management, lineage, and reconciliation frameworks using IDMC/IICS and Informatica DQ
- Drive Agile-based delivery excellence using JIRA, Confluence, and CI/CD DevOps pipelines
- Conduct architecture reviews, performance tuning, and scalability assessments
Requirements:
- 12 years of IT experience, with at least 5 years in data solution architecture
- Expertise in Informatica IDMC/IICS (CDI, CDQ) and Databricks (Delta Lake, PySpark, SQL)
- Proven experience designing and implementing Data Vault / Lakehouse architectures
- Strong programming skills in Python, PySpark, and SQL for data transformation
- Hands-on experience with Japan pharma commercial datasets (IQVIA, Veeva, NHI pricing, DPC hospital data, wholesaler sales, claims, Rx)
- Solid experience with AWS/Azure/GCP cloud platforms and DevOps CI/CD pipelines
- Deep understanding of data governance, cataloging, lineage, and compliance standards in Japan pharma
- Excellent communication and stakeholder engagement skills, with the ability to collaborate across an India-Japan delivery model
Preferred:
- Exposure to machine learning pipelines on Databricks
- Knowledge of data governance platforms, data productization, advanced analytics enablement, and regulatory compliance in the Japan pharma sector
- Exposure to Agile delivery practices (JIRA, Confluence, CI/CD pipelines)