You will join the Corporate Data & Analytics team, which partners with Finance, HR, Indirect Procurement, and Legal to deliver data and analytics solutions closely aligned with business outcomes. We collaborate with our partner functions to uncover valuable insights in our data and use them to enable the informed decision-making that drives the outcomes we strive for as an organization. Our team is globally distributed, with members located in Pune (IN), Manchester (UK), and Basel (CH).
The purpose of this role is to design, build, and operate production-grade data products and pipelines on Databricks (Spark, Delta Lake, Unity Catalog) that enable a domain-oriented data mesh while ensuring robust data quality, governance, reliability, and cost-efficiency.
Key Responsibilities
- Design, develop, and maintain data pipelines using Databricks (a minimal sketch follows this list)
- Implement ETL/ELT processes following best practices
- Create and optimize data products and data models for analytical and operational use cases
- Collaborate with business stakeholders to understand requirements and deliver data solutions
- Maintain data quality and ensure adherence to data governance standards
- Participate in code reviews and technical documentation
- Support the implementation of data mesh architecture and data products
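To illustrate the kind of pipeline work listed above, here is a minimal sketch of a bronze-to-silver Delta Lake job on Databricks. All table and column names (raw_finance.invoices, silver_finance.invoices, invoice_id, amount) are hypothetical, and the snippet assumes a Databricks runtime where Spark and Delta Lake are already available.

```python
# Minimal sketch of a bronze-to-silver Delta pipeline on Databricks.
# Table and column names are hypothetical; assumes a Databricks
# runtime with Spark and Delta Lake available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read the raw (bronze) layer registered in Unity Catalog.
bronze = spark.table("raw_finance.invoices")

# Apply light cleansing and typing on the way to the silver layer.
silver = (
    bronze
    .dropDuplicates(["invoice_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("ingested_at", F.current_timestamp())
)

# Write as a managed Delta table; incremental MERGE/CDC handling
# is omitted to keep the sketch short.
(silver.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("silver_finance.invoices"))
```

In practice a job like this would typically be scheduled with Databricks Workflows and load data incrementally via MERGE rather than a full overwrite.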
Required Technical Skills
- Strong SQL and Python (incl. PySpark); solid software engineering practices (modular code, testing, code reviews).
- Hands-on Databricks experience in the cloud (preferably AWS): Spark, Delta Lake, Unity Catalog, Delta Live Tables, Workflows, Repos/CLI; familiarity with MLflow for the basic model lifecycle is a plus.
- Data warehousing and lakehouse patterns; dimensional and domain-oriented data modeling; ETL/ELT design and orchestration.
- Familiarity with extracting and transforming data from SAP S/4HANA, SAP BW/4HANA, ECC, and related systems, including an understanding of IDocs, BAPIs, and OData services.
- Exposure to data structures and reporting requirements for Finance, HR, Indirect Procurement, Legal, and Facility Management, ensuring compliance and alignment with business processes.
- Version control with Git, preferably with GitLab experience
- Experience with CI/CD pipelines and deployment automation
- Knowledge of unit testing and test automation
- Understanding of data quality principles and data governance (see the Delta Live Tables sketch after this list)
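As one hedged illustration of the data quality point above, the sketch below uses Delta Live Tables expectations to enforce quality rules declaratively. The table names (bronze_procurement_orders and the resulting silver table) are hypothetical, and the code assumes it runs inside a DLT pipeline, where the dlt module is provided by the Databricks runtime.

```python
# Sketch of declarative data quality checks with Delta Live Tables
# expectations. Table names are hypothetical; the `dlt` module is
# only available inside a Databricks DLT pipeline.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Cleansed procurement orders (silver layer).")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop failing rows
@dlt.expect("positive_amount", "amount > 0")  # keep rows, record the violation
def silver_procurement_orders():
    return (
        dlt.read_stream("bronze_procurement_orders")
        .withColumn("processed_at", F.current_timestamp())
    )
```

expect_or_drop removes failing rows, while expect only records the violation in the pipeline's quality metrics, so rules can start permissive and be tightened as confidence in the data grows.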
Preferred Qualifications
- Experience with data mesh architecture
- Knowledge of data product development
- Experience with branching strategies and merge request workflows
- Background in agile development methodologies
- Experience with cloud platforms (AWS preferred)
Soft Skills
- Excellent communication skills, with the ability to translate technical concepts for business stakeholders
- Strong problem-solving and analytical thinking
- Team player with a collaborative mindset
- Self-motivated, with the ability to work independently
Experience (recommended)
4 years of experience in data engineering
Qualifications:
Bachelor's degree in Computer Science, Information Systems, or a related field
Additional Information:
Note: Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion, or any other employment practice on the basis of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status.
Follow us on: LinkedIn
Remote Work: No
Employment Type: Full-time