We're hiring an experienced Data Engineer to support a large-scale data transformation project. You'll be embedded within a high-performing team delivering mission-critical data platforms using Databricks and AWS.
This is a hands-on engineering role focused on the architecture, implementation, and optimization of robust data solutions at scale.
Key Responsibilities
- Design, build, and deploy data pipelines and platforms using Databricks and cloud infrastructure (preferably AWS)
- Lead or contribute to end-to-end implementation of data solutions in enterprise environments
- Collaborate with architects, analysts, and client stakeholders to define technical requirements
- Optimize data systems for performance, scalability, and security
- Ensure data governance, quality, and compliance in all solutions
Required Skills & Experience
- 7 years of experience in data engineering
- Deep expertise with Databricks (Spark, Delta Lake, MLflow, Unity Catalog)
- Strong experience with cloud platforms, ideally AWS (S3, Glue, Lambda, etc.)
- Proven track record of delivering complex data solutions in commercial domains such as Sales, Marketing, Pricing, and Customer Insights
- At least 4 years of hands-on data pipeline design and development experience with Databricks, including platform-specific features such as Delta Lake UniForm (Iceberg), Delta Live Tables (Lakeflow Declarative Pipelines), and Unity Catalog
- Strong programming skills in SQL, stored procedures, and object-oriented programming languages (Python, PySpark, etc.)
- Experience with CI/CD for data pipelines and infrastructure-as-code tools (e.g. Terraform)
- Strong understanding of data modeling, Lakehouse architectures, and data security best practices
- Familiarity with NoSQL databases and container management systems
- Exposure to AI/ML tools (such as MLflow), prompt engineering, and modern data and AI agentic workflows
- The ideal candidate holds the Databricks Data Engineer Associate and/or Professional certification and has delivered multiple Databricks projects
Nice to Have
- Experience with Azure or GCP in addition to AWS
- Knowledge of DevOps practices in data engineering
- Familiarity with regulatory frameworks (e.g. GDPR, SOC 2, PCI-DSS)
- AWS Redshift and AWS Glue/Spark (Python, Scala)
Qualifications:
Bachelor of Engineering in Computer Science
Additional Information:
Note: Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion, or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status.
Follow us on: Twitter & LinkedIn
Remote Work: No
Employment Type: Full-time