Job Description:
Design, develop, and maintain robust data pipelines and ETL processes.
Work with large-scale datasets to ensure data integrity, quality, and availability.
Collaborate with data scientists, analysts, and other engineers to support data needs.
Optimize data workflows for performance and scalability in cloud environments.
Implement data governance and security best practices.
Proficiency in Python and SQL for data manipulation and pipeline development (minimum 6 years of experience).
Hands-on experience with AWS services such as S3, Lambda, Glue, and IAM.
Strong understanding of data modeling, warehousing, and distributed systems.
Experience with version control systems (e.g., Git) and CI/CD pipelines.
Experience Required: 8 years
Required Skills: Python, SQL, AWS (S3, Lambda, Glue, IAM), data modeling, data warehousing, distributed systems, Git, CI/CD