Key Responsibilities:
- Design and develop robust ETL pipelines using Python, PySpark, and GCP.
- Build and optimize data models and queries in BigQuery for analytics and reporting.
- Extract, transform, and load structured and semi-structured data from various sources.
- Collaborate with data analysts, data scientists, and business teams to understand data requirements.
- Ensure data quality, integrity, and security across cloud-based data platforms.
- Monitor and troubleshoot data workflows and performance issues.
- Automate data validation and transformation processes using scripting and orchestration tools.

Skills & Qualifications:
- Hands-on experience with Google Cloud Platform (GCP), especially BigQuery.
- Strong programming skills in Python and/or PySpark.
- Experience in designing and implementing ETL workflows and data pipelines.
- Proficiency in SQL and data modeling for analytics.
- Familiarity with GCP services such as Cloud Storage, Dataflow, and Pub/Sub.
- Understanding of data governance, security, and compliance in cloud environments.
- Experience with version control (Git) and agile development practices.
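For candidates unfamiliar with the stack, a minimal sketch of the kind of pipeline described above: a PySpark job that reads semi-structured JSON from Cloud Storage, applies basic cleansing, and loads the result into BigQuery via the spark-bigquery connector. The bucket, project, dataset, and table names are placeholders for illustration, not part of this posting, and the job assumes an environment (such as Dataproc) where the GCS and BigQuery connectors are available.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Placeholder names; substitute your own project, bucket, and dataset.
SOURCE_PATH = "gs://example-bucket/events/*.json"
TARGET_TABLE = "example-project.analytics.events"
TEMP_BUCKET = "example-temp-bucket"

spark = (
    SparkSession.builder
    .appName("gcs-to-bigquery-etl")
    .getOrCreate()
)

# Extract: read semi-structured JSON from Cloud Storage.
raw = spark.read.json(SOURCE_PATH)

# Transform: drop null keys, parse timestamps, and deduplicate.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: append to BigQuery through the spark-bigquery connector,
# staging intermediate files in a temporary GCS bucket.
(
    clean.write
         .format("bigquery")
         .option("table", TARGET_TABLE)
         .option("temporaryGcsBucket", TEMP_BUCKET)
         .mode("append")
         .save()
)
```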