Title: Developer
Job Description:
Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using PySpark, Airflow, and GCP-native services
- Build and optimize data warehouses and analytics solutions
- Manage workflow orchestration with Airflow/Cloud Composer
- Write complex SQL queries for data transformations, analytics, and performance
- Ensure data reliability, security, and governance
- Drive performance tuning and cost optimization of BigQuery and PySpark workloads
- Collaborate with analysts and product teams to deliver reliable data
- Debug and resolve production issues in large-scale data pipelines
- Contribute to best practices, reusable frameworks, and automation for data engineering

Requirements:
- 5 years of experience in Data Engineering / Data Warehousing; experience with Big Data technologies will be an added advantage
- Expertise in distributed ecosystems
- Hands-on programming experience with Python
- Expert knowledge of Hadoop and Spark architecture and their working principles
- Hands-on experience writing and understanding complex SQL (Hive / PySpark DataFrames), including optimizing joins while processing large volumes of data
- Experience in UNIX shell scripting
- Ability to design and develop optimized data pipelines for batch and real-time data processing
- Experience in the analysis, design, development, testing, and implementation of system applications