Experience – 5–8 years
Notice period (NP) – 0–15 days (we are only looking for immediate joiners, or a maximum of two weeks).
Work Mode – Hybrid (3 days work from office)
Skills – Big Data, PySpark, SQL, GCP (Dataproc, Pub/Sub, BigQuery, Cloud Composer, etc.)
GCP Tools: BigQuery, Dataproc, Pub/Sub, Cloud Composer
Please find the JD for the role –
Job Description
- 5–8 years of experience in data engineering or ETL development using Google Cloud Dataflow.
- Good hands-on experience with Google Cloud data services (Dataflow, Cloud Storage, BigQuery, Cloud Composer, Secret Manager, etc.).
- Strong understanding of ETL/ELT concepts and of data migration at TB scale.
- Develop and optimize end-to-end data pipelines using Dataflow.
- Develop and implement generic, reusable pipelines for integrating incremental data (a minimal sketch follows this list).
- Ensure data quality throughout the data pipeline.
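For context, the sketch below shows the shape of an incremental Dataflow pipeline in Python (Apache Beam): read only rows newer than a watermark, transform, and append into a curated table. The project, bucket, and table names and the `updated_at` watermark column are hypothetical placeholders, not part of any actual codebase for this role.

```python
# Minimal sketch of an incremental Dataflow pipeline (Apache Beam, Python).
# All names below (project, bucket, tables, watermark column) are assumptions.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(
        runner="DataflowRunner",             # use "DirectRunner" for local tests
        project="my-project",                # hypothetical project id
        region="us-central1",
        temp_location="gs://my-bucket/tmp",  # hypothetical staging bucket
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            # Incremental load: read only rows newer than the last watermark.
            | "ReadNewRows" >> beam.io.ReadFromBigQuery(
                query="""
                    SELECT id, payload, updated_at
                    FROM `my-project.raw.events`
                    WHERE updated_at > TIMESTAMP('2024-01-01')  -- last watermark
                """,
                use_standard_sql=True,
            )
            # Light cleanup step; a real pipeline would validate/enrich here.
            | "Normalize" >> beam.Map(
                lambda row: {**row, "payload": (row["payload"] or "").strip()}
            )
            # Append only the new delta into the curated table.
            | "WriteCurated" >> beam.io.WriteToBigQuery(
                "my-project:curated.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```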
Roles & Responsibilities
- Proficiency in the design, implementation, and optimization of data engineering solutions over large volumes of data (TB–PB scale) using GCP data services.
- Proven expertise in GCP services including Dataflow, BigQuery, Cloud Storage, Cloud Composer, and Cloud Functions; experience building scalable data lakes and pipelines.
- Proficiency in PySpark, Python, and Spark SQL, and in automating workflows (see the sketch after this list).
- Good exposure to writing optimized SQL (BigQuery SQL preferred).
- Good communication and problem-solving skills.
- Able to create POCs to validate proposed solutions and perform code reviews for the team.
- Familiarity with code-assist tools (Copilot, Windsurf, etc.).
- Good to have: Dataproc
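To make the PySpark expectation concrete, here is a minimal sketch of a batch aggregation job of the kind described above. The GCS paths, table, and column names are illustrative assumptions only; on Dataproc the source could equally be BigQuery via the spark-bigquery connector.

```python
# Minimal PySpark sketch: daily revenue aggregation over an orders dataset.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

# Hypothetical Parquet source on GCS standing in for the raw layer.
orders = spark.read.parquet("gs://my-bucket/raw/orders/")

# Keep only completed orders and roll up to a daily revenue summary.
daily = (
    orders
    .where(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),
    )
)

# Write the curated output back to GCS (hypothetical path).
daily.write.mode("overwrite").parquet("gs://my-bucket/curated/daily_orders/")
```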
Required Skills:
PySpark, BigQuery, GCP tools