Job Title: GCP Data Engineer
Job Location: 100% Remote
Duration: 6 Months C2H
Interview: Virtual
Job Description:
We are seeking a hands-on Data Engineer to support the migration of legacy DataStage ETL pipelines into a modern data stack. The ideal candidate will have experience developing data ingestion pipelines from scratch, working with Airflow or Cloud Composer, and building on Google BigQuery. Experience with SnapLogic is strongly preferred.
Key Responsibilities:
- Analyze and re-architect legacy ETL jobs from DataStage into SnapLogic, Python, Spark, or Dataflow.
- Build new DAGs and orchestration workflows using Airflow or Cloud Composer (see the sketch after this list).
- Design and develop scalable data pipelines and integrations.
- Work within a Google BigQuery data warehouse environment.
- Collaborate closely with the data modeling team and support broader data architecture efforts.
- Contribute to solutions, bring proactive suggestions, and communicate effectively with stakeholders.
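
For illustration only: a minimal sketch of the kind of Airflow / Cloud Composer DAG this role would build, assuming the apache-airflow-providers-google package is installed. The bucket, dataset, and table names are hypothetical placeholders, not part of the actual environment.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="legacy_orders_ingest",        # hypothetical DAG name
    schedule_interval="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # Load the day's landed files from Cloud Storage into a BigQuery staging table.
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders_to_bq",
        bucket="example-landing-bucket",            # hypothetical GCS bucket
        source_objects=["orders/{{ ds }}/*.csv"],   # files partitioned by run date
        destination_project_dataset_table="example-project.staging.orders",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )

In practice a migrated DataStage job would typically chain several such tasks (extract, load, transform) inside one DAG; this sketch shows only the load step.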
Required Skills:
- Strong experience with IBM DataStage ETL.
- Hands-on experience with Airflow or Cloud Composer.
- Proven ability to build ETL/data pipelines from scratch.
- Proficiency in Google BigQuery and data warehousing concepts.
- Solid understanding of the end-to-end analytics lifecycle.
- Strong communication, collaboration, and problem-solving mindset.
Preferred Skills:
- SnapLogic development experience.
- Familiarity with Python, Spark, Dataflow, and data integration tools.
- Exposure to tools like Kafka, Java, Apache Beam, or Alteryx is a plus.
Keywords: DATA WAREHOUSING, PYTHON, JAVA, APACHE BEAM, SPARK