Job Summary (List Format):
- Develop and maintain ETL/ELT pipelines and data workflows for data warehousing projects.
- Process and transform large datasets using SQL, Hadoop, Spark, and Python.
- Support integration of structured and unstructured data from multiple sources.
- Collaborate with senior engineers to apply best practices in coding, performance tuning, and data quality.
- Gain experience in real-time/streaming data ingestion using Apache NiFi.
- Contribute to cloud platform projects (AWS, Azure, or GCP).
- Document data workflows and data lineage, and provide operational support for data systems.
- Utilize strong analytical and debugging skills to resolve data issues.
- Work collaboratively within an agile team environment.