Job Description:
- Design, develop, and maintain scalable data pipelines using Databricks (PySpark/SQL).
- Build and optimize data models and data warehouses in Snowflake.
- Develop robust ETL/ELT workflows for ingesting, transforming, and loading large datasets.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Implement data quality checks, validation, and monitoring processes.
- Optimize the performance and cost efficiency of data pipelines and queries.
- Integrate data from multiple sources, including APIs, streaming platforms, and databases.
- Ensure data governance, security, and compliance standards are followed.
- Troubleshoot and resolve data-related issues in production environments.
- Document data architecture, pipelines, and processes.