Data Engineer
Associate III - Data Engineering
We are seeking a talented and experienced Data Engineer with strong expertise in Azure Databricks, DBT, and Snowflake to join our team. In this role, you will be responsible for designing and building scalable data pipelines, transforming raw data into valuable insights, and developing robust data models using DBT in combination with Azure Databricks and Snowflake.
The opportunity:
Design, develop, and maintain data pipelines using Azure Data Factory, Databricks, DBT, and Snowflake for seamless data integration, transformation, and analysis.
Implement data transformation models using DBT to ensure high-quality and consistent data transformation within the Snowflake ecosystem.
Leverage Databricks and Apache Spark to process large datasets and integrate them with Snowflake for efficient data storage and analytics.
Work closely with data scientists, analysts, and business teams to define data requirements, deliver insights, and ensure data quality and consistency.
Develop and maintain data models and data pipelines that support reporting analytics and business intelligence applications.
Automate and orchestrate data workflows with Azure Data Factory and DBT to streamline data processing and delivery.
Optimize Snowflake data structures (e.g., schemas, tables, and views) to ensure efficient data storage, retrieval, and performance.
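For candidates less familiar with DBT, the transformation work described above typically takes the shape of SQL models like the following minimal sketch. The model name, source definition, and column names here are hypothetical, purely for illustration:

```sql
-- models/staging/stg_orders.sql (hypothetical model and source names)
-- Materialize this model as a view in the Snowflake warehouse.
{{ config(materialized='view') }}

select
    order_id,
    customer_id,
    cast(order_date as date) as order_date,
    amount
from {{ source('raw', 'orders') }}  -- assumes a 'raw.orders' source is declared in sources.yml
where order_id is not null          -- basic data-quality filter
```

Running `dbt run` compiles the Jinja templating and executes the resulting SQL against Snowflake; `dbt test` can then validate assumptions such as not-null and unique keys.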
What you need:
The ideal candidate will be passionate about working with cutting-edge technologies to solve complex data engineering challenges.
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent experience).
Proven experience as a Data Engineer with expertise in Azure Databricks, DBT, and Snowflake.
Strong experience with Azure Data Factory, Azure Databricks, Azure Data Lake, and other Azure cloud services for data integration and processing.
Proficiency with DBT for implementing data transformation workflows, creating models, and writing SQL-based scripts.
Expertise in working with Snowflake for data warehousing, including experience with schema design, performance tuning, and optimization.
Strong experience with Apache Spark and working in Databricks for large-scale data processing.
Solid programming skills in SQL (advanced), Python, and Scala for developing data pipelines and transformation logic.
Experience with ETL/ELT processes, data orchestration, and automating data workflows using Azure and DBT.
Knowledge of data governance, security, and best practices for cloud data architectures.
Familiarity with version control systems like Git and experience in Agile environments.
Preferred Qualifications:
DBT certifications or experience with advanced features such as DBT testing, macros, and hooks.
Azure Databricks or Snowflake certifications.
Experience with Snowflake performance tuning, including optimization of queries, schemas, and data partitioning.
Familiarity with CI/CD practices and experience building automated pipelines for data workflows.
Knowledge of cloud cost optimization in Azure and Snowflake for better resource utilization.
Full-time