Job Summary:
- Design and implement scalable, end-to-end data solutions on cloud platforms (AWS, Azure, GCP), with a focus on AWS Redshift, Apache Airflow, dbt, and Databricks.
- Develop and maintain robust data pipelines for ingesting, transforming, and loading data from multiple sources into data warehouses.
- Create and orchestrate complex data workflows and ETL/ELT processes using Apache Airflow.
- Build and optimize data models for performance, scalability, cost-efficiency, and data quality using dbt and cloud data warehousing best practices.
- Support data analytics and business intelligence initiatives by enabling data access through dashboards and reports (primarily Tableau).
- Implement and enforce standards for data governance, data quality, data lineage, data security, and compliance.
- Mentor junior engineers and promote best practices within the data engineering team.
- Evaluate, recommend, and adopt new data technologies to continuously improve the data architecture.
- Collaborate cross-functionally with data analysts, data scientists, and business stakeholders.
- Apply strong programming skills (Python, Java, or Scala) and advanced SQL to data engineering tasks.
- Use distributed computing frameworks (e.g., Apache Spark, Hadoop) and containerization/orchestration tools (Docker, Kubernetes).
- Work within Agile project management methodologies.
- Hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, with 5-7 years of relevant data engineering experience.