Job Purpose

This position is part of the Enterprise Data & Analytics Capability team under Global Technology. In this role, you will lead the design, development, and optimization of large-scale data solutions on the Databricks platform.
Desired Skills and Experience

Essential skills:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Minimum of 5 years of programming experience, including at least one year working with a big data platform; experience in the data engineering domain with Python, SQL, and cloud platforms such as Azure
- Familiarity with relevant systems, tools, languages, and the business domain, including Data Lakehouse principles and relational and Kimball data models (required)
- Experience with CI/CD pipelines and version control tools (required)
- Knowledge of data visualization tools and BI platforms (preferred)
- Certification in Databricks or relevant cloud platforms (preferred)
Key Responsibilities
- Design, build, and maintain scalable data pipelines on Databricks (using Spark, Delta Lake, etc.; an illustrative sketch follows this list)
- Write clean, efficient, and maintainable PySpark or SQL code for data transformation
- Design robust data models for analytics and reporting
- Ensure data quality, consistency, and governance
- Handle batch and streaming data workflows
- Provide architectural guidance and support in platform usage
- Drive best practices in data engineering across the team
- Monitor and optimize performance of Spark jobs and cluster usage
- Ensure compliance with security and data privacy standards
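
By way of illustration only (not a formal requirement), below is a minimal PySpark sketch of the kind of batch Delta Lake pipeline work the responsibilities above describe. The storage path, table name, and column names are hypothetical assumptions, not part of this job description.

# Minimal sketch of a batch Delta Lake pipeline on Databricks.
# All paths, table names, and columns below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest raw JSON landed in cloud storage (illustrative path).
raw = spark.read.json("/mnt/raw/orders/")

# Transform: deduplicate, derive a partition column, drop invalid rows.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Persist as a partitioned Delta table for analytics and reporting.
(orders.write
       .format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .saveAsTable("analytics.orders_clean"))

A streaming variant of the same flow would typically swap in spark.readStream for ingestion and writeStream with checkpointing for output, reflecting the batch-and-streaming responsibility above.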
Key Metrics
- Python, SQL, Azure
- Data Lakehouse principles; relational and Kimball data models
- Databricks