**We only consider candidates local to Arizona. No C2C candidates please**
Overview
The Solution Architect, Data is responsible for contributing to the design, modernization, and optimization of enterprise-scale data systems, as well as the maintenance and operations strategy for CHP. This role involves designing and implementing data systems that organize, store, and manage data within our cloud data platform. The architect will perform continuous maintenance and operations work for CHP in the cloud environment. They will review and analyze CHP's data infrastructure, plan future database solutions, and implement systems to support data management for CHP. This role is accountable for ensuring data integrity and making sure the CHP team adheres to data governance standards to maintain accuracy, consistency, and reliability across all systems. The architect will identify data discrepancies and quality issues and work to resolve them.
The position requires a strong blend of architectural leadership, technical depth, and the ability to collaborate with business stakeholders, data engineers, machine learning practitioners, and domain experts to deliver scalable, secure, and reliable AI-driven solutions. The ideal candidate will have a proven track record of delivering end-to-end ETL/ELT pipelines across Databricks, Azure, and AWS.
Responsibilities
- Design scalable data lake and data architectures using Databricks and cloud-native services.
- Develop metadata-driven, parameterized ingestion frameworks and multi-layer data architectures.
- Optimize data workloads and performance.
- Define data governance frameworks for CHP.
- Design and develop robust data pipelines.
- Architect AI systems, including RAG workflows and prompt engineering.
- Lead cloud migration initiatives from legacy systems to modern data platforms.
- Provide architectural guidance, best practices, and technical leadership across teams.
- Build documentation, reusable modules, and standardized templates.
Skills and Experience
- Strong expertise in cloud platforms, primarily Azure or AWS.
- Hands-on experience with Databricks.
- Deep proficiency in Python and SQL.
- Expertise in building ETL/ELT pipelines and ADF workflows.
- Experience architecting data lakes and implementing data governance frameworks.
- Hands-on experience with CI/CD, DevOps, and Git-based development.
- Ability to translate business requirements into technical solutions.
Technical Expertise
- Programming: Python, SQL, R
- Big Data: Hadoop, Spark, Kafka, Hive
- Cloud Platforms: Azure (ADF, Databricks, Azure OpenAI), AWS
- Data Warehousing: Redshift, SQL Server
- ETL/ELT Tools: SSIS
Required Educational Background
- Bachelor's degree in Computer Science, Information Technology, Information Systems, Engineering, or a related field.
- 6 years of experience in data engineering development.
Required Education:
Bachelor's Degree