Job Description: Solutions Architect (Data)
Location: Phoenix, AZ (Hybrid)
Agency: AZDCS
Position Type: Staff Augmentation
Hours: 40 hrs/week
Shift: Day
Projected Start Date: January 5, 2026
Projected Duration: 6 Months
Local Candidates Only - In-person interview required
Position Overview
The Solutions Architect (Data) will play a critical role in designing, modernizing, and optimizing enterprise-scale data systems for the CHP (Comprehensive Health Plan) program within AZDCS. This position focuses on data architecture, cloud modernization, ETL/ELT pipeline development, and data governance, as well as ensuring reliability and scalability across the organization's cloud platforms.
The architect will be responsible for both strategic design and hands-on technical implementation. This includes analyzing the existing data infrastructure, planning the future-state architecture, and ensuring operational excellence in cloud environments. A key part of the role is enforcing data integrity, identifying data quality issues, and ensuring adherence to governance standards.
This individual will collaborate with data engineers, ML practitioners, software teams, and business stakeholders to deliver secure, scalable, and AI-ready data solutions. The ideal candidate has extensive experience with Databricks, the Azure/AWS ecosystems, and modern data lake architectures.
Key Responsibilities
- Design scalable data lake architectures and data solutions using Databricks and cloud-native tools.
- Develop metadata-driven, parameterized data ingestion frameworks (a sketch of this pattern follows the list).
- Implement multi-layer data architectures (raw, curated, trusted, and semantic layers).
- Optimize data workloads for performance, cost efficiency, and scalability.
- Define and maintain data governance frameworks for CHP, including data quality and lineage.
- Build and maintain robust ETL/ELT data pipelines across cloud platforms.
- Architect and support AI-driven systems, including retrieval-augmented generation (RAG) workflows and prompt engineering patterns.
- Lead legacy-to-cloud migrations and modernization of enterprise data systems.
- Provide architectural guidance, standards, and best practices across teams.
- Develop documentation, reusable components, frameworks, and architectural patterns.
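To make the ingestion and layering responsibilities above concrete, here is a minimal sketch of a metadata-driven, parameterized pipeline that lands sources into a raw Delta layer and promotes them to a curated layer. It assumes a Databricks/PySpark environment; the sources list, schema names (raw, curated), and paths are hypothetical, not taken from the CHP environment.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Metadata drives the pipeline: onboarding a new feed is a config change,
# not new code. Every entry here is hypothetical.
sources = [
    {"name": "claims", "path": "/mnt/landing/claims/", "format": "csv"},
    {"name": "members", "path": "/mnt/landing/members/", "format": "json"},
]

for src in sources:
    # Raw layer: land the data as-is in Delta format.
    df = (
        spark.read.format(src["format"])
        .option("header", "true")  # used by CSV, ignored by JSON
        .load(src["path"])
    )
    df.write.format("delta").mode("append").saveAsTable(f"raw.{src['name']}")

    # Curated layer: apply a basic quality rule (deduplication) on promotion.
    curated = spark.table(f"raw.{src['name']}").dropDuplicates()
    curated.write.format("delta").mode("overwrite").saveAsTable(
        f"curated.{src['name']}"
    )
```

Because the loop is driven entirely by the sources metadata, the framework stays parameterized and reusable rather than hard-coded per feed.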
Required Skills & Experience
- Strong expertise in Azure or AWS cloud platforms.
- Hands-on experience with Databricks (Delta Lake, notebooks, jobs, MLflow).
- Deep proficiency in Python and SQL.
- Experience building ETL/ELT pipelines and Azure Data Factory (ADF) workflows (see the sketch after this list).
- Proven ability to architect enterprise data lakes and implement governance frameworks.
- Experience with CI/CD, DevOps, Git-based workflows, and automated deployments.
- Strong ability to convert business requirements into scalable technical architectures.
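As an illustration of the ETL/ELT experience called for above, the following sketch shows an incremental upsert implemented as a Delta Lake MERGE, a common pattern in Databricks pipelines. The staging path, target table, and member_id key are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incremental batch of changed records (hypothetical staging location).
updates = spark.read.format("delta").load("/mnt/staging/members_updates")

# Upsert into the curated table: update matching rows, insert new ones.
target = DeltaTable.forName(spark, "curated.members")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.member_id = s.member_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

In Azure Data Factory, a step like this would typically be orchestrated as a Databricks notebook or job activity inside a scheduled pipeline.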
Technical Expertise
Programming: Python, SQL, R
Big Data: Hadoop, Spark, Kafka, Hive
Cloud: Azure (ADF, Databricks, Azure OpenAI), AWS
Data Warehousing: Redshift, SQL Server
ETL/ELT: SSIS
Educational Background
- Bachelor's degree in Computer Science, Information Technology, Information Systems, Engineering, or a related field.
- 6 years of experience in data engineering and development.
Additional Requirements
- No travel required.
- Security/background check required.
- Candidates must be local to Phoenix, AZ at the time of submission.
- Must be able to attend an in-person interview within 1 week of the posting close date.
- Must be able to start within 2 weeks of offer.