Job Title: Solutions Architect - Data
Location: Phoenix, AZ
Work Type: Onsite
Duration: 6-Month Contract
LinkedIn Profile: Required for Submission
NOTE: NO H1B // Local Candidates with DL
Primary Skills: Solutions Architecture; Cloud Data Architecture; Azure; AWS; Databricks; ETL/ELT Pipeline Design; Data Lake Architecture; Python; SQL; Azure Data Factory (ADF); Data Governance & Data Quality; Big Data Technologies (Spark, Hadoop, Kafka); AI Architecture (RAG, Prompt Engineering); CI/CD; DevOps; Git; Cloud Migration; Stakeholder & Cross-Team Collaboration
Job Overview
The Solutions Architect - Data is responsible for contributing to the design, modernization, optimization, and ongoing operations of enterprise-scale data systems for CHP. This role focuses on designing and implementing data solutions that organize, store, and manage data within a cloud-based data platform.
The architect will perform continuous maintenance and operational support within the cloud environment, including reviewing existing data infrastructure, planning future database solutions, and implementing systems that support the data management needs of CHP users.
This role is also accountable for data integrity and governance, ensuring adherence to standards that maintain accuracy, consistency, and reliability across systems. The architect will identify data quality issues, analyze discrepancies, and drive resolution efforts.
The position requires a strong balance of architectural leadership, technical expertise, and collaboration with business stakeholders, data engineers, machine learning practitioners, and domain experts to deliver scalable, secure, and reliable AI-driven solutions.
The ideal candidate will have demonstrated experience delivering end-to-end ETL/ELT pipelines across Databricks, Azure, and AWS environments.
Key Responsibilities
Design scalable data lake and enterprise data architectures using Databricks and cloud-native services
Develop metadata-driven, parameterized ingestion frameworks and multi-layer data architectures
Optimize data workloads and system performance
Define and enforce data governance frameworks for CHP
Design and develop reliable and scalable data pipelines
Architect AI systems including RAG workflows and prompt engineering solutions
Lead cloud migration initiatives from legacy systems to modern data platforms
Provide architectural guidance, technical leadership, and best practices across teams
Create documentation, reusable components, and standardized architectural patterns
Required Skills and Experience
Strong expertise with cloud platforms, primarily Azure or AWS
Hands-on experience with Databricks
Strong proficiency in Python and SQL
Expertise in building ETL/ELT pipelines and ADF workflows
Experience designing data lakes and implementing data governance frameworks
Hands-on experience with CI/CD, DevOps, and Git-based development
Ability to translate business requirements into technical and architectural solutions
Technical Expertise
Programming: Python, SQL, R
Big Data: Hadoop, Spark, Kafka, Hive
Cloud Platforms: Azure (ADF, Databricks, Azure OpenAI), AWS
Data Warehousing: Redshift, SQL Server
ETL/ELT Tools: SSIS
Education & Experience
Bachelor's degree in Computer Science, Information Technology, Information Systems, Engineering, or a related field
6 years of experience in data engineering and development
Warm regards,
Vishal (Victor) Verma, Assistant Manager
NS IT Solutions
Required Skills:
.NET, Pipeline Design, Data Infrastructure, Data Systems, Spark, Azure Data Factory, CI/CD, SQL, Cloud Environment, Cloud Migration, Hadoop, Machine Learning, AI, Data Solutions, Python, Data Architecture, Git, AWS, Data Governance, Azure, DevOps, Data Integrity, Big Data Technologies, Data Quality, Designing, Data Management