JD Title : Azure Data Architect
Experience level : 8 Years to 14 Years
Requirements:
- Design and implement end-to-end data solutions on Microsoft Azure, including data lakes, data warehouses, and ETL/ELT processes.
- Develop scalable and efficient data architectures that support large-scale data processing and analytics workloads.
- Ensure high performance, security, and compliance within Azure data solutions.
- Know various architecture patterns (lakehouse, warehouse) and have experience implementing them.
- Evaluate and choose appropriate Azure services such as Azure SQL Database, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks (configuration, costing, etc.), Unity Catalog, and Azure Data Factory. Should have deep knowledge of and hands-on experience with these Azure data services.
- Knowledge of and experience with Microsoft Fabric is a plus.
- Work closely with business and technical teams to understand data needs and translate them into robust, scalable data architecture solutions.
- Experience with data governance data privacy and compliance requirements.
- Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- Provide expertise and leadership to the development team implementing data engineering solutions.
- Collaborate with Data Scientists, Analysts, and other stakeholders to ensure data architectures align with business goals and data analysis requirements.
- Optimize cloud-based data infrastructure for performance, cost-effectiveness, and scalability.
- Analyze data workloads and recommend optimizations for performance tuning, cost management, and reduced complexity.
- Monitor and address any issues related to performance and availability in cloud-based data solutions.
- Experience with programming languages (e.g., SQL, Python, Scala). Hands-on experience with MS SQL Server, Oracle, or a similar RDBMS platform.
- Experience with Azure DevOps CI/CD pipeline development.
- Hands-on experience working at a high level in architecture, data science, or a combination of the two.
- In-depth understanding of database structure principles.
- Distributed data processing of big data via batch or streaming pipelines.
- Familiarity with data visualization tools (e.g., Power BI, Tableau).
- Data modeling and strong analytics skills. The candidate must be able to take OLTP data structures and convert them into a star schema. Ideally, the candidate should have dbt experience along with data modeling experience.
- Problem-solving attitude; highly self-motivated, self-directed, and attentive to detail; able to prioritize and execute tasks effectively.
- Attitude and aptitude are highly important at Hitachi; we are a very collaborative group.
We would like to see a blend of the following skills. Not all are required; however, Databricks and Spark are highly desirable:
- Azure SQL Data Warehouse
- Azure Data Factory
- Azure Data Lake
- Azure Analysis Services
- Databricks/Spark
- Python or Scala (Python preferred)
- Data Modeling
- Power BI
- Database migration from legacy systems to new solutions
- Design conceptual, logical, and physical data models using tools like ER/Studio or Erwin
Additional Information :
Beware of scams
Our recruiting team may communicate with candidates via our @hitachisolutions domain email address and/or via our SmartRecruiters (Applicant Tracking System) domain email address regarding your application and interview requests.
All offers will originate from our @hitachisolutions domain email address. If you receive an offer or information from someone purporting to be an employee of Hitachi Solutions from any other domain it may not be legitimate.
Remote Work :
Yes
Employment Type :
Full-time