Requirements:
- Knowledgeable and experienced with Microsoft Fabric.
- Design and implement end-to-end data solutions on Microsoft Azure, including data lakes, data warehouses, and ETL/ELT processes.
- Develop scalable and efficient data architectures that support large-scale data processing and analytics workloads.
- Ensure high performance, security, and compliance within Azure data solutions.
- Know the main architecture patterns (lakehouse, warehouse) and have experience implementing them.
- Evaluate and choose appropriate Azure services such as Azure SQL Database, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks (including configuration and costing), Unity Catalog, and Azure Data Factory. Deep knowledge of and hands-on experience with these Azure data services is required.
- Work closely with business and technical teams to understand and translate data needs into robust, scalable data architecture solutions.
- Experience with data governance, data privacy, and compliance requirements.
- Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- Provide expertise and leadership to the development team implementing data engineering solutions.
- Collaborate with data scientists, analysts, and other stakeholders to ensure data architectures align with business goals and data analysis requirements.
- Optimize cloud-based data infrastructure for performance, cost-effectiveness, and scalability.
- Analyze data workloads and recommend optimizations for performance tuning, cost management, and complexity reduction.
- Monitor and address any issues related to performance and availability in cloud-based data solutions.
- Experience in programming languages (e.g., SQL, Python, Scala). Hands-on experience using MS SQL Server, Oracle, or a similar RDBMS platform.
- Experience in Azure DevOps CI/CD pipeline development.
- Hands-on experience working at a high level in architecture, data science, or a combination of the two.
- In-depth understanding of database structure principles.
- Distributed data processing of big data in batch or streaming pipelines.
- Familiarity with data visualization tools (e.g., Power BI, Tableau).
- Data modeling and strong analytics skills. The candidate must be able to take OLTP data structures and convert them into a star schema; a minimal example is sketched after this list. Ideally, the candidate should have dbt experience along with data modeling experience.
- Problem-solving attitude; highly self-motivated, self-directed, and attentive to detail; able to prioritize and execute tasks effectively.
- Attitude and aptitude are highly important at Hitachi; we are a very collaborative group.
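
To illustrate the OLTP-to-star-schema modeling called out above, here is a minimal PySpark sketch. The table and column names (oltp.orders, oltp.customers, dw.dim_customer, dw.fact_orders) are hypothetical placeholders for illustration, not part of any specific client environment:

```python
# Minimal sketch: reshape normalized OLTP tables into a star schema.
# All table/column names below are assumptions for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

orders = spark.table("oltp.orders")        # normalized transactional table
customers = spark.table("oltp.customers")  # normalized reference table

# Dimension: one row per customer, with a surrogate key.
dim_customer = (
    customers
    .select("customer_id", "name", "city", "country")
    .withColumn("customer_key", F.monotonically_increasing_id())
)

# Fact: one row per order, keyed to the dimension's surrogate key,
# with the measure (sales_amount) computed from OLTP columns.
fact_orders = (
    orders
    .join(dim_customer, "customer_id")
    .select(
        "order_id",
        "customer_key",
        F.to_date("order_ts").alias("order_date"),
        (F.col("quantity") * F.col("unit_price")).alias("sales_amount"),
    )
)

dim_customer.write.mode("overwrite").saveAsTable("dw.dim_customer")
fact_orders.write.mode("overwrite").saveAsTable("dw.fact_orders")
```

In a dbt project, each of these models would typically live as its own SQL model with tests on the keys; the PySpark version above is just one way to express the same dimensional reshaping.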
We would like to see a blend of the following skills. Not all of these are required; however, Databricks and Spark are highly desirable:
- Azure SQL Data Warehouse
- Azure Data Factory
- Azure Data Lake
- Azure Analysis Services
- Databricks/Spark (a minimal streaming sketch follows this list)
- Python or Scala (Python preferred)
- Data Modeling
- Power BI
- Database migration from legacy systems to new solutions
- Design conceptual, logical, and physical data models using tools such as ER/Studio or Erwin
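
Since Databricks/Spark is highly desirable, the following is a minimal sketch of one common pattern there: a Structured Streaming ingest writing to a Delta table. The paths and schema are assumptions for illustration only:

```python
# Minimal sketch: stream landing-zone JSON files into a Delta table
# on Databricks. Paths and schema below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read newly arriving JSON files as a stream.
stream = spark.readStream.schema(schema).json("/mnt/landing/telemetry/")

# Append to a Delta table, with a checkpoint for exactly-once progress.
query = (
    stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry/")
    .outputMode("append")
    .start("/mnt/bronze/telemetry/")
)
```

The same pipeline can run in batch mode by swapping readStream/writeStream for read/write, which is why batch and streaming experience are listed together above.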
Qualifications:
Experience: 8-12 years
Additional Information:
Beware of scams
Our recruiting team may communicate with candidates via our @ domain email address and/or via our SmartRecruiters (Applicant Tracking System) domain email address regarding your application and interview requests.
All offers will originate from our @ domain email address. If you receive an offer or information from someone purporting to be an employee of Hitachi Solutions from any other domain, it may not be legitimate.
Remote Work:
No
Employment Type:
Full-time