You will work closely with cross-functional teams to deliver high-quality solutions in domains such as Supply Chain, Finance, Operations, Customer Experience, HR, Risk Management, and Global IT.
Key Responsibilities:
- Contribute to the technical plan for the migration, including data ingestion, transformation, storage, and access control in Azure Data Factory and the data lake.
- Design and implement scalable, efficient data pipelines in Azure Databricks to ensure smooth data movement from multiple sources (see the first sketch after this list).
- Develop scalable and reusable frameworks for ingesting data sets.
- Ensure data quality and integrity throughout the entire pipeline by implementing robust data validation and cleansing mechanisms.
- Work with event-based/streaming technologies to ingest and process data (see the streaming sketch after this list).
- Support the team by resolving technical challenges or issues that arise during the migration and post-migration phases.
- Stay up to date with the latest advancements in cloud computing, data engineering, and analytics, and recommend best practices and industry standards for implementing the data lake solution.
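To illustrate the kind of ingestion and validation work described above, here is a minimal PySpark sketch of a batch pipeline that lands raw files, applies basic validation, and writes a curated Delta table. The storage paths, container names, column names, and validation rules are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal sketch: batch ingestion from a landing zone into a Delta table,
# with basic validation. All paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

# Read raw CSV files dropped into the (hypothetical) landing-zone container.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://landing@examplelake.dfs.core.windows.net/orders/")
)

# Validation: keep only rows with a non-null business key and a positive
# amount, then deduplicate on the key. Rejected rows could instead be
# routed to a quarantine table for review.
valid = (
    raw.filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
       .dropDuplicates(["order_id"])
)

# Land the cleansed data as Delta for downstream consumers.
valid.write.format("delta").mode("append").save(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
```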
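For the streaming responsibility, one common Databricks pattern is Structured Streaming with Auto Loader, which incrementally picks up new files as they arrive. The sketch below assumes hypothetical container and checkpoint paths; an Event Hubs or Kafka source would follow the same readStream/writeStream shape.

```python
# Minimal sketch: streaming ingestion with Databricks Auto Loader
# (Structured Streaming). All paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-events").getOrCreate()

# Incrementally pick up new JSON files as they land in the landing zone.
events = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/schemas/events/")
    .load("abfss://landing@examplelake.dfs.core.windows.net/events/")
)

# Continuously append to a Delta table; the checkpoint location makes the
# stream restartable with exactly-once file processing.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/checkpoints/events/")
    .outputMode("append")
    .start("abfss://curated@examplelake.dfs.core.windows.net/events/")
)
```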
Qualifications:
- 5 years of IT experience.
- Minimum 4 years of experience working with Azure Databricks.
- Experience in data modelling and source system analysis.
- Familiarity with PySpark.
- Mastery of SQL.
- Knowledge of Azure components: Azure Data Factory, Azure Data Lake, Azure SQL DW, and Azure SQL.
- Experience with the Python programming language for data engineering purposes.
- Ability to conduct data profiling, cataloging, and mapping for the technical design and construction of data flows.
- Experience with data visualization/exploration tools.
- Excellent communication skills, with the ability to effectively convey complex ideas to technical and non-technical stakeholders.
- Strong team player with excellent interpersonal and collaboration skills.
Remote Work: Yes
Employment Type: Full-time