- Design and implement scalable, reliable, and secure data architectures.
- Develop best practices and frameworks for data integration, governance, and security.
- Define and drive the roadmap for enterprise data solutions aligned with business objectives.
- Architect, build, and optimize ETL/ELT pipelines using Azure Data Factory (ADF), Databricks, and PySpark.
- Work with SQL and big data technologies to ensure efficient data processing and storage.
- Design and manage data lakes, data warehouses, and cloud-based storage solutions.
- Implement cloud-based data solutions, primarily using Microsoft Azure (Azure Data Factory, Azure Synapse, Azure Databricks).
- Ensure cost-effective and high-performance cloud solutions for data processing and analytics.
- Lead and mentor a team of data engineers, ensuring adherence to best practices.
- Collaborate with stakeholders, including data analysts, business leaders, and software engineers, to define and implement data-driven solutions.
- Provide technical guidance and architectural direction for complex projects.
- Ensure high availability, scalability, and performance of data systems.
- Implement data security policies and ensure compliance with industry standards (GDPR, HIPAA, etc.).
- Optimize data storage and retrieval for maximum efficiency and cost savings.
Requirements
- Proven experience in designing and implementing large-scale data solutions.
- SQL: Advanced skills in query optimization and database management.
- PySpark: Strong experience in data transformation and processing.
- Databricks: Hands-on experience in big data frameworks and analytics.
- Azure Data Factory (ADF): Expertise in orchestrating ETL/ELT pipelines.
- Cloud Platforms: Strong experience with Azure (preferred), AWS, or GCP.
- Data Warehousing: Deep understanding of data modeling, data lakes, and data warehouse architectures.
- Solution Architecture: Ability to design end-to-end data solutions for enterprises.
- Strong leadership and team management skills.
- Excellent problem-solving and analytical thinking.
- Effective communication and stakeholder management.
- Ability to work in a fast-paced and dynamic environment.
Education
Bachelor's degree in Computer Science,