Data Engineering & Architecture
Design, develop, and maintain scalable, high-performance data pipelines
Work extensively with Azure Data Factory and Microsoft Fabric
Build robust ETL/ELT frameworks using Python (see the sketch after this list)
Design and optimize Lakehouse / Data Warehouse architectures
Handle large-scale datasets efficiently (high volume and throughput)
Write and optimize complex SQL queries for performance and reliability
Integrate data from multiple sources, including APIs, transactional systems, and external platforms
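To make the Python pipeline work concrete, below is a minimal, illustrative ETL sketch. The API endpoint, output path, and library choices (requests, pandas) are placeholder assumptions for illustration, not a prescribed stack; in this role, steps like these would typically run inside an Azure Data Factory or Fabric pipeline.

```python
# A minimal ETL sketch in plain Python. The endpoint, output path, and
# library choices are illustrative placeholders, not a prescribed stack.
import pandas as pd
import requests

API_URL = "https://example.com/api/orders"   # hypothetical source endpoint
OUTPUT_PATH = "orders_clean.parquet"         # hypothetical landing file


def extract(url: str) -> pd.DataFrame:
    """Pull raw records from a REST API into a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # fail fast on HTTP errors
    return pd.DataFrame(response.json())


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Basic cleaning: drop exact duplicates, normalize column names."""
    clean = raw.drop_duplicates()
    clean.columns = [c.strip().lower() for c in clean.columns]
    return clean


def load(df: pd.DataFrame, path: str) -> None:
    """Write the cleaned data as Parquet, a common Lakehouse file format."""
    df.to_parquet(path, index=False)


if __name__ == "__main__":
    load(transform(extract(API_URL)), OUTPUT_PATH)
```

Keeping extract, transform, and load as separate functions is what makes a framework like this testable and maintainable at scale, since each stage can be validated and reused independently.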
Leadership & Delivery (Hands-on)
Lead and mentor a team of data engineers while remaining actively involved in coding and solution design
Perform hands-on development for critical pipelines, complex transformations, and performance optimization
Conduct code reviews and enforce best practices, design patterns, and coding standards
Act as the technical owner for data engineering deliverables
Quality, Performance & Reliability
Implement data quality checks, validations, and monitoring (see the sketch after this list)
Optimize pipelines for performance, scalability, and cost
Ensure reliability, fault tolerance, and error handling in production systems
Follow data security, access control, and compliance best practices
Lead troubleshooting, root-cause analysis, and production issue resolution
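As one illustration of the quality and reliability practices above, here is a hedged Python sketch of a data-quality gate with simple retry logic. The specific checks, thresholds, and function names are invented for this example, not a standard set by this posting.

```python
# An illustrative data-quality gate with simple retry logic. The checks,
# thresholds, and names below are invented examples, not a fixed standard.
import time

import pandas as pd


def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures (empty = pass)."""
    failures = []
    if df.empty:
        failures.append("dataset is empty")
    elif df.isna().mean().max() > 0.05:  # example threshold: >5% nulls in a column
        failures.append("null ratio exceeds 5% in at least one column")
    if df.duplicated().any():
        failures.append("duplicate rows detected")
    return failures


def run_with_retries(task, max_attempts: int = 3, backoff_seconds: float = 2.0):
    """Retry a flaky task with exponential backoff before giving up."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:  # production code would catch narrower exceptions
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds * 2 ** (attempt - 1))
```

In production, failures returned by validate() would normally block the load step and feed a monitoring or alerting channel rather than being handled ad hoc.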
Collaboration & Continuous Improvement
Work closely with BI, analytics, product, and business teams
Translate business requirements into scalable technical solutions
Stay up to date with modern data engineering tools, technologies, and techniques
Proactively suggest architectural and process improvements
Requirements
3-6 years of experience in the data engineering field
Strong hands-on experience in Python for data engineering, including building and maintaining production-grade, large-scale data pipelines
Advanced experience with Azure Data Factory and Azure-based data platforms for orchestration, integration, and scalable data processing
Working experience with Microsoft Fabric, including Lakehouse and data engineering workloads, along with a strong understanding of ETL/ELT and data warehousing concepts
Expert-level SQL skills covering complex query development, optimization, indexing, and partitioning for high-performance systems
Proven experience handling large-volume, high-throughput data in distributed processing environments
Experience with analytics and visualization platforms such as Power BI
Knowledge of Delta Lake, Spark, and distributed data processing frameworks
Experience implementing CI/CD practices for data pipelines and data engineering workflows
Exposure to data governance, lineage, metadata management, and compliance-driven environments such as fintech or high-transaction systems
Hands-on leadership mindset with strong ownership and accountability for outcomes
Ability to mentor, guide, and grow junior engineers while leading by example
Clear and effective communication with technical and non-technical stakeholders
Strong problem-solving, analytical reasoning, and decision-making skills
Benefits
Working hours: 10:00 AM to 7:00 PM
Working days: 5 days a week (1st & 3rd Saturdays are working days)
Medical Insurance coverage for employees
Provident Fund (PF) facility
Quarterly parties and yearly outings/trips for team bonding
Regular check-ins with leadership for growth and feedback
Recognition awards to celebrate high performance
Fun activities and team engagement sessions throughout the year
Required Skills:
Python | Microsoft Fabric | ETL | Power BI | Azure | SQL | Leadership
Required Education:
Bachelor's degree in Computer Science, Information Technology, Engineering, Data Science, or a related technical field
Strong practical background in data engineering