- Lead and manage the global IT data engineering team, which develops all technical artefacts as code in professional IDEs with full version control and CI/CD automation. The team combines lakehouse modeling of common and business-use-case artefacts and semantics with generalist data integration and metadata services.
- Ensure high-quality delivery of data engineering assets that enable business analytics, AI/ML integration and data governance at scale.
- Act as the delivery and people manager for the data engineering team co-located in Bengaluru, collaborating globally with platform, business and other IT stakeholders.
- Drive consistency, engineering excellence and cross-domain reuse across the entire data engineering lifecycle, from data acquisition to semantic layer delivery, while applying rigorous software engineering practices such as modular design, test-driven development and artifact reuse in all implementations.
- Direct management of approx. 10-15 data engineers (generalists and specialists). Reports to the global Head of Data & Analytics within the IT Competence Center.
- The team delivers data engineering & analytics assets to all business domains via the Product Owner for Data & Analytics.
- Collaborates globally with Product Owners, Lead Architects & Lead Engineers, Data Governance, Infrastructure & Cybersecurity, and domain-aligned functional IT teams.
Main Tasks
- Line management for a high-performing cross-functional data engineering team.
- Drive skill development, mentorship and performance management.
- Foster a culture of accountability and trust.
- Own timely delivery of data & analytics assets from data acquisition to semantic layers.
- Align work with business priorities and architectural standards.
- Ensure quality gates and documentation.
- Act as primary escalation and coordination point across business domains.
- Bridge infrastructure, functional IT, cybersecurity and platform decisions.
- Advocate for the team in global forums.
- Guide adoption of engineering best practices (TDD, CI/CD, IaC) and the building of all technical artefacts as code, including scalable batch and streaming pipelines in Azure Databricks using PySpark and/or Scala.
- Lead the design and operation of scalable batch/stream pipelines in Databricks, including ingestion from structured and semi-structured sources and implementation of bronze/silver/gold layers under lakehouse governance.
- Oversee dimensional modeling and curated data marts for analytics use cases, ensuring semantic layer compatibility and collaborating on enterprise 3NF warehouse integration.
- Ensure high-quality engineering practices across data validation, CI/CD-enabled TDD, performance tuning, metadata governance and stakeholder collaboration via agile methods.
- Build an inclusive high-performance team culture in Bengaluru.
- Champion DevSecOps, reuse, automation and reliability. Commit all artifacts to version control with peer review and CI/CD integration.
- Ensure documentation, knowledge sharing and continuous improvement.
- Lead the design and operation of scalable, secure ingestion services, including CDC, delta and full-load patterns and SAP extractions via tools like Theobald Extract Universal.
- Oversee integration with APIs, legacy systems, Salesforce and file-based sources, aligning all interfaces with cybersecurity standards and compliance protocols.
- Drive the development of the enterprise data catalog application, supporting dataset discoverability, metadata quality and Unity Catalog-aligned access workflows.
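The bronze/silver/gold (medallion) refinement named in the tasks above can be sketched as follows. This is a minimal illustration in plain Python rather than PySpark, so it runs without a Spark cluster; all field names (order_id, region, amount, updated_at) are hypothetical, and in Databricks each step would be a DataFrame transformation writing to a Delta table.

```python
# Illustrative sketch of bronze -> silver -> gold (medallion) layering using
# plain Python records. Field names are assumptions for illustration only.

from collections import defaultdict

def to_silver(bronze_rows):
    """Clean and deduplicate raw (bronze) rows: drop rows without a key,
    normalise casing, and keep only the latest version per order_id."""
    latest = {}
    for row in bronze_rows:
        if not row.get("order_id"):
            continue  # reject malformed records at the bronze/silver boundary
        row = {**row, "region": row["region"].strip().upper()}
        prev = latest.get(row["order_id"])
        if prev is None or row["updated_at"] > prev["updated_at"]:
            latest[row["order_id"]] = row
    return list(latest.values())

def to_gold(silver_rows):
    """Aggregate curated (silver) rows into an analytics-ready (gold) mart:
    here, total revenue per region."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

bronze = [
    {"order_id": "A1", "region": " emea ", "amount": 100.0, "updated_at": 1},
    {"order_id": "A1", "region": "EMEA", "amount": 120.0, "updated_at": 2},  # newer version wins
    {"order_id": "B2", "region": "apac", "amount": 80.0, "updated_at": 1},
    {"order_id": None, "region": "emea", "amount": 999.0, "updated_at": 1},  # malformed, dropped
]
silver = to_silver(bronze)
gold = to_gold(silver)  # {"EMEA": 120.0, "APAC": 80.0}
```

The design point the pattern makes: raw data is kept untouched in bronze, quality rules are applied once at the silver boundary, and gold holds only consumption-ready aggregates.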
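The CI/CD-enabled, test-driven data validation mentioned above can look like the sketch below: the quality rule is a plain function, so the same code runs as a unit test in the pipeline's CI stage and as a runtime quality gate. Column names and rules are hypothetical.

```python
# Hypothetical sketch of a test-driven data-quality check. The rule is a pure
# function, unit-testable in CI/CD and reusable as a pipeline quality gate.

def validate_batch(rows, required=("customer_id", "amount")):
    """Split a batch into (valid_rows, errors); each error records the row
    index and the rule it violated."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [c for c in required if row.get(c) is None]
        if missing:
            errors.append((i, f"missing: {missing}"))
        elif row["amount"] < 0:
            errors.append((i, "negative amount"))
        else:
            valid.append(row)
    return valid, errors

# TDD-style checks, runnable under pytest or as plain assertions:
valid, errors = validate_batch([
    {"customer_id": "C1", "amount": 10.0},
    {"customer_id": None, "amount": 5.0},
    {"customer_id": "C3", "amount": -1.0},
])
assert len(valid) == 1 and valid[0]["customer_id"] == "C1"
assert errors == [(1, "missing: ['customer_id']"), (2, "negative amount")]
```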
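The CDC ingestion pattern named above reduces, at its core, to replaying ordered change events against a target table. A minimal sketch, assuming a hypothetical event shape (op codes 'I'/'U'/'D' keyed by 'id'); in a Databricks lakehouse the equivalent operation would typically be a Delta Lake MERGE.

```python
# Minimal sketch of applying CDC (change data capture) events to a target
# table. Event shape and field names are assumptions for illustration.

def apply_cdc(target, events):
    """Apply ordered CDC events (op: 'I' insert, 'U' update, 'D' delete),
    keyed by 'id', to a dict-based target table."""
    for ev in events:
        key = ev["id"]
        if ev["op"] == "D":
            target.pop(key, None)  # tolerate deletes for unseen keys
        else:                      # 'I' and 'U' both behave as an upsert
            target[key] = ev["data"]
    return target

state = apply_cdc({}, [
    {"op": "I", "id": 1, "data": {"name": "alpha"}},
    {"op": "U", "id": 1, "data": {"name": "alpha-v2"}},
    {"op": "I", "id": 2, "data": {"name": "beta"}},
    {"op": "D", "id": 2, "data": None},
])  # -> {1: {"name": "alpha-v2"}}
```

Treating inserts and updates uniformly as upserts makes the replay idempotent under at-least-once delivery, which is why the pattern pairs naturally with delta and full-load reconciliation.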
Qualifications:
Degree in Computer Science, Data Engineering, Information Systems or a related discipline.
Certifications in software development and data engineering (e.g. Databricks Data Engineer Associate, Azure Data Engineer or relevant DevOps certifications).
Minimum 8 years in enterprise data engineering, including data ingestion and pipeline design. Experience across structured and semi-structured source systems is required. Demonstrated experience building production-grade codebases in IDEs with test coverage and version control.
Hands-on experience with secure SAP/API ingestion, lakehouse development in Databricks and metadata-driven data platforms. Delivered high-impact enterprise data products in cross-functional environments.
At least 3 years of team leadership or technical lead experience, including hiring, mentoring and representing team interests in enterprise-wide planning forums.
Demonstrated success leading globally distributed teams and collaborating with stakeholders across multiple time zones and cultures.
Additional Information:
The well-being of our employees is important to us. That's why we offer exciting career prospects and support you in achieving a good work-life balance with additional benefits such as:
and much more...
Sounds interesting? Click here to find out more.
Diversity, Inclusion & Belonging are important to us and make our company strong and successful. We offer equal opportunities to everyone - regardless of age, gender, nationality, cultural background, disability, religion, ideology or sexual orientation.
Ready to drive with Continental? Take the first step and fill in the online application.
Remote Work: No
Employment Type: Full-time
Continental develops pioneering technologies and services for sustainable and connected mobility of people and their goods. Founded in 1871, the technology company offers safe, efficient, intelligent and affordable solutions for vehicles, machines, traffic and transportation.