Senior Data Engineer

Pearson


Job Location:

Chennai - India

Monthly Salary: Not Disclosed
Posted on: 4 hours ago
Vacancies: 1 Vacancy

Department:

Engineering

Job Summary


Summary:
At Pearson we add life to a lifetime of learning so everyone can realise the life they imagine. We do this by creating vibrant and enriching learning experiences designed for real-life impact. Pearson was founded in 1844 and has been built on our ability to grow with and adapt to a constantly evolving market. Our employees are dedicated to creating high-quality, digital-first, accessible and sustainable resources for lifelong learning.

About the job:

The Lead Data Engineer operates at IC25 level and is responsible for the design, build and ownership of the data transformation (Source → Curated) layer within the Integration Hub.

This role sits within a data integration function supporting an Azure-based Technical Hub, with data pipelines spanning ingestion, transformation and distribution layers, alongside event-driven integration with downstream platforms.

Working closely with Architecture, Integration Engineers and platform teams, the Lead Data Engineer will lead the development of scalable and resilient data transformation pipelines, define engineering standards and ensure consistent delivery across the curated layer.

About You:

You are an experienced data engineer with a strong background in building and optimising large-scale data transformation pipelines in cloud environments.

You are comfortable working hands-on with Azure data technologies and leading technical design decisions while also supporting and mentoring other engineers.

You have a strong understanding of data modelling, performance optimisation and pipeline design, and can work effectively with cross-functional teams to deliver robust and scalable data solutions.

You bring a proactive, ownership-driven mindset and are comfortable operating in a fast-moving delivery environment.

Key Responsibilities

  • Own the design and delivery of the data transformation (Source → Curated) layer within the Integration Hub
  • Transform legacy data into canonical and microservice-ready formats
  • Design, develop and maintain scalable ETL/ELT pipelines using Azure-based data platforms and related tooling
  • Define and implement engineering standards, reusable components and best practices across the transformation layer
  • Design and build reusable pipeline components, frameworks and patterns
  • Oversee the work of Data Engineers, including code reviews, pair programming and technical guidance
  • Mentor and support engineers to improve capability and delivery quality
  • Ensure data integrity, consistency and reliability across transformation processes
  • Drive performance optimisation through query tuning, indexing strategies and partitioning
  • Lead design decisions across data modelling, pipeline automation and testing approaches
  • Implement monitoring, alerting and operational processes for data pipelines
  • Work with Change Data Capture (CDC) technologies to support integration with legacy systems
  • Collaborate with stakeholders to ensure solutions meet functional and non-functional requirements
  • Support CI/CD practices and version control for pipelines and database changes
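As an illustration of the "Source → Curated" responsibility above — mapping legacy records into a canonical, microservice-ready shape — here is a minimal Python sketch. All field and type names (`CanonicalLearner`, `LRN_ID`, etc.) are hypothetical; the posting does not describe the actual canonical model.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical canonical model; field names are illustrative only.
@dataclass(frozen=True)
class CanonicalLearner:
    learner_id: str
    full_name: str
    enrolled_on: date

def to_canonical(legacy_row: dict) -> CanonicalLearner:
    """Map one legacy source record into the canonical shape,
    normalising whitespace and parsing dates along the way."""
    return CanonicalLearner(
        learner_id=str(legacy_row["LRN_ID"]).strip(),
        full_name=f'{legacy_row["FIRST_NM"].strip()} {legacy_row["LAST_NM"].strip()}',
        enrolled_on=date.fromisoformat(legacy_row["ENROL_DT"]),
    )

# Example legacy row with the kinds of quirks a curated layer cleans up.
row = {"LRN_ID": " 1001 ", "FIRST_NM": "Ada ", "LAST_NM": "Lovelace", "ENROL_DT": "2024-09-01"}
print(to_canonical(row))
```

In practice this mapping would run inside an orchestrated pipeline (e.g. Azure Data Factory calling a Databricks job) rather than plain Python, but the transformation contract is the same.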

Key Skills & Experience

  • Strong hands-on experience with cloud-based data platforms (Azure preferred), including data pipeline orchestration and transformation
  • Proven experience delivering large-scale data transformation solutions
  • Advanced SQL skills, including complex queries, stored procedures and optimisation
  • Experience working with relational databases such as Azure SQL, PostgreSQL, MySQL or similar
  • Strong understanding of ETL/ELT patterns and pipeline design
  • Experience with performance tuning, indexing strategies and partitioning
  • Experience working with large data volumes and high-throughput systems
  • Familiarity with Change Data Capture (CDC) and integration of legacy systems
  • Experience with version control systems such as Git
  • Strong problem-solving and analytical skills
  • Ability to work effectively with cross-functional teams and communicate technical concepts clearly
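The CDC familiarity asked for above can be illustrated with a minimal high-watermark sketch — the simplest incremental-extract pattern, not true log-based CDC, but it shows the idea of pulling only rows changed since the last run. The `SOURCE` data and column names are hypothetical.

```python
from datetime import datetime

# Hypothetical source rows; in a real pipeline these would come from a
# legacy table with a last-modified column.
SOURCE = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, 10, 0)},
    {"id": 2, "updated_at": datetime(2024, 1, 2, 9, 30)},
    {"id": 3, "updated_at": datetime(2024, 1, 3, 8, 15)},
]

def extract_changes(watermark: datetime) -> tuple[list[dict], datetime]:
    """Return rows changed since the watermark, plus the new watermark
    to persist for the next incremental run."""
    changed = [r for r in SOURCE if r["updated_at"] > watermark]
    new_wm = max((r["updated_at"] for r in changed), default=watermark)
    return changed, new_wm

changed, wm = extract_changes(datetime(2024, 1, 1, 12, 0))
print(len(changed), wm)  # rows 2 and 3 qualify
```

Log-based CDC tools (e.g. SQL Server CDC or Debezium) replace the watermark query with a change feed, but the downstream transformation logic consumes changes the same way.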

Technology Stack (Azure-based)

Data Transformation & Engineering

  • Azure Data Factory (ADF) or similar data orchestration tools
  • Azure SQL Database or equivalent relational databases
  • Databricks or similar distributed data processing platforms

Data Modelling & Processing

  • SQL (T-SQL, PostgreSQL, MySQL or equivalent)
  • ETL / ELT pipelines
  • Canonical data modelling

DevOps & Tooling

  • GitHub
  • Visual Studio Code
  • Azure DevOps (CI/CD pipelines)
  • Jira
  • Basic CI/CD awareness

Desirable Skills & Experience

  • Experience with Azure Data Factory or similar orchestration tools, with an expectation to build capability in Azure-native data platforms
  • Hands-on experience with Databricks (Python or Scala) or DBT
  • Understanding of event-driven and microservices-based architectures
  • Experience working with Change Data Capture tools
  • Exposure to data integration or platform-based environments
  • Experience working in Agile delivery environments
  • Azure certification is beneficial



Required Experience:

Senior IC


About Company


Pearson is an Equal Opportunity Employer and a member of E-Verify. Employment decisions are based on qualifications, merit and business need. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, sexual orientation, gen ...
