About the Role
We're looking for experienced data engineers with strong expertise in cloud-based data solutions.
In this role, you'll be responsible for building, optimizing, and maintaining scalable data pipelines and architectures that support analytics, machine learning, and data-driven decision-making across the organization.
This role sits within our Google Cloud team and is ideal for a Senior Data Engineer with experience on any of the major cloud providers who is looking to build their knowledge of the GCP stack.
Key Responsibilities
- Collaborate with data analysts and data scientists to design efficient data flows, pipelines, and reporting solutions.
- Work closely with business stakeholders to understand data usage and identify areas for improvement.
- Design and implement cloud-based architectures and deployment processes on GCP.
- Build and maintain data pipelines, transformations, and metadata to support business needs.
- Develop, test, and optimize big data solutions for scalability, performance, and reliability.
- Create and manage relational and dimensional data models aligned with platform capabilities.
- Monitor and maintain the production environment, ensuring data quality, integrity, and governance.
- Lead or contribute to initiatives improving data quality, governance, security, and compliance.
Requirements
- Proven experience as a Data Engineer across AWS, Azure, or GCP services (e.g., BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Storage).
- Strong understanding of data architecture, modeling, and ETL/ELT processes.
- Hands-on experience with big data frameworks and modern data tools.
- Excellent communication and collaboration skills across technical and business teams.
- Familiarity with machine learning, AI, or advanced analytics is a plus.
Qualifications:
- 5 years of experience in data engineering.
- Strong proficiency in Python and Apache Spark.
- Hands-on experience designing and implementing ETL/ELT processes and data pipelines.
- Solid expertise in SQL scripting and query optimization.
- Background in cloud data technologies and tools, with exposure to:
  - Data processing frameworks (Spark, Hadoop, Apache Beam, Dataproc, or similar)
  - Cloud-based data warehouses (Snowflake, Redshift, BigQuery, or similar)
  - Real-time streaming pipelines (Kafka, Kinesis, Pub/Sub, or similar)
  - Batch and serverless data processing
- Strong analytical skills, with the ability to work with both structured and unstructured data.
- Experience in leading IT projects and managing stakeholder expectations.
Additional Information:
Discover some of the global benefits that empower our people to become the best version of themselves:
- Finance: Competitive salary package, share plan, company performance bonuses, value-based recognition awards, referral bonus;
- Career Development: Career coaching, global career opportunities, non-linear career paths, internal development programmes for management and technical leadership;
- Learning Opportunities: Complex projects, rotations, internal tech communities, training, certifications, coaching, online learning platform subscriptions, pass-it-on sessions, workshops, conferences;
- Work-Life Balance: Hybrid work and flexible working hours, employee assistance programme;
- Health: Global internal wellbeing programme, access to wellbeing apps;
- Community: Global internal tech communities, hobby clubs and interest groups, inclusion and diversity programmes, events and celebrations.
At Endava, we're committed to creating an open, inclusive, and respectful environment where everyone feels safe, valued, and empowered to be their best. We welcome applications from people of all backgrounds, experiences, and perspectives, because we know that inclusive teams help us deliver smarter, more innovative solutions for our customers. Hiring decisions are based on merit, skills, qualifications, and potential. If you need adjustments or support during the recruitment process, please let us know.
Remote Work:
No
Employment Type:
Full-time