Our data team has expertise across engineering, analysis, architecture, modeling, machine learning, artificial intelligence, and data science. This discipline is responsible for transforming raw data into actionable insights, building robust data infrastructure, and enabling data-driven decision-making and innovation through advanced analytics and predictive modeling.
As a Data Engineer at Endava, you will play a critical role in designing, building, and maintaining robust, scalable, and high-performance data pipelines and infrastructure. You will collaborate with data scientists, analysts, software engineers, and product managers to ensure seamless data flow across the organization and to support data-driven decision-making at all levels.
This role requires a solid foundation in software engineering, a deep understanding of modern data architecture, and a passion for creating clean, reusable, and efficient systems.
Responsibilities:
Build and maintain ETL/ELT pipelines: Design, develop, and optimize batch and real-time data pipelines using tools like Apache Airflow, DBT, Kafka, or cloud-native solutions.
Data modeling and warehousing: Design efficient data models and implement scalable data warehouses or lakes using Snowflake, BigQuery, Redshift, or similar.
Collaboration: Work closely with Data Science, Analytics, DevOps, and Software Engineering teams to integrate data systems and support downstream use cases.
Infrastructure as code: Implement and maintain infrastructure using Terraform, Pulumi, or similar tools to ensure reproducibility and scalability.
Monitoring and quality assurance: Set up monitoring, alerting, and data quality validation mechanisms to ensure data integrity and availability.
Documentation: Write and maintain documentation for data flows, schemas, and design decisions to facilitate transparency and maintainability.
Qualifications:
5 years of experience as a Data Engineer or in a similar role (e.g., Backend Engineer with a strong data focus).
Strong proficiency in Python, SQL, and at least one data pipeline orchestration framework (Airflow, Prefect, etc.).
Experience with cloud platforms (GCP, AWS, or Azure) and associated data services (e.g., BigQuery, S3, Redshift, GCS).
Familiarity with containerization and deployment tools (Docker, Kubernetes, CI/CD pipelines).
Understanding of data modeling, data warehousing concepts (Kimball, Inmon), and performance optimization techniques.
Comfortable working in agile environments and familiar with software development best practices (code reviews, testing, version control).
Experience with batch processing.
Additional Information:
Discover some of the global benefits that empower our people to become the best version of themselves.
At Endava, we're committed to creating an open, inclusive, and respectful environment where everyone feels safe, valued, and empowered to be their best. We welcome applications from people of all backgrounds, experiences, and perspectives, because we know that inclusive teams help us deliver smarter, more innovative solutions for our customers. Hiring decisions are based on merit, skills, qualifications, and potential. If you need adjustments or support during the recruitment process, please let us know.
Remote Work:
No
Employment Type:
Full-time