
AWS Lead Data Engineer

Job Location

Porto - Portugal

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Job Title: AWS Lead Data Engineer

Location: Porto, Portugal

Work Regime: Full-time & Hybrid (3x per week at the office)


Overview / Summary:

We are looking for a Senior Data Engineer to act as a Tech Lead (with at least 2-3 years of experience in similar roles and 4-5 years in data engineering technical roles), who will be responsible for:
  • Leading the design, implementation, and maintenance of scalable data solutions on the AWS or Azure cloud platforms, aligned with the defined data architecture;
  • Collaborating closely with cross-functional teams to develop and optimise data pipelines, ETL processes, and orchestration workflows, using tools such as Apache Airflow and AWS Step Functions;
  • Translating business requests into technical tasks and distributing them effectively among team members;
  • Providing technical leadership, mentorship, and guidance to the engineering team, ensuring best practices in production awareness and troubleshooting are upheld.


Responsibilities and Tasks:
  • Utilise expert knowledge of AWS services such as Lambda, Glue, and Step Functions to design, implement, and maintain scalable data solutions;
  • Develop robust solution architectures, considering scalability, performance, security, and cost optimisation;
  • Demonstrate proficiency in cloud networking, including VPCs, subnets, security groups, and routing tables;
  • Design efficient data models for optimal query performance;
  • Write and optimise SQL queries and identify performance bottlenecks;
  • Manage ETL processes and data integration into Redshift, MySQL, and PostgreSQL;
  • Create documentation and provide training to team members;
  • Set up and manage logging and tracing mechanisms in AWS using services such as AWS CloudTrail and AWS X-Ray;
  • Implement orchestration solutions using Apache Airflow and AWS Step Functions (see the sketch after this list);
  • Utilise Athena for interactive query analysis of large datasets in Amazon S3;
  • Provide technical leadership and guidance, acting as a subject matter expert in AWS and data engineering technologies;
  • Write comprehensive solution documents and technical documentation;
  • Stay updated on emerging technologies and industry trends;
  • Challenge business requirements and propose innovative solutions for efficiency and performance improvement.
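
For illustration, a minimal Airflow DAG along these lines might trigger an existing Glue ETL job and then validate its output with an Athena query. The DAG, job, database, and bucket names below are hypothetical, and the exact operator imports and arguments depend on the Airflow and Amazon provider versions in use:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.amazon.aws.operators.athena import AthenaOperator
    from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

    # All names here are placeholders for illustration only.
    with DAG(
        dag_id="daily_sales_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Run a Glue ETL job that is assumed to already exist.
        transform = GlueJobOperator(
            task_id="run_glue_job",
            job_name="sales_curation_job",
        )

        # Check the curated output with an interactive Athena query.
        validate = AthenaOperator(
            task_id="validate_output",
            query="SELECT COUNT(*) FROM curated_sales WHERE ds = '{{ ds }}'",
            database="analytics",
            output_location="s3://example-athena-results/",
        )

        transform >> validate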


Requirements

Mandatory Requirements:
  • Production Awareness and Troubleshooting: Proactive approach to production monitoring and troubleshooting, with the ability to anticipate and mitigate potential issues;
  • Technical Leadership and Communication: Capability to evolve into a technical lead role, with excellent communication and teamwork skills for effective collaboration with cross-functional teams;
  • Strong Analytical and Problem-Solving Skills: Ability to analyse requirements, define technical approaches, and propose innovative solutions to complex problems;
  • ETL Tools and Big Data Experience: Knowledge of ETL tools and experience working with large volumes of data, with a preference for experience with Kafka;
  • Expertise in AWS: Extensive experience with AWS services, including Lambda, Glue, Step Functions, CloudFormation, and CloudWatch;
  • Strong Solution Architecture Knowledge: Ability to design scalable and efficient data solutions on AWS, adhering to best practices for cloud architecture and infrastructure;
  • Proficiency in Python and Databases: Strong programming skills in Python and experience with relational databases (MySQL, PostgreSQL, Redshift) (see the sketch after this list);
  • Orchestration and Workflow Management: Experience with orchestration tools such as Apache Airflow and AWS Step Functions for automating and managing data workflows;
  • Fluent in written and spoken English;
  • Degree in Computer Engineering or similar.
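
As a small illustration of the Python-and-databases skill set, the sketch below connects to a PostgreSQL database and inspects a query plan with EXPLAIN ANALYZE, a common first step when hunting performance bottlenecks. The connection details and the orders table are hypothetical:

    import os

    import psycopg2

    # Hypothetical connection details; real credentials would come from a
    # secrets manager or the environment, never hard-coded values.
    conn = psycopg2.connect(
        host=os.environ.get("PGHOST", "localhost"),
        dbname="analytics",
        user="etl_user",
        password=os.environ["PGPASSWORD"],
    )

    query = "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"

    with conn, conn.cursor() as cur:
        # EXPLAIN ANALYZE executes the query and reports the actual plan,
        # exposing sequential scans, bad joins, and other bottlenecks.
        cur.execute("EXPLAIN ANALYZE " + query)
        for (line,) in cur.fetchall():
            print(line)

    conn.close()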

Complementary Requirements:
  • Knowledge of Apache Flink, Kafka, and other streaming data technologies;
  • Experience with cloud-native architectures and serverless computing;
  • Certification in AWS, Azure, or relevant technologies;
  • Exposure to Azure Databricks: Familiarity with Databricks for data engineering and analytics tasks;
  • Experience with Iceberg Tables: Familiarity with Iceberg tables for managing large datasets efficiently, ensuring data consistency, and supporting ACID transactions (see the sketch after this list).
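
To give a flavour of the Iceberg work mentioned above, a minimal PySpark sketch might create a partitioned Iceberg table. The catalog name, warehouse path, and schema are hypothetical, and the session assumes a matching iceberg-spark-runtime package is available to Spark:

    from pyspark.sql import SparkSession

    # Hypothetical catalog and warehouse, for illustration only.
    spark = (
        SparkSession.builder.appName("iceberg-sketch")
        .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.demo.type", "hadoop")
        .config("spark.sql.catalog.demo.warehouse", "s3://example-warehouse/")
        .getOrCreate()
    )

    # Iceberg DDL is transactional: readers see either the old or the new
    # table state, which is what gives the ACID guarantees mentioned above.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS demo.db.events (
            event_id BIGINT,
            event_ts TIMESTAMP,
            payload  STRING
        )
        USING iceberg
        PARTITIONED BY (days(event_ts))
    """)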


#VisionaryFuture - Build the future, join our living ecosystem!


Employment Type

Full Time

Disclaimer: Drjobpro.com is only a platform that connects job seekers and employers. Applicants are advised to conduct their own independent research into the credentials of the prospective employer. We always verify that our clients do not request money payments, and we advise against sharing any personal or bank-related information with any third party. If you suspect fraud or malpractice, please contact us via the Contact Us page.