Job Title: Data Engineer (GCP)
Location: Oporto/Lisbon, Portugal
Work Regime: Full-time & Hybrid
Overview / Summary:
We are looking for a Data Engineer responsible for building and maintaining data platforms, recognizing how important data is to the organization in the areas where it is key to success. The role focuses on designing, developing, and maintaining the data platform required for data storage, processing, orchestration, and analysis, as well as on implementing scalable, performant data pipelines and data integration solutions. The position is agnostic of data sources and technologies: the goal is efficient data flow and high data quality, enabling data scientists, analysts, and other stakeholders to access and analyze data effectively.
Responsibilities and Tasks:
- Design, build, and maintain scalable data platforms.
- Collect, process, and analyze large and complex data sets from various sources.
- Develop and implement data processing workflows using frameworks such as Spark and Apache Beam (see the illustrative sketch after this list).
- Collaborate with cross-functional teams to ensure data accuracy and integrity.
- Ensure data security and privacy through proper implementation of access controls and data encryption.
- Extract data from various sources, including databases, file systems, and APIs.
- Monitor system performance and optimize for high availability and scalability.
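To make the Spark/Beam bullet above concrete, below is a minimal PySpark sketch of the kind of extract-transform-load workflow this role involves. It is an illustration only: the bucket paths, column names, and application name are hypothetical placeholders, and it assumes a Spark environment with the GCS connector configured.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical example: read raw JSON events, apply basic data-quality
    # steps, and write a partitioned, curated copy back to object storage.
    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    events = spark.read.json("gs://example-bucket/raw/events/")   # extract

    cleaned = (
        events
        .dropDuplicates(["event_id"])                    # deduplicate records
        .withColumn("event_date", F.to_date("event_ts")) # normalize timestamps
        .filter(F.col("event_date").isNotNull())         # drop unparseable rows
    )

    cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
        "gs://example-bucket/curated/events/"            # load
    )

The same shape (extract from a source, validate and reshape, load to a sink) applies whether the source is a database, a file system, or an API, as listed in the extraction bullet above.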
Requirements
Mandatory Requirements:
Experience:
- Experience with cloud platforms and services for data engineering (GCP).
- Experience with Big Data tools such as Spark, Flink, Kafka, Elasticsearch, Hadoop, Hive, Sqoop, Flume, Impala, Kafka Streams and Kafka Connect, Druid, etc.
- Experience with relational and NoSQL databases.
- Experience with data integration and ETL tools (e.g., Apache Kafka, Talend).
- Experience with version control tools such as Git.
Interpersonal skills:
- Ability to adapt to different contexts, teams, and clients.
- Teamwork skills with a sense of autonomy.
- Motivation for international projects and openness to travel.
- Willingness to collaborate with other stakeholders.
- Strong communication skills.
Stack Tech:
- Python, Java, or Scala.
- Spark, Apache Beam (see the Beam sketch after this list).
- Flink, Kafka, Elasticsearch, Hadoop, Hive, Sqoop, Flume, Impala, Kafka Streams and Kafka Connect, Druid.
- SQL.
- Relational and NoSQL databases.
- Cloud platforms (GCP, AWS S3, Azure Data Factory).
- Git.
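For context on how Apache Beam fits alongside the GCP items in this stack, here is a minimal Beam sketch that counts events per user from newline-delimited JSON. The bucket paths and the user_id field are hypothetical assumptions; the pipeline runs on the local DirectRunner by default, and moving it to Dataflow on GCP is a matter of pipeline options rather than code changes.

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def extract_user(line):
        # Hypothetical schema: each line is a JSON object with a user_id field.
        return json.loads(line)["user_id"]

    with beam.Pipeline(options=PipelineOptions()) as pipeline:
        (
            pipeline
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/events.jsonl")
            | "ExtractUser" >> beam.Map(extract_user)
            | "CountPerUser" >> beam.combiners.Count.PerElement()
            | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
            | "Write" >> beam.io.WriteToText("gs://example-bucket/curated/user_counts")
        )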
Complementary Requirements:
- Knowledge of data modeling and database design principles.
- Understanding of distributed systems and data processing architectures.
Benefits
Important:
- Our company does not sponsor work visas or work permits. All applicants must have the legal right to work in the country where the position is based.
- Only candidates who meet the required qualifications and match the profile requested by our clients will be contacted.
#VisionaryFuture - Build the future, join our living ecosystem!