Data Engineer
Job Location

Warsaw - Poland

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Our client is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. The company has 90,000 employees across the globe.


We are looking for a Data Engineer to design and optimize data solutions that power decision-making across global business units. You'll work hands-on with technologies like PySpark, Hadoop, and Hive SQL to process large-scale datasets, while also playing a key role in ensuring system performance, data quality, and production stability. This role blends technical implementation with cross-regional collaboration, operational support, and continuous improvement of data infrastructure.


Key takeaways:

Stack: Apache Spark, Hadoop ecosystem, Python, Spark SQL

Salary: Contract of Employment (UoP): PLN gross/month

Working model: Hybrid (3 days weekly in the office)

Location: Warsaw

Recruitment process:

  1. Call with MOTIFE Recruiter
  2. Technical Interview
  3. Interview with the Client


Responsibilities:

  • Implement and configure PySpark, Hadoop, and Hive SQL solutions in production environments, working with large-scale datasets.
  • Engage with stakeholders across the EMEA, NAM, and APAC regions to address incidents, coordinate fixes, and ensure the timely resolution of production issues.
  • Collaborate with BAU teams and the global production assurance team to maintain system stability, performance, and adherence to SLAs.
  • Provide technical guidance and support to offshore teams, particularly in PySpark and Hadoop environments, including troubleshooting and issue resolution.
  • Utilize Autosys for job scheduling, monitoring, and automation of workflows.
  • Work closely with regional EMEA tech teams to ensure compliance with data protection regulations and best practices in data handling.
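As an illustration of the Autosys scheduling work mentioned above, a job definition in JIL (Autosys's job information language) might look like the following sketch. The job name, command path, machine, and owner are hypothetical placeholders:

```
/* Hypothetical JIL sketch: schedule a nightly PySpark batch job.
   All names and paths below are placeholders. */
insert_job: nightly_pyspark_load   job_type: CMD
command: /opt/spark/bin/spark-submit /apps/etl/nightly_load.py
machine: hadoop-edge01
owner: etl_user
start_times: "02:00"
days_of_week: all
std_out_file: /var/log/autosys/nightly_pyspark_load.out
std_err_file: /var/log/autosys/nightly_pyspark_load.err
alarm_if_fail: 1
```

In practice, jobs like this are chained with dependency conditions and monitored so that failures can be escalated across regional support teams.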


Requirements:

  • Professional experience in Big Data: PySpark, Hive, Hadoop, PL/SQL.
  • Good knowledge of AWS and Snowflake.
  • Good understanding of CI/CD and system design.
  • Databricks Certified Developer for Apache Spark 3.0 certification (mandatory for this position).
  • Excellent written and oral communication skills in English.
  • Ability to understand and work on various internal systems.
  • Ability to work with multiple stakeholders.
  • Experience with fund transfer technologies; AML knowledge will be an added advantage.
  • Bachelor's or Master's degree in computer science, engineering, or a related field.
  • Nice to have: experience with Starburst Presto.



Join the team and make a real difference. Apply now to take the next step in your career!

Employment Type

Full Time
