Data Engineer with Scala (121930)
Job Location

Warsaw - Poland

Monthly Salary

Not Disclosed


Vacancy

1 Vacancy

Job Description

What will you do?

We are looking for professionals at various levels, from Mid through Senior to Expert, to join our team. Your responsibilities will include performance tuning and optimization of existing solutions, building and maintaining ETL pipelines, and testing and documenting current data flows. You will also be involved in implementing tools and processes to support data-related projects and in promoting the best development standards across the team.

As a Data Engineer with Scala, your mission will be to develop, test, and deploy the technical and functional specifications from the Solution Designers / Business Architects / Business Analysts, guaranteeing correct operability and compliance with internal quality levels.

Openness to work in a hybrid model (2 days from the office per week)

Openness to visiting the client's office in Cracow once every two months (for 3 days)

Your tasks

  • You will develop end-to-end ETL processes with Spark/Scala. This includes transferring data from/to the data lake, technical validations, business logic, etc.
  • You will use the Scrum methodology and be part of a high-performance team
  • You will document your solutions in tools such as JIRA and Confluence
  • You will certify your delivery and its integration with other components, designing and performing the relevant tests to ensure the quality of your team's delivery
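As a simplified illustration of the "technical validation + business logic" steps such a pipeline performs, the sketch below uses plain Scala collections standing in for Spark datasets; the record fields and rules are hypothetical examples, not taken from this posting:

```scala
// Sketch of an ETL-style flow: technical validation followed by business
// logic, on plain Scala collections (a real pipeline would run these as
// Spark transformations over data-lake tables).
// All field names and rules here are hypothetical.

case class RawRecord(id: String, amount: String, country: String)
case class CleanRecord(id: String, amount: BigDecimal, country: String)

object EtlSketch {
  // Technical validation: drop rows with an empty id or an unparsable amount.
  def validate(r: RawRecord): Option[CleanRecord] =
    scala.util.Try(BigDecimal(r.amount)).toOption
      .filter(_ => r.id.nonEmpty)
      .map(a => CleanRecord(r.id, a, r.country))

  // Business logic: keep positive amounts, then aggregate per country.
  def totalsByCountry(rows: Seq[RawRecord]): Map[String, BigDecimal] =
    rows.flatMap(validate)
      .filter(_.amount > 0)
      .groupBy(_.country)
      .view.mapValues(_.map(_.amount).sum)
      .toMap
}
```

In the actual role these stages would be Spark `Dataset` transformations, with results written back to the data lake and certified by the tests mentioned above.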

Your skills

  • At least 4 years of experience working on Data Engineering topics
  • At least 2 years of experience working with Spark and Scala
  • Strong SQL and Python skills
  • Experience working with big data: Spark, Hadoop, Hive
  • Knowledge of GCP or Azure Databricks is considered a strong plus
  • Experience and expertise in data integration and data management with high data volumes
  • Experience working in an agile, continuous integration/DevOps paradigm and tool set (Git, GitHub, Jenkins, Jira)
  • Experience with different database structures, including PostgreSQL and Hive
  • Fluent English is a must (both written and spoken)

Nice to have

  • CI/CD: Jenkins, GitHub Actions
  • Orchestration: Control-M, Airflow
  • Scripting: Bash, Python

We offer you

  • Working in a highly experienced and dedicated team
  • Contract of employment or B2B contract
  • Hybrid work from our offices 2 office days per week
  • Competitive salary and an extra benefits package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.)
  • Online training and certifications suited to your career path
  • Online foreign language lessons
  • Social events
  • Access to e-learning platform
  • Ergonomic and functional working space

Employment Type

Full Time
