We are focused on developing global solutions across different geographical locations. Our main aim is to create efficient, user-friendly, and scalable solutions that can be used by different teams across our company. We believe that strong, robust data products can mitigate risk and improve capabilities within our Group.
The main objectives of the project you will join are to:
(1) Work with a Data Hub that consolidates operations from multiple countries
(2) Incorporate new information into the Data Hub, whether new products or new countries
(3) Support the different teams that consume information from the Data Hub
(4) Work with different countries and data providers
We run our projects using the Scrum/Agile methodology, as we believe that continuous delivery with a quick feedback loop is the key to our success and to the satisfaction of our clients.
To achieve our goals we use Scala, Python, Java, SQL/HQL, Airflow, Control-M, Jenkins, GitHub, Hive, Databricks, Azure, S3, and Maven.
To ensure the highest quality we use unit tests, automated quality tests, and pull request reviews.
WHAT YOU WILL BE DOING
As a Backend Spark Developer, your mission will be to develop, test, and deploy the technical and functional specifications from the Solution Designers / Business Architects / Business Analysts, guaranteeing correct operability and compliance with internal quality standards.
We need somebody like you to help us on several fronts:
You will develop end-to-end ETL processes with Spark/Scala. This includes transferring data to and from the data lake, technical validations, business logic, etc.
You will use the Scrum methodology and be part of a high-performance team
You will document your solutions in JIRA, Confluence, and ALM
You will certify your deliverables and their integration with other components, designing and performing the relevant tests to ensure the quality of your team's delivery
You will work side by side with a technical specialist who will help you improve the architecture and technical implementation of the solution in place
You will collaborate with cross-functional and cross-country teams to integrate data from various sources
You will develop and maintain the Data Hub and its data pipelines in a microservices architecture
You will collaborate with other developers to maintain and improve CI/CD pipelines
Qualifications:
Required qualifications:
Bachelor's degree in Computer Science or a related field
At least 1 year of experience in data/software engineering (preferably in banking)
Experience with Integration Solutions (using API and Microservices)
Programming backend applications with Big Data technologies
Agile approach for software development
Continuous Integration (Git, GitHub, Jenkins)
Knowledge of SQL databases
English (at least B2)
Preferred qualifications:
Jenkins orchestration; nice to have: GitHub Actions
Scala or Python, ideally with Apache Spark
Bash scripting
Control-M, Airflow
Software development life cycle (HP ALM, …)
Basics of cybersecurity and quality (Sonar)
Basics of cloud computing (Docker, Kubernetes, S3, Azure, AWS EMR, Databricks)
SOA Architecture
Analytical skills
Additional Information:
Remote Work:
No
Employment Type:
Full-time