Cooperation model: hybrid (remote, with on-site work in the Warsaw office twice per month)
Required Skills (must have):
- Minimum 2 years of professional experience with Spark
- Technical background (IT/Engineering studies)
- Solid understanding of Big Data concepts, Data Warehousing, and Data Management
- Experience with Hadoop platforms (Cloudera/Hortonworks)
- Knowledge of engineering best practices for large-scale data processing: design standards, data modeling techniques, coding, documenting, testing, and deployment
- Hands-on experience with data formats: JSON, Parquet, ORC, Avro
- Understanding of database types and usage scenarios (Hive, Kudu, HBase, Iceberg, etc.)
- Advanced SQL skills
- Experience integrating data from multiple sources
- Familiarity with project/application build tools (e.g. Maven)
Nice to Have:
- Practical knowledge of Agile methodologies and tools (Jira, Confluence, Kanban, Scrum)
- Experience with Kubeflow
- Knowledge of streaming technologies such as Kafka and Apache NiFi
- Familiarity with CI/CD automation processes and tools
Why Join Us:
- A stable long-term project within a major financial institution
- Collaboration with highly skilled specialists in a large enterprise environment
- Opportunity to work on high-impact projects in the banking industry with exposure to Machine Learning solutions