Data Engineer with Databricks
Warsaw
Location: Warsaw (Inflancka 4A) or Remote Work (Poland)
B2B Contract | Targeted Salary: 150–170 PLN Net/Hour
#Python #ApacheSpark #Databricks #MSSQL #Git #CI/CD #Docker #Azure #Kubernetes
Are you ready to join our international team as a Data Engineer with Databricks? Let us tell you why you should...
What products do we develop?
KMD Elements is a cloud-based solution tailored for the energy and utility market. It offers a highly efficient way to handle complex data validation and advanced formula-based settlements on time series. Designed for the international market, KMD Elements automates intricate calculation and billing processes. Key features include an advanced configuration engine, robust automation capabilities, multiple integration options, and a customer-centric interface. More info can be found here.
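To make "formula-based settlements on time series" concrete, here is a minimal sketch in PySpark of what such a calculation can look like; the schema, tariff formula, and numbers are assumptions for illustration, not taken from KMD Elements.

```python
# Minimal sketch of a formula-based settlement over a consumption time
# series. All names (meter_id, consumption_kwh) and the tariff constants
# are hypothetical illustrations, not KMD Elements internals.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("settlement-sketch").getOrCreate()

readings = spark.createDataFrame(
    [
        ("meter-1", "2025-01-01 00:00:00", 1.2),
        ("meter-1", "2025-01-01 01:00:00", 0.8),
    ],
    ["meter_id", "reading_ts", "consumption_kwh"],
).withColumn("reading_ts", F.to_timestamp("reading_ts"))

# Apply a simple settlement formula: energy charge plus a flat hourly fee.
settled = readings.withColumn(
    "amount_pln", F.col("consumption_kwh") * F.lit(0.75) + F.lit(0.10)
)

# Roll the hourly amounts up into per-meter, per-month invoice lines.
invoice_lines = settled.groupBy(
    "meter_id", F.date_format("reading_ts", "yyyy-MM").alias("billing_month")
).agg(F.round(F.sum("amount_pln"), 2).alias("total_pln"))

invoice_lines.show()
```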
How do we work?
#Agile #Scrum #Teamwork #CleanCode #CodeReview #E2Eresponsibility #ConstantImprovement
Your responsibilities:
- Develop and maintain data delivery pipelines for a leading IT solution in the energy market, leveraging Apache Spark, Databricks, Delta Lake, and Python (see the sketch after this list).
- Have end-to-end responsibility for the full lifecycle of features you develop.
- Design technical solutions for business requirements from the product roadmap.
- Ensure optimal performance of the data pipelines.
- Refactor existing code and enhance system architecture to improve maintainability and scalability.
- Design and evolve the test automation strategy, including the technology stack and solution architecture.
- Prepare reviews, participate in retrospectives, estimate user stories, and refine features to ensure their readiness for development.
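For flavor, a minimal sketch of the kind of pipeline step the first responsibility describes, assuming a Databricks-style runtime with PySpark and Delta Lake available; the paths and column names are hypothetical.

```python
# Minimal sketch of one pipeline step: validate raw time-series readings
# and append them to a Delta table. Paths, table layout, and column names
# are assumptions for illustration; assumes a Databricks-style runtime
# where Delta Lake is available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("meter-readings-ingest").getOrCreate()

# Read raw JSON readings from a hypothetical landing zone.
raw = spark.read.json("/mnt/landing/meter_readings/")

# Basic validation: drop null/negative readings and duplicate timestamps.
validated = (
    raw.filter(F.col("reading_kwh").isNotNull() & (F.col("reading_kwh") >= 0))
    .withColumn("reading_ts", F.to_timestamp("reading_ts"))
    .dropDuplicates(["meter_id", "reading_ts"])
)

# Append to a Delta table partitioned by date for efficient pruning.
(
    validated.withColumn("reading_date", F.to_date("reading_ts"))
    .write.format("delta")
    .mode("append")
    .partitionBy("reading_date")
    .save("/mnt/bronze/meter_readings")
)
```

Partitioning by reading date is just one reasonable choice here; it keeps date-range queries over the time series cheap.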
Ideal candidate:
- Has 3 years of commercial experience in implementing, developing, or maintaining data load systems (ETL/ELT).
- Is proficient in Python with a solid understanding of data processing challenges.
- Has experience working with Apache Spark and Databricks.
- Is familiar with MSSQL or other relational databases.
- Has some experience working with distributed systems on a cloud platform.
- Has worked on large-scale systems and understands performance optimization.
- Is comfortable with Git and CI/CD practices and can contribute to deployment processes for data pipelines.
- Is proactive, eager to learn, and has a strong can-do attitude.
- Communicates fluently in English and Polish, both written and spoken.
- Is a team player with excellent collaboration and communication skills.
Nice to Have:
- Experience with Azure
- Experience working with SSIS
- Familiarity with Azure PostgreSQL
- Knowledge of Docker and Kubernetes
- Exposure to Kafka or other message brokers and event-driven architecture
- Experience working in Agile/Scrum environments
Copyright © 2025 KMD