Middle/Senior Data Platform Engineer (Snowflake)
Job Summary
We are looking for a Middle/Senior Data Platform Engineer (Snowflake):
Language Proficiency: Upper-Intermediate
Employment type: Full-time
Candidate Location: Poland
Working Time Zone: CET
Planned Work Duration: 12 months
Customer Description:
Our Client is a leading global management consulting company recognized for delivering high-impact solutions across industries.
The company works with large global enterprises across finance, media, technology, and the public sector, providing advanced platforms and consulting services.
Project Description:
This project is part of a federated data delivery initiative within a secure enterprise technology ecosystem. The focus is on building and maintaining robust data pipelines that collect and process data from multiple enterprise systems and cloud platforms.
The objective is to enable leadership to gain actionable insights aligned with strategic goals and to support product and service teams in targeting appropriate user groups while measuring the effectiveness of AI-driven productivity initiatives.
Project Phase: ongoing
Project Team: Program Manager, 2 Product Managers, 2 Engineers, User Researcher, Design Professional, Analytics Lead
Soft Skills:
Highly proactive with the ability to independently identify stakeholders and drive tasks to completion
Strong stakeholder management skills with the ability to engage diverse roles across technical and product teams
Curious mindset with a focus on continuous improvement and challenging existing processes
Excellent communication skills for effective collaboration with cross-functional teams
Strong time management with a high level of organization and reliability
Hard Skills / Must Have:
5 years of data engineering experience
Python for scripting, API development, and pipeline creation
Apache Airflow for pipeline orchestration; Dagster or Prefect accepted as alternatives
AWS services, especially Glue and Lambda; experience deploying and maintaining production workloads
Apache Spark for distributed processing, particularly within AWS Glue
Snowflake as the preferred data warehouse; Redshift or BigQuery accepted if concepts transfer cleanly
CI/CD pipelines (GitHub Actions or similar); this is how pipelines and scripts are deployed to Airflow and Glue
API experience: consuming third-party APIs and building internal APIs with Python
Git/GitHub version control: branching strategy, pull request workflow
PostgreSQL or other OLTP databases for operational data access and integration
Hard Skills / Nice to Have:
Snowflake Cortex (increasingly used within the team)
Scala for distributed data processing tasks
Agentic frameworks (LangChain, Pydantic ecosystem, or similar)
Snowflake access and role management: RBAC, column-level security (ABAC)
Responsibilities and Tasks:
Build data ingestion pipelines integrating AI tools and internal platforms into Snowflake
Maintain and harden the existing Snowflake infrastructure (schemas and tables that grew organically without data engineering input) and bring them up to standard
Deploy work through CI/CD pipelines into Airflow or AWS Glue
Manage and process access requests
Collaborate proactively with product managers and engineers to identify data needs
Technology Stack: Python, Snowflake, Apache Airflow, Apache Spark, Scala, PostgreSQL, AWS
Ready to Join?
We look forward to receiving your application and welcoming you to our team!
About Company
For job seekers, BONAPOLIA offers a gateway to exciting career prospects and the chance to thrive in a fulfilling work environment. We believe that the right job can transform lives, and we are committed to making that happen for you.