N-iX is looking for a Middle/Senior Data Engineer who would be involved in designing, implementing, and managing the new Data Lakehouse for our customer in the e-commerce domain. The ideal candidate has worked with data-related services in AWS and Snowflake and has experience with modern data approaches.
Our Client is a global full-service e-commerce and subscription billing platform on a mission to simplify software sales everywhere. For nearly two decades, we've helped SaaS, digital goods, and subscription-based businesses grow by managing payments, global tax compliance, fraud prevention, and recurring revenue at scale. Our flexible cloud-based platform, combined with consultative services, helps clients accelerate growth, reach new markets, and build long-term customer relationships.
Data is at the heart of everything we do: powering insights, driving innovation, and shaping business decisions. We are building a next-generation data platform, and we're looking for a Senior Data Engineer to help us make it happen.
As a Data Engineer, you will play a key role in designing and building our new Data Lakehouse on AWS, enabling scalable, reliable, and high-quality data solutions. You will work closely with senior engineers, data architects, and product managers to create robust data pipelines, develop data products, and optimize storage solutions that support business-critical analytics and decision-making.
Responsibilities:
- Build and operate a modern Data Lakehouse on AWS (S3, Iceberg), supporting ingestion, storage, transformation, and serving layers.
- Design and optimize ETL pipelines using PySpark, Airflow (MWAA), and Snowflake for scalability and cost efficiency.
- Automate workflows with Python: scripting, integration, validation, and monitoring across sources and layers.
- Implement and enforce data quality controls (Glue Data Quality, Great Expectations) and contribute to governance best practices.
- Collaborate with cross-functional teams (Data and Software Architects, Engineering Managers, Product Owners, and Data/Power BI Engineers) to refine data requirements and deliver trusted, actionable insights.
- Support CI/CD practices via GitLab, ensuring version-controlled, testable, and auditable data processes.
- Document data flows and business logic to maintain transparency, lineage, and knowledge transfer across teams.
- Continuously improve operational efficiency by troubleshooting issues, monitoring performance, and suggesting technical enhancements.
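To give a concrete flavor of the data-quality work above, here is a minimal, illustrative sketch in plain Python. The real pipeline would use Glue Data Quality or Great Expectations rulesets; the column names and rules here are hypothetical:

```python
def run_quality_checks(rows, rules):
    """Evaluate simple column-level expectations and report the failing
    row indexes, in the spirit of a Glue Data Quality / Great Expectations
    ruleset applied to a batch of records."""
    failures = []
    for column, check in rules.items():
        bad = [i for i, row in enumerate(rows) if not check(row.get(column))]
        if bad:
            failures.append({"column": column, "failed_rows": bad})
    return failures

# Hypothetical rules: order_id must be present, amount must be non-negative.
rules = {
    "order_id": lambda v: v is not None,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}
sample = [
    {"order_id": 1, "amount": 9.5},
    {"order_id": None, "amount": -1},
]
report = run_quality_checks(sample, rules)
```

In production, the equivalent checks would run as a pipeline step between layers, with failures blocking promotion or raising monitoring alerts.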
Requirements:
- 3 years of hands-on experience in Data Engineering, preferably in lakehouse or hybrid architectures.
- Proficiency in PySpark for large-scale transformations across layered datasets.
- Experience with Airflow (MWAA) for orchestrating end-to-end pipelines, dependencies, and SLA-driven workloads.
- Knowledge of AWS services used in modern data platforms: S3, Iceberg, Glue (Catalog, Data Quality), Athena, EMR.
- Experience with Snowflake for analytics serving and cross-platform ingestion.
- Proficiency in Python for automation, validation, and auxiliary data workflows.
- Understanding of data modeling and harmonization principles, including SCD handling and cross-source entity resolution.
- Familiarity with CI/CD pipelines in Git/GitLab, ensuring tested, version-controlled, and production-ready deployments.
- Experience working with BI ecosystems (e.g., Power BI, dbt-like transformations, semantic layers).
- Upper-Intermediate English or higher, with the ability to document and explain complex concepts.
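As an illustration of the SCD handling mentioned in the requirements, here is a minimal Type 2 upsert sketch in plain Python. A production pipeline would typically express this as a PySpark job over Iceberg tables or a Snowflake MERGE; the field names are hypothetical:

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, effective_date):
    """Apply Slowly Changing Dimension Type 2 logic: close out changed
    current rows and append new versions with fresh validity windows."""
    current = {r["key"]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for row in incoming:
        existing = current.get(row["key"])
        if existing is not None and existing["value"] == row["value"]:
            continue  # unchanged: keep the current version as-is
        if existing is not None:
            existing["valid_to"] = effective_date  # close the old version
            existing["is_current"] = False
        out.append({**row, "valid_from": effective_date,
                    "valid_to": None, "is_current": True})
    return out

dim = [{"key": 1, "value": "a", "valid_from": date(2024, 1, 1),
        "valid_to": None, "is_current": True}]
updated = scd2_upsert(dim,
                      [{"key": 1, "value": "b"}, {"key": 2, "value": "x"}],
                      date(2025, 1, 1))
```

The same close-and-append pattern underpins historized dimensions regardless of the engine; only the execution layer (PySpark, Snowflake) changes.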
We offer*:
- Flexible working format: remote, office-based, or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
Required Experience:
Senior IC