Our client is a tech company, founded in 2016, that develops core software solutions for a leading financial group. With a strong focus on innovation, the company delivers secure, scalable platforms to support banking, wealth management, and compliance operations.
Key Responsibilities:
Data Pipeline Development
Data Management
Manage large-scale, complex datasets aligned with business needs.
Support metadata management and contribute to data governance efforts.
Data Modelling & Optimization
Optimize SQL queries and data models for performance and efficiency.
Operational Support & Collaboration
Support production systems including troubleshooting and user support.
Continuous Improvement
Follow architectural and coding standards to ensure maintainability.
Continuously evolve and optimize the platform's scalability and performance.
Stay current with emerging tools, frameworks, and practices in data engineering.
Your Profile
Essential Qualifications & Experience:
Degree in Engineering, Computer Science, or a related field.
Proficiency in Python and libraries such as Pandas and Boto3.
Valuable Additional Experience:
Hands-on experience with Snowflake.
Experience with version control tools such as GitHub or GitLab.
Knowledge of data visualization platforms (e.g. Tableau).
Experience with Infrastructure as Code tools (e.g. Terraform, CloudFormation).
AWS and/or Snowflake certifications are a plus.
Soft Skills & Working Style:
Agile mindset with a collaborative, growth-oriented approach.
Strong analytical and troubleshooting abilities.
Detail-focused with a commitment to performance and quality.
Comfortable working in Agile environments and with tools like Jira.
Excellent interpersonal and communication skills.
Fluency in English; Portuguese and/or French is a plus.
Team-oriented, proactive, and eager to continuously learn and improve.
Full Time