As a Data Developer, you'll play a key role in implementing and maintaining transformation logic using dbt, working within a Snowflake environment to support downstream analytics, reporting, and other use cases. You will collaborate closely with data product teams, IT, and business stakeholders around the world, helping to turn raw data into actionable insights.
You will:
Implement and maintain data transformation logic in dbt, following already defined models and specifications.
Write clean, modular, and efficient SQL code tailored for Snowflake, focusing on data cleaning, normalization, and enrichment.
Enforce data quality standards through testing, monitoring, and integration with data observability practices.
Orchestrate and manage pipeline execution using Apache Airflow, ensuring reliability and reusability.
Participate in documentation efforts, including technical specs, transformation logic, and metadata definitions.
Contribute to CI/CD pipelines, version control workflows (Bitbucket), and best practices for data development.
Continuously optimize transformation processes for performance, cost efficiency, and maintainability in Snowflake.
Requirements
3 years of professional experience in data engineering or a related field, with a strong focus on data transformation and quality assurance.
Proficiency in dbt, including hands-on experience writing and managing models, tests, and macros.
Demonstrated ability to write clean, efficient, and high-performance SQL in Snowflake, particularly for complex data transformation and cleaning workflows.
Experience with Apache Airflow or similar pipeline orchestration tools.
Familiarity with Bitbucket, Git workflows, and DevOps/CI/CD practices.
Solid understanding of data quality frameworks, testing methodologies, and data observability principles.
Excellent verbal and written communication skills, with a proven ability to collaborate effectively in a remote, global, and cross-functional environment.
Fluency in English (both spoken and written) is required.
Preferred Qualifications:
Experience working with pharmaceutical datasets and applications.
Familiarity with Jira and Confluence for task and knowledge management.
Knowledge of Data Vault modeling principles.
Experience with AWS cloud services.