Responsibilities:
- Collaborate with the Product Owner and team leads to define and design efficient pipelines and data schemas
- Build and maintain infrastructure using Terraform for cloud platforms
- Design and implement large-scale cloud data infrastructure, self-service tooling, and microservices
- Work with large datasets to optimize performance and ensure seamless data integration
- Develop and maintain squad-specific data architectures and pipelines following ETL and Data Lake principles
- Discover, analyze, and organize disparate data sources into clean, understandable schemas
Qualifications:
- Hands-on experience with cloud computing services in data and analytics
- Experience with data modeling, reporting tools, data governance, and data warehousing
- Proficiency in Python and PySpark for distributed data processing
- Experience with Azure, Snowflake, and Databricks
- Experience with Docker and Kubernetes
- Knowledge of infrastructure as code (Terraform)
- Advanced SQL skills and familiarity with big data databases such as Snowflake and Redshift
- Experience with stream processing technologies such as Kafka and Spark Structured Streaming
- At least an Upper-Intermediate level of English
Remote Work:
Yes
Employment Type:
Full-time