Local to New England
The platform is in Snowflake, and they need to get data from multiple sources: on-prem Oracle, SQL Server, MySQL, and Postgres databases. They will need to build something to pull that data and bring it into Snowflake. Most of the sources are in the cloud, and they are using APIs to pull the data.
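To make that ingestion pattern concrete, here is a minimal sketch, assuming a hypothetical REST endpoint and raw table name (neither is specified in the posting); it uses the `requests` library and the Snowflake Python connector's `write_pandas` helper to land API data into a raw Snowflake table:

```python
import os

import pandas as pd
import requests
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Hypothetical endpoint, payload shape, and table names -- placeholders for illustration only.
API_URL = "https://api.example.com/v1/orders"
RAW_TABLE = "RAW_ORDERS"


def extract_from_api() -> pd.DataFrame:
    """Pull records from a cloud source's REST API and return them as a DataFrame."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return pd.DataFrame(resp.json()["records"])


def land_in_snowflake(df: pd.DataFrame) -> None:
    """Land the extracted records into a raw Snowflake table via write_pandas."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",
        database="RAW",
        schema="API",
    )
    try:
        write_pandas(conn, df, RAW_TABLE, auto_create_table=True)
    finally:
        conn.close()


if __name__ == "__main__":
    land_in_snowflake(extract_from_api())
```

The same landing pattern would apply to the on-prem RDBMS extracts, with the API call swapped for a database read.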
Job Description:

Sr Snowflake Data Engineer

We want to move away from Informatica ICS and work more within Snowflake itself, because Snowflake has an OpenFlow tool that allows extracting data from just about any source and bringing it into Snowflake, so someone who has exposure to that, or at least some familiarity with it, is ideal. We are in Snowflake, and we get data from multiple sources: on-prem Oracle, SQL Server, MySQL, and Postgres databases, all RDBMSs. The engineer will need to build something to pull data from those systems and bring it into Snowflake. That's one piece. Beyond that, most of our data sources today live somewhere in the cloud, and we need to bring that data in. We use APIs to pull the data, so a strong understanding of web services/APIs and a strong Python background will be very much needed. Once the data is in Snowflake, we need to build transformations within Snowflake, so strong SQL skills are a must, and experience with dbt Core, which is used within Snowflake for data transformation, would be a great plus. In a nutshell, the key skill sets I'm looking for are strong SQL, strong Python, some OpenFlow exposure, and knowledge of dbt Core.

Job Summary
We're hiring a Senior Snowflake Data Engineer to build and operate reliable, scalable data pipelines and curated data products on the Snowflake Data Cloud. Our platform uses a multi-account strategy, and our primary workloads support BI and ML/AI. This is a hands-on engineering role focused on Python-driven data engineering, robust ETL/ELT, and modern transformation practices using dbt Core and OpenFlow. You'll partner with analytics, data science, platform, and security teams to deliver production-grade datasets with strong quality, observability, governance alignment, and performance/cost efficiency.

Key Responsibilities
- Build and maintain batch and/or near-real-time ETL/ELT pipelines landing data into Snowflake (raw, curated, and consumption layers).
- Develop Python data engineering components (connectors, orchestration logic, framework utilities, testing tools, and automation) supporting BI and ML use cases.
- Implement transformation frameworks in dbt Core: project structure standards, modular models, macros, tests, documentation, and environment-based deployments.
- Use OpenFlow to build and operationalize ingestion/flow patterns, including configuration, scheduling, troubleshooting, and performance tuning.
- Design data models optimized for consumption: curated marts for BI and ML-ready datasets/features with repeatable refresh patterns.
- Apply data quality and reliability practices: automated testing, schema drift handling, idempotent loads, backfills, and reconciliation checks (see the sketch after this list).
- Tune Snowflake performance and cost for pipelines: warehouse sizing, clustering/partitioning strategy where appropriate, incremental processing, and query optimization.
- Enable cross-account patterns aligned to the multi-account strategy (promotion between environments, sharing curated datasets, deployment consistency).
- Build operational excellence: pipeline observability, alerting, runbooks, incident response participation, and root-cause analysis.
- Collaborate with platform/security teams to align pipelines with governance controls (RBAC, secure data access patterns) without blocking delivery.
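The idempotent-load and incremental-processing responsibilities above can be illustrated with a short sketch (the one referenced from the data quality bullet). The table and column names are hypothetical; the pattern is simply a MERGE upsert run through the Snowflake Python connector so that reruns and backfills do not create duplicates:

```python
import os

import snowflake.connector

# Hypothetical raw and curated tables -- illustration only.
MERGE_SQL = """
MERGE INTO ANALYTICS.CURATED.ORDERS AS tgt
USING RAW.API.RAW_ORDERS AS src
  ON tgt.ORDER_ID = src.ORDER_ID
WHEN MATCHED AND src.UPDATED_AT > tgt.UPDATED_AT THEN UPDATE SET
  STATUS = src.STATUS,
  AMOUNT = src.AMOUNT,
  UPDATED_AT = src.UPDATED_AT
WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS, AMOUNT, UPDATED_AT)
  VALUES (src.ORDER_ID, src.STATUS, src.AMOUNT, src.UPDATED_AT)
"""


def run_incremental_merge() -> None:
    """Upsert new/changed rows; re-running the statement (e.g. during a backfill)
    does not duplicate data, which is what makes the load idempotent."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORM_WH",
    )
    try:
        conn.cursor().execute(MERGE_SQL)
    finally:
        conn.close()


if __name__ == "__main__":
    run_incremental_merge()
```

In practice this logic would typically live in a dbt Core incremental model rather than raw Python; the sketch just shows the underlying upsert behavior.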
Required Qualifications
- 5 years of data engineering experience, including significant delivery on Snowflake in production.
- Strong Python skills (clean, testable code; packaging; logging/metrics; performance-aware data processing).
- Strong SQL and data modeling fundamentals (dimensional and/or domain-oriented modeling).
- Hands-on experience with dbt Core (models, macros, tests, docs, deployments, CI practices).
- Proven experience designing and operating ETL/ELT pipelines (incremental loads, CDC concepts, error handling, and backfills); a minimal sketch follows this list.
- Experience working in cloud environments (AWS/Azure/GCP) and production operations (monitoring, on-call/incident response, SLAs).
- Comfortable working across teams (analytics, ML, platform/security) and translating requirements into deliverable datasets.
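As a small illustration of the "clean, testable Python" and incremental-load/CDC expectations above (the sketch referenced after the pipelines bullet), the fetch step below is injected as a function so the watermark logic can be unit-tested without a live source; all names are illustrative:

```python
import logging
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Dict, List

logger = logging.getLogger("incremental_extract")


@dataclass
class ExtractResult:
    rows: List[Dict]
    new_watermark: datetime


def extract_incremental(
    fetch_since: Callable[[datetime], List[Dict]],
    last_watermark: datetime,
    updated_at_key: str = "updated_at",
) -> ExtractResult:
    """Pull only rows changed since the last watermark. The fetch function is
    injected, so the watermark logic can be tested with a stub source."""
    rows = fetch_since(last_watermark)
    if not rows:
        logger.info("No new rows since %s", last_watermark.isoformat())
        return ExtractResult(rows=[], new_watermark=last_watermark)
    new_watermark = max(datetime.fromisoformat(r[updated_at_key]) for r in rows)
    logger.info("Extracted %d rows; watermark -> %s", len(rows), new_watermark.isoformat())
    return ExtractResult(rows=rows, new_watermark=new_watermark)
```

A unit test can pass a lambda returning canned rows and assert on the returned watermark, with no database or API involved.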
Nice to Have
- Experience supporting BI workloads (semantic-friendly marts, performance considerations, consumption patterns).
- Experience supporting ML workflows (feature-ready datasets, reproducible training data, lineage, and governance).
- Familiarity with Snowflake governance features (masking/row access policies, secure views) and multi-account deployment patterns.
- CI/CD and automation (Git workflows, build pipelines, infra-as-code such as Terraform).
- Experience with common ingestion/orchestration tools (Airflow, Dagster, Prefect, etc.) or ELT tools (Fivetran/Matillion/Informatica); see the brief orchestration sketch below.
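Since Airflow is listed among the orchestration options, here is a brief sketch of what daily orchestration could look like, assuming Airflow 2.x and hypothetical script paths and task names; it chains an extract/load step with dbt Core run and test steps:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Assumes Airflow 2.4+ (for the `schedule` argument); dag_id and paths are hypothetical.
with DAG(
    dag_id="snowflake_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_load = BashOperator(
        task_id="extract_load",
        bash_command="python /opt/pipelines/extract_load.py",
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )

    # Extract/load first, then transform with dbt Core, then run dbt tests.
    extract_load >> dbt_run >> dbt_test
```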