Job Title: Sr. Data Engineer/Python
Location: Issaquah, WA - 3 days onsite (Hybrid)
Duration: 12-Month Contract
Job Overview
We are seeking a Data Engineer with strong Python and Azure experience to design, build, and maintain scalable data pipelines. The ideal candidate will have experience working with Spark, Azure Data Factory, and ETL processes to support large-scale data processing and analytics.
Key Responsibilities
- Design, develop, and maintain data pipelines and ETL workflows for large-scale data processing.
- Build and optimize data engineering solutions using Python and Apache Spark.
- Develop and manage data integration pipelines using Azure Data Factory (ADF).
- Work with cross-functional teams to gather data requirements and implement scalable data solutions.
- Ensure data quality, reliability, and performance across data platforms.
- Troubleshoot and resolve data pipeline issues and optimize processing performance.
- Implement best practices for data architecture, security, and governance.
Required Skills
- Strong experience with Python for backend/data engineering.
- Hands-on experience with Apache Spark or Azure Data Factory (ADF).
- Solid experience building ETL pipelines and data workflows.
- Experience working with Azure cloud services.
- Strong knowledge of data pipeline architecture and data processing frameworks.
- Experience with large datasets and distributed data processing.
Preferred Skills
- Experience with Azure Data Lake, Databricks, or SQL-based data warehouses.
- Familiarity with data modeling and performance optimization.
- Experience working in Agile environments.