Sr. Python Developer/Lead
Auburn Hills, MI 48321
The Senior Data Engineer & Technical Lead (SDET Lead) will play a pivotal role in delivering major data engineering initiatives within the Data & Advanced Analytics space. This position requires hands-on expertise in building, deploying, and maintaining robust data pipelines using Python, PySpark, and Airflow, as well as designing and implementing CI/CD processes for data engineering projects.
Key Responsibilities
- Data Engineering: Design, develop, and optimize scalable data pipelines using Python and PySpark for batch and streaming workloads.
- Workflow Orchestration: Build, schedule, and monitor complex workflows using Airflow, ensuring reliability and maintainability.
- CI/CD Pipeline Development: Architect and implement CI/CD pipelines for data engineering projects using GitHub, Docker, and cloud-native solutions.
- Testing & Quality: Apply test-driven development (TDD) practices and automate unit/integration tests for data pipelines.
- Secure Development: Implement secure coding best practices and design patterns throughout the development lifecycle.
- Collaboration: Work closely with Data Architects, QA teams, and business stakeholders to translate requirements into technical solutions.
- Documentation: Create and maintain technical documentation, including process/data flow diagrams and system design artifacts.
- Mentorship: Lead and mentor junior engineers, providing guidance on coding, testing, and deployment best practices.
- Troubleshooting: Analyze and resolve technical issues across the data stack, including pipeline failures and performance bottlenecks.
- Cross-Team Knowledge Sharing: Cross-train team members outside the project team (e.g., operations support) for full knowledge coverage.
Experience Required:
- Minimum of 7 years of overall IT experience, with hands-on data engineering work
- Minimum of 5 years of practical experience building production-grade data pipelines using Python and PySpark
- Graduate degree in a related field such as Computer Science or Data Analytics
- Familiarity with Test-Driven Development (TDD)
- A high tolerance for OpenShift, Cloudera, Tableau, Confluence, Jira, and other enterprise tools