Python Developer
Contract
Auburn Hills, MI (Onsite)
This position requires hands-on expertise in building, deploying, and maintaining robust data pipelines using Python, PySpark, and Airflow, as well as designing and implementing CI/CD processes for data engineering projects.
Key Responsibilities:
- Data Engineering: Design, develop, and optimize scalable data pipelines using Python and PySpark for batch and streaming workloads.
- Workflow Orchestration: Build, schedule, and monitor complex workflows using Airflow, ensuring reliability and maintainability (see the DAG sketch after this list).
- CI/CD Pipeline Development: Architect and implement CI/CD pipelines for data engineering projects using GitHub, Docker, and cloud-native solutions.
- Testing & Quality: Apply test-driven development (TDD) practices and automate unit/integration tests for data pipelines.
- Secure Development: Implement secure coding best practices and design patterns throughout the development lifecycle.
- Collaboration: Work closely with Data Architects, QA teams, and business stakeholders to translate requirements into technical solutions.
- Documentation: Create and maintain technical documentation, including process/data flow diagrams and system design artifacts.
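To make the orchestration and pipeline responsibilities concrete, here is a minimal sketch of a daily Airflow DAG that submits a PySpark batch job. It assumes Airflow 2.x with spark-submit available on the worker; the DAG id, job script path, and schedule are hypothetical placeholders, not this team's actual pipeline.

```python
# Minimal daily batch DAG sketch (hypothetical names and paths throughout).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",  # hypothetical owner
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="orders_daily_batch",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
    default_args=default_args,
) as dag:
    # Submit a PySpark batch job for the logical date; the script path is a
    # placeholder. {{ ds }} is Airflow's templated execution date.
    run_clean_orders = BashOperator(
        task_id="run_clean_orders",
        bash_command="spark-submit /opt/jobs/clean_orders.py {{ ds }}",
    )
```

In a real deployment the BashOperator might be swapped for a SparkSubmitOperator or KubernetesPodOperator, depending on how Spark is hosted.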
Technical Experience:
- Hands-on Data Engineering: Minimum 5 years of practical experience building production-grade data pipelines using Python and PySpark.
- Airflow Expertise: Proven track record of designing, deploying, and managing Airflow DAGs in enterprise environments.
- CI/CD for Data Projects: Ability to build and maintain CI/CD pipelines for data engineering workflows, including automated testing and deployment.
- Cloud & Containers: Experience with containerization (Docker) and cloud platforms (GCP) for data engineering workloads. Appreciation for twelve-factor design principles.
- Python Fluency: Ability to write object-oriented Python code, manage dependencies, and follow industry best practices.
- Version Control: Proficiency with Git for source code management and collaboration (commits, branching, merging, GitHub/GitLab workflows).
- Unix/Linux: Strong command-line skills in Unix-like environments.
- SQL: Solid understanding of SQL for data ingestion and analysis.
- Collaborative Development: Comfortable with code reviews, pair programming, and using remote collaboration tools effectively.
- Engineering Mindset: Writes code with an eye for maintainability and testability; excited to build production-grade software (see the unit-test sketch after this list).
- Education: Bachelor's or graduate degree in Computer Science, Data Analytics, or a related field, or equivalent work experience.
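As an illustration of the TDD and testing expectations above, here is a minimal sketch of a pytest unit test for a small PySpark transformation. The dedupe_orders function and its columns are hypothetical examples, not part of this role's actual codebase; it assumes pyspark and pytest are installed and a local SparkSession can be created.

```python
# Unit-test sketch for a hypothetical PySpark transformation.
import pytest
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def dedupe_orders(df: DataFrame) -> DataFrame:
    """Keep only the latest row per order_id, ordered by updated_at."""
    w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
    return (
        df.withColumn("rn", F.row_number().over(w))
        .filter(F.col("rn") == 1)
        .drop("rn")
    )


@pytest.fixture(scope="module")
def spark():
    # Local single-threaded SparkSession, sufficient for unit tests.
    session = (
        SparkSession.builder.master("local[1]").appName("pipeline-tests").getOrCreate()
    )
    yield session
    session.stop()


def test_dedupe_keeps_latest_row(spark):
    df = spark.createDataFrame(
        [(1, "2024-01-01"), (1, "2024-01-02"), (2, "2024-01-01")],
        ["order_id", "updated_at"],
    )
    result = dedupe_orders(df).collect()
    assert len(result) == 2
    latest = {row["order_id"]: row["updated_at"] for row in result}
    assert latest[1] == "2024-01-02"  # the later update wins
```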
Unique Skills:
- Graduate degree in a related field such as Computer Science or Data Analytics
- Familiarity with Test-Driven Development (TDD)
- A high tolerance for OpenShift, Cloudera, Tableau, Confluence, Jira, and other enterprise tools