Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable, optimized ETL pipelines using PySpark on the Cloudera Data Platform (CDP), ensuring data integrity and accuracy (see the illustrative sketch after this list).
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) into the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using orchestration tools such as Apache Oozie or Airflow within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with data engineers, analysts, product managers, and other stakeholders to understand data requirements and support data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
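The sketch below is a minimal, illustrative example of the kind of ingest-transform-load pipeline described in these responsibilities, assuming CSV files landed in HDFS and a partitioned Hive table on CDP; the application name, HDFS path, table, and column names are hypothetical, not part of any actual codebase.

```python
# Illustrative sketch only: ingest, transform, and load with PySpark on CDP.
# Paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")       # hypothetical job name
    .enableHiveSupport()         # allow writing to Hive tables on CDP
    .getOrCreate()
)

# Ingest: read raw CSV files landed in HDFS
raw = spark.read.option("header", "true").csv("hdfs:///data/raw/orders/")

# Transform: cast types, drop rows with missing amounts, derive a partition column
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write a partitioned Hive table for downstream analytics via Hive/Impala
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```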
Software Requirements:
- Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Knowledge of data warehousing concepts and ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Strong scripting skills in Linux.
Category-wise Technical Skills:
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts and ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks (a minimal example follows this list).
- Scripting and Automation: Strong scripting skills in Linux.
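As a hedged illustration of the orchestration skills listed above, the following minimal Apache Airflow DAG schedules a daily spark-submit of a PySpark job to YARN; the DAG id, schedule, and script path are hypothetical.

```python
# Illustrative sketch only: a daily Airflow DAG that submits a PySpark job to YARN.
# DAG id, schedule, and script path are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="orders_etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_orders_etl",
        bash_command="spark-submit --master yarn /opt/jobs/orders_etl.py",
    )
```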
Experience:
- 3 years of experience as a Data Engineer with a strong focus on PySpark and the Cloudera Data Platform.
- Proven track record of implementing data engineering best practices.
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.
Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP.
- Implement and manage data ingestion processes from various sources.
- Process, cleanse, and transform large datasets using PySpark.
- Conduct performance tuning and optimization of ETL processes.
- Implement data quality checks and validation routines (see the sketch after this list).
- Automate data workflows using orchestration tools.
- Monitor pipeline performance and troubleshoot issues.
- Collaborate with team members to understand data requirements.
- Maintain documentation of data engineering processes and configurations.
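A minimal sketch of the kind of data quality check referenced above, assuming a PySpark job that validates a Hive table before downstream use; the table name, columns, and thresholds are hypothetical.

```python
# Illustrative sketch only: basic row-count, null, and duplicate checks in PySpark.
# Table name, columns, and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_dq_check")
    .enableHiveSupport()
    .getOrCreate()
)

df = spark.table("analytics.orders_clean")

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
duplicates = total - df.dropDuplicates(["order_id"]).count()

# Raise so the orchestrator (Oozie/Airflow) marks the run as failed
if total == 0 or null_ids > 0 or duplicates > total * 0.01:
    raise ValueError(
        f"Data quality check failed: rows={total}, "
        f"null order_ids={null_ids}, duplicates={duplicates}"
    )
```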
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Relevant certifications in PySpark and Cloudera technologies are a plus.
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture, promoting equality and diversity, and maintaining an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
Candidate Application Notice