- REQUIREMENT TEMPLATE: PySpark / Hadoop
| Requirement Identifier (Serial No.) | | |
| No. of positions | | |
| Prepared by | | |
| Date | | |
| Account Name | BofA | |
| Service Line | DNAFS | |
| Hiring Manager (mail ID) | | |
| Details of the panelist who will interview | | |
| Must-have skills - 2 skills which are non-negotiable | PySpark, Spark, Hadoop, SQL | |
| Desirable skills - 1 skill which is nice to have | Hive, Sqoop, UNIX shell scripting | |
| Infosys role | | |
| Desired experience range | 6 - 12 years | |
| Location(s) where this position can work out of | Hyderabad, Pune, Chennai, Bangalore, Trivandrum | |
| Does this position require working from the client office all or some days in the week? If yes, please provide details | No | |
| Is remote working allowed | Yes (as per client and Infosys policies); 3 days WFO weekly | |
| Any additional things to be checked | | |
| Responsibilities and JD in brief, along with additional criteria to be considered (if any): | | |

- At least 6 years of experience in designing and developing large-scale distributed data processing pipelines using PySpark, Hadoop, and related technologies.
- Expertise in PySpark, Spark Core, Spark SQL, batch processing, and Spark Streaming (see the illustrative sketches after this list).
- Experience with Hadoop, HDFS, Hive, and other big data technologies.
- Familiarity with data warehousing and ETL concepts and techniques.
- UNIX shell scripting for scheduling/running application jobs will be an added advantage.
- At least 5 years of experience in project development life cycle activities and development/maintenance projects.
- Work with business stakeholders and other SMEs to understand high-level business requirements.
- Work with the solution designers and contribute to the development of project plans by participating in the scoping and estimation of proposed projects.
- Apply technical background, business knowledge, and system knowledge in the elicitation of system requirements for projects.
- Good knowledge of Spark architecture and of transformations using Spark and PySpark.
- Work in an Agile environment and participate in daily scrum stand-ups, sprint planning, reviews, and retrospectives.
- Understand project requirements and translate them into technical solutions that meet the project quality standards.
- Ability to work in a team in a diverse, multi-stakeholder environment and collaborate with upstream/downstream functional teams to identify, troubleshoot, and resolve data issues.
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication skills.
- Experience in, and a desire to work in, a global delivery environment.
- Stay up to date with new technologies and industry trends in development.
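
As a quick screening aid, the sketch below shows the kind of PySpark batch ETL the must-have skills imply: reading a Hive table, applying DataFrame transformations, running Spark SQL, and writing a partitioned result back. It is a minimal illustration only; the table and column names (`staging.customer_txns`, `analytics.customer_daily_spend`, `customer_id`, `txn_amount`, `txn_ts`) are hypothetical, and it assumes a cluster with a configured Hive metastore.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support assumes a metastore is configured on the cluster.
spark = (
    SparkSession.builder
    .appName("customer-etl-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Extract: read a raw Hive table (hypothetical name).
raw = spark.table("staging.customer_txns")

# Transform: typical DataFrame operations (filter, derive, aggregate).
daily = (
    raw.filter(F.col("txn_amount") > 0)
       .withColumn("txn_date", F.to_date("txn_ts"))
       .groupBy("customer_id", "txn_date")
       .agg(
           F.sum("txn_amount").alias("daily_spend"),
           F.count("*").alias("txn_count"),
       )
)

# The same kind of logic expressed in Spark SQL against a temp view.
raw.createOrReplaceTempView("txns")
top_customers = spark.sql("""
    SELECT customer_id, SUM(txn_amount) AS total_spend
    FROM txns
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""")
top_customers.show()

# Load: write the aggregate back as a partitioned Hive table.
(daily.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .saveAsTable("analytics.customer_daily_spend"))

spark.stop()
```

In practice, a job like this is packaged and launched with spark-submit, often wrapped in a UNIX shell script and driven by a scheduler, which is where the desirable shell-scripting skill comes in.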
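
For the Spark Streaming item, a minimal Structured Streaming word count is sketched below. The socket source on localhost:9999 is a placeholder chosen to keep the example dependency-free; a pipeline at this level of experience would more typically consume from Kafka and write to HDFS or Hive.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Source: a plain socket stream (placeholder; Kafka is the usual choice).
lines = (
    spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load()
)

# Split each incoming line into words and keep a running count per word.
counts = (
    lines.select(F.explode(F.split("value", r"\s+")).alias("word"))
         .groupBy("word")
         .count()
)

# Sink: print the complete counts to the console on every micro-batch.
query = (
    counts.writeStream
          .outputMode("complete")
          .format("console")
          .start()
)

query.awaitTermination()
```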