Role: Hadoop SME / Developer
Job Location: Mumbai/Pune
Start date: ASAP
Role Description:
This position is for a Hadoop SME who will specialize in Hadoop-to-Teradata migrations. Strong experience with the Hadoop ecosystem is a key competency for this position.
Minimum Requirements:
- Total IT experience of 10 years.
- Minimum 5 years of relevant experience using key Hadoop services.
- Data integration experience is required.
- Knowledge of the following Hadoop components is required: HDFS, HBase, Solr, Spark, and Hive.
- Experience in processing and validating unstructured data such as documents, images, and text.
- Experience developing data ingestion pipelines in Hadoop for unstructured data.
- Good hands-on experience in programming languages such as Java, Python, and Scala, as well as Bash scripting.
- Understanding of how structured data is stored and accessed in HBase, and how its metadata can be linked to an RDBMS.
- Understanding of how Solr text search works.
- Experience in migrating, storing, and retrieving data in object stores such as AWS S3 and Azure Blob Storage.
- Should be able to communicate effectively with customers and prospects.
Nice-to-Have Experience:
- Working experience with AWS integration services such as AWS Glue and Athena.
- Good experience with Teradata Vantage SQL.
- Experience with any database migration would be an added plus.
| Name | Contact | Email | Final Bill Rate | Availability | Location | Availability for Interview |
| --- | --- | --- | --- | --- | --- | --- |
| | | | | | | |
| | | | | | | |