W2 Candidates Only
Key Responsibilities
- Develop and maintain Hadoop-based applications and data pipelines.
- Work with Hadoop ecosystem tools such as Hive, HDFS, MapReduce, Spark, Pig, Sqoop, and Kafka.
- Design scalable and high-performance big data solutions.
- Process structured and unstructured data from multiple sources.
- Optimize Hadoop jobs for performance and reliability.
- Collaborate with data engineers, analysts, and business teams.
- Monitor cluster performance and troubleshoot data-related issues.
- Implement data security and governance practices.
- Create technical documentation and workflow diagrams.
Required Skills
- Strong knowledge of Hadoop ecosystem components.
- Experience with Hive, Spark, HDFS, MapReduce, and Kafka.
- Good understanding of SQL and NoSQL databases.
- Proficiency in Java, Python, or Scala.
- Experience with ETL processes and data warehousing.
- Knowledge of Linux/Unix environments.
- Familiarity with cloud platforms such as AWS, Azure, or GCP is a plus.
- Strong analytical and problem-solving skills.
Qualifications
- Bachelor's degree in Computer Science, IT, or a related field.
- 8 years of experience in Big Data or Hadoop development.
- Certifications in Big Data technologies are an advantage.