Job Description:
We are seeking a skilled Java Hadoop Developer to join our data engineering team. The ideal candidate will have expertise in Java, the Hadoop ecosystem, and Big Data technologies to design, develop, and maintain scalable data processing systems. You will work on large-scale distributed systems to process and analyze massive datasets efficiently.
Key Responsibilities:
- Develop, implement, and maintain Hadoop-based applications using Java.
- Design and optimize MapReduce, Hive, and Spark solutions.
- Build and maintain ETL pipelines for processing structured and unstructured data.
- Work with Hadoop ecosystem tools such as HBase, Hive, Pig, Flume, and Sqoop.
- Optimize and fine-tune Big Data processing workflows for performance and scalability.
- Ensure data security, reliability, and compliance with industry standards.
- Integrate Hadoop-based solutions with cloud platforms (AWS, Azure, GCP) as needed.
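To give candidates a flavor of the map/reduce pattern behind much of this work, here is a minimal plain-Java word-count sketch. It is illustrative only: it uses the standard library rather than the Hadoop API, and the class and method names are our own, but the two phases mirror what a real MapReduce job distributes across a cluster.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch (no Hadoop dependency): the "map" phase splits
// text into words; the "reduce" phase sums the count per word. A real
// Hadoop job runs these phases in parallel across cluster nodes.
public class WordCount {
    public static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+")) // map: tokenize
                     .filter(w -> !w.isEmpty())
                     .collect(Collectors.groupingBy(            // shuffle: group by key
                             w -> w,
                             Collectors.counting()));           // reduce: sum per key
    }

    public static void main(String[] args) {
        System.out.println(count("to be or not to be"));
    }
}
```

In production, the same logic would be expressed with Hadoop's `Mapper`/`Reducer` classes or a Spark transformation pipeline, and the input would come from HDFS rather than an in-memory string.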
Required Skills & Qualifications:
- Strong Java programming skills with experience in writing scalable applications.
- Hands-on experience with Hadoop ecosystem components (HDFS, MapReduce, Hive, Spark, HBase, Sqoop, Flume).
- Proficiency in Big Data tools such as Apache Spark, Kafka, and NoSQL databases.
- Experience with CI/CD tools (Git, Jenkins) and containerization (Docker, Kubernetes) is a plus.
- Strong analytical and problem-solving skills.