Hiring: W2 Candidates Only
Visa: Open to any visa type with valid work authorization in the USA
Key Responsibilities
- Design, develop, and implement scalable big data solutions using Hadoop.
- Work with Hadoop ecosystem tools such as HDFS, MapReduce, YARN, Hive, Pig, HBase, and Spark.
- Develop and optimize ETL pipelines for large datasets.
- Write and optimize HiveQL queries for data analysis and reporting (see the illustrative sketch after this list).
- Integrate Hadoop systems with data sources such as RDBMS, NoSQL databases, and streaming systems.
- Monitor Hadoop cluster performance and troubleshoot issues.
- Ensure data security, governance, and compliance.
- Collaborate with data engineers, data scientists, and business teams.
- Document system designs, workflows, and best practices.
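For context, the sketch below illustrates the kind of ETL and HiveQL work described above: a minimal PySpark job that reads raw data, applies a simple transformation, and runs a Hive-style aggregation. It is a rough example only; the table name raw_events, its columns, and the output path are hypothetical placeholders, not part of this posting.

# Minimal sketch of a Hive-backed ETL step in PySpark.
# All table names, columns, and paths below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl")
    .enableHiveSupport()   # lets spark.sql() query Hive tables
    .getOrCreate()
)

# Extract: read a (hypothetical) raw Hive table.
raw = spark.table("raw_events")

# Transform: keep valid rows and derive a date column from a timestamp.
cleaned = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)
cleaned.createOrReplaceTempView("cleaned_events")

# A HiveQL-style aggregation for reporting.
daily_counts = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS n
    FROM cleaned_events
    GROUP BY event_date, event_type
""")

# Load: write partitioned output back to the warehouse.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet("/tmp/daily_counts")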
Required Skills
- Strong experience with the Hadoop framework and ecosystem.
- Proficiency in Java, Scala, or Python.
- Hands-on experience with Hive, Pig, HBase, Spark, and Sqoop.
- Knowledge of Linux/Unix environments.
- Experience with SQL and NoSQL databases.
- Understanding of distributed systems and data processing concepts.
- Familiarity with data warehousing and ETL tools.
Preferred Qualifications
- Experience with Apache Spark and real-time processing tools such as Kafka or Flume (see the streaming sketch after this list).
- Knowledge of cloud platforms (AWS EMR, Azure HDInsight, or Google Dataproc).
- Understanding of DevOps tools (Git, Jenkins, CI/CD pipelines).
- Hadoop certification (Cloudera, Hortonworks) is a plus.
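As a rough illustration of the Spark-plus-Kafka work mentioned above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic. The broker address and topic name are hypothetical, and running it requires the spark-sql-kafka connector package on the classpath.

# Minimal sketch: Spark Structured Streaming reading from Kafka.
# Broker address and topic name are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-streaming").getOrCreate()

# Subscribe to a (hypothetical) Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; decode the value to a string.
decoded = events.select(F.col("value").cast("string").alias("raw"))

# Console sink for demonstration purposes only.
query = (
    decoded.writeStream
    .format("console")
    .outputMode("append")
    .start()
)
query.awaitTermination()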
Education & Experience