Job Title: Senior Hadoop Engineer
Location: Madison, Wisconsin 53703
Experience: 12 Years
Employment Type: Contract
Job Description
We are seeking a highly experienced Senior Hadoop Engineer to lead the design, development, and optimization of our large-scale data processing and analytics environment. The ideal candidate will have extensive hands-on expertise in Hadoop ecosystem tools and distributed data frameworks. This role involves working closely with data architects, analysts, and application teams to build scalable and secure big-data solutions that support business-critical analytics.
Key Responsibilities
- Design, build, and maintain Hadoop-based big data platforms and data pipelines.
- Implement and optimize large-scale data processing applications using tools such as HDFS, Hive, Spark, Impala, and HBase.
- Work with engineering and business teams to translate requirements into scalable data architectures.
- Improve the performance and reliability of Hadoop clusters, including monitoring, capacity planning, and tuning.
- Develop and manage ETL processes that integrate data from multiple sources.
- Ensure data security, governance, and compliance across all Hadoop environments.
- Automate operational tasks and support continuous deployment practices.
- Troubleshoot issues across Hadoop components and provide root-cause analysis.
- Support migration and modernization initiatives to cloud platforms where applicable.
Required Skills and Experience
- 12 years of professional experience in data engineering or software engineering roles.
- Strong expertise in Hadoop ecosystem tools, including HDFS, YARN, Hive, Pig, Spark, Kafka, Sqoop, Oozie, and ZooKeeper.
- Proficiency in programming languages such as Java, Scala, and Python.
- Solid understanding of distributed systems, parallel processing, and performance optimization.
- Experience working with relational and NoSQL databases (e.g., Oracle, MySQL, HBase, Cassandra, MongoDB).
- Hands-on experience with data ingestion and ETL pipelines.
- Experience with version control, CI/CD tools, and Linux environments.
- Familiarity with cloud platforms such as AWS, Azure, or GCP (preferred).
- Strong analytical, problem-solving, and communication skills.
Preferred Qualifications
- Experience working in a large enterprise or government project environment.
- Certifications in Big Data, Cloud, or Data Engineering.
- Experience implementing real-time streaming solutions with Kafka and Spark Streaming.
Education