Job Title - Big Data ETL Developer
Location - Mississauga, ON
Experience Range: 6-8 years
Must-Have Technical/Functional Skills:
Hadoop
HDFS
YARN
MapReduce
Hive
Spark
AWS
Azure
Google Cloud
Data modeling
ETL processes
Data warehousing
SQL
Roles & Responsibilities:
Design and implement scalable and efficient Hadoop architecture solutions.
Proven experience in designing and managing Hadoop-based architectures.
Strong understanding of Hadoop ecosystem components such as HDFS, YARN, MapReduce, Hive, HBase, and Spark.
Collaborate with data engineers and scientists to understand data requirements.
Optimize Hadoop clusters for performance and resource utilization.
Maintain and monitor Hadoop infrastructure, ensuring high availability.
Implement data security and governance policies.
Stay updated with the latest advancements in Hadoop and big data technologies.
Troubleshoot and resolve issues within the Hadoop ecosystem.
Strong hands-on and architectural knowledge of Python, PySpark, Unix, and SQL.
Exposure to AI/ML lifecycle management, MLOps, and GenAI solutions.
Will be responsible for developing Spark-based solutions to support near-real-time data ingestion, analytics, and reporting.
Generic Managerial Skills (if any):
Communication
Team Player
Analytical Ability