Job Title : Big Data Developer (Java Spark or Java ETL)
Location : Toronto, ON (Hybrid)
Job Type : Contract
Job Description:
We are looking for a skilled Big Data Developer with strong experience in Java development and hands-on expertise in Big Data technologies such as Spark and ETL tools. The ideal candidate should have strong analytical skills, be able to work independently, and possess excellent communication abilities.
Responsibilities:
- Design, develop, and maintain big data solutions using Java, Spark, and PySpark.
- Work with Hive, HDFS, and other Hadoop ecosystem components.
- Develop and automate scripts using Unix Shell scripting.
- Write and optimize PL/SQL queries for data transformation and analysis.
- Collaborate with cross-functional teams to deliver high-quality data solutions.
- Troubleshoot performance issues and ensure system scalability and reliability.
- Participate in requirement analysis, design reviews, and code reviews.
Required Skills:
- Strong hands-on experience in Java and Python.
- Experience with PySpark / Java Spark for large-scale data processing.
- Good knowledge of Hive, HDFS, and Unix Shell scripting.
- Strong background in PL/SQL and ETL concepts.
- Excellent analytical and communication skills.
- Ability to work independently and handle multiple tasks.
Good to Have:
Experience with IBM DataStage or any other ETL tool.