Job Description:
- Develop and Deploy Spark Applications: Design, develop, test, and deploy robust and scalable data processing applications using Apache Spark and Scala (see the illustrative sketch after this list).
- Performance Optimization: Optimize and tune Spark applications for enhanced performance and efficiency, especially when handling large-scale datasets.
- Data Pipeline Development: Build and maintain data pipelines, often integrating with big data technologies such as Hadoop (HDFS, Hive), Kafka, and other data storage solutions.
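For illustration only, a minimal sketch of the kind of Spark batch application this role involves, written in Scala. The input and output paths, column names, and aggregation are hypothetical placeholders, not details of any actual project.

// Minimal illustrative Spark batch job in Scala (hypothetical paths and columns).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SampleEventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SampleEventPipeline")
      .getOrCreate()

    // Read raw events from HDFS (hypothetical location).
    val events = spark.read.parquet("hdfs:///data/raw/events")

    // Aggregate event counts per day and event type.
    val dailyCounts = events
      .withColumn("event_date", to_date(col("event_timestamp")))
      .groupBy("event_date", "event_type")
      .agg(count("*").as("event_count"))

    // Write the curated result back out, partitioned by date.
    dailyCounts.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/daily_event_counts")

    spark.stop()
  }
}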
Mandatory Skill Sets:
- Programming Expertise: Strong proficiency in Scala (including functional programming concepts) and experience with JVM-based languages such as Java, or with Python.
- Big Data Technologies: Expertise in Spark with Scala (Spark Core, Spark SQL, Spark Streaming) and familiarity with the broader Hadoop ecosystem (HDFS, Hive, etc.); a brief example follows this list.
- Database Knowledge: Proficiency in SQL and experience working with relational databases (e.g., PostgreSQL, MySQL, Oracle) and NoSQL databases (e.g., MongoDB, Cassandra).
- Cloud Platforms: Experience with major cloud services such as AWS, Azure, or GCP is often preferred.
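For illustration only, a minimal sketch of querying a Hive-managed table through Spark SQL in Scala, touching both the Spark SQL and SQL skills listed above. The database, table, and column names are hypothetical.

// Minimal illustrative Spark SQL query against a Hive table (hypothetical names).
import org.apache.spark.sql.SparkSession

object HiveQueryExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveQueryExample")
      .enableHiveSupport() // allows Spark SQL to read Hive-managed tables
      .getOrCreate()

    // Standard SQL runs directly against the Hive metastore tables.
    val topCustomers = spark.sql(
      """SELECT customer_id, SUM(amount) AS total_spend
        |FROM sales.orders
        |GROUP BY customer_id
        |ORDER BY total_spend DESC
        |LIMIT 10""".stripMargin)

    topCustomers.show()
    spark.stop()
  }
}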