Data Engineer (Spark, Scala)


Job Location: Bengaluru, India

Monthly Salary: Not Disclosed
Posted on: 18 hours ago
Vacancies: 1 Vacancy

Job Summary

Job Description:
  • Develop and Deploy Spark Applications: Design, develop, test, and deploy robust, scalable data processing applications using Apache Spark and Scala.

  • Performance Optimization: Optimize and tune Spark applications for performance and efficiency, especially when handling large-scale datasets.

  • Data Pipeline Development: Build and maintain data pipelines, often integrating with big data technologies such as Hadoop (HDFS, Hive), Kafka, and other data storage solutions.
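To give a flavour of the transformation logic these responsibilities involve, here is a minimal sketch in plain Scala. It mirrors a typical Spark word/count-style pipeline using ordinary collections instead of RDDs or Datasets, so no Spark dependency is needed; the log-parsing domain and all names (`LogPipeline`, `parse`, `countByLevel`) are invented for illustration.

```scala
// Collection-based sketch of a Spark-style pipeline: parse raw lines,
// drop malformed records, and aggregate counts per key. In real Spark this
// would be roughly rdd.flatMap(parse).map(_._1 -> 1).reduceByKey(_ + _).
object LogPipeline {
  // Parse "LEVEL: message" into (level, message); malformed lines yield None.
  def parse(line: String): Option[(String, String)] =
    line.split(":", 2) match {
      case Array(level, msg) => Some((level.trim, msg.trim))
      case _                 => None
    }

  // Count occurrences per log level, skipping lines that fail to parse.
  def countByLevel(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(parse)                          // drop malformed lines
      .groupBy { case (level, _) => level }    // key by log level
      .view
      .mapValues(_.size)
      .toMap

  def main(args: Array[String]): Unit = {
    val sample = Seq("INFO: started", "ERROR: disk full", "INFO: done", "garbage")
    // e.g. Map(INFO -> 2, ERROR -> 1); "garbage" is discarded by parse
    println(countByLevel(sample))
  }
}
```

The same shape (parse, filter, key, aggregate) carries over directly to Spark's RDD and Dataset APIs once a `SparkSession` is in play.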

Mandatory Skill Sets
  • Programming Expertise: Strong proficiency in Scala (including functional programming concepts), plus experience with JVM-based languages such as Java, or with Python.

  • Big Data Technologies: Expertise in Spark with Scala (Spark Core, Spark SQL, Spark Streaming) and familiarity with the broader Hadoop ecosystem (HDFS, Hive, etc.).

  • Database Knowledge: Proficiency in SQL and experience with relational databases (e.g. PostgreSQL, MySQL, Oracle) and NoSQL databases (e.g. MongoDB, Cassandra).

  • Cloud Platforms: Experience with major cloud services such as AWS, Azure, or GCP is often preferred.
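The functional programming concepts the first bullet asks for can be illustrated with a short, self-contained Scala sketch: immutable case classes, higher-order functions, and `Option` in place of nulls. The orders/discount domain and all names here are invented for illustration.

```scala
// Functional Scala basics: immutability, higher-order functions, Option.
object FpConcepts {
  // Immutable value type; updates go through copy, never mutation.
  final case class Order(id: Int, amount: Double)

  // Higher-order function: the discount rule is passed in as a parameter.
  def applyDiscount(orders: List[Order], rule: Order => Double): List[Order] =
    orders.map(o => o.copy(amount = o.amount - rule(o)))

  // Option instead of null for a lookup that can fail.
  def findOrder(orders: List[Order], id: Int): Option[Order] =
    orders.find(_.id == id)

  // Pure aggregation with a fold; no mutable accumulator.
  def totalAmount(orders: List[Order]): Double =
    orders.foldLeft(0.0)((acc, o) => acc + o.amount)
}
```

This style of passing behaviour as values and modelling absence with `Option` is exactly what Spark's Scala API builds on.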


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala