
Apache Spark Engineer

Employer Active

1 Vacancy

Jobs by Experience

4-5 years

Job Location

Sabana - USA

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Req ID: 2581617

This is a remote position.


Job Overview:
We are seeking a highly skilled Apache Spark Engineer with a minimum of 3 years of experience to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining Apache Spark applications to process and analyze large datasets. The candidate should have a strong background in big data processing, distributed computing, and data engineering.



Key Responsibilities:

  1. Design, develop, and maintain Apache Spark applications for large-scale data processing.
  2. Collaborate with data scientists and analysts to understand data requirements and implement efficient solutions.
  3. Optimize and tune Spark applications for performance and scalability.
  4. Develop and implement data pipelines for ETL (Extract, Transform, Load) processes.
  5. Work closely with cross-functional teams to integrate Spark solutions into the overall data architecture.
  6. Troubleshoot and resolve issues related to Spark applications, ensuring high availability and reliability.
  7. Stay updated with the latest trends and advancements in big data technologies, especially in the Apache Spark ecosystem.
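As a rough illustration of the ETL work described above, the extract-transform-load pattern can be sketched in plain Python. This is not Spark code: in a real Spark job, extract would be a DataFrame read (e.g. `spark.read.csv`), transform would be operations like `filter` and `groupBy`, and load would be a write such as `df.write.parquet`. The function names and the sample data below are purely illustrative, with the standard library standing in because Spark itself is not assumed here.

```python
import csv
import io

# Extract: parse raw CSV text into dict rows
# (in Spark: spark.read.csv(..., header=True)).
def extract(raw_csv):
    return list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: filter out non-positive amounts, then aggregate per region
# (in Spark: df.filter(...).groupBy("region").sum("amount")).
def transform(rows):
    totals = {}
    for row in rows:
        amount = int(row["amount"])
        if amount > 0:  # drop non-positive amounts
            totals[row["region"]] = totals.get(row["region"], 0) + amount
    return totals

# Load: serialize results for output
# (in Spark: df.write.parquet(...) or similar).
def load(totals):
    return [f"{region},{total}" for region, total in sorted(totals.items())]

raw = "region,amount\neast,10\nwest,5\neast,7\nwest,-2\n"
result = load(transform(extract(raw)))
print(result)  # -> ['east,17', 'west,5']
```

In Spark the same three stages run distributed across a cluster, and tuning them (partitioning, caching, shuffle behavior) is where the optimization responsibilities above come in.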



Qualifications:

  1. Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  2. Minimum of 3 years of hands-on experience with Apache Spark in a production environment.
  3. Proficient in programming languages such as Scala or Python.
  4. Strong understanding of distributed computing concepts and big data processing.
  5. Experience with Spark SQL, Spark Streaming, and Spark MLlib.
  6. Familiarity with Hadoop ecosystem tools such as HDFS, Hive, and HBase.
  7. Solid understanding of data modeling, data structures, and algorithms.
  8. Excellent problem-solving and analytical skills.

Preferred Skills:

  1. Experience with cloud platforms such as AWS, Azure, or Google Cloud.
  2. Knowledge of containerization and orchestration tools like Docker and Kubernetes.
  3. Familiarity with version control systems, particularly Git.
  4. Experience with Apache Kafka for real-time data streaming.
  5. Strong communication skills and ability to work collaboratively in a team environment.



Employment Type

Full Time

Disclaimer: Drjobpro.com is only a platform that connects job seekers and employers. Applicants are advised to conduct their own independent research into the credentials of the prospective employer. We always make certain that our clients do not endorse any request for money payments, so we advise against sharing any personal or bank-related information with any third party. If you suspect fraud or malpractice, please contact us via the contact us page.