Big Data Developer

Employer Active

1 Vacancy

Job Location

Pune - India

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

About the job


Company Description

Spero Healthcare Innovations is a software company based in Pune. We focus on creating innovative emergency management software for the critical dispatch of ambulances and other emergency service providers. Our SPERO EMS software is customized to meet the specific needs of government-run ambulance services and other emergency service providers. We prioritize user-friendly and eco-friendly software solutions that maximize electronic communication and documentation while reducing human error. In addition to our emergency management software, we also offer a dedicated Home Health Care Support System with a team of healthcare professionals.


Role Description

This is a full-time, on-site role for a Big Data Developer in Pune. Your responsibilities will include data engineering, Extract, Transform, Load (ETL), big data processing, data warehousing, and software development. You will develop and maintain scalable data solutions, analyze large data sets, and optimize data pipelines. We are searching for a talented Big Data Developer to join our growing data team. You will play a key role in designing, developing, and maintaining real-time data pipelines using Apache Kafka, and you will work closely with engineers and data scientists to ensure our data infrastructure runs smoothly and efficiently.



Responsibilities

  • Design, develop, and implement real-time data pipelines using Kafka (see the illustrative sketch after this list)
  • Integrate Kafka with other big data technologies (e.g., Hadoop, Spark)
  • Develop and maintain data transformation logic for streaming data
  • Monitor and troubleshoot Kafka clusters
  • Perform data quality checks and ensure data integrity
  • Work with data scientists to understand data requirements
  • Stay up to date on the latest big data trends and technologies
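For illustration only, the sketch below shows the kind of real-time pipeline work the first responsibility refers to: a minimal Kafka producer that publishes a single dispatch event. The broker address (localhost:9092), topic name (ems-dispatch-events), key, and JSON payload are assumed values for the example and are not part of this posting.

    // Illustrative sketch only (not from the posting): a minimal Kafka producer
    // that publishes one dispatch event. Broker address, topic name, key, and
    // JSON payload below are assumed values for demonstration.
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.util.Properties;

    public class DispatchEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");            // assumed local broker
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            // try-with-resources closes the producer and flushes pending sends
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String key = "ambulance-42";                             // hypothetical vehicle id
                String value = "{\"event\":\"dispatched\",\"lat\":18.52,\"lon\":73.86}";
                producer.send(new ProducerRecord<>("ems-dispatch-events", key, value),
                        (metadata, exception) -> {
                            if (exception != null) {
                                exception.printStackTrace();             // basic error handling
                            } else {
                                System.out.printf("Sent to %s-%d@%d%n",
                                        metadata.topic(), metadata.partition(), metadata.offset());
                            }
                        });
            }
        }
    }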

Qualifications

  • Bachelor's degree in Computer Science, Data Science, or a related field (or equivalent experience)
  • 3 years of experience in big data development
  • Proven experience with Apache Kafka, including the Producer, Consumer, and Streams APIs (a minimal Streams sketch follows this list)
  • Strong programming skills in Java or Python (or both)
  • Experience with distributed systems and data modeling
  • Solid understanding of data structures and algorithms
  • Experience with Linux and scripting languages (e.g., Bash)
  • Excellent problem-solving and analytical skills
  • Strong communication and collaboration skills
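As a companion to the Kafka qualification above, here is a minimal, hypothetical Kafka Streams topology that filters one topic into another. The application id, topic names, and filter condition are assumptions for the sketch, not requirements from the posting.

    // Illustrative sketch only (not from the posting): a minimal Kafka Streams
    // topology that filters events from one topic into a downstream topic.
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    import java.util.Properties;

    public class DispatchStreamApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dispatch-stream-app"); // assumed id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("ems-dispatch-events");

            // Keep only "dispatched" events and forward them to a downstream topic
            events.filter((key, value) -> value != null && value.contains("\"dispatched\""))
                  .to("ems-dispatched-only");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }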

Bonus Points

  • Experience with other big data technologies (e.g., Hadoop, Spark, Flink)
  • Experience with cloud platforms (e.g., AWS, Azure, GCP)
  • Experience with data visualization tools (e.g., Tableau, Power BI)

What We Offer

  • Competitive salary and benefits package
  • Opportunity to work on challenging and impactful projects
  • Collaborative and supportive work environment
  • Chance to learn and grow your skills with the latest technologies


Employment Type

Full Time
