Mid-level Data Engineer

Job Location:

Phoenix, NM - USA

Monthly Salary: Not Disclosed
Posted on: Yesterday
Vacancies: 1 Vacancy

Job Summary

Minimum Qualifications:

  • Master's in Computer Applications or equivalent, or a Bachelor's degree in Engineering, Computer Science, or an equivalent field.
  • Deep understanding of Hadoop and Spark architecture and their working principles.
  • Deep understanding of data warehousing concepts.
  • Ability to design and develop optimized data pipelines for batch and real-time data processing.
  • Experience in data analytics and cleansing.
  • 5 years of software development experience.
  • 5 years of experience in Python or Java.
  • Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames), including optimizing joins while processing large volumes of data (see the sketch after this list).
  • 3 years of hands-on experience working with MapReduce, Hive, and Spark (Spark Core, Spark SQL, and PySpark).
  • Hands-on experience with Google Cloud Platform (BigQuery, Dataproc, Cloud Composer).
  • Hands-on experience with Airflow.
  • 3 years of experience in UNIX shell scripting.
  • Experience in the analysis, design, development, testing, and implementation of system applications.
  • Ability to effectively communicate with internal and external business partners.
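
The join-optimization item above maps to patterns like the following minimal PySpark sketch. It is not part of the posting; the table and column names (warehouse.orders, warehouse.customers, customer_id, order_amount) are hypothetical, and it simply shows broadcasting a small dimension table so the large fact table is not shuffled.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical session and tables, for illustration only.
spark = SparkSession.builder.appName("join-optimization-sketch").getOrCreate()
orders = spark.table("warehouse.orders")        # large fact table
customers = spark.table("warehouse.customers")  # small dimension table

# Broadcasting the small side turns a shuffle join into a map-side join,
# usually the first lever when joins over huge datasets are slow.
enriched = orders.join(F.broadcast(customers), on="customer_id", how="left")

daily_totals = (
    enriched
    .groupBy("order_date", "customer_region")
    .agg(F.sum("order_amount").alias("total_amount"))
)

# Partitioning the output by date keeps downstream batch reads selective.
(daily_totals.write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("warehouse.daily_totals"))
```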

Additional requirements:

  • Understanding of distributed ecosystems.
  • Experience with machine learning models, RAG, and NLP.
  • Experience in designing and building solutions using Kafka streams or queues (a sketch follows this list).
  • Experience with NoSQL databases, e.g., HBase, Cassandra, Couchbase, or MongoDB.
  • Experience with data visualization tools such as Power BI, Tableau, Sisense, or Looker.
  • Ability to learn and apply new programming concepts.
  • Knowledge of the financial reporting ecosystem is a plus.
  • Experience in leading teams of engineers and scrum teams.
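
The Kafka item above typically translates to a consume-transform-produce loop. The sketch below is illustrative only, assuming the confluent-kafka Python client, a local broker, and hypothetical orders / orders-enriched topics; it is not part of the posting.

```python
import json
from confluent_kafka import Consumer, Producer

# Hypothetical broker address, consumer group, and topic names.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-enrichment",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        order = json.loads(msg.value())
        # Example transformation: flag large orders for downstream consumers.
        order["is_large"] = order.get("amount", 0) > 10_000
        producer.produce("orders-enriched", json.dumps(order).encode("utf-8"))
        producer.poll(0)  # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```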

Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala