You will be a key member of our Data Engineering team, focused on designing, developing, and maintaining robust data solutions in on-premise environments. You will work closely with internal teams and client stakeholders to build and optimize data pipelines and analytical tools using Python, PySpark, SQL, and Hadoop ecosystem technologies. This role requires deep hands-on experience with big data technologies in traditional data center environments (non-cloud).
What you'll be doing
- Design, build, and maintain on-premise data pipelines to ingest, process, and transform large volumes of data from multiple sources into data warehouses and data lakes
- Develop and optimize PySpark and SQL jobs for high-performance batch and real-time data processing
- Ensure the scalability, reliability, and performance of data infrastructure in an on-premise setup
- Collaborate with data scientists, analysts, and business teams to translate their data requirements into technical solutions
- Troubleshoot and resolve issues in data pipelines and data processing workflows
- Monitor, tune, and improve Hadoop clusters and data jobs for cost and resource efficiency
- Stay current with on-premise big data technology trends and suggest enhancements to improve data engineering capabilities
Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- 5 years of experience in data engineering or a related domain
- Strong programming skills in Python (with experience in PySpark)
- Expertise in SQL with a solid understanding of data warehousing concepts
- Hands-on experience with Hadoop ecosystem components (e.g., HDFS, Hive, Oozie, Sqoop)
- Proven ability to design and manage data solutions in on-premise environments (no cloud dependency)
- Strong problem-solving skills with an ability to work independently and collaboratively
- Excellent communication skills and ability to engage with technical and non-technical stakeholders
Remote Work:
No
Employment Type:
Full-time