Salary Not Disclosed
1 Vacancy
Key Responsibilities:
Develop and maintain complex SQL queries and stored procedures for data extraction, transformation, and loading (ETL).
Build and optimize scalable data pipelines and data processing workflows using PySpark and Python.
Collaborate with data engineers, scientists, and analysts to understand and fulfill data requirements.
Ensure data quality, integrity, and consistent performance across big data environments.
Debug, monitor, and fine-tune data jobs for optimal performance.
Document code and processes; adhere to best practices for coding and performance.
This role is pivotal in enabling organizations to process, analyze, and manage large datasets efficiently on modern enterprise data platforms.
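To illustrate the kind of ETL work the responsibilities above describe, here is a minimal sketch in plain Python using the standard-library `sqlite3` module as a stand-in for an enterprise warehouse. The table and column names (`raw_orders`, `customer_totals`) are hypothetical; a production pipeline of the sort this role involves would typically run comparable SQL or PySpark transformations against a distributed platform.

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> list:
    """Extract raw rows, transform via an aggregate, load a reporting table."""
    cur = conn.cursor()
    # Extract: stage raw order events (hypothetical schema)
    cur.execute(
        "CREATE TABLE IF NOT EXISTS raw_orders "
        "(order_id INTEGER, customer TEXT, amount REAL)"
    )
    cur.executemany(
        "INSERT INTO raw_orders VALUES (?, ?, ?)",
        [(1, "acme", 120.0), (2, "acme", 80.0), (3, "globex", 50.0)],
    )
    # Transform + Load: aggregate per customer into a reporting table
    cur.execute(
        "CREATE TABLE customer_totals AS "
        "SELECT customer, SUM(amount) AS total "
        "FROM raw_orders GROUP BY customer ORDER BY customer"
    )
    return cur.execute("SELECT customer, total FROM customer_totals").fetchall()

if __name__ == "__main__":
    print(run_etl(sqlite3.connect(":memory:")))
    # [('acme', 200.0), ('globex', 50.0)]
```

In a PySpark setting the same extract/aggregate/load pattern would use `spark.read`, `groupBy().agg()`, and `DataFrame.write` instead of direct SQL against a single connection.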
Full-time