Key Responsibilities:
Design and manage data pipelines using GCP tools (BigQuery, Dataflow, Dataproc); an illustrative sketch follows this list.
Collaborate with stakeholders to define data requirements and models.
Implement cloud data storage, processing, and analytics solutions.
Optimize Big Data ecosystems (Hadoop, HDFS, Hive, Spark).
Develop ETL processes and workflows for data transformation.
Apply advanced Java skills for scalable data processing.
Troubleshoot data issues and improve performance.
Automate tasks and ensure system reliability using GCP services.
Maintain data quality, security, and compliance standards.
Stay up to date with best practices in cloud infrastructure.
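
For illustration only (not part of the role description): a minimal Apache Beam pipeline in Java of the kind a candidate in this role might run on Dataflow. The bucket paths and class name are hypothetical placeholders, not details from this posting.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

// Hypothetical example: read text files from Cloud Storage, apply a simple
// transformation, and write the results back out. Runnable locally with the
// DirectRunner or on Dataflow by passing --runner=DataflowRunner.
public class SamplePipeline {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);

        p.apply("ReadLines", TextIO.read().from("gs://example-bucket/input/*.txt"))
         .apply("Uppercase", MapElements
             .into(TypeDescriptors.strings())
             .via((String line) -> line.toUpperCase()))
         .apply("WriteResults", TextIO.write().to("gs://example-bucket/output/result"));

        p.run().waitUntilFinish();
    }
}
```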
Key Skills & Qualifications:
Proficiency with GCP services (BigQuery, Dataflow, Dataproc).
Expertise in Big Data and Hadoop ecosystems (Spark, Hive, HBase); see the Spark sketch after this list.
Strong Java programming skills for cloud-based data tasks.
Excellent problem-solving and analytical abilities.
Experience with data pipelines, workflows, and cloud data architecture.
Familiarity with containerization (Docker, Kubernetes) and CI/CD.
Strong communication and team collaboration skills.
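
Again purely illustrative: a minimal Spark job in Java showing the kind of Big Data processing referenced above. The input path and column name are hypothetical assumptions, not requirements of this role.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Hypothetical example: load a columnar dataset and compute a simple
// per-key aggregation, the bread-and-butter shape of a Spark batch job.
public class OrdersSummary {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("OrdersSummary")
            .getOrCreate();

        // Placeholder input path; any Parquet source works the same way.
        Dataset<Row> orders = spark.read().parquet("gs://example-bucket/orders/");

        // Count orders per customer and print a sample of the result.
        orders.groupBy("customer_id")
              .count()
              .show();

        spark.stop();
    }
}
```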
Full Time