Salary: Not Disclosed
Vacancies: 1
Job Type: Full Time
Location: NYC
Role-Specific Technical Requirements (Based on Interview Questions)
Deep understanding of SQL compression techniques and optimization strategies.
Expert knowledge of GCP architecture, core services, and best practices.
Ability to design and implement Pub/Sub-based solutions for real-time, high-volume data ingestion (see the publisher sketch after this list).
Proven approach to ensuring low-latency data processing in distributed environments.
Demonstrated experience implementing security and access controls within GCP using IAM and security tools.
Hands-on experience using GCP Stackdriver for pipeline troubleshooting and operational visibility.
In-depth knowledge of GCP services, including Dataflow, Dataproc, Pub/Sub, BigQuery, Cloud Storage, and Cloud SQL.
Mastery of Python and SQL as scripting and query languages for data manipulation, validation, and ETL processes (see the query sketch after this list).
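To make the Pub/Sub ingestion requirement concrete, here is a minimal publisher sketch using the google-cloud-pubsub client. The project ID, topic name, and batch thresholds are hypothetical, not taken from this posting; tuning BatchSettings is one common way to sustain high-volume publishing while keeping latency bounded.

```python
# Sketch: high-throughput Pub/Sub publishing with batching enabled.
# Assumes google-cloud-pubsub; project and topic IDs are hypothetical.
from concurrent import futures
from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"   # hypothetical
TOPIC_ID = "events-ingest"      # hypothetical

# Batching trades a few milliseconds of latency for far fewer RPCs.
batch_settings = pubsub_v1.types.BatchSettings(
    max_messages=500,        # flush after 500 buffered messages,
    max_bytes=1024 * 1024,   # ...or 1 MiB of buffered data,
    max_latency=0.05,        # ...or 50 ms, whichever comes first.
)
publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

publish_futures = [
    publisher.publish(topic_path, data=f"event-{i}".encode("utf-8"))
    for i in range(10_000)
]
# Wait for the service to acknowledge every message.
futures.wait(publish_futures, return_when=futures.ALL_COMPLETED)
print(f"Published {len(publish_futures)} messages to {topic_path}")
```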
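Likewise, a sketch of the Python-plus-SQL ETL pattern named above: a parameterized BigQuery query driven from Python that deduplicates a raw table into a curated one. All project, dataset, and table names, and the SQL itself, are illustrative assumptions.

```python
# Sketch: a parameterized BigQuery transformation driven from Python.
# Assumes google-cloud-bigquery; project/dataset/table names are hypothetical.
import datetime
from google.cloud import bigquery

client = bigquery.Client()

# Deduplicate raw events into a curated table.
query = """
    SELECT event_id,
           ANY_VALUE(payload) AS payload,
           MIN(ingested_at)  AS first_seen
    FROM `my-gcp-project.raw.events`
    WHERE ingested_at >= @cutoff
    GROUP BY event_id
"""
job_config = bigquery.QueryJobConfig(
    destination="my-gcp-project.curated.events_dedup",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    query_parameters=[
        bigquery.ScalarQueryParameter(
            "cutoff", "TIMESTAMP",
            datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc),
        ),
    ],
)
job = client.query(query, job_config=job_config)
job.result()  # block until the job finishes
print(f"Scanned {job.total_bytes_processed} bytes into {job.destination.table_id}")
```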
Key Responsibilities
Design, build, and optimize end-to-end data pipelines using GCP-native services such as Dataflow, Dataproc, and Pub/Sub.
Implement data ingestion, transformation, and processing workflows using Apache Beam, Apache Spark, and scripting in Python (see the pipeline sketch after this list).
Manage and optimize data storage using BigQuery, Cloud Storage, and Cloud SQL to ensure performance, scalability, and cost-efficiency.
Enforce enterprise-grade data security and access controls using GCP IAM and Cloud Security Command Center (see the IAM sketch after this list).
Monitor and troubleshoot data pipelines using Stackdriver and Cloud Monitoring to ensure high availability and low latency (see the log-query sketch after this list).
Collaborate closely with analysts, data scientists, and cross-functional product teams to understand business needs and deliver robust data solutions.
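For the pipeline responsibility, a minimal Apache Beam streaming sketch (runnable on Dataflow) that reads from Pub/Sub, windows into fixed 60-second intervals, counts events, and writes the counts to BigQuery. The subscription and table identifiers are assumptions, not details from this posting.

```python
# Sketch: streaming Beam pipeline reading Pub/Sub and writing BigQuery.
# All resource names are hypothetical.
import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-gcp-project/subscriptions/events-sub")
            | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
            | "FixedWindows" >> beam.WindowInto(window.FixedWindows(60))
            | "CountPerEvent" >> beam.combiners.Count.PerElement()
            | "ToRow" >> beam.Map(lambda kv: {"event": kv[0], "count": kv[1]})
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-gcp-project:analytics.event_counts",
                schema="event:STRING,count:INTEGER",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```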
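For the IAM responsibility, a small sketch of granting read-only object access on a Cloud Storage bucket through its IAM policy, using the google-cloud-storage client. The bucket name and group address are hypothetical.

```python
# Sketch: grant read-only object access on a bucket via its IAM policy.
# Assumes google-cloud-storage; bucket name and group are hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-data-lake")  # hypothetical bucket

# Version 3 policies are the current format and support conditions.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.objectViewer",
        "members": {"group:data-analysts@example.com"},
    }
)
bucket.set_iam_policy(policy)
print(f"Granted objectViewer on {bucket.name}")
```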
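And for monitoring and troubleshooting, a sketch that pulls recent error-level log entries for Dataflow jobs through the google-cloud-logging client; the filter string here is an illustrative assumption, not a prescribed query.

```python
# Sketch: pull recent ERROR-level Dataflow log entries for quick triage.
# Assumes google-cloud-logging with Application Default Credentials.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
log_filter = 'resource.type="dataflow_step" AND severity>=ERROR'

# Page through Cloud Logging, newest first, capped for a triage view.
for entry in client.list_entries(
    filter_=log_filter,
    order_by=cloud_logging.DESCENDING,
    max_results=20,
):
    print(entry.timestamp, entry.severity, entry.payload)
```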