Core Responsibilities
Architecture & Development: Lead the design and development of scalable data pipelines and robust backend services using Java and Python.
Strategic Scalability: Architect high-throughput infrastructure optimized for high availability and sub-second latency.
High-Performance Querying: Design distributed ingestion layers and warehouse schemas tailored for Trino and Athena to support petabyte-scale querying.
End-to-End Ownership: Drive the full SDLC, from gathering complex requirements through production deployment, monitoring, and incident response.
Stakeholder Leadership: Partner with cross-functional teams to translate business needs into technical requirements and resolve systemic platform bottlenecks.
Operational Excellence: Maintain a strong bias for action, ensuring the team delivers resilient, high-quality code in a fast-paced environment.
Observability: Hands-on experience with Prometheus and Grafana for system monitoring.
Technical Requirements (Must-Have)
Language Mastery: Expert-level proficiency in Java (Primary) and Python.
Distributed Systems: Strong command of Data Structures and Algorithms with a proven track record of optimizing large-scale distributed systems.
Orchestration & Infrastructure: Hands-on Kubernetes expertise for managing containerized deployments and resolving complex infrastructure failures.
Data & Messaging: Deep experience with SQL, Kafka, and the Confluent Schema Registry.
Query Engines: Proficiency with distributed query engines like Athena or Trino.
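The Schema Registry requirement above is ultimately about compatibility rules: a new schema must not break consumers reading records written under the old one. A minimal illustration of the BACKWARD compatibility rule, with schemas simplified to plain dicts; this is a sketch of the semantics, not the registry's actual checker.

```python
# Sketch of BACKWARD compatibility as enforced by a schema registry:
# the new (reader) schema must still decode records written with the
# previous schema. Fields are modeled as name -> (type, has_default).

def is_backward_compatible(old: dict, new: dict) -> bool:
    for name, (ftype, has_default) in new.items():
        if name in old:
            if old[name][0] != ftype:   # type changed: old records undecodable
                return False
        elif not has_default:           # new required field: old records lack it
            return False
    # Fields the new schema drops are simply ignored when reading old records.
    return True

old_schema = {"id": ("long", False), "email": ("string", False)}
ok_change  = {"id": ("long", False), "email": ("string", False),
              "plan": ("string", True)}   # added WITH a default: compatible
bad_change = {"id": ("long", False),
              "plan": ("string", False)}  # added required field: breaks readers

print(is_backward_compatible(old_schema, ok_change))   # True
print(is_backward_compatible(old_schema, bad_change))  # False
```

In a Kafka pipeline this check runs at schema-registration time, so an incompatible producer change is rejected before it can corrupt downstream consumers.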
Preferred Qualifications (Bonus)
Advanced Querying: Specialized experience in Trino performance tuning.
Orchestration: Experience with Airflow for complex workflow management.
Analytics: Familiarity with product analytics tools and data modeling.
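Much of Trino and Athena performance tuning reduces to laying out warehouse data so the engine can prune partitions instead of scanning everything. A minimal sketch of building a Hive-style partition path; the bucket, table, and partition keys are hypothetical examples, not this team's layout.

```python
# Sketch: Hive-style partition layout (key=value path segments), which
# Trino and Athena use to skip irrelevant data when a query filters on
# the partition columns. Names below are illustrative assumptions.
from datetime import date

def partition_path(table: str, event_date: date, region: str) -> str:
    # Layout: s3://bucket/table/key1=value1/key2=value2/
    return (
        f"s3://warehouse/{table}/"
        f"dt={event_date.isoformat()}/region={region}/"
    )

print(partition_path("clickstream", date(2024, 5, 1), "us-east-1"))
```

With this layout, a query filtering on `dt` and `region` reads only the matching prefixes, which is the difference between scanning gigabytes and scanning petabytes at the scale described above.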
Required Experience:
IC (individual contributor)