Senior Scala/Spark Engineer (Kafka, AWS, Streaming Data)
Location: NYC
Work Model: Onsite / Hybrid (as per client)
Visa: Open
Interview: Technical rounds focused on Spark problem solving
Role Snapshot
- Owning Spark/Scala pipelines powering enterprise data warehouse systems
- Working on streaming and batch ingestion (Kafka, Spark)
- Tuning performance, fixing bottlenecks, and supporting live systems
Environment
- High-scale data-intensive financial platform
- Streaming and distributed systems (Spark, EMR, Kafka, AWS EKS)
- Fast-paced, production-first engineering culture
What this role actually owns day-to-day
- Build and evolve Spark ETL pipelines using Scala
- Add and onboard new data feeds into Kafka/Spark pipelines
- Tune jobs for performance (memory, partitions, execution plans)
- Support production pipelines and debug failures under load
- Work with data consumers (BI, analytics, trading systems) to shape usable datasets
- Own delivery end-to-end, from development through release and support
Key Responsibilities
- Write and maintain Spark jobs (Scala) handling high-volume data
- Integrate Kafka streams into batch and streaming pipelines
- Profile jobs and optimize execution time and resource usage
- Handle pipeline failures, reruns, and production fixes
- Build and maintain automated tests (unit, integration, performance)
- Collaborate with engineering and data teams across regions
Must-Have Requirements (Non-Negotiable)
- 4 years of hands-on Scala and Apache Spark (including streaming) in production
- Experience building and maintaining ETL/data pipelines at scale
- Strong understanding of distributed processing and performance tuning
- Experience with Kafka or event-driven data pipelines
- Solid background in Java, C, or C#
- Database experience across relational or distributed systems