Senior-Level Backend/Data Platform Python/Java Engineer 3


Job Location:

Sunnyvale, CA - USA

Monthly Salary: Not Disclosed
Posted on: Yesterday
Vacancies: 1 Vacancy

Job Summary

Top Required Skills (Non-Negotiable / Highest Priority)

  • Python (most important)
  • Java (important but secondary to Python)
  • Must be fluent, not just basic familiarity.
  • Cloud development experience: ideally AWS/GCP
  • Data pipelines / data processing experience, ideally with the Apache ecosystem:
      • Kafka
      • Flink
      • Iceberg (as a plus)
  • Kubernetes (doesn't need to be an expert; exposure and the ability to learn are fine)
  • CI/CD and Terraform are a bonus



Interview process: 3 Rounds (Zoom & technical coding)

This is a mid-level backend engineering role focused on real-time data pipelines and distributed systems using Python, Kafka, Flink, and AWS.

The project is migrating workloads out of Snowflake to reduce cost and latency, and the team is looking for a strong coder with data processing experience who can grow under a Staff Engineer.

Software Development Engineer 3 (Sunnyvale, CA)

This role is a senior-level backend/data platform engineering position supporting Intuitive Surgical's robot manufacturing and clinical data ecosystems. The engineer designs and builds highly scalable event-driven and streaming data platforms that power ingestion, processing, and self-service access to large volumes of robotic and clinical data. The role partners closely with core engineering teams to evolve data models, APIs, and platform capabilities while applying modern engineering best practices, including CI/CD, automated testing, infrastructure-as-code, microservices, and Kubernetes-based deployments. The ideal candidate is a strong Python/Java engineer with hands-on experience building distributed data pipelines using technologies such as Kafka/Flink, Snowflake, AWS Lambda, Kubernetes, and SQL, with bonus strengths in Apache Iceberg, Terraform, GitLab CI/CD, and CNCF-native cloud platforms.
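To make the streaming-ingestion side of the role concrete, here is a minimal sketch of the kind of consumer such a platform might run, assuming the confluent-kafka Python client; the broker address, topic, group id, and record fields are hypothetical illustrations, not details from the posting.

```python
import json

from confluent_kafka import Consumer

# Hypothetical config; broker address and group id are placeholders.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "clinical-data-processor",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["robot-telemetry"])  # hypothetical topic name

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue  # no message within the poll timeout
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # Transform/enrich the event here, then hand it to a downstream sink.
        print(event.get("device_id"), event.get("metric"))
finally:
    consumer.close()
```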

Responsibilities

  • Build highly scalable distributed systems that leverage event-based and streaming data pipelines to handle ingestion and processing of robot manufacturing and clinical data
  • Enable users by providing self-service APIs and applications to access and interact with data (see the sketch after this list)
  • Work closely with core engineering teams to continuously evolve data models based on growing business needs
  • Apply software development best practices such as CI/CD, automated testing, infrastructure-as-code, and microservice architectures
  • Effectively participate in the team's planning, code reviews, KPI reviews, and design discussions, leading to continuous improvement in these areas
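For the self-service API responsibility above, a minimal sketch of what one such endpoint could look like, assuming FastAPI; the route, model fields, and in-memory store are illustrative stand-ins for the real data platform.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Telemetry Self-Service API")  # hypothetical service name

class TelemetryRecord(BaseModel):
    device_id: str
    metric: float
    recorded_at: str

# In-memory stand-in for the real backing store (Snowflake, Iceberg, etc.).
STORE: dict[str, list[TelemetryRecord]] = {}

@app.get("/devices/{device_id}/telemetry", response_model=list[TelemetryRecord])
def get_telemetry(device_id: str, limit: int = 100) -> list[TelemetryRecord]:
    """Return the most recent telemetry records for one device."""
    records = STORE.get(device_id)
    if records is None:
        raise HTTPException(status_code=404, detail="unknown device")
    return records[-limit:]
```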

Skills, Characteristics, and Technology:

  • Exceptional quantitative background (Computer Science, Math, Physics, and/or Engineering) or at least 5 years of industry experience in a quantitative role
  • Fluent coding with Python and Java
  • Proven experience building data pipelines and working with distributed systems using technologies such as Kafka/Flink, Snowflake, and AWS Lambda
  • Excellent written and verbal communication skills
  • Proven understanding of engineering best practices such as unit testing, integration testing, and deployment patterns
  • Experience with Kubernetes
  • Experience with SQL and relational databases
  • Ability and enthusiasm to work collaboratively and cross-functionally and take end-to-end ownership to deliver results for customers

Bonus points:

  • Experience on a Platform team
  • Experience with GitLab CI/CD or other CI tooling
  • Experience with Apache Iceberg
  • Experience with Terraform and general IaC best practices
  • You're familiar with CNCF projects and have successfully used them in the past

The project is to refactor existing data pipelines, with a focus on distributed processing: real-time data processing on the cloud (AWS, Kubernetes).

Move some workloads out of Snowflake into real-time frameworks using Kafka and Flink on AWS EKS (Kubernetes).

Goals: reduce cost and improve latency for end customers
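As a rough sketch of that Snowflake-to-streaming migration pattern, here is what a small PyFlink Table API job reading from Kafka and producing windowed rollups could look like; the topic, schema, and field names are assumptions, not details from the posting, and the Kafka connector requires the flink-sql-connector-kafka jar on the classpath.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical Kafka source; topic, broker, and schema are placeholders.
t_env.execute_sql("""
    CREATE TABLE robot_events (
        device_id STRING,
        metric    DOUBLE,
        event_ts  TIMESTAMP(3),
        WATERMARK FOR event_ts AS event_ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'robot-telemetry',
        'properties.bootstrap.servers' = 'kafka:9092',
        'format' = 'json',
        'scan.startup.mode' = 'latest-offset'
    )
""")

# Print sink stands in for whatever real-time store replaces Snowflake.
t_env.execute_sql("""
    CREATE TABLE metric_rollups (
        device_id  STRING,
        window_end TIMESTAMP(3),
        avg_metric DOUBLE
    ) WITH ('connector' = 'print')
""")

# One-minute tumbling-window averages per device.
t_env.execute_sql("""
    INSERT INTO metric_rollups
    SELECT device_id,
           TUMBLE_END(event_ts, INTERVAL '1' MINUTE),
           AVG(metric)
    FROM robot_events
    GROUP BY device_id, TUMBLE(event_ts, INTERVAL '1' MINUTE)
""").wait()
```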

  • It's a mid-level backend engineering role: building the systems that move, process, and expose large volumes of data in real time
  • Design, build, and maintain backend services and distributed data processing pipelines
  • Work on real-time streaming pipelines using Kafka and Flink
  • Help migrate workloads from Snowflake to real-time systems
  • Develop and deploy services running on Kubernetes (AWS EKS)
  • Collaborate closely with a Staff Engineer who will lead the project
  • Use DevOps tooling (CI/CD, Terraform) as needed to support development and deployment
  • Candidates must have completed projects involving distributed systems, especially data processing pipelines

Key Skills

  • REST
  • Eclipse
  • JUnit
  • Spring
  • Struts
  • SOAP
  • JPA
  • Hibernate
  • Maven
  • J2EE
  • JDBC
  • Java