Staff Software Engineer - Data Platform

Job Location

Bengaluru, India

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

P1348

At Databricks, we are passionate about enabling data teams to solve the world's toughest problems, from security threat detection to cancer drug development. We do this by building and running the world's best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions. Our engineering teams build technical products that fulfill real, important needs in the world. We always push the boundaries of data and AI technology while operating with the resilience, security, and scale that are important to making customers successful on our platform.

We develop and operate one of the largest-scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day. At our scale we observe cloud hardware, network, and operating system faults, and our software must gracefully shield our customers from any of the above.

As a Staff Software Engineer working on the Data Platform team, you will help build the Data Intelligence Platform for Databricks that will allow us to automate decision-making across the entire company. You will achieve this in collaboration with Databricks Product Teams, Data Science, Applied AI, and many more. You will develop a variety of tools spanning logging, orchestration, data transformation, the metric store, governance platforms, data consumption layers, and more. You will do this using the latest bleeding-edge Databricks product and other tools in the data ecosystem; the team also functions as a large in-house production customer that dogfoods Databricks and guides the future direction of the product.

The impact you will have:

  • Design and run the Databricks metrics store, which enables all business units and engineering teams to bring their detailed metrics into a common platform for sharing and aggregation, with high quality, introspection ability, and query performance.
  • Design and run the cross-company Data Intelligence Platform, which contains every business and product metric used to run Databricks. You'll play a key role in developing the right balance of data protections and ease of shareability for the Data Intelligence Platform as we transition to a public company.
  • Develop tooling and infrastructure to efficiently manage and run Databricks on Databricks at scale across multiple clouds, geographies, and deployment types. This includes CI/CD processes, test frameworks for pipelines and data quality, and infrastructure-as-code tooling.
  • Design the base ETL framework used by all pipelines developed at the company.
  • Partner with our engineering teams to provide leadership in developing the long-term vision and requirements for the Databricks product.
  • Build reliable data pipelines and solve data problems using Databricks, our partners' products, and other OSS tools. Provide early feedback on the design and operations of these products.
  • Establish conventions and create new APIs for telemetry, debug, feature, and audit event log data, and evolve them as the product and underlying services change.
  • Represent Databricks at academic and industrial conferences & events.

What we look for:

  • 12 years of industry experience.
  • 6 years of experience providing technical leadership on large projects similar to the ones described above: ETL frameworks, metrics stores, infrastructure management, data security.
  • Experience building, shipping, and operating reliable multi-geo data pipelines at scale.
  • Experience working with and operating workflow or orchestration frameworks, including open-source tools like Airflow and dbt or commercial enterprise tools.
  • Experience with large-scale messaging systems like Kafka or RabbitMQ, or commercial systems.
  • Excellent cross-functional and communication skills; a consensus builder.
  • Passion for data infrastructure and for enabling others by making their data easier to access.


Required Experience:

Staff IC

Employment Type

Full-Time
