CSQ126R235. This role is hybrid with 3 days in our Plano office.
Are you passionate about solving challenging technical problems, working with cutting-edge big data technologies, and growing your expertise in Apache Spark, cloud platforms, and data engineering? Join our global team and make an impact by helping customers achieve their goals with Databricks!
We are looking for a Staff-level Spark Technical Solutions Engineer with a strong data engineering background and hands-on Spark experience. In this role, you'll work closely with our customers to solve complex technical challenges related to Spark, machine learning, Delta Lake, streaming, and our Lakehouse platform. You'll use your technical expertise and communication skills to guide customers on their Databricks journey, ensuring they maximize the value of our platform.
The Impact You Will Have
- Analyze and troubleshoot Spark issues, such as job performance and slowness, using tools like the Spark UI, DAGs, and event logs.
- Solve problems related to Spark Core, Spark SQL, Structured Streaming, Delta, and other Databricks runtime features.
- Help customers optimize Spark performance in areas like memory management, streaming, and data integration.
- Work directly with strategic customers to resolve day-to-day Spark and cloud-related issues.
- Collaborate with Account Executives, Customer Success Engineers, and Solution Architects to address customer needs.
- Collaborate with the R&D team to identify and escalate complex technical challenges, driving in-house supportability solutions within the Lakehouse platform.
- Provide live support via screen-sharing sessions, Slack, and meetings to resolve major Spark issues.
- Create and maintain technical documentation, including knowledge base articles and manuals.
- Coordinate with engineering teams to report and track product defects.
- Participate in on-call rotations for handling escalations and incidents.
- Offer best practices for Spark performance and custom-built solutions.
- Advocate for customers and their success.
- Contribute to the development of internal tools and automation.
- Support integrations between Databricks and thirdparty platforms.
- Track and manage support tickets to meet SLAs.
- Continuously learn and improve your expertise in Databricks, AWS, and Azure.
What We're Looking For
- 8-12 years of experience developing Python, Java, or Scala applications in data engineering or consulting roles.
- 3 years of hands-on experience with Spark (required) and other big data technologies like Hadoop, Kafka, or machine learning at production scale.
- Proven experience troubleshooting and optimizing Hive and Spark applications.
- Knowledge of JVM memory management and garbage collection is a plus.
- Familiarity with SQL databases and ETL tools (e.g., Informatica, Oracle, Teradata) is preferred.
- Hands-on experience with AWS, Azure, or GCP is a plus.
- Excellent written and verbal communication skills.
- Basic Linux/Unix skills are a bonus.
- Knowledge of data lakes and slowly changing dimensions (SCD) is a plus.
- Strong problem-solving and analytical skills, especially in distributed big data environments.
Required Experience:
Staff IC