About Netskope
Today there's more data and more users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network, and Data Security.
Since 2012 we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.
About the role
Please note: this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Digital Experience Management (DEM) Engineering team is responsible for building data ingestion, analytics, APIs, and AI/ML on time-series network, user, and application telemetry data generated from real user monitoring (RUM), synthetic monitoring, and endpoint monitoring on the Netskope SASE platform. We work closely with engineers and the product team to build solutions that solve real-world problems for Network Operators and IT Admins.
What's in it for you
As part of the Digital Experience Management team, you will work on state-of-the-art cloud-scale distributed systems at the intersection of networking, cloud security, and big data. You will help design and build systems that provide critical infrastructure for global Fortune 100 companies.
What you will be doing:
1. Architecting and Building Distributed Data Systems
- Design and implement large-scale distributed platforms microservices and frameworks.
- Build data ingestion pipelines, both streaming and batch, that can handle millions of telemetry events daily.
- Ensure systems are fault-tolerant, highly available, and cost-efficient at scale.
2. Translating Complex Business Needs into Software
- Partner closely with the product team to understand complex operational and analytical requirements.
- Convert these into usable, performant, and maintainable technical solutions.
3. Technical Leadership
- Serve as a technical mentor and architectural guide for senior developers.
- Lead architecture reviews, design discussions, and code reviews.
- Influence engineering practices and promote best-in-class observability, reliability, and security.
4. Innovation in DEM and SASE
- Build solutions that enhance user experience monitoring by correlating data across the network, endpoint, and cloud layers.
- Integrate AI/ML models for root cause analysis, anomaly detection, and forecasting on time-series telemetry data.
- Continuously optimize data reliability, latency, and insight accuracy.
Required skills and experience
Core Technical Expertise
- 8 years of experience building scalable distributed systems in cloud-native environments.
- Expert-level ability to design and deliver complex technical solutions from architecture to production.
- Hands-on experience with data pipelines that handle massive throughput, both streaming (Kafka, Flink, Spark) and batch (ETL frameworks).
- Big data architecture expertise: data modeling, ingestion, transformation, and storage optimization (especially with systems like ClickHouse, Redis, and Kafka).
- Experience with REST / OpenAPI.
Programming and Systems Design
- Strong in Go, Python, and Java, with advanced system design and algorithmic problem-solving skills.
- Deep understanding of networking and security protocols: TCP/IP, TLS, IPsec, GRE, PKI, DNS, and BGP routing.
- Strong grasp of web performance and telemetry concepts (latency, page load, route optimization).
Cloud Containerization and SRE
- Proven experience designing/deploying on AWS or other cloud providers.
- Expertise in Docker and Kubernetes orchestration.
- Deep understanding of SRE principles: monitoring, alerting, SLIs/SLOs, and incident management.
- History of driving performance improvements, cost optimization, and reliability.
Leadership and Communication
- Ability to mentor, influence, and set technical direction across teams.
- Ownership of a major product area.
- Excellent communication and documentation skills for diverse audiences.
- Proven track record of cross-functional collaboration with product, operations, and data science teams.
Good to have
- Hands-on experience building APM, NPM, or DEM products.
- Prior work with AI/ML for time-series analytics (root cause analysis, anomaly detection, forecasting).
- Open source contributions related to big data, observability, or distributed systems.
- Advanced degree (MSCS or equivalent).
What Makes This Role Unique
This is not just another backend or data role; it sits at the intersection of cloud, network, and data intelligence. You'll be shaping the core observability and performance layer for some of the world's largest enterprise networks. The role is deeply technical but also strategic and influential, blending big data, cloud-native distributed systems, and AI/ML insights, all critical to Netskope's SASE vision.
Education
- BSCS or equivalent required; MSCS or equivalent strongly preferred.
#LI-JB3
Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation, and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate.
Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.