Who we are
About Stripe
Stripe is a financial infrastructure platform for businesses. Millions of companies, from the world's largest enterprises to the most ambitious startups, use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone's reach while doing the most important work of your career.
About the team
The Reporting Platform Data Foundations group maintains and evolves the core systems that power reporting data for Stripe's users. We're responsible for Aqueduct, the data ingestion and processing platform that powers core reporting data for millions of businesses on Stripe. We integrate with the latest Data Platform tooling, such as Falcon for real-time data. Our goal is to provide a robust, scalable, and efficient data infrastructure that enables clear and timely insights for Stripe's users.
What you'll do
As a Software Engineer on the Reporting Platform Data Foundations group, you will lead efforts to improve and redesign the core data ingestion and processing systems that power reporting for millions of Stripe users. You'll tackle complex challenges in data management, scalability, and system architecture.
Responsibilities
- Design and implement a new backfill model for reporting data that can handle hundreds of millions of row additions and updates efficiently
- Revamp the end-to-end experience for product teams adding or changing API-backed datasets, improving ergonomics and clarity
- Enhance the Aqueduct Dependency Resolver, the system responsible for determining which critical data to update for Stripe's users based on events. Focus areas include error management, observability, and delegation of issue resolution to product teams
- Lead integration with the latest Data Platform tooling, such as Falcon for real-time data, while managing the deprecation of older systems
- Implement and improve data warehouse management practices, ensuring data freshness and reliability
- Collaborate with product teams to understand their reporting needs and data requirements
- Design and implement scalable solutions for data ingestion, processing, and storage
- Onboard, spin up, and mentor engineers, and set the group's technical direction and strategy
Who you are
We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.
Minimum requirements
- 8 years of professional experience writing high-quality, production-level code or software programs
- Extensive experience in designing and implementing large-scale data processing systems
- Strong background in distributed systems and data pipeline architectures
- Proficiency in at least one modern programming language (e.g., Go, Java, Python, Scala)
- Experience with big data technologies (e.g., Hadoop, Flink, Spark, Kafka, Pinot, Trino, Iceberg)
- Solid understanding of data modeling and database systems
- Excellent problem-solving skills and ability to tackle complex technical challenges
- Strong communication skills and ability to work effectively with cross-functional teams
- Experience mentoring other engineers and driving technical initiatives
Preferred qualifications
- Experience with real-time data processing and streaming systems
- Knowledge of data warehouse technologies and best practices
- Experience in migrating legacy systems to modern architectures
- Contributions to open-source projects or technical communities
Required Experience:
Staff IC