- Build and maintain Sav's data infrastructure: pipelines and storage systems for structured and unstructured data
- Ingest and normalize 350 user-level data points across banking APIs, behavioral events, transaction data, and third-party services
- Architect a data mesh that allows distributed ownership but unified insight
- Set up and maintain modern ELT/ETL pipelines with performance, cost, and scale in mind
- Integrate LLM tooling (Ollama, Gemini, OpenAI) into data workflows for real-time inference and analysis
- Enable self-service data access and build visualization layers using tools like Looker, Metabase, or Superset
- Collaborate with data scientists and ML engineers to build model-ready datasets and ensure feature consistency
- Own data quality, governance, security, and documentation
- Continuously improve the latency, observability, and cost-efficiency of data systems
Requirements
- 4-7 years of hands-on experience as a Data Engineer or in a similar data infrastructure role
- Strong programming skills in Python and SQL, and experience with dbt or similar tools
- Experience building data pipelines using a modern stack: Airflow, Kafka, Spark, or equivalent
- Familiarity with cloud data platforms (GCP preferred): BigQuery, Firestore, or similar
- Experience handling large-scale transactional and behavioral data
- Bonus: Prior experience integrating LLMs/AI APIs (Ollama, OpenAI, Gemini) into analytics or automation workflows
- Good understanding of data warehousing, governance, and privacy-by-design principles
- Self-starter, comfortable in 0-to-1 environments, and excited to work cross-functionally with tech and non-tech teams
- Clear communication and documentation skills
Benefits
- Be part of a mission-driven fintech scaling across the GCC
- Work alongside a passionate and visionary leadership team
- A flat, fast-paced, no-drama culture with high ownership and high trust
- Work alongside a globally distributed team of experienced builders
- Compensation: INR 12-24 LPA, ESOPs, and employee benefits