Data Engineer

Payoneer


Job Location:

Gurgaon - India

Monthly Salary: Not Disclosed
Posted on: 5 days ago
Vacancies: 1 Vacancy

Job Summary

About Payoneer

Founded in 2005, Payoneer is the global financial platform that removes friction from doing business across borders, with a mission to connect the world's underserved businesses to a rising global economy. We're a community of over 2,500 colleagues all over the world, working to serve customers and partners in over 190 countries and territories.

By taking the complexity out of financial workflows, including everything from global payments and compliance to multi-currency and workforce management to providing working capital and business intelligence, we give businesses the tools they need to work efficiently worldwide and grow with confidence.


Role summary

We're looking for a Data Engineer who is a hands-on builder with a drive for excellence and a pragmatic, problem-solving approach, able to translate business and product needs into reliable batch and streaming data pipelines in a payments and fintech environment.

This role is best suited to an engineer with solid data engineering fundamentals who is excited to build, operate, and improve production data systems while continuing to grow in streaming, platform reliability, and cloud-native data engineering practices.

AI-first mindset: We value engineers who can incorporate AI-enabled and agentic development practices into day-to-day delivery, using AI responsibly to accelerate development and testing, improve observability and data quality, and solve engineering use cases where it creates clear business value.

What You'll Do

  • Build, maintain, and optimize batch and streaming data pipelines that power product and business use cases, using distributed data processing frameworks such as Apache Beam, Spark, or Flink, with managed runners or engines such as Google Cloud Dataflow where relevant.
  • Develop curated datasets and dimensional models for analytics and reporting in cloud data warehouses.
  • Implement workflow orchestration and automation with an emphasis on reliability, repeatability, and clear failure handling.
  • Contribute to event-driven integrations using messaging platforms such as Kafka, building familiarity with core streaming concepts including windowing, late-data handling, replay and backfill strategies, and idempotency.
  • Work with operational data stores such as Bigtable, SQL Server, MongoDB, or equivalents, where aligned to access patterns, scalability, and performance requirements.
  • Strengthen data quality and trust through validation frameworks, pipeline observability, monitoring, and governance-aligned practices.
  • Use AI-assisted development tools to improve throughput, for example through faster debugging, automated test scaffolding, and better documentation, and explore data-engineering-adjacent AI use cases such as anomaly detection on pipeline or business metrics.

Who You Are

  • You have a solid foundation in data engineering and are excited to build and operate reliable data pipelines in production.
  • You're comfortable working across core batch data engineering patterns, and you have some exposure to streaming concepts and distributed processing at scale.
  • You enjoy debugging and improving performance and data quality.
  • You collaborate well with product, analytics, and business stakeholders and can translate requirements into clear technical tasks.
  • You care about engineering hygiene, including testing, documentation, and operational ownership, and you're open to using AI responsibly to improve your throughput and the quality of what you ship.

Key skills and competencies

  • Hands-on experience building and maintaining production data pipelines, with strong SQL and data modelling fundamentals.
  • Experience with at least one distributed data processing framework such as Apache Beam, Spark, or Flink.
  • Experience with at least one cloud data warehouse such as BigQuery, Snowflake, Redshift, Databricks SQL, or Synapse.
  • Familiarity with pipeline orchestration using frameworks such as Airflow, Composer, Prefect, or equivalent.
  • Exposure to streaming platforms such as Kafka, and an understanding of core streaming concepts including windowing, late data, replay, and idempotency.
  • Understanding of data quality and observability basics, including validation checks, monitoring, and lineage or metadata concepts.

Preferred

  • Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
  • Experience with at least one major cloud data platform such as Google Cloud, AWS, or Azure.
  • Prior exposure to fintech, payments, lending, or broader financial services domains.
  • Exposure to automation tools for reporting workflows.

Why this role

You'll work on high-impact data foundations that directly enable product outcomes, reporting, and downstream AI/ML use cases.

You'll ship in a collaborative environment that values clarity, ownership, and continuous improvement, with room to grow your technical depth across both batch and streaming systems.


Required Experience:

IC


About Company


In today's borderless digital world, Payoneer enables millions of businesses and professionals from more than 200 countries and territories to connect with each other and grow globally through our cross-border payments platform, with fast, flexible, secure, and low-cost solu ...
