Senior DataOps Engineer

DLocal


Job Location: Madrid - Spain
Monthly Salary: Not Disclosed
Posted on: 23 hours ago
Vacancies: 1 Vacancy

Job Summary

Why should you join dLocal?

dLocal enables the biggest companies in the world to collect payments in 40 countries in emerging markets. Global brands rely on us to increase conversion rates and simplify payment expansion effortlessly. As both a payments processor and a merchant of record where we operate, we make it possible for our merchants to make inroads into the world's fastest-growing emerging markets.

By joining us you will be part of an amazing global team that makes it all happen, in a flexible, remote-first, dynamic culture, with travel, health, and learning benefits, among others. Being part of dLocal means working with 1000 teammates from 30 different nationalities and developing an international career that impacts millions of people's daily lives. We are builders, we never run from a challenge, we are customer-centric, and if this sounds like you, we know you will thrive in our team.




What's the opportunity?
As a Senior DataOps Engineer, you'll be a strategic professional shaping the foundation of our data platform. You'll design and evolve scalable infrastructure on Kubernetes, operate Databricks as our primary data platform, enable data governance and reliability at scale, and ensure our data assets are clean, observable, and accessible.

What will I be doing?

    • Architect and evolve scalable infrastructure to ingest, process, and serve large volumes of data efficiently, using Kubernetes and Databricks as core building blocks.
    • Design, build, and maintain Kubernetes-based infrastructure, owning deployment, scaling, and reliability of data workloads running on our clusters.
    • Operate Databricks as our primary data platform, including workspace and cluster configuration, job orchestration, and integration with the broader data ecosystem.
    • Work on improvements to existing frameworks and pipelines to ensure performance, reliability, and cost-efficiency across batch and streaming workloads.
    • Build and maintain CI/CD pipelines for data applications (DAGs, jobs, libraries, containers), automating testing, deployment, and rollback.
    • Implement release strategies (e.g. blue/green, canary, feature flags) where relevant for data services and platform changes.
    • Establish and maintain robust data governance practices (e.g. contracts, catalogs, access controls, quality checks) that empower cross-functional teams to access and trust data.
    • Build a framework to move raw datasets into clean, reliable, and well-modeled assets for analytics, modeling, and reporting, in partnership with Data Engineering and BI.
    • Define and track SLIs/SLOs for critical data services (freshness, latency, availability, data quality signals); a minimal illustrative sketch of a freshness check follows this list.
    • Implement and own monitoring, logging, tracing, and alerting for data workloads and platform components, improving observability over time.
    • Lead and participate in the on-call rotation for data platforms, manage incidents, and run structured postmortems to drive continuous improvement.
    • Investigate and resolve complex data and platform issues, ensuring data accuracy, system resilience, and clear root-cause analysis.
    • Maintain high standards for code quality, testing, and documentation, with a strong focus on reproducibility and observability.
    • Work closely with the Data Enablement team, BI, and ML stakeholders to continuously evolve the data platform based on their needs and feedback.
    • Stay current with industry trends and emerging technologies in DataOps, DevOps, and data platforms to continuously raise the bar on our engineering practices.
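To give a flavor of the SLI/SLO work mentioned above, here is a minimal sketch of one way a data-freshness check could look, assuming a Spark/Databricks environment. The table name, timestamp column, and 30-minute threshold are invented for illustration; this is not dLocal's actual implementation.

    # Minimal sketch, assuming a Spark/Databricks environment.
    # Table name, column name, and SLO threshold are hypothetical.
    from datetime import datetime, timezone

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    FRESHNESS_SLO_MINUTES = 30            # assumed SLO: newest data at most 30 min old
    TABLE = "analytics.payments_events"   # hypothetical table name

    def freshness_lag_minutes(spark: SparkSession, table: str) -> float:
        """Minutes elapsed since the newest event timestamp in the table."""
        latest = (
            spark.table(table)
            .agg(F.max("event_ts").alias("latest_ts"))
            .collect()[0]["latest_ts"]
        )
        if latest.tzinfo is None:  # assume timestamps are stored in UTC
            latest = latest.replace(tzinfo=timezone.utc)
        return (datetime.now(timezone.utc) - latest).total_seconds() / 60.0

    if __name__ == "__main__":
        spark = SparkSession.builder.getOrCreate()
        lag = freshness_lag_minutes(spark, TABLE)
        # In practice this value would be exported as a metric (e.g. to Prometheus
        # or Datadog) so alerting can fire when the SLO is breached.
        print(f"{TABLE} freshness lag: {lag:.1f} min (SLO: {FRESHNESS_SLO_MINUTES} min)")
        if lag > FRESHNESS_SLO_MINUTES:
            raise SystemExit(1)  # non-zero exit lets a scheduler flag the breach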

What skills do I need?

    • Bachelor's degree in Computer Engineering, Data Engineering, Computer Science, or a related technical field (or equivalent practical experience).
    • Proven experience in data engineering, platform engineering, or backend software development, ideally in cloud-native environments.
    • Deep expertise in Python and/or SQL, with strong skills building data or platform tooling.
    • Strong experience with distributed data processing frameworks such as Apache Spark (Databricks experience strongly preferred).
    • Solid understanding of cloud platforms, especially AWS and/or GCP.
    • Hands-on experience with containerization and orchestration: Docker, Kubernetes / EKS / GKE / AKS (or equivalent).
    • Proficiency with Infrastructure-as-Code (e.g. Terraform, Pulumi, CloudFormation) for managing data and platform components.
    • Experience implementing CI/CD pipelines (e.g. GitHub Actions, GitLab CI, Jenkins, CircleCI, ArgoCD, Flux) for data workloads and services.
    • Experience in monitoring & observability (metrics, logging, tracing) using tools like Prometheus, Grafana, Datadog, CloudWatch, or similar.
    • Experience with incident management: participating in or leading on-call rotations, handling incidents and running postmortems, and building automation and guardrails to prevent regressions.
    • Strong analytical thinking and problem-solving skills; comfortable debugging across infrastructure, network, and application layers.
    • Able to work autonomously and collaboratively.

Nice to have:
    • Experience designing and maintaining DAGs with Apache Airflow or similar orchestration tools (Dagster, Prefect, Argo Workflows); a minimal Airflow sketch follows this list.
    • Familiarity with modern data formats and table formats (e.g. Parquet, Delta Lake, Iceberg).
    • Experience acting as a Databricks admin/developer, managing workspaces, clusters, compute policies, and jobs for multiple teams.
    • Exposure to data quality, data contracts, or data observability tools and practices.
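For the Airflow item above, the following is a minimal, hypothetical DAG sketch; the DAG id, schedule, and task body are placeholders for illustration, not a description of dLocal's pipelines.

    # Minimal sketch of a daily Airflow DAG with a single Python task.
    # DAG id, schedule, and task logic are illustrative placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_and_load():
        # Placeholder for real extract/load logic (e.g. landing raw files,
        # then triggering a downstream Databricks job to transform them).
        print("extract and load step")

    with DAG(
        dag_id="example_raw_ingestion",   # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # Airflow 2.4+; use schedule_interval on older versions
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="extract_and_load",
            python_callable=extract_and_load,
        )
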
What do we offer?

Besides the tailored benefits we have for each country, dLocal will help you thrive and go that extra mile by offering you:
- Flexibility: we have flexible schedules and we are driven by performance.
- Fintech industry: work in a dynamic and ever-evolving environment with plenty to build and boost your creativity.
- Referral bonus program: our internal talents are the best recruiters - refer someone ideal for a role and get rewarded.
- Learning & development: get access to a Premium Coursera subscription.
- Language classes: we provide free English Spanish or Portuguese classes.
- Social budget: you'll get a monthly budget to chill out with your team (in person or remotely) and deepen your connections!
- dLocal Houses: want to rent a house to spend one week anywhere in the world coworking with your team? We've got your back!

What happens after you apply?
Our Talent Acquisition team is invested in creating the best candidate experience possible, so don't worry, you will definitely hear from us. We will review your CV and keep you posted by email at every step of the process!

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.

Required Experience:

Senior IC

About Company


Simplify your cross-border payment operations in high-growth markets. Send and receive funds locally, reaching new customers. One easy integration, unlimited secure transactions.
