R2R Data Engineer (R2R: Record to Report)

Programmers.io


Job Location:

Austin, TX - USA

Monthly Salary: Not Disclosed
Posted on: 7 days ago
Vacancies: 1 Vacancy

Job Summary

The R2R Data Engineer is a technical expert responsible for architecting and implementing enterprise-scale data infrastructure that powers critical financial analytics and reporting for A Finance. This role requires deep technical expertise in modern data engineering practices, including building high-performance ETL/ELT pipelines, designing scalable data models, and implementing robust data quality frameworks that ensure accuracy and consistency across financial systems.

This position demands hands-on experience with cloud-native data platforms, advanced SQL optimization, and programmatic data transformations. The engineer will work cross-functionally with business users, FDT, IS&T, data scientists, and other engineers to develop production-grade data services that support financial close processes, regulatory reporting, and strategic decision-making.

You will work in an enterprise data warehouse (Snowflake), Dataiku, and lakehouse environments (AWS S3) to design dimensional models, implement data governance policies, and optimize query performance for large-scale financial datasets.

Responsibilities

  • Design and implement scalable data architectures and dimensional models (star/snowflake schemas) that support financial reporting, analytics, and machine learning use cases
  • Develop, test, deploy, monitor, document, and troubleshoot complex data pipelines using modern orchestration frameworks with proper error handling, logging, and alerting mechanisms (see the sketch after this list)
  • Build and maintain RESTful APIs and microservices for data access and integration with downstream applications (e.g., Blackline)
  • Implement data quality frameworks, including automated validation, reconciliation logic, and anomaly detection, to ensure financial data accuracy
  • Optimize SQL queries and data models for performance in Snowflake, including leveraging clustering keys, materialized views, and query optimization techniques
  • Design and implement secure data pipelines with end-to-end encryption, role-based access controls, and compliance with data privacy regulations
  • Collaborate with data scientists and ML engineers to build feature stores and data pipelines that support machine learning model training and inference
  • Establish and enforce data engineering best practices, including code reviews, testing strategies (unit, integration, and data quality tests), and documentation standards
  • Evaluate and implement emerging technologies in the data engineering space (e.g., streaming platforms, data quality tools, metadata management solutions)
  • Participate in an on-call rotation to support production data pipelines and resolve critical incidents
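
A minimal sketch of the kind of pipeline task these responsibilities describe: pulling a raw extract from S3 with error handling and logging, plus a toy data quality gate. The bucket, key, and function names are illustrative assumptions, not details from this posting.

    import logging

    import boto3  # AWS SDK; assumes credentials come from IAM, not literals

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("r2r_pipeline")

    def run_extract(bucket: str, key: str) -> bytes:
        """Fetch a raw extract from S3; failures are logged and re-raised."""
        s3 = boto3.client("s3")
        try:
            obj = s3.get_object(Bucket=bucket, Key=key)
            data = obj["Body"].read()
            log.info("fetched %s/%s (%d bytes)", bucket, key, len(data))
            return data
        except Exception:
            log.exception("extract failed for %s/%s", bucket, key)
            raise  # let the orchestrator retry and alert

    def validate_row_count(rows: list, minimum: int = 1) -> None:
        """Toy data quality check: fail the run rather than load bad data."""
        if len(rows) < minimum:
            raise ValueError(f"expected at least {minimum} rows, got {len(rows)}")

In production, an orchestrator (Airflow, Dagster, or similar) would schedule such tasks and route the logged failures to alerting.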

Key Qualifications

Required Technical Skills:

  • 5 years of advanced Python programming experience, including object-oriented design, asynchronous programming, and package development
  • Expert-level SQL skills, including complex joins, window functions, CTEs, query optimization, and performance tuning in databases (see the sketch after this list)
  • Hands-on experience designing and implementing data models in Snowflake, including time travel, zero-copy cloning, data sharing, and cost optimization strategies
  • Proven experience building production-grade ETL/ELT pipelines processing large volumes of data
  • Strong experience with AWS services, including S3, Lambda, EC2, IAM, Secrets Manager, and CloudWatch
  • Experience implementing data security controls, including encryption at rest/in transit, data masking, tokenization, and row-level security
  • Hands-on experience with CI/CD pipelines using GitHub
  • Strong Git version control skills, including branching strategies, pull requests, and code review processes
  • Proficiency in shell scripting (Bash) for automation and system administration tasks
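
A short sketch of the SQL skills the list names, run through the official Snowflake Python connector: a CTE with a window function that picks the latest posted balance per account. The table and column names (gl_balances, account_id, period, balance) and connection values are hypothetical; real credentials would come from Secrets Manager.

    import snowflake.connector  # official Snowflake connector for Python

    conn = snowflake.connector.connect(
        account="my_account", user="svc_r2r", password="***",
        warehouse="REPORTING_WH", database="FINANCE", schema="R2R",
    )

    # CTE + window function: latest balance per account.
    sql = """
    WITH ranked AS (
        SELECT account_id,
               period,
               balance,
               ROW_NUMBER() OVER (
                   PARTITION BY account_id ORDER BY period DESC
               ) AS rn
        FROM gl_balances
    )
    SELECT account_id, period, balance
    FROM ranked
    WHERE rn = 1
    """

    with conn.cursor() as cur:
        for row in cur.execute(sql):
            print(row)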

Preferred Technical Skills:

  • Experience with streaming data platforms (Kafka, Kinesis, Pub/Sub) and real-time data processing frameworks (Spark Streaming, Flink)
  • Knowledge of containerization (Docker) and orchestration platforms (Kubernetes, ECS)
  • Experience with data catalog and metadata management tools (Alation, Collibra, DataHub)
  • Experience with data quality frameworks (Great Expectations, Soda, Monte Carlo)
  • Experience building and consuming RESTful APIs using frameworks like FastAPI or Flask (a minimal sketch follows this list)
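
A minimal FastAPI sketch of the RESTful data-access pattern mentioned above: exposing a curated dataset to downstream consumers. The endpoint, account IDs, and in-memory dictionary are stand-ins for a real query layer over Snowflake.

    from fastapi import FastAPI, HTTPException

    app = FastAPI()

    # Hypothetical in-memory stand-in for a curated Snowflake dataset.
    BALANCES = {"1000": 1250.00, "2000": -340.25}

    @app.get("/balances/{account_id}")
    def get_balance(account_id: str) -> dict:
        """Serve a single account balance to a downstream application."""
        if account_id not in BALANCES:
            raise HTTPException(status_code=404, detail="unknown account")
        return {"account_id": account_id, "balance": BALANCES[account_id]}

Run locally with, for example, uvicorn app:app, then GET /balances/1000.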

Business & Soft Skills:

  • Understanding of financial processes, including Record-to-Report (R2R), Order-to-Cash (O2C), Procure-to-Pay (P2P), or financial planning
  • Experience working with ERP systems (SAP, Oracle Financials) and extracting data from these platforms
  • Strong problem-solving skills with ability to debug complex data issues and performance bottlenecks
  • Excellent communication skills with ability to explain technical concepts to non-technical stakeholders
  • Experience working in Agile/Scrum environments with cross-functional teams

Education and Experience

  • Bachelor's degree in Computer Science, Computer Engineering, Data Engineering, Mathematics, Statistics, or another quantitative discipline required
  • 5 years of professional experience in data engineering roles with demonstrated expertise in building production data systems
  • Master's degree in a related field preferred