Responsibilities:
Design and build enterprise-scale data pipelines processing billions of daily transactions
Optimize our Hadoop/Spark ecosystem for performance and reliability
Develop real-time data streaming solutions using Kafka/Flume
Implement data governance and quality frameworks for financial data
Collaborate with data scientists to productionize ML models
Modernize legacy data systems to cloud-native architectures (AWS/GCP)
Ensure solutions meet banking security and compliance standards (CCAR, BCBS 239)
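As a rough illustration of the data governance and quality responsibility above, here is a minimal record-validation sketch in Python. All field names, rule sets, and the `validate_record` helper are hypothetical, not part of this role's actual framework:

```python
from decimal import Decimal, InvalidOperation

# Hypothetical rule set for a minimal quality check on transaction records.
REQUIRED_FIELDS = {"txn_id", "account_id", "amount", "currency"}
KNOWN_CURRENCIES = {"USD", "EUR", "GBP"}  # illustrative subset only

def validate_record(record: dict) -> list:
    """Return a list of quality-rule violations for one transaction record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append("missing fields: %s" % sorted(missing))
    if "amount" in record:
        try:
            # Decimal avoids float rounding issues in financial amounts.
            if Decimal(str(record["amount"])) <= 0:
                errors.append("amount must be positive")
        except InvalidOperation:
            errors.append("amount is not numeric")
    if "currency" in record and record["currency"] not in KNOWN_CURRENCIES:
        errors.append("unknown currency: %r" % record["currency"])
    return errors

good = {"txn_id": "t1", "account_id": "a1", "amount": "12.50", "currency": "USD"}
bad = {"txn_id": "t2", "amount": -3}
print(validate_record(good))  # []
print(validate_record(bad))   # missing-fields and non-positive-amount violations
```

In a real pipeline, checks like these would typically run as a validation stage (e.g. inside a Spark job) before records reach downstream consumers, with violations routed to a quarantine table rather than printed.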
Requirements:
Minimum of 9 years of experience
Big Data Tech: Hadoop, Spark, Hive, Impala
Cloud Platforms: AWS (EMR, S3, Glue) or GCP (Dataproc, BigQuery)
Programming: Scala/Python/Java
Data Modeling: SQL, NoSQL (HBase, Cassandra)
CI/CD: Git, Jenkins, Terraform
Banking/financial services experience
Knowledge of data mesh/warehouse/lakehouse architectures
Certifications: AWS/GCP Data Engineer, Cloudera/Databricks
Full Time