Cloud Data Engineer

Brains Workgroup


Job Location:

New York City, NY - USA

Monthly Salary: Not Disclosed
Posted on: 5 days ago
Vacancies: 1 Vacancy

Job Summary

Our client, a major bank in New York City, is looking for a Cloud Data Engineer.
Permanent position with a competitive compensation package (base salary range $135-145K), excellent benefits, and a target bonus.


Must be in the New York City office 2-3 days per week.

Cloud Data Engineer
Will support all aspects of data engineering activities, including data design, development, testing, debugging, documentation, deployment, and production support.
The ideal candidate will have strong expertise in building and managing data pipelines using PySpark, Python, SQL, and Airflow.


Key Responsibilities:
  • Design, develop, and optimize scalable data pipelines on Databricks using PySpark to process large volumes of financial data for real-time analytics and reporting.
  • Implement ETL pipelines for structured and semi-structured financial data from batch and streaming processes using tools such as Apache Kafka, Airflow, and SQL.
  • Implement Gold Layer transformations in Databricks for curated, high-quality datasets that support business intelligence and analytics (a minimal sketch follows this list).
  • Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-quality data solutions that meet business requirements.
  • Implement DevOps best practices for data engineering workflows, including CI/CD pipelines, to ensure efficient and reliable data processing.
  • Ensure data quality, governance, and compliance with financial regulations to support accurate and reliable financial reporting.
  • Automate workflows using Airflow for scheduling and orchestration, improving operational efficiency.
  • Optimize Spark jobs and SQL queries for performance and cost efficiency, ensuring timely and cost-effective data processing.
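
For illustration only, here is a minimal sketch of the kind of Gold Layer transformation described above, assuming PySpark on Databricks with Delta tables; the table, column, and schema names are hypothetical placeholders, not taken from this posting.

# Minimal sketch: Silver -> Gold aggregation in PySpark on Databricks.
# Table and column names (silver.transactions, gold.daily_account_summary,
# account_id, trade_ts, notional) are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read curated Silver-layer transaction records.
silver = spark.table("silver.transactions")

# Aggregate into a curated, high-quality Gold-layer dataset for BI and reporting.
gold = (
    silver
    .filter(F.col("status") == "settled")
    .groupBy("account_id", F.to_date("trade_ts").alias("trade_date"))
    .agg(
        F.sum("notional").alias("total_notional"),
        F.count("*").alias("trade_count"),
    )
)

# Persist as a Delta table consumed by downstream analytics.
(gold.write
     .format("delta")
     .mode("overwrite")
     .saveAsTable("gold.daily_account_summary"))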

Requirements:
  • At least 3 years of experience with Databricks and PySpark for big data processing in a financial services environment.
  • 5 years of strong programming experience in Python and advanced SQL.
  • Hands-on experience with Airflow for workflow orchestration, including designing and managing complex DAGs (a minimal DAG sketch follows this list).
  • Knowledge of cloud platforms (Azure or AWS), including data lake architectures and related services such as Azure Data Lake Storage or AWS S3.
  • Understanding of financial domain data and regulatory requirements with experience in handling sensitive financial information.
  • Excellent problem-solving and communication skills with the ability to work effectively in a collaborative team environment.
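
For illustration only, here is a minimal Airflow DAG sketch of the scheduling and orchestration pattern referenced above, assuming Airflow 2.x; the DAG id, task ids, and placeholder callables are hypothetical, not taken from this posting.

# Minimal Airflow 2.x DAG sketch: a daily ingest step followed by a transform step.
# DAG id, task ids, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    """Placeholder: e.g. land a daily batch of records from Kafka or S3."""


def transform():
    """Placeholder: e.g. trigger a Databricks/PySpark job on the landed data."""


with DAG(
    dag_id="daily_financial_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run the transform only after the ingest succeeds.
    ingest_task >> transform_task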

Core Competencies:
  • Proven experience with Databricks and PySpark for big data processing.
  • Strong programming skills in Python and advanced SQL.
  • Hands-on experience with Airflow for workflow orchestration.
  • Knowledge of cloud platforms (Azure, AWS, or GCP) and data lake architecture.
  • Excellent problem-solving and communication skills.

Preferred Qualifications:
  • Bachelor's degree in Computer Science.
  • Experience working in Agile/Scrum environments with a strong understanding of Agile methodologies and practices.
  • Familiarity with CI/CD pipelines for data engineering, including tools such as Jenkins, GitLab CI, or Azure DevOps.
  • Background in financial services or investment banking with experience in handling financial data and understanding industry-specific challenges.



Please email your resume or use this link to apply directly:


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala