Lead Data Engineer - ETL & SQL

Job Location

Columbus - USA

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.

As a Lead Data Engineer at JPMorgan Chase within Corporate Sector Enterprise Technology and Tech Reference Data, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities

  • Provides recommendations and insight on data management and governance procedures and intricacies applicable to the acquisition, maintenance, validation, and utilization of data.
  • Designs and delivers trusted data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way.
  • Delivers data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way.
  • Collaborates with team members across Columbus, Argentina, and Jersey City to ensure seamless integration and operation of the Fabric service within the Technology Reference Data (TRD) framework.
  • Develops and implements data federation strategies using APIs, RDBMS, and Object Stores to facilitate unified data access for Business Analysts, Data Scientists, Data Engineers, and Software Developers.
  • Utilizes ANSI SQL to enable efficient data aggregation and manipulation across multiple data sources, enhancing analytical capabilities and decision-making processes.
  • Designs and maintains robust software solutions for authorization and key management, ensuring secure connectivity to both private and public cloud environments.
  • Constructs and optimizes data pipelines for ETL/ELT operations, ensuring efficient data processing and transformation to meet business needs (a minimal sketch follows this list).
  • Builds and manages container images for custom services and applications, as well as third-party open-source applications, to support scalable and reliable deployment in cloud environments.
  • Evaluates and reports on access control processes to determine effectiveness of data asset security with minimal supervision.
  • Adds to team culture of diversity, equity, inclusion, and respect.
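
For illustration only, a minimal sketch of the extract-transform-load shape referenced above, using just the Python standard library; the file name, table name, and columns are assumptions made for this example, not part of the role description.

  import csv
  import sqlite3

  # Hypothetical example: extract rows from a CSV source, aggregate them,
  # and load the result into a local SQLite table using ANSI SQL.
  def run_etl(source_csv: str = "orders.csv", target_db: str = "warehouse.db") -> None:
      conn = sqlite3.connect(target_db)
      conn.execute(
          "CREATE TABLE IF NOT EXISTS daily_totals (order_date TEXT, total REAL)"
      )

      # Extract: read raw rows from the CSV source.
      with open(source_csv, newline="") as f:
          rows = list(csv.DictReader(f))

      # Transform: aggregate order amounts by date in memory.
      totals: dict[str, float] = {}
      for row in rows:
          totals[row["order_date"]] = totals.get(row["order_date"], 0.0) + float(row["amount"])

      # Load: write the aggregated result to the target table.
      conn.executemany(
          "INSERT INTO daily_totals (order_date, total) VALUES (?, ?)",
          totals.items(),
      )
      conn.commit()
      conn.close()

  if __name__ == "__main__":
      run_etl()

The same extract, transform, load shape applies regardless of the actual sources and targets; only the connectors and the SQL dialect details change.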

Required qualifications, capabilities, and skills

  • Formal training or certification on data and software engineering concepts and 5 years of applied experience.
  • Strong experience with both relational and NoSQL databases.
  • Experience and proficiency across the data lifecycle.
  • Experience in building and deploying applications within containerized environments, utilizing tools such as Kustomize, Kubernetes, and Docker.
  • Expertise in designing and implementing scalable data architectures and pipelines.
  • Strong proficiency in programming languages such as Python, Java, and SQL.
  • In-depth knowledge of data warehousing solutions and ETL processes.
  • Ability to work collaboratively with cross-functional teams, including data scientists, analysts, and business stakeholders.
  • Experience with data governance and ensuring data quality and integrity.
  • Excellent communication skills, with the ability to convey complex technical concepts to non-technical audiences.
  • Experience with version control systems like Git and CI/CD pipelines for data engineering workflows.

Preferred qualifications, capabilities, and skills

  • Experience with Trino / Presto and other SQL query engines.
  • Proficiency in Big Data technologies with a strong focus on performance optimization using best practices.
  • Solid understanding of the Parquet, ORC, and Avro file formats.
  • Solid understanding of open table formats, particularly Apache Iceberg.
  • Experience in building, maintaining, and optimizing Iceberg tables.
  • Proficiency in building and maintaining efficient container images.
  • Experience with orchestration tools such as Airflow (a minimal DAG sketch follows this list).
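
For illustration only, a minimal sketch of an Airflow DAG with two ordered tasks, assuming an Airflow 2.x environment; the DAG id, task ids, and callables are invented for the example and are not taken from this posting.

  from datetime import datetime

  from airflow import DAG
  from airflow.operators.python import PythonOperator

  # Hypothetical example: a daily DAG that runs an extract task before a load task.
  def extract():
      print("extract step")

  def load():
      print("load step")

  with DAG(
      dag_id="example_etl",
      start_date=datetime(2024, 1, 1),
      schedule_interval="@daily",
      catchup=False,
  ) as dag:
      extract_task = PythonOperator(task_id="extract", python_callable=extract)
      load_task = PythonOperator(task_id="load", python_callable=load)

      extract_task >> load_task  # enforce ordering: extract runs before load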


Employment Type

Full-Time
