
Data Engineer (Database and Python-focused)

Job Location

New York City, NY - USA

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Client Name: SANS
End Client Name: Morgan Stanley

Job Title: Data Engineer (Database & Python-focused)
Location: New York City (local candidates with a local DL)
Work Type: Hybrid 3 Days Onsite
Job Type: Contract
Rate: $75/hour on Corp to Corp

Notes:

  • Prefer candidates from investment banking or Fortune 1000 companies (e.g., Visa, Mastercard, Walmart, Amex).
  • 8-15 years of experience in the USA required.
  • Must be senior and technically mature.
  • Visa: Any (USC, GC, H4-EAD).
  • Must attend one in-person round in NYC and two Zoom interviews.
  • LinkedIn profile is mandatory with each resume submittal; no exceptions.
  • This is NOT a Machine Learning or AI Python role. Do NOT submit resumes with only Python/ML experience.
  • Must have strong database experience (not DevOps- or ML-heavy candidates).

Job Description:

Morgan Stanley's Institutional Securities Technology team is seeking a Data Engineer with deep database and ETL expertise combined with mid-level Python experience. The candidate will work on Data Quality and Client Confidentiality platforms that handle massive datasets across a hybrid on-prem/cloud infrastructure.

Roles and Responsibilities:

  • Develop, troubleshoot, and optimize ETL workflows across multiple database systems.
  • Identify and resolve data quality issues and performance bottlenecks.
  • Extract, process, and deliver millions of rows of data daily.
  • Conduct performance tuning on-prem (DB2, Greenplum, Sybase) and in Snowflake.
  • Support and maintain production systems, including log analysis and issue triaging.
  • Work within a UNIX/Linux environment to manage and automate data pipelines.
  • Write SQL queries and Python scripts to move and transform data, including Kafka/JSON outputs.
  • Collaborate closely with internal teams and support onboarding of new users to the platform.
  • Contribute to the development and productization of the data quality platform.
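
As an informal illustration of the day-to-day work described above, here is a minimal Python sketch of one ETL step: parsing structured CSV input, applying a simple data-quality rule, and emitting newline-delimited JSON records of the kind a Kafka producer might publish downstream. The column names and the quality rule are hypothetical examples, not part of the job description.

```python
import csv
import io
import json

# Hypothetical sample input; in practice this would come from a
# database extract or a file on a UNIX/Linux host.
RAW_CSV = """client_id,trade_date,notional
1001,2024-03-01,250000.00
1002,2024-03-01,
1003,2024-03-02,87500.50
"""

def extract(text):
    """Parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows failing a basic data-quality check; normalize types."""
    clean = []
    for row in rows:
        if not row["notional"]:  # example quality rule: notional is required
            continue
        clean.append({
            "client_id": int(row["client_id"]),
            "trade_date": row["trade_date"],
            "notional": float(row["notional"]),
        })
    return clean

def to_json_lines(records):
    """Serialize each record as a JSON string (one message per record)."""
    return [json.dumps(r, sort_keys=True) for r in records]

messages = to_json_lines(transform(extract(RAW_CSV)))
```

At production scale the same extract/transform/load shape applies, with the in-memory list replaced by batched reads and a real Kafka producer.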

Required Skills:

  • 8-15 years of total IT experience
  • Strong SQL skills; must be able to write complex, high-performance queries
  • Strong experience with at least two of the following: DB2, Greenplum, Sybase
  • Working experience with Snowflake (minimum 1-2 years)
  • UNIX/Linux experience (basic commands, file handling, troubleshooting)
  • Mid-level Python for automation and data movement (scripting and basic programming)
  • ETL development experience across large data environments (millions of rows)
  • Experience handling structured data (CSV, JSON)
  • Basic Kafka knowledge (can be trained)
  • Git, Jira, and job scheduling tools (e.g., Autosys)
  • Must be open to an onsite interview and technical coding round in NYC

Preferred Skills:

  • Experience in financial institutions or data-heavy enterprise environments
  • Familiarity with anomaly detection techniques (isolation forest, clustering, time-series analysis)
  • Exposure to model monitoring and logging tools
  • Understanding of Data Quality systems as product platforms
  • Basic knowledge of containerization (Docker, Kubernetes) for future-readiness
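
To give a concrete flavor of the time-series anomaly detection mentioned above, here is a small, standard-library-only sketch of a rolling z-score check over a daily metric. The window size, threshold, and sample values are arbitrary illustrations, not requirements from the posting.

```python
import statistics

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices where a point deviates strongly from its trailing window.

    A point is anomalous if it lies more than `threshold` standard
    deviations from the mean of the previous `window` values.
    """
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past)
        if stdev and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical daily row counts for a pipeline; 500 is an obvious spike.
daily_rows = [100, 102, 98, 101, 99, 100, 500, 101]
print(rolling_zscore_anomalies(daily_rows))  # → [6]
```

Techniques like isolation forests or clustering generalize this idea to multivariate data; the rolling z-score is just the simplest member of the family.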

Employment Type

Full Time
