Engineer II

Job Location

Bangalore - India

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Overview

Connecting clients to markets and talent to opportunity.

With 4,300 employees and over 400,000 retail and institutional clients from more than 80 offices spread across five continents, we're a Fortune 100, Nasdaq-listed provider connecting clients to the global markets, focusing on innovation, human connection, and world-class products and services for all types of investors.

Whether you want to forge a career connecting our retail clients to potential trading opportunities or immerse yourself in the world of institutional investing, The StoneX Group is made up of four segments that offer endless potential for progression and growth.

Business Segment Overview: Engage in a wide variety of business-critical activities that keep our company running efficiently. From strategic marketing and financial management to human resources and operational oversight, you'll have the opportunity to optimize processes and implement game-changing policies.

Responsibilities

Position Purpose: We are seeking a skilled Data Engineer to design, develop, and implement scalable data solutions that support the organization's regulatory, financial, operational, and analytical needs. This role is pivotal in supporting our legacy systems and modernizing our data infrastructure by transforming our on-premises SQL data warehouse into a next-generation Data Lakehouse using Databricks.

Technology Ecosystem:

  • Databases: Microsoft SQL Server (T-SQL, SSIS, SSRS).
  • Programming languages: Python, PySpark, Scala.
  • Cloud: Azure
  • Big Data: Hadoop

Primary duties will include:

Data Warehouse Development:

  • Collaborate with data warehouse leads to enhance multi-dimensional data warehouses by implementing new features and expanding functionality.
  • Design data processing pipelines and logical/physical database schemas to support reporting and analytics (an illustrative schema sketch follows below).
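
As a purely illustrative sketch of the kind of schema this duty involves (not the team's actual model; every table, column, and database name here is invented), a Kimball-style fact table could be registered as a Delta table on Databricks roughly as follows, in PySpark:

    # Hypothetical sketch: define a Kimball-style fact table as a Delta table.
    # All table and column names are invented for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql import types as T

    spark = SparkSession.builder.getOrCreate()

    # Surrogate keys point at date/client/instrument dimension tables that
    # would be defined elsewhere in the warehouse.
    fact_trades_schema = T.StructType([
        T.StructField("trade_key", T.LongType(), False),
        T.StructField("date_key", T.IntegerType(), False),
        T.StructField("client_key", T.IntegerType(), False),
        T.StructField("instrument_key", T.IntegerType(), False),
        T.StructField("quantity", T.DecimalType(18, 4), True),
        T.StructField("notional_usd", T.DecimalType(18, 2), True),
    ])

    # Register an empty Delta table with this schema; "ignore" makes the call
    # a no-op if the table already exists.
    (spark.createDataFrame([], fact_trades_schema)
          .write.format("delta")
          .mode("ignore")
          .saveAsTable("analytics.fact_trades"))

The physical design on top of a logical schema like this would typically add partitioning and clustering choices driven by the reporting queries it needs to serve.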

Reporting & Troubleshooting:

  • Create recurring/ad-hoc reports and automated data feeds utilizing tools like SQL Server Reporting Services (SSRS).
  • Troubleshoot data issues, validate results, and perform ad-hoc data analysis to address business needs.

Collaboration & Optimization:

  • Work closely with development teams, senior management, and other departments to ensure alignment on database and reporting solutions.
  • Optimize SQL queries, stored procedures, and data processing workflows to maximize performance.

Data Processing & Pipelines:

  • Develop and optimize data pipelines using PySpark, Scala, and Python for data transformation, aggregation, and analysis (a brief illustrative sketch follows this list).
  • Build and maintain ETL processes across development, staging, and production environments, ensuring seamless operation of intraday data feeds and nightly jobs.
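
As a purely illustrative sketch (paths, column names, and the output location below are hypothetical, not taken from this posting), a transformation-and-aggregation step in such a pipeline might look like this in PySpark:

    # Hypothetical PySpark step: summarise an intraday trade feed by day and
    # client for downstream reporting. Paths and column names are invented.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("intraday_feed_example").getOrCreate()

    # Read the raw intraday feed (assumed here to land as Parquet files).
    raw = spark.read.parquet("/mnt/raw/intraday_trades/")

    daily_summary = (
        raw.withColumn("trade_date", F.to_date("trade_timestamp"))
           .groupBy("trade_date", "client_id")
           .agg(
               F.count("*").alias("trade_count"),
               F.sum("notional_usd").alias("total_notional_usd"),
           )
    )

    # Persist the aggregate, partitioned by day, as a source for reporting
    # tools such as SSRS or Power BI.
    (daily_summary.write
                  .mode("overwrite")
                  .partitionBy("trade_date")
                  .parquet("/mnt/curated/daily_trade_summary/"))

A unit test for a transformation like this would typically build a small in-memory DataFrame, run the same aggregation, and assert on the expected counts and sums.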

Qualifications

To land this role you will need:

  • 1+ years of experience developing software in a professional environment (financial services preferred, but not required).
  • 1+ years of experience with Microsoft SQL Server (T-SQL, SSIS, SSRS).
  • Experience handling and resolving production incidents in an SLA-bound environment.
  • Experience working on an end-of-day rota to support production systems.
  • Experience handling ad-hoc data requests from support teams.
  • Practical knowledge of Spark, Databricks, and big data processing technologies (e.g., Hadoop).
  • Practical knowledge of building and managing ETL processes and data pipelines using Python, PySpark, and Scala.
  • Practical knowledge of visualization tools such as Power BI.
  • Practical knowledge of data warehouse concepts (Kimball methodology).
  • Familiarity with event-driven ETL development and SDLC frameworks.
  • Practical knowledge of creating data transformation and aggregation jobs in Scala/Spark.
  • Ability to design scalable data processing pipelines and write unit tests for data transformations.
  • Analytical and results-driven approach to solving business problems using technology.
  • Hands-on experience working in an agile/SCRUM environment.
  • Strong communication skills for both technical and non-technical stakeholders including senior management and cross-functional teams.
  • Fast learner with a passion for exploring and mastering new technologies.
  • Detail-oriented team player with a proactive mindset.
  • Comfortable working in a fast-paced high-growth environment.
  • Comfortable working in rotational shifts.

Education / Certification Requirements:

  • Bachelor's degree or relevant work experience in Computer Science, Mathematics, Data Engineering, or a related technical discipline.

Employment Type

Full-Time
