Hi,
I am Suresh from IPivot. Please find the job description below for your reference. If interested, reply with an updated resume.
Job Title: Data Engineer
Location: New Jersey (Hybrid)
Duration: Full Time
Job Description
Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance. Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
Proactively support team members and peers in delivering their tasks to ensure end-to-end delivery.
Evaluate technical performance challenges and recommend tuning solutions.
Serve as a hands-on Data Services Engineer: design, develop, and maintain our Reference Data System using modern data technologies including Kafka, Snowflake, and Python.
Requirements:
Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
Strong expertise in distributed data processing and streaming architectures.
Experience with the Snowflake data warehouse platform: data loading, performance tuning, and management.
Proficiency in Python scripting and programming for data manipulation and automation.
Familiarity with the Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
Knowledge of SQL, data modeling, and ETL/ELT processes.
Understanding of cloud platforms (AWS, Azure, GCP) is a plus.
Preferred but not required:
Experience with trade processing, settlement, reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.).
Strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes.
Exposure to corporate data systems: funding support, planning & analysis, regulatory reporting, and compliance.
Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.
Minority and Women-Owned Business Enterprise (MWBE)