Job Title: Sr. Data Engineer (Full Stack)
Location: New York, NY
Setting: Hybrid Role
Type: Full-Time Role (Permanent)
Position Summary:
We're hiring a Senior Full Stack Developer focused on Data Engineering to help build and scale a modern enterprise data platform supporting Portfolio Management, Trading, Risk, Compliance, Middle Office Operations, and Finance. This is a new headcount on a globally distributed Data Platform Engineering team (NYC/Mumbai) operating in a high-delivery agile environment.
This role is hands-on and centered on building robust, scalable pipelines; designing multi-layered database structures with increasing refinement; and delivering curated semantic data products that power reporting, analytics, and API-driven consumption, including enabling AI/agent workflows on top of trusted datasets.
What we're looking for (Must-haves):
- Senior-level, hands-on expertise building end-to-end data pipelines, database structures, and semantic data products.
- Strong Python and/or Java engineering skills in production environments.
- Deep experience with cloud data platforms (Azure preferred) and modern warehousing (Snowflake strongly preferred).
- Strong understanding of data flow architecture: ingestion, transformation, storage layers, semantic layer, and consumption (BI, APIs).
- Front-end experience (React).
- Proven performance-tuning skills across pipelines and warehouses: query optimization, clustering/partitioning strategies, warehouse sizing, Spark optimization, caching patterns, etc.
- Experience supporting near real-time data needs at scale (millions of rows daily).
- Strong communication skills and the ability to collaborate on a globally distributed agile team.
- Comfort operating in a high-expectation, fast-paced environment with production ownership/on-call.
- Asset management / financial data experience (front-office, operational, master, and reference data exposure).
What You'll Do
- Design and build scalable ingestion and transformation pipelines to acquire new critical data sources (near real-time where needed).
- Build and optimize database structures across three layers of refinement (raw, refined, curated) to support complex reporting and analytics.
- Develop curated data products and semantic models/metric definitions to enable broad self-serve BI and consistent consumption across teams.
- Deliver data via APIs (REST and GraphQL) to enable end-user application data exchange and internal tooling.
- Analyze and improve existing datasets and pipelines for data quality, pipeline performance, brittle integrations, query performance, and concurrency.
- Operate in a controlled SDLC environment with code reviews, approvals, change management, validation, and documentation.
- Participate in production support / on-call rotation.
Nice-to-haves:
- Strong experience building and deploying services using Kubernetes and Docker.
- Private Equity data experience.
- BI enablement experience (semantic modeling for Tableau/Power BI/Sigma/Pyramid) or analytics engineering background.
- Experience designing event-driven patterns and data products to support AI/agent workflows.