Location:
CN-Shenzhen-HyQ
Shift:
Standard - 40 Hours (China)
Scheduled Weekly Hours:
40
Worker Type:
Permanent
Job Summary:
Lead the design and delivery of LME market data platforms that consolidate multiple real-time and historical market data sources into scalable enterprise data assets for external and internal consumption. This role focuses on enterprise data management and big data engineering: building robust data lake/warehouse foundations, orchestrated ETL/ELT pipelines, and performant analytics access to support market data commercialisation and business intelligence.
Job Duties:
Key Responsibilities
Market Data & Data Product Enablement
- Partner with product/business stakeholders to define market data product objectives and translate them into data platform deliverables.
- Consolidate, normalise, and curate market data sets (including derivatives and order book datasets) as governed, reusable data assets.
- Define data contracts, metadata, lineage, and quality rules so downstream users can reliably consume market data products; a minimal rule sketch follows this list.
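The posting does not name a data-contract or quality tool, so the following is only a rough plain-Python sketch of the quality-rule idea: declarative rules checked against records before a dataset is published. The dataset, the field names (instrument_id, trade_date, settlement_price), and the rules themselves are invented for illustration.

```python
# Hypothetical sketch only: field names and rules are invented, not from the posting.
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityRule:
    name: str
    check: Callable[[dict], bool]  # returns True when a record passes

# Example rules for a hypothetical curated settlement-price dataset.
RULES = [
    QualityRule("instrument_id present", lambda r: bool(r.get("instrument_id"))),
    QualityRule("price positive", lambda r: r.get("settlement_price", 0) > 0),
    QualityRule("date is YYYY-MM-DD length", lambda r: len(str(r.get("trade_date", ""))) == 10),
]

def validate(records: list[dict]) -> dict[str, int]:
    """Count failures per rule so publication can be gated on agreed thresholds."""
    failures = {rule.name: 0 for rule in RULES}
    for record in records:
        for rule in RULES:
            if not rule.check(record):
                failures[rule.name] += 1
    return failures

if __name__ == "__main__":
    sample = [
        {"instrument_id": "CU3M", "trade_date": "2024-05-01", "settlement_price": 9850.0},
        {"instrument_id": "", "trade_date": "2024-05-01", "settlement_price": -1.0},
    ]
    print(validate(sample))  # second record fails the first two rules
```

In practice the same rules would live alongside the data contract so producers and consumers validate against one shared definition.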
Enterprise Data Management & Architecture
- Define and evolve enterprise data management architecture across data lake and data warehouse solutions (on-prem and/or cloud).
- Design and operate data lake/warehouse layers using technologies such as ADLS, Amazon S3, Google Cloud Storage, Azure Synapse SQL, Snowflake, Amazon Redshift, or Google BigQuery.
- Set standards for data modelling, governance, security controls, retention, and lifecycle management aligned with organisational policies.
Big Data Engineering & Pipeline Delivery
- Design, build, and maintain scalable ETL/ELT pipelines for analytics and reporting using code-driven patterns and distributed compute engines.
- Implement and operate workflow orchestration frameworks such as Apache Airflow, Prefect, or Dagster, including scheduling, dependency management, and observability; see the DAG sketch after this list.
- Engineer processing solutions using big data stacks such as Hadoop, Spark, Kafka, and Flink, ensuring throughput, reliability, and cost efficiency.
- Leverage Spark and/or Databricks (built on Spark) to deliver large-scale transformations and performance-tuned workloads.
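As an illustration of the scheduling and dependency-management pattern named above, here is a minimal Apache Airflow 2.x DAG sketch. The DAG id, task names, and placeholder callables are invented; a real pipeline would trigger Spark jobs or warehouse loads rather than print statements.

```python
# Hypothetical sketch: DAG id and tasks are invented for illustration.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_market_data(**context):
    print("pull raw market data for", context["ds"])  # ds = logical run date

def transform_to_curated(**context):
    print("normalise and curate the extracted batch")

def load_to_warehouse(**context):
    print("publish curated data to the warehouse layer")

with DAG(
    dag_id="lme_market_data_daily",   # invented id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ parameter; older versions use schedule_interval
    catchup=False,                    # do not backfill historical runs
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_market_data)
    transform = PythonOperator(task_id="transform", python_callable=transform_to_curated)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Dependency management: transform waits on extract, load waits on transform.
    extract >> transform >> load
```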
Data Stores Query Performance & Reliability
- Design data storage and access patterns across data warehouses and databases, including NoSQL stores (e.g. HBase) and analytical engines (e.g. ClickHouse, Snowflake).
- Drive query and pipeline performance tuning (partitioning, caching, file formats, indexing/cluster keys) and improve SLAs/SLOs for critical datasets; a tuning sketch follows this list.
- Lead incident analysis and root-cause investigations for data-related issues; implement permanent fixes and continuous reliability improvements.
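As a sketch of the partitioning and file-format levers mentioned above, the following PySpark snippet writes a date-partitioned Parquet dataset so that date-filtered queries prune unread partitions. The paths and column names are invented for illustration; a real workload would tune partition counts and compression against measured query patterns.

```python
# Hypothetical sketch: paths and columns are invented, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curated-market-data").getOrCreate()

trades = spark.read.parquet("/data/raw/trades")  # invented input path

curated = (
    trades
    .withColumn("trade_date", F.to_date("trade_timestamp"))
    # Repartition by the partition column so each output partition is written
    # by few tasks, avoiding many tiny files.
    .repartition("trade_date")
)

(
    curated.write
    .mode("overwrite")
    # Partition pruning: queries filtered on trade_date read only the matching
    # directories instead of scanning the whole dataset.
    .partitionBy("trade_date")
    # Columnar Parquet with compression keeps analytical scans cheap.
    .option("compression", "snappy")
    .parquet("/data/curated/trades")  # invented output path
)
```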
Leadership & Delivery
- Operate effectively in a small, specialised team, balancing hands-on contribution with technical leadership, coaching, and setting engineering standards.
- Promote SDLC best practices (CI/CD, automated testing, monitoring, and documentation) to improve delivery quality and repeatability.
- Coordinate with global engineering and infrastructure teams to deliver roadmap outcomes and manage dependencies.
Requirements
Education & Experience
- Degree in Computer Science, IT, Data Engineering, or related disciplines (or equivalent practical experience).
- Typically 12 years of experience delivering enterprise data management and big data platforms; experience in financial services, exchanges, or regulated environments is advantageous.
- Proven experience leading delivery, making architecture decisions, and managing stakeholders across cross-functional teams.
Technical Skills (Key Words / Must-have)
- Enterprise data management; big data projects; data lake and data warehouse design/operations (e.g. ADLS, S3, GCS, Synapse SQL, Snowflake, Redshift, BigQuery).
- Big data tech stacks: Hadoop / Spark / Kafka / Flink.
- ETL orchestration: Airflow / Prefect / Dagster.
- Big data computing engines (code-driven ETL): Spark and/or Databricks.
- Database technologies: NoSQL / HBase / ClickHouse / Snowflake; strong SQL fundamentals and performance tuning.
Programming Languages
- Proficiency in at least one language commonly used for data engineering (e.g. Python, Scala, or Java).
- Java experience is beneficial but not mandatory; selection will be based on overall big data and enterprise data platform expertise.
Core Competencies
- Strong analytical and problem-solving skills; outcome-driven and able to prioritise under changing needs.
- Clear communication and stakeholder management across technical and non-technical audiences.
- Accountable, proactive, and comfortable operating in a lean team environment.
Company Introduction:
ITD SZ
HKEX Technology (Shenzhen) Limited (港交所科技深圳有限公司) is a wholly foreign-owned enterprise established on 28 December 2016 in the Qianhai Free Trade Zone, Shenzhen.
As the technology subsidiary of HKEX, the company primarily provides the Group and its subsidiaries with development, technical services, technical consulting, and technology transfer for computer software, computer hardware, information systems, cloud storage, cloud computing, the Internet of Things, and computer networks; economic, enterprise management, business, and commercial information consulting; information system design, integration, operation, and maintenance; database management; and big data analytics. On a service-outsourcing basis it also provides information technology and business process outsourcing services such as system application management and maintenance, IT support and management, and data processing.
Required Experience:
Exec