Location:
CN-Shenzhen-HyQ
Shift:
Standard - 40 Hours (China)
Scheduled Weekly Hours:
40
Worker Type:
Permanent
Job Summary:
This role is a critical part of the Enterprise Data team involved in the replacement of the legacy ETL (Informatica) tool, providing key data engineering activities including pipeline management, analysis, and visualisation engineering. The role will work closely alongside ETL developers and wider technology teams to engineer solutions supporting their strategic roadmap.
This is a high-impact role for a candidate who is passionate about engineering excellence and has a strong technical background and excellent IT skills, paired with strong teamworking and communication skills. This is a data engineering role (covering backend data infrastructure) that collaborates on solution design, implementation, deployment, testing, and support.
Job Duties:
Responsibilities:
- Design, implement, and maintain robust data pipelines and infrastructure to support LME integration across data warehouses, where reliability and scalability are critical.
- Ensure the robustness and quality of data workloads using Python/Java/Scala and modern data engineering practices, including automated validation, monitoring, and comprehensive testing.
- Ensure all technical documentation is accurate, up-to-date, and accessible to relevant stakeholders.
- Provide internal data analysis and reporting to support business and technology objectives.
- Act as a liaison between technical teams and non-technical stakeholders, ensuring clear and effective communication of project status, risks, and requirements.
- Develop and maintain database architectures, including data lakes and data warehouses.
- Ensure data quality and consistency through data cleaning, transformation, and validation processes.
- Lead incident analysis and root cause investigations for data-related issues, implementing improvements to enhance system stability and performance.
- Evaluate possible solutions and designs to establish the best approach in terms of customer outcome, architecture, and cost, including prototyping, technical spikes, and proofs of concept.
- Design, implement, and support scalable and robust data pipelines to support analytics and data processing needs.
- Implement test or process automation, Test-Driven Development, and Continuous Integration and Continuous Delivery as required, and support the team in adopting best practices.
- Compose high-quality documentation and specifications.
- Demonstrate a good understanding of the broader toolset available for data access, analytics, and manipulation in order to continually evaluate the suitability of tool choices.
- Work with other data scientists and business teams to onboard Jupyter notebooks/Python apps onto robust infrastructure with appropriate standards and monitoring.
- Work with the Technology Governance Board to contribute towards technical standards and patterns, ensuring consistency and adoptability across multiple service teams.
Required Knowledge and Level of Experience:
- Experience: Minimum 7 years in data or software engineering, having demonstrably led at least one production-grade data system within financial services or a similarly regulated industry.
- Data Quality: Proven ability to validate and govern data pipelines, ensuring data integrity, correctness, and compliance.
- Full-Stack Engineering: Hands-on experience with Java (Spring Boot), React (optional), and Python, covering backend, frontend, and data engineering.
- Data Engineering Tools: Proficient with modern data engineering and analytics platforms (e.g. Apache Airflow, Spark, Kafka, dbt, Snowflake, or similar).
- DevOps & Cloud: Experience with containerisation (Docker, Kubernetes), CI/CD pipelines, and cloud platforms (e.g. AWS, Azure, GCP) is highly desirable and increasingly standard in the industry.
Bonus for knowledge of:
- Scripting languages, preferably Python.
- RDBMS systems: PostgreSQL, SQL Server, or similar.
- NoSQL or distributed databases (MongoDB, etc.).
- Experience working on streaming pipelines.
Personal Qualities:
- Curiosity & Proactivity: Demonstrates a passion for continuous learning improvement and staying current with industry trends.
- Collaboration: Works effectively across departments and disciplines building strong relationships with both technology and business colleagues.
- Outcome-Driven: Motivated by delivering real-world outcomes improving enterprise value and supporting business strategy.
Company Introduction:
ITD SZ
HKEX Technology (Shenzhen) Co., Ltd. (港交所科技深圳有限公司) is a wholly foreign-owned enterprise established on 28 December 2016 in the Qianhai Free Trade Zone, Shenzhen.
As the technology subsidiary of HKEX, HKEX Technology (Shenzhen) Co., Ltd. primarily provides the Group and its subsidiaries with development, technical services, technical consulting, and technology transfer for computer software, computer hardware, information systems, cloud storage, cloud computing, the Internet of Things, and computer networks; economic information consulting, enterprise management consulting, business information consulting, and commercial information consulting; information system design, integration, operation, and maintenance; database management; and big data analysis. Under a service-outsourcing model, it also provides information technology and business process outsourcing services such as system application management and maintenance, information technology support and management, and data processing.
Required Experience:
Manager