Powering the Future with AIDA
To lead the next phase of our AI evolution, we've launched a new business unit: AIDA (Artificial Intelligence & Data Analytics), a strategic engine driving our transformation, designed to scale our AI ambitions with precision and purpose. This marks a pivotal shift in how we operate, innovate, and serve, embedding intelligence into every layer of our business.
At Singtel, this is more than a technology upgrade. It's a strategic transformation that redefines how value is created across the enterprise core, augmenting human capabilities and unlocking entirely new potential. It is a journey that aligns people, platforms, and processes under one cohesive strategy. Our mission is to build AI literacy and foster a culture where intelligence empowers people.
We welcome you to join us on a transformational journey that's reshaping the telecommunications industry and redefining what's possible with AI at its core. Grow with us in a workplace that champions innovation, embraces agility, and puts human potential at the heart of everything we do.
Be a Part of Something BIG!
- Responsible for building and supporting data ingestion and transformation pipelines on a modern hybrid cloud platform
- Develop basic batch and streaming pipelines with cloud tools such as Databricks and Kafka, under the guidance of senior engineers
- Contribute to the delivery of reliable, secure, and high-quality data for analytics, reporting, and machine learning use cases
- Responsible for implementing a knowledge base and retrieval-augmented generation (RAG) solution stack to support GenAI agentic use cases (a brief illustrative sketch follows this list)
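For candidates unfamiliar with the term, here is a minimal sketch of the retrieval half of a RAG stack in plain Python: documents are embedded, a query is embedded the same way, and the closest chunks are returned to ground a GenAI agent's prompt. The `KnowledgeBase` class, the toy bag-of-words `embed` function, and the sample documents are illustrative assumptions, not details of any Singtel system, which would use a real embedding model and vector store instead.

```python
# Minimal sketch of the retrieval step behind a RAG stack: embed documents,
# embed the query, and return the closest chunks to inject into an LLM prompt.
# The bag-of-words "embedding" and in-memory store are illustrative stand-ins
# for a real embedding model and vector database.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lower-cased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class KnowledgeBase:
    """In-memory document store with brute-force similarity search."""
    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

kb = KnowledgeBase()
kb.add("Batch pipelines load files into Delta Lake tables nightly.")
kb.add("Streaming pipelines read events from Kafka topics in near real time.")
kb.add("Access control policies restrict who can query governed datasets.")

# The retrieved context would be prepended to the GenAI agent's prompt.
print(kb.retrieve("How do we ingest Kafka events?"))
```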
Make An Impact By
- Build and maintain data ingestion pipelines for batch and streaming data sources using tools like Databricks and Kafka (see the pipeline sketch after this list)
- Perform data transformation and cleansing using PySpark or SQL based on business and technical requirements
- Monitor and troubleshoot data workflows to ensure data quality and pipeline reliability
- Work closely with senior data engineers to understand platform architecture and apply best practices in pipeline design
- Assist in integrating data from diverse source systems (files, APIs, databases, streaming)
- Help maintain metadata and pipeline documentation for transparency and traceability
- Participate in integrating pipelines with tools such as Microsoft Fabric, Databricks, Delta Lake, and other platform components
- Implement and operate a data virtualization layer to centralize visibility and control of data across diverse sources
- Contribute to automation efforts using version control and CI/CD workflows
- Apply basic data governance and access control policies during implementation
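To make the day-to-day work concrete, the sketch below shows the kind of pipeline the first two bullets describe: a streaming read from a Kafka topic landed into a bronze table, followed by a small batch cleansing step, written in PySpark. The broker address, topic name, and table paths are hypothetical placeholders, not details from this posting, and the sketch assumes a Spark runtime with the Kafka and Delta Lake connectors available.

```python
# Minimal PySpark sketch: one streaming ingestion step and one batch
# cleansing step, illustrative of the pipelines described above.
# Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Streaming ingestion: read raw events from a (hypothetical) Kafka topic
# and land them as-is into a bronze Delta table for later transformation.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load()
    .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")
)

stream = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # placeholder path
    .start("/tmp/bronze/orders")                              # placeholder path
)

# Batch cleansing: deduplicate and standardize a landed dataset before
# publishing it for analytics, reporting, and ML use cases.
raw = spark.read.format("delta").load("/tmp/bronze/orders")
clean = (
    raw.dropDuplicates(["key"])
    .withColumn("value", F.trim(F.col("value")))
    .filter(F.col("value") != "")
)
clean.write.format("delta").mode("overwrite").save("/tmp/silver/orders")
```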
Skills to Succeed
- Bachelor's degree in Computer Science, Engineering, or a related field
- 1–3 years of experience in data engineering or data platform development
- Proven ability to independently build basic batch or streaming data pipelines
- Hands-on experience with Python and SQL for data transformation and validation
- Familiarity with Apache Spark (especially PySpark) and large-scale data processing concepts
- Self-starter with strong problem-solving skills and a keen attention to detail
- Able to work independently while collaborating effectively with senior engineers and other stakeholders
- Strong documentation and communication skills
Are you ready to say hello to BIG Possibilities?
Take the leap with Singtel to unlock new opportunities and accelerate your growth. Apply now and start your empowering career!