Senior Lead Data Engineer (Data Platform MLOps)

GFT


Job Location:

Hanoi - Vietnam

Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1 Vacancy

Job Summary

About GFT

GFT Technologies is driving the digital transformation of the world's leading financial institutions. Other sectors, such as industry and insurance, also leverage GFT's strong consulting and implementation skills across all aspects of pioneering technologies such as cloud engineering, artificial intelligence, the Internet of Things for Industry 4.0, and blockchain.

With its in-depth technological expertise, strong partnerships and scalable IT solutions, GFT increases productivity in software development. This provides clients with faster access to new IT applications and innovative business models while also reducing risk.

We've been a pioneer of near-shore delivery since 2001 and now offer an international team spanning 16 countries, with a global workforce of over 9,000 people. GFT is recognised by industry analysts such as Everest Group as a leader among global mid-sized service integrators, and is ranked in the Top 20 leading global service integrators in many exponential technologies such as Open Banking, Blockchain, Digital Banking and Apps Services.

Sign-on Bonus: Eligible for candidates who are currently employed elsewhere and able to join GFT within 30 days of offer acceptance.


Role Summary:

As a Senior Data Engineer at GFT, you will be responsible for managing, designing and enhancing the data systems and workflows that drive key business decisions. The role is focused 75% on data engineering, covering the construction and optimization of data pipelines and architectures, and 25% on supporting data science initiatives through collaboration with data science teams on machine learning workflows and advanced analytics. You will leverage technologies such as Python, Airflow, Kubernetes and AWS to deliver high-quality data solutions.

Key Activities:

  • Architect, develop and maintain scalable data infrastructure, including data lakes, pipelines and metadata repositories, ensuring the timely and accurate delivery of data to stakeholders.
  • Work closely with data scientists to build and support data models, integrate data sources, and enable machine learning workflows and experimentation environments.
  • Develop and optimize large-scale batch and real-time data processing systems to enhance operational efficiency and meet business objectives.
  • Leverage Python, Apache Airflow and AWS services to automate data workflows and processes, ensuring efficient scheduling and monitoring.
  • Utilize AWS services such as S3, Glue, EC2 and Lambda to manage data storage and compute resources, ensuring high performance, scalability and cost-efficiency.
  • Implement robust testing and validation procedures to ensure the reliability, accuracy and security of data processing workflows.
  • Stay informed of industry best practices and emerging technologies in both data engineering and data science to propose optimizations and innovative solutions.
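The batch-pipeline and validation activities above can be sketched as a minimal extract-transform-load step. This is a toy illustration using only the Python standard library: the `sales` field names and sample records are assumptions for the example, and a production pipeline of the kind described would run Airflow operators over S3/Glue rather than in-memory CSV.

```python
import csv
import io


def extract(source: io.StringIO) -> list[dict]:
    """Read raw records from a CSV source (stand-in for S3/Glue)."""
    return list(csv.DictReader(source))


def transform(rows: list[dict]) -> list[dict]:
    """Cast types and drop rows that fail a basic quality check."""
    out = []
    for row in rows:
        try:
            row["amount"] = float(row["amount"])
        except (KeyError, ValueError):
            continue  # skip malformed rows instead of failing the batch
        out.append(row)
    return out


def load(rows: list[dict]) -> float:
    """Aggregate for a downstream consumer (stand-in for Redshift/Athena)."""
    return sum(r["amount"] for r in rows)


# One malformed row ("oops") is rejected by the validation step.
raw = io.StringIO("id,amount\n1,10.5\n2,oops\n3,4.5\n")
total = load(transform(extract(raw)))
```

Keeping extract, transform and load as separate functions mirrors how each stage maps onto a separate Airflow task, so failures can be retried per stage.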

Required Skills:

  • 7-8 years of dedicated experience as a Data Engineer.
  • Core Expertise: Proficiency in Python for data processing and scripting (pandas, PySpark) and workflow automation (Apache Airflow), plus experience with AWS services (Glue, S3, EC2, Lambda).
  • Containerization & Orchestration: Experience working with Kubernetes and Docker for managing containerized environments in the cloud.
  • Data Engineering Tools: Hands-on experience with columnar and big data databases (Athena, Redshift, Vertica, Hive/Hadoop), along with version control systems like Git.
  • Cloud Services: Strong familiarity with AWS services for cloud-based data processing and management.
  • CI/CD Pipeline: Experience with CI/CD tools such as Jenkins, CircleCI or AWS CodePipeline for continuous integration and deployment.
  • Data Engineering Focus (75%): Expertise in building and managing robust data architectures and pipelines for large-scale data operations.
  • Data Science Support (25%): Ability to support data science workflows, including collaboration on data preparation and feature engineering and enabling experimentation environments.
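The data-preparation side of the 25% data science support often means steps like feature scaling. A minimal sketch of one such step, z-score standardisation, using only the standard library; the feature values are made up for illustration, and in practice this would run over pandas or PySpark columns:

```python
import statistics


def standardize(values: list[float]) -> list[float]:
    """Z-score a numeric feature so models see zero mean, unit variance."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)  # population stdev of the feature
    if stdev == 0:
        return [0.0 for _ in values]  # a constant feature carries no signal
    return [(v - mean) / stdev for v in values]


feature = [2.0, 4.0, 6.0, 8.0]
scaled = standardize(feature)
```

After scaling, the feature has mean 0 and standard deviation 1, which is the property downstream models typically rely on.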

Nice-to-have requirements:

  • LangChain Experience: Familiarity with LangChain for building data applications involving natural language processing or conversational AI frameworks.
  • Advanced Data Science Tools: Experience with AWS SageMaker or Databricks for enabling machine learning environments.
  • Big Data & Analytics: Familiarity with both RDBMS (MySQL, PostgreSQL) and NoSQL (DynamoDB, Redis) databases.
  • BI Tools: Experience with enterprise BI tools like Tableau, Looker or Power BI.
  • Messaging & Event Streaming: Familiarity with distributed messaging systems like Kafka or RabbitMQ for event streaming.
  • Monitoring & Logging: Experience with monitoring and log management tools such as the ELK stack or Datadog.
  • Data Privacy and Security: Knowledge of best practices for ensuring data privacy and security, particularly in large data infrastructures.
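One common way to meet the monitoring and logging point above is to emit structured JSON logs that tools like the ELK stack or Datadog can index directly. A stdlib-only sketch; the field names and the `pipeline` logger name are illustrative assumptions:

```python
import io
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


stream = io.StringIO()  # stand-in for stdout shipped to a log indexer
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("pipeline")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("batch complete: %d rows", 42)
entry = json.loads(stream.getvalue())
```

Because every line is a self-describing JSON object, the log shipper needs no custom parsing rules to extract the level, logger and message fields.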

What we offer you:
You will be working with some of the brightest people in business and technology on challenging and rewarding projects in a team of like-minded individuals. GFT prides itself on its international environment that promotes professional and cultural exchange and encourages further individual development.


Due to the high volume of applications we receive, we are unable to respond to every candidate individually. If you have not received a response from GFT regarding your application within 10 workdays, please consider that we have decided to proceed with other candidates. We truly appreciate your interest in GFT and thank you for your understanding.


Required Experience:

Senior IC


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala

About Company


We see opportunity in technology. In domains such as cloud, AI, mainframe modernisation, DLT and IoT, we blend established practice with new thinking to help our clients stay ahead.
