Job Title: Senior ETL Data Engineer (Ab Initio / Informatica)
Experience: 4 Years
Location: Lahore / Karachi / Islamabad (Hybrid)
Job Type: Full-Time
We are looking for a highly skilled Senior ETL Data Engineer (Ab Initio / Informatica) with strong experience in building robust data pipelines, working with large-scale datasets, and leveraging modern Big Data and cloud technologies. The ideal candidate should have hands-on expertise in ETL frameworks, distributed data processing, and data modeling within cloud environments such as AWS. If you have a passion for working with data and enjoy designing scalable systems, we'd like to meet you.
Key Responsibilities:
Design and develop complex ETL pipelines and data solutions using Big Data and cloud-native technologies.
Leverage tools such as Ab Initio, Informatica, DBT, and Apache Spark to build scalable data workflows.
Implement distributed data processing using Hadoop, Hive, Kafka, and Spark.
Build and optimize data pipelines in AWS using services like EMR, Glue, Lambda, Athena, and S3.
Work with various structured and unstructured data sources to perform efficient data ingestion and transformation.
Write optimized SQL queries and manage stored procedures for complex data processing tasks.
Orchestrate workflows using Airflow, AWS Step Functions, or similar schedulers.
Collaborate with cross-functional teams to understand data needs and deliver high-quality datasets for analytics and reporting.
Deploy data models into production environments and ensure robust monitoring and resource management.
Mentor junior engineers and contribute to the team's knowledge sharing and continuous improvement efforts.
Identify and recommend process and technology improvements to enhance data pipeline performance and reliability.
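The extract–transform–load work described above can be sketched in miniature. A production pipeline would use Spark, Glue, or an ETL tool rather than plain Python, but the shape is the same; the sample schema and all names here are illustrative, not part of this role's actual systems:

```python
import csv
import io

# Illustrative source data; a real pipeline would read from S3, Kafka, etc.
RAW_CSV = """order_id,amount,country
1,120.50,PK
2,,PK
3,75.00,AE
"""

def extract(raw_text):
    """Extract: parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(raw_text)))

def transform(rows):
    """Transform: drop rows with missing amounts and cast types."""
    cleaned = []
    for row in rows:
        if not row["amount"]:
            continue  # skip records that fail validation
        cleaned.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "country": row["country"],
        })
    return cleaned

def load(rows, target):
    """Load: append validated rows to the target store (a list here)."""
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW_CSV)), warehouse)
```

Each stage is a separate, testable function; in practice the same separation is what lets an orchestrator such as Airflow retry or monitor individual steps.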
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
4 years of hands-on experience in ETL development, data engineering, and data pipeline orchestration.
Strong working knowledge of Ab Initio, Informatica, or similar ETL tools.
Expertise in Python, PySpark, or Scala for data processing.
Proven experience in Big Data technologies (Hadoop, Hive, Spark, Kafka).
Proficient with AWS services related to data engineering (EMR, Glue, Lambda, Athena, S3).
In-depth understanding of data modeling, the ETL cycle, data warehousing, and data management principles.
Hands-on experience with relational (PostgreSQL, MySQL) and columnar databases (Redshift, HBase, Snowflake).
Familiarity with containerization (Docker), CI/CD pipelines (Jenkins), and Agile tools (Jira).
Ability to troubleshoot complex data issues and propose scalable solutions.
Excellent communication and collaboration skills.
Nice to Have:
Experience with open table formats such as Apache Iceberg.
Working knowledge of Snowflake and its data warehousing capabilities.
Familiarity with GDE, Conduct>It, or other components of Ab Initio.
What We Offer:
Hybrid work model with flexibility to work from home and office.
Exposure to cutting-edge technologies and high-impact projects.
Collaborative team environment with opportunities for growth and innovation.
Culture that values ownership, continuous learning, and mutual respect.
Required Experience:
Manager