Extensive hands-on experience in building Data Pipelines and ETL workflows using IBM DataStage, particularly within Data Warehouse and BI environments.
Understanding and documenting complex DataStage workflows to maintain data pipelines.
Experience working with diverse data sources such as RDBMS (Oracle, DB2), flat files, CSV, Mainframe datasets, and other target data stores.
Strong skills in Data Analysis, writing complex SQL queries, and addressing performance considerations.
Ability to reverse engineer ETL code to create mapping documents, identify source/target dependencies, and document transformations where necessary.
Solid foundation in Data Modelling, Design, and Data Architecture.
Expertise in Unix Shell Scripting and Batch Job scheduling (AutoSys / Control-M).
Familiarity with Data Governance, Data Security, and Data Privacy principles.
Strong understanding of fundamental Computer Science concepts.
Capability to define solution architecture and data models for project teams, including providing guidance on development tools, target platforms, operations, and security.
Ability to immerse fully in product details, understand challenges, and connect them to data engineering solutions.
Working knowledge of Continuous Integration/Continuous Deployment (CI/CD) pipelines, DevOps, and underlying deployment infrastructure.
Good to Have:
Familiarity with Amazon Web Services (AWS) Cloud.
Experience with Spark and PySpark.
Knowledge of Snowflake.
Exposure to ETL migration and Cloud Modernization initiatives.