Job purpose / Objective
The purpose of the role is to design, develop, and maintain the organization's data architecture, infrastructure, and data pipelines, enabling efficient and effective data processing and storage and ensuring that data is collected, transformed, and made available for analysis and decision-making.
Primary Responsibilities
1. Area : Data Pipeline Development
Key activities:
Design, develop, and maintain data pipelines to extract, transform, and load (ETL) data from various sources into data warehouses or data lakes.
Key Result Areas / Indicators : Connectivity percentage
2. Area : Data Modeling and Quality Assurance
Key activities:
Create and optimize data models that define data structures, relationships, and transformation processes. Implement data quality checks and validation processes to ensure data accuracy and reliability.
Key Result Areas / Indicators : Data quality 100%
3. Area : Data Integration, Scripting, and Automation
Key activities:
Integrate data from multiple sources, ensuring consistency, accuracy, and timeliness. Write scripts and use automation tools to streamline data engineering processes.
Key Result Areas / Indicators : Data availability 100%
4. Area : ETL Processes and Database Management
Key activities:
Develop and manage ETL processes to automate data movement and transformation tasks. Manage and optimize databases, including backups, indexing, and performance tuning.
Key Result Areas / Indicators : Data security
5. Area : Collaboration
Key activities:
Work closely with data scientists, analysts, and business stakeholders to understand their data needs and provide data support.
Key Result Areas / Indicators : Number of use cases
Qualifications :
Education
- Bachelor's degree in Computer Science or equivalent
- Data Science Program (desirable)
Experience
- Minimum 3 years of experience in data management within software development, rollout, and maintenance.
- Minimum 2 years of specific experience in data pipeline development, monitoring, and maintenance.
Additional Information :
Knowledge of cloud technologies preferred.
Basic knowledge of IIoT hardware architecture, SDLC, etc.
Remote Work :
No
Employment Type :
Full-time