- Design, develop, and maintain end-to-end big data pipelines that are optimized, scalable, and capable of processing large volumes of data in real time and in batch mode
- Follow and promote best practices and design principles for Data Lakehouse architecture
- Support technological decision-making for the business's future data management and analysis needs by conducting proofs of concept (POCs)
- Write and automate data pipelines
- Assist in improving data organization and accuracy
- Collaborate with data analysts, scientists, and engineers to ensure the use of best practices in data processing and storage technologies
- Explore and stay up to date on emerging technologies, and proactively share learnings with the team
- Ensure that all deliverables adhere to our world-class standards
Qualifications:
- 3 years of experience in big data development and database design
- Strong hands-on experience with SQL, including advanced SQL
- Proficient in Python and other scripting languages
- Working knowledge of at least one big data technology
- Experience developing software solutions using Hadoop technologies such as MapReduce, Hive, Spark, YARN/Mesos, etc.
- At least an Upper-Intermediate level of English
WOULD BE A PLUS
- Experience with AWS cloud services such as S3 and Redshift
- Knowledge of and exposure to BI applications, e.g., Tableau and QlikView
Additional Information:
PERSONAL PROFILE
- Excellent analytical and problem-solving skills
Remote Work:
Yes
Employment Type:
Full-time