Roles and Responsibilities:
- Collaborate with users and project team on requirements and expectations
- Analyze business requirements and create data solutions that meet functional and non-functional requirements
- Design and implement highly scalable and robust data pipelines in Scala/Spark/Java for processing extremely large volumes of data
- Create and maintain reusable frameworks for data ingestion, validation, normalization, and transformation
- Create scripts for test automation, CI/CD, data migrations, and data validations
- Provide technical support for incident recovery
Requirements:
- Minimum 3-6 years of working experience as a developer or data engineer
- Strong programming skills and software development experience in Scala, Java, or Python
- Deep understanding of distributed systems (Hadoop, Spark, etc.) and their optimization
- Experience with public cloud technologies, preferably Azure
- Good knowledge of Oracle Database, MS SQL Server, Linux, and networking
- Proficient in SQL, shell scripting, Git, unit testing, and CI/CD tools
- Understanding of design principles, coding standards, and best practices
- Strong learning ability and problem-solving skills
- Excellent communication, presentation, and interpersonal skills
Preferred:
- Understanding of REST APIs, NoSQL, microservices, and ETL
- Hands-on experience with cloud services
- Understanding of streaming data processing and change data capture
- Experience with BI tools
Personality:
- Detail-oriented and precise; able to independently drive the design and development of complex systems
- Able to articulate technical aspects in an easy-to-understand way for non-developers
- Strong performer with a focus on delivery; learns quickly and applies knowledge across various domains
- Willing to learn new technologies and pick up challenging work
- Team player