Band: B1
Role: AWS & Python
Experience: Minimum 2 years
Location: Noida/Delhi/NCR (currently WFH, but should be open to a hybrid model)
Shift Timings: 12 PM to 9 PM (UK shift)
Only immediate joiners or candidates serving their notice period are preferred.
Designation: Senior Business Analyst
The AWS and Python/PySpark Developer will be responsible for developing and debugging data analytics process flows using AWS services. An understanding of the AWS services listed below is a must:
- Amazon S3
- AWS Lambda
- Amazon Redshift
- AWS Glue and Data Catalog
- Amazon EC2
- Amazon Athena
- Data Lake
Role and Responsibilities:
- Develop serverless Lambda functions to connect services, using a scripting language such as Python (see the illustrative Lambda sketch after this list).
- Use AWS Glue crawlers to populate the Data Catalog with metadata.
- Create AWS Glue jobs in PySpark to run ETL over large data sets (see the Glue job sketch after this list).
- Validate data using services within AWS.
- Provision EC2 instances and understand how they operate.
- Use Amazon Redshift as the database/data warehouse.
- Perform ETL operations in Python (see the pandas sketch after this list).
- Good knowledge of Python OOP concepts and libraries such as pandas, NumPy, Matplotlib, etc.
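For illustration only, a minimal Python Lambda handler of the kind described above might look like the sketch below. It copies a newly uploaded S3 object into a second bucket with boto3; the bucket name and prefix are hypothetical placeholders, not part of any specific environment.

```python
# Minimal sketch of a serverless Lambda function connecting S3 events to
# another bucket via boto3. Bucket name and prefix are hypothetical.
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Each record describes one S3 object that triggered this invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Copy the raw object into a (hypothetical) processed bucket,
        # preserving the original key under a "validated/" prefix.
        s3.copy_object(
            Bucket="my-processed-bucket",  # hypothetical target bucket
            Key=f"validated/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )

    return {"statusCode": 200, "body": json.dumps("copy complete")}
```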
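Likewise, a bare-bones AWS Glue ETL job in PySpark might read a table that a crawler has catalogued and write it back to S3 as Parquet. The catalog database, table name, and output path below are hypothetical.

```python
# Minimal sketch of a PySpark Glue ETL job. Database, table, and S3 path are
# hypothetical; a real job would add transformations and error handling.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table that a Glue crawler previously catalogued.
source = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db",   # hypothetical catalog database
    table_name="raw_events",   # hypothetical catalog table
)

# Drop obviously incomplete rows as a simple validation step.
cleaned = source.toDF().dropna(subset=["event_id"])
cleaned_dyf = DynamicFrame.fromDF(cleaned, glue_context, "cleaned")

# Write the cleaned data back to S3 as Parquet for downstream querying.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned_dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-processed-bucket/events/"},
    format="parquet",
)

job.commit()
```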
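Finally, a small pandas/NumPy ETL step of the kind covered by "Perform ETL operations in Python" could look like the following; the file paths and column names are made up for illustration.

```python
# Minimal pandas/NumPy ETL sketch: read a CSV, clean it, and write Parquet.
# File paths and column names are hypothetical placeholders.
import numpy as np
import pandas as pd

def transform(input_path: str, output_path: str) -> pd.DataFrame:
    df = pd.read_csv(input_path)

    # Basic validation: drop rows missing an order id and coerce amounts
    # to numeric, replacing bad values with NaN.
    df = df.dropna(subset=["order_id"])
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

    # Derive a simple feature with NumPy for downstream analytics.
    df["log_amount"] = np.log1p(df["amount"].fillna(0))

    df.to_parquet(output_path, index=False)
    return df

if __name__ == "__main__":
    transform("raw_orders.csv", "clean_orders.parquet")
```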
Candidate Profile:
- Work experience as an AWS developer with SQL and Python/PySpark or another scripting language.
- Expertise in at least five of the above-mentioned services.
- BSc or B.Tech in Computer Science, Engineering, or a relevant field.