Software Engineer, Big Data

Job Requirements / Qualifications:
5 years in Python/PySpark.
5 years optimizing Python/PySpark jobs in a Hadoop ecosystem.
5 years working with large data sets and pipelines using tools and libraries of the Hadoop ecosystem, such as Spark, HDFS, YARN, Hive, and Oozie.
5 years designing and developing cloud applications: AWS, OCI, or similar.
5 years with distributed/cluster computing concepts.
5 years with relational databases: MS SQL Server or similar.
3 years with NoSQL databases: HBase (preferred).
3 years creating and consuming RESTful web services.
5 years developing multi-threaded applications: concurrency, parallelism, locking strategies, and merging datasets.
5 years in memory management, garbage collection, and performance tuning.
Strong knowledge of shell scripting and file systems.
Preferred: knowledge of CI and build tools such as Git, Maven, SBT, Jenkins, and Artifactory/Nexus.
Knowledge of building microservices and a thorough understanding of service-oriented architecture.
Knowledge of container orchestration platforms and related technologies such as Docker, Kubernetes, and OpenShift.
Understanding of prevalent software development lifecycle (SDLC) methodologies, with specific exposure to or participation in Agile/Scrum techniques.
Preferred: strong knowledge and application of SAFe (Scaled Agile Framework) practices.
Ability to work a flexible schedule.
Experience with project management tools such as JIRA.