- Big Data Architect (Spark):
- Extensive experience with Apache Spark for big data processing and analytics.
- Proficiency in Spark Core, Spark SQL, Spark Streaming, and MLlib.
- Knowledge of distributed computing systems and frameworks.
- Familiarity with Hadoop ecosystem components such as HDFS, Hive, and YARN.
- Expertise in optimizing Spark jobs for performance and scalability.
- Designing and implementing data platforms on Azure Cloud using Snowflake and Databricks.
- Five years of prior experience modernizing and migrating legacy data platforms to the cloud is a must.
- Hands-on experience with Exadata is a plus.