Only W2 candidates required; no Glider assessment needed for this role.
Mandatory Areas
Must-Have Skills (Data Engineer):
Skill 1: 8 years of experience in Python, SQL, and potentially Scala/Java
Skill 2: Big Data: expertise in Apache Spark (Spark SQL, DataFrames, Streaming)
Skill 3: 4 years in GCP
Data Engineer
Bentonville, AR - 5 days onsite
$65/hr on C2C
We are seeking a Data Engineer with Spark and streaming skills who builds real-time, scalable data pipelines using tools like Spark, Kafka, and cloud services (GCP) to ingest, transform, and deliver data for analytics and ML.
Responsibilities:
Design, develop, and maintain ETL/ELT data pipelines for batch and real-time data ingestion, transformation, and loading using Spark (PySpark/Scala) and streaming technologies (Kafka, Flink).
Build and optimize scalable data architectures, including data lakes, data warehouses (BigQuery), and streaming platforms.
Performance Tuning: Optimize Spark jobs, SQL queries, and data processing workflows for speed, efficiency, and cost-effectiveness.
Data Quality: Implement data quality checks, monitoring, and alerting systems to ensure data accuracy and consistency.
Required Skills & Qualifications:
Programming: Strong proficiency in Python, SQL, and potentially Scala/Java.
Big Data: Expertise in Apache Spark (Spark SQL, DataFrames, Streaming).
Streaming: Experience with messaging queues such as Apache Kafka or Pub/Sub.
Cloud: Familiarity with GCP and Azure data services.
Databases: Knowledge of data warehousing (Snowflake, Redshift) and NoSQL databases.
Tools: Experience with Airflow, Databricks, Docker, and Kubernetes is a plus.