Sr. Data Engineer (Spark Kafka GCP)
Duration: 6 months
Location: Charlotte, NC (Hybrid)
Interview: First round is virtual with a coding assessment; second round is an onsite technical.
Disqualifiers / Dislikes on Resumes:
No coding skills
Candidates who are ETL developers only
Below are the must-have, non-negotiable required skills:
Spark framework: 4 years minimum
Kafka: 3 years
GCP: 3 years
Airflow: 3 years
BigQuery
Preferred Skills:
Java: 3 years
Python: 3 years
Microservices: 2 years
SQL: 4 years
Day-to-Day Responsibilities
- Attending Scrum calls to provide daily status updates
- Pick up assigned Jira stories and work with Analysts or Product Owners to understand the requirements
- Responsible for the development, maintenance, and enhancement of Data Engineering solutions of varying complexity across data sources such as DBMSs and file systems (structured and unstructured), on both on-prem and cloud infrastructure; creates level metrics and other complex metrics.
- Demonstrates Data Engineering skills (Spark DataFrames, BigQuery) by writing pipelines that meet business requirements.
- Responsible for building, testing, and enhancing data pipeline solutions from a wide variety of sources such as Kafka streams, Google BigQuery, and file systems; develops solutions with optimized data performance and data security (a minimal sketch of such a pipeline follows this list).
- Takes ownership of testing the feature end to end
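For context on the kind of work described above, here is a minimal PySpark sketch of a Kafka-to-BigQuery streaming pipeline. All names (broker, topic, schema, dataset/table, bucket) are hypothetical, and the BigQuery sink assumes the spark-bigquery connector is available on the cluster; this illustrates the shape of the task, not any specific client implementation.

```python
# Minimal sketch: consume a Kafka topic with Spark Structured Streaming
# and land the parsed records in BigQuery. All identifiers are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-bigquery-sketch").getOrCreate()

# Hypothetical schema for the JSON payloads on the topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw Kafka stream; the value column arrives as bytes,
# so cast it to a string before parsing the JSON.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Write each micro-batch to BigQuery via the spark-bigquery connector
# (assumes the connector jar is on the classpath).
query = (
    events.writeStream
    .foreachBatch(
        lambda batch_df, _: batch_df.write.format("bigquery")
        .option("table", "my_project.my_dataset.events")   # hypothetical table
        .option("temporaryGcsBucket", "my-staging-bucket") # hypothetical bucket
        .mode("append")
        .save()
    )
    .option("checkpointLocation", "gs://my-staging-bucket/checkpoints/events")
    .start()
)
query.awaitTermination()
```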