Role: Data Engineer - AWS Bedrock
Location: Chicago, IL (locals required)
Interview Mode: Face to Face
Experience: 10 years
We are seeking a Data Engineer with deep expertise in Python, the AWS big data ecosystem, and SQL/NoSQL technologies to drive scalable, real-time data solutions using CI/CD and stream-processing frameworks.
Responsibilities:
- Proficient developer in multiple languages (Python is a must), with the ability to quickly learn new ones.
- Expertise in SQL (complex queries against relational databases, preferably PostgreSQL) and NoSQL databases (Redis and Elasticsearch).
- Extensive big data experience, including EMR, Spark, and Kafka/Kinesis, and optimizing data pipelines, architectures, and datasets.
- AWS expert with hands-on experience in Lambda, Glue, Athena, Kinesis, IAM, EMR/PySpark, and Docker.
- Proficient in CI/CD development using Git, Terraform, and agile methodologies.
- Comfortable with stream-processing systems (Storm, Spark Streaming) and workflow management tools (Airflow).
- Exposure to knowledge graph technologies (graph databases, OWL, SPARQL) is a plus.
Educational Qualifications:
- Engineering degree: BE/ME/BTech/MTech/BSc/MSc.
- Technical certifications in multiple technologies are desirable.
Mandatory skills:
- Bedrock, Python, PySpark, AWS
Good-to-have skills: