Job Role: Data Engineer
Job Location: Dallas, TX & New York, NY
Experience: Max 8 Years
Job Type: Full-time
Skills: SQL, Tableau, Python, Databricks, ETL Pipelines
Job Responsibilities:
- Design, develop, and implement ETL processes on Azure/AWS Cloud using Databricks and PySpark.
- Advanced SQL knowledge, with the ability to write optimized queries for faster data workflows.
- Must be well versed in handling large data volumes and in working with different tools to derive the required solution.
- Work with the onshore team, business analysts, and other data engineering teams to ensure alignment on requirements, methodologies, and best practices.
- Experience handling real-time Change Data Feeds and unstructured data is a plus.
Qualifications:
- Bachelor's or Master's degree in Computer Science.
- Proven work experience with Databricks, Spark, Python, SQL, and any RDBMS.
- Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architectures as well as high-scale or distributed RDBMS.
- Strong database fundamentals, including SQL performance and schema design.
- Understanding of CI/CD frameworks is an added advantage.
- Ability to interpret/write custom shell scripts. Python scripting is a plus.
- Experience with the Azure platform and Databricks or a similar platform.
- Experience with Git and Azure DevOps.
- Ability to work in a fast-paced, agile development environment.