Job Title: Data Engineer
Location: Charlotte NC (Hybrid)
Duration: 12 months
Job Description:
About the Role:
We are currently seeking a Senior Data Engineer with hands-on coding experience and a strong background in Python, PySpark, and object-oriented programming. The ideal candidate will be responsible for designing, developing, and implementing new features for our existing framework using PySpark and Python. This position requires a deep understanding of data transformation and the ability to create standalone scripts based on given business logic. Exposure to AI tools and experience building AI applications will be an advantage.
Key Responsibilities:
- Design, develop, and optimize large-scale data pipelines using PySpark and Python.
- Implement and adhere to best practices in object-oriented programming to build reusable, maintainable code.
- Write advanced SQL queries for data extraction, transformation, and loading (ETL).
- Collaborate closely with data scientists, analysts, and stakeholders to gather requirements and translate them into technical solutions.
- Troubleshoot data-related issues and resolve them in a timely and accurate manner.
- Leverage AWS cloud services (e.g., S3, EMR, Lambda, Glue) to build and manage cloud-native data workflows (preferred).
- Participate in code reviews, data quality checks, and performance tuning of data jobs.
Required Skills & Qualifications:
- 6 years of relevant experience in a data engineering or backend development role.
- Strong hands-on experience with PySpark and Python, especially in designing and implementing scalable data transformations.
- Solid understanding of Object-Oriented Programming (OOP) principles and design patterns.
- Proficient in SQL with the ability to write complex queries and optimize performance.
- Strong problem-solving skills and the ability to troubleshoot complex data issues independently.
- Excellent communication and collaboration skills.
- Hands-on experience with AI tools.
Preferred Qualifications (Nice to Have):
- Experience working with the AWS cloud ecosystem (S3, Glue, EMR, Redshift, Lambda, etc.).
- Exposure to data warehousing concepts, distributed computing, and performance tuning.
- Familiarity with version control systems (e.g., Git), CI/CD pipelines, and Agile methodologies.
- Exposure to AI tools and hands-on experience building AI applications.
Keywords: AWS, SQL, PySpark, Python, ETL