If you post this job on a job board, please do not use the company name or salary.
Experience level: Mid-senior
Experience required: 7 years
Education level: Bachelor's degree
Job function: Information Technology
Industry: Information Technology and Services
Pay rate: $68 per hour
Total positions: 1
Relocation assistance: No
Visa sponsorship eligibility: No
Job Description:
- 7 years of experience in Amazon Web Services (AWS) cloud computing.
- 10 years of experience in big data and distributed computing.
- Very strong hands-on experience with PySpark, Apache Spark, and Python.
- Strong hands-on experience with SQL and NoSQL databases (DB2, PostgreSQL, Snowflake, etc.).
- Proficiency in data modeling and ETL workflows.
- Proficiency with workflow schedulers such as Airflow (a minimal DAG sketch follows this list).
- Hands-on experience with AWS cloud-based data platforms.
- Experience with DevOps, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.
- Strong problem-solving skills and the ability to lead a team.
- Experience with dbt, AWS, and Astronomer.
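For the Airflow requirement above, a minimal sketch of the kind of daily ETL DAG this role would own; the DAG id and the extract/transform callables are hypothetical placeholders, and the schedule argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw data from a source system.
    print("extracting")


def transform():
    # Placeholder: trigger a PySpark job or dbt model here.
    print("transforming")


with DAG(
    dag_id="daily_etl",               # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # use schedule_interval on Airflow < 2.4
    catchup=False,                    # do not backfill past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Ordering: extract must finish before transform starts.
    extract_task >> transform_task
```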
Responsibilities:
- Lead the design, development, and deployment of PySpark-based big data solutions.
- Architect and optimize ETL pipelines for structured and unstructured data.
- Collaborate with client data engineers, data scientists, and business teams to understand requirements and deliver scalable solutions.
- Optimize Spark performance through partitioning, caching, and tuning (see the sketch after this list).
- Implement data engineering best practices (CI/CD, version control, unit testing).
- Work with cloud platforms such as AWS.
- Ensure data security, governance, and compliance.
- Mentor junior developers and review code for best practices and efficiency.
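To illustrate the Spark tuning responsibility above, a minimal PySpark sketch assuming a Spark 3.x environment; the S3 paths and the events/event_date names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl-tuning-sketch")
    # Tuning: size shuffle partitions to the cluster instead of the default 200.
    .config("spark.sql.shuffle.partitions", "64")
    .getOrCreate()
)

# Hypothetical input location.
df = spark.read.parquet("s3://example-bucket/events/")

# Partitioning: repartition on the key used downstream to reduce shuffle skew.
df = df.repartition("event_date")

# Caching: worthwhile when the same DataFrame feeds multiple actions.
df.cache()

daily_counts = df.groupBy("event_date").count()

# Write date-partitioned output so later jobs can prune partitions.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/daily_counts/"
)
```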
Additional Notes:
- Please submit the candidate's resume in PDF format.
- Please note that TCS does not consider former full-time employees (FTEs) for rehire. Additionally, individuals who have previously worked at TCS as contractors must observe a minimum waiting period of six months before becoming eligible for re-engagement.
MUST HAVE:
- 7 years of experience in Amazon Web Services (AWS) cloud computing.
- 10 years of experience in big data and distributed computing.
- Experience with PySpark, Apache Spark, and Python.
- Experience with SQL and NoSQL databases (DB2, PostgreSQL, Snowflake, etc.).
- Hands-on experience with AWS cloud-based data platforms.
- Experience with DevOps, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.