Big Data Engineer

Job Location

Xico - Mexico

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Our client is a rapidly growing, automation-led service provider specializing in IT business process outsourcing (BPO) and consulting services. With a strong focus on digital transformation, cloud solutions, and AI-driven automation, they help businesses optimize operations and enhance customer experiences. Backed by a global workforce of over 32,000 employees, our client fosters a culture of innovation, collaboration, and continuous learning, making it an exciting environment for professionals looking to advance their careers.

Committed to excellence, our client serves 31 Fortune 500 companies across industries such as financial services, healthcare, and manufacturing. Their approach is driven by the "Automate Everything, Cloudify Everything, and Transform Customer Experiences" strategy, ensuring they stay ahead in an evolving digital landscape.

As a company that values growth and professional development, our client offers global career opportunities, a dynamic work environment, and exposure to high-impact projects. With 54 offices worldwide and a presence in 39 delivery centers across 28 countries, employees benefit from an international network of expertise and innovation. Their commitment to a "customer success first and always" philosophy ensures a rewarding and forward-thinking workplace for driven professionals.

We are currently searching for a Big Data Engineer:

Responsibilities:

  • Design, build, and optimize data pipelines for ETL/ELT processes in data warehousing and BI projects.
  • Develop and maintain complex stored procedures, DWH schemas, and SQL/PL-SQL scripts.
  • Implement PySpark-based solutions for large-scale data processing and transformation.
  • Collaborate on Snowflake database architecture, performance tuning, and troubleshooting.
  • Integrate data workflows with AWS services (S3, Lambda) and orchestration tools (Jenkins, GitHub).
  • Manage JIRA workflows for task tracking and Agile project delivery.
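To give a flavor of the ETL/ELT work described above, here is a minimal extract-transform-load sketch in plain Python. It is illustrative only: in this role the processing would be done with PySpark against Snowflake/S3, and the field names (`customer`, `amount`) are hypothetical.

```python
import csv
import io

def extract(raw_csv: str):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: drop records with no amount, normalize the customer key."""
    cleaned = []
    for row in rows:
        if not row.get("amount"):
            continue  # filter invalid records
        cleaned.append({
            "customer": row["customer"].strip().upper(),
            "amount": float(row["amount"]),
        })
    return cleaned

def load(rows):
    """Load: in a real pipeline this would write to Snowflake or S3;
    here we just aggregate totals per customer in memory."""
    totals = {}
    for row in rows:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals

raw = "customer,amount\nacme ,10.5\nAcme,4.5\nbeta,\n"
print(load(transform(extract(raw))))  # {'ACME': 15.0}
```

The same extract/transform/load shape carries over to PySpark, where `transform` would become DataFrame operations (`filter`, `withColumn`, `groupBy`) distributed across a cluster.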

Requirements:

  • 5 years of experience in data engineering, ETL, and data warehousing.
  • Expertise in Python/PySpark for big data processing.
  • Advanced SQL/PL-SQL skills (complex queries, stored procedures, performance tuning).
  • Hands-on experience with Snowflake, Oracle Database, and Unix shell scripting.
  • Familiarity with AWS cloud services (S3, Lambda).
  • Proficiency in CI/CD tools (GitHub, Jenkins).
  • Strong analytical skills and ability to troubleshoot data pipeline issues.
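As a concrete illustration of the SQL skills listed above, the sketch below uses Python's built-in sqlite3 module to run an indexed aggregate query. It is a stand-in only: the role works with Oracle and Snowflake, and the `orders` table and its columns are hypothetical.

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("acme", 10.5), ("acme", 4.5), ("beta", 7.0)],
)

# Adding an index on the grouping column is a typical first step
# in the kind of query tuning the posting mentions.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# Aggregate query of the kind DWH/BI reporting implies.
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('acme', 15.0), ('beta', 7.0)]
```

On Oracle or Snowflake the same logic would typically live in a stored procedure or a scheduled transformation, with the optimizer's execution plan (`EXPLAIN`) guiding the tuning work.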

Desired:

  • Experience with Kafka for real-time data streaming.
  • Knowledge of Netezza DB, Informatica, or Talend.
  • Basic understanding of data governance and workflow automation.


Languages

  • Advanced Oral English.
  • Native Spanish.

Note:

  • Fully remote


If you meet these qualifications and are pursuing new challenges, start your application on our website to join an award-winning employer. Explore all our job openings on Sequoia's Careers Page.
Keywords: Snowflake, PySpark, ETL, AWS S3/Lambda, Oracle PL/SQL, Unix Shell Scripting

Remote Work:

Yes

Employment Type:

Full-time
