
Lead Data Engineer – Remote

Job Location: Peru

Monthly Salary: Not Disclosed

Vacancies: 1

Job Description

We are seeking a highly skilled Lead Data Engineer with strong expertise in PySpark, SQL, and Python, as well as a solid understanding of ETL and data warehousing principles. The ideal candidate will have a proven track record of designing, building, and maintaining scalable data pipelines in a collaborative, fast-paced environment.

Key Responsibilities:

  • Design and develop scalable data pipelines using PySpark to support analytics and reporting needs.
  • Write efficient SQL and Python code to transform, cleanse, and optimize large datasets.
  • Collaborate with machine learning engineers, product managers, and developers to understand data requirements and deliver solutions.
  • Implement and maintain robust ETL processes to integrate structured and semi-structured data from various sources.
  • Ensure data quality, integrity, and reliability across pipelines and systems.
  • Participate in code reviews, troubleshooting, and performance tuning.
  • Work independently and proactively to identify and resolve data-related issues.
  • If applicable, contribute to Azure-based data solutions, including ADF, Synapse, ADLS, and other services.
  • Support cloud migration initiatives and DevOps practices, if relevant to the role.
  • Provide guidance on best practices and mentor junior team members when needed.

Qualifications:

  • 8 years of overall experience working with cross-functional teams (machine learning engineers, developers, product managers, analytics teams).
  • 3 years of hands-on experience developing and managing data pipelines using PySpark.
  • Strong programming skills in Python and SQL.
  • Deep understanding of ETL processes and data warehousing fundamentals.
  • Self-driven, resourceful, and comfortable working in dynamic, fast-paced environments.
  • Advanced written and spoken English is a must-have for this position (B2, C1, or C2 only).

Nice to have:

  • Databricks certification.
  • Experience with Azure-native services, including Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse Analytics / Azure SQL DB / Fabric.
  • Familiarity with Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB.
  • Basic understanding of SAP HANA.
  • Intermediate-level experience with Power BI.
  • Knowledge of DevOps, CI/CD pipelines, and cloud migration best practices.


Additional Information:

Please note that we will not be moving forward with any applicants who do not meet the following mandatory requirements:
 

  • 3 years of experience with PySpark/Python, ETL, and data warehousing processes.
  • Proven leadership experience in a current project or previous projects/work experiences.
  • Advanced written and spoken English fluency is a MUST HAVE (from B2 level to C1/C2).
  • MUST BE located in Central or South America, as this is a nearshore position (please note that we are not able to consider candidates requiring relocation or those located offshore).


More Details:

  • Contract type: Independent contractor (this contract does not include PTO, tax deductions, or insurance; it only covers the monthly payment based on hours worked).
  • Location: The client is based in the United States; however, the position is 100% remote for nearshore candidates located in Central or South America.
  • Contract/project duration: Initially 6 months, with the possibility of extension based on performance.
  • Time zone and working hours: Full-time, Monday to Friday (8 hours per day, 40 hours per week), from 8:00 AM to 5:00 PM PST (U.S. time zone).
  • Equipment: Contractors are required to use their own laptop/PC.
  • Start date expectation: As soon as possible.
  • Payment methods: International bank transfer, PayPal, Wise, Payoneer, etc.


Bertoni Process Steps:

  • Requirements-verification video interview.
  • Technical interview.


Partner/Client Process Steps:

  • CV review.
  • One technical video interview with our partner.
  • One or two video interviews with the end client.


Why Join Us

  • Be part of an innovative team shaping the future of technology.
  • Work in a collaborative and inclusive environment.
  • Opportunities for professional development and career growth.


Remote Work:

Yes


Employment Type:

Full-time


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala
