Data Engineering Intern

Job Location

Miami, FL - USA

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

About the Internship:

This 3-month, hands-on training internship is an opportunity to learn practical, real-world data engineering skills while working on real projects that impact the industry. While the internship is unpaid, it provides unparalleled experience, mentorship, and the chance to develop your portfolio for future career opportunities. You'll work closely with our team to understand the fundamentals of data engineering, gain exposure to advanced tools and technologies, and build projects you can showcase.

 

What You'll Learn:

Data Pipeline Development: Build and manage scalable ETL (Extract, Transform, Load) pipelines for data processing (a minimal sketch follows this list).

Data Warehousing: Learn to design and optimize data storage for high-performance analytics.

Big Data Tools: Gain hands-on experience with industry-standard tools like Apache Spark, Hadoop, and more.

Cloud Platforms: Work with cloud services like AWS, Google Cloud, or Azure for data storage and analytics.

Data Integration: Learn how to integrate data from multiple sources and maintain data quality.

SQL & Python Programming: Build expertise in writing efficient queries and leveraging Python for data manipulation.

Real-World Problem Solving: Participate in solving real-world challenges by contributing to live projects.

Version Control & Collaboration: Use Git and agile methodologies to work collaboratively with a team.
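To make the pipeline bullet concrete, here is a minimal ETL sketch in Python, one of the languages named in this posting. It is illustrative only, not project code; the file, column, and table names (users_raw.csv, email, users_clean) are hypothetical placeholders.

```python
# Minimal ETL sketch: extract raw CSV data, transform it, load it into SQL.
# Hypothetical example for illustration; all names are placeholders.
import sqlite3

import pandas as pd


def extract(path: str) -> pd.DataFrame:
    # Extract: read raw records from a CSV file.
    return pd.read_csv(path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop incomplete rows and normalize an email column.
    df = df.dropna()
    df["email"] = df["email"].str.strip().str.lower()
    return df


def load(df: pd.DataFrame, db_path: str) -> None:
    # Load: write the cleaned records into a SQL table.
    with sqlite3.connect(db_path) as conn:
        df.to_sql("users_clean", conn, if_exists="replace", index=False)


if __name__ == "__main__":
    load(transform(extract("users_raw.csv")), "warehouse.db")
```

In practice, an intern would grow a script like this into a scheduled, monitored pipeline.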

 

Key Responsibilities:

Assist in building, testing, and deploying data pipelines and workflows.

Support the design and implementation of data models for efficient querying.

Clean, process, and transform raw data into meaningful formats for analysis.

Collaborate with team members to identify and resolve data engineering challenges.

Document processes, workflows, and learnings to contribute to team knowledge.

 

What We're Looking For:

Passion for Data Engineering: A strong interest in data systems, analytics, and solving complex problems.

Technical Skills: Basic knowledge of SQL, Python, or another programming language. Familiarity with data structures is a plus.

Curiosity & Willingness to Learn: Open to new tools, technologies, and methodologies.

Team Player: Excellent communication and collaboration skills.

Education: Computer Science, Data Science, or a related field (students or recent graduates preferred).

 

What You'll Gain:

Practical, real-world experience working on live data engineering projects.

Mentorship from experienced professionals with deep industry expertise.

A portfolio of completed projects to showcase your skills.

Networking opportunities and a letter of recommendation upon successful completion of the program.

A stepping stone toward a career in data engineering, data science, or related fields.

 

How to Apply:

Submit your CV along with a brief statement on why you're interested in data engineering and this internship opportunity. Highlight any relevant coursework, personal projects, or technical experience.

 

Deadline: Applications will be accepted on a rolling basis until the position is filled.

 

This is your chance to get hands-on experience, work with real-world data, and kick-start your journey into the world of data engineering! Join us at RoyaltyBusayo and be part of a legacy that builds future tech leaders.


Qualifications:

University Hiring Program Eligibility Requirements:

  • University Enrollment: Must be currently enrolled in, and returning to, an accredited degree-seeking academic program for at least 1 1/2 years (Spring 2027 grad or later) in the Fall.
  • Internship Work Period: Must be available to work full-time (approximately 40 hours per week) during a 10-12 week period starting in May or June. Specific start dates are shared during the recruiting process.

Required Skills and Experience

  • Strong foundational skills in data engineering and ETL, and familiarity with tools like Jenkins, dbt, and Airflow.
  • Strong coding skills in Python, Scala, and/or Java, with an emphasis on clean, maintainable, and efficient code for data processing.
  • Proficient in designing, implementing, and optimizing ETL/ELT pipelines using tools like Apache Airflow, dbt, and AWS Glue to support scalable data workflows (a minimal orchestration sketch follows this list).
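As a hedged illustration of the orchestration work named above, here is a minimal Apache Airflow DAG sketch, assuming Airflow 2.x. The task bodies, names, and schedule are hypothetical placeholders, not this team's actual pipeline.

```python
# Minimal Airflow 2.x DAG sketch of a daily ETL workflow.
# Illustrative only; task logic, names, and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from a source system")


def transform():
    print("clean and reshape the extracted records")


def load():
    print("write the transformed records to the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract -> transform -> load in order.
    extract_task >> transform_task >> load_task
```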

Required Education and Training

  • Currently pursuing a degree in Computer Science, Software/Computer Engineering, Information Technology, Data Science, or a related field.

Preferred Skills and Experience  

  • Basic knowledge of SQL and NoSQL databases (e.g., MySQL, Postgres, MongoDB) and time-series databases (e.g., Druid, InfluxDB).
  • Familiarity with AWS (EC2, S3, RDS, Lambda, EKS, Kinesis, Athena, Glue, DynamoDB, Redshift, IAM) and an interest in cloud infrastructure.
  • Understanding of security protocols, including SAML, OAuth, JWT tokens, and SSO.
  • Experience in orchestrating data workflows and ETL processes using AWS Data Pipeline or AWS Step Functions.
  • Knowledge of interactive data preparation for cleaning and transforming data.
  • Interest or experience in data analytics (dashboards, insights) and tools like Tableau is a plus.
  • Experience with or an interest in CI/CD pipelines and build tools like Jenkins, CircleCI, or GitLab.
  • Deep knowledge of Apache Spark and Kafka for batch and real-time data processing at scale (see the sketch after this list).
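For the Spark item above, a minimal PySpark batch aggregation sketch follows. It is illustrative only; the bucket paths and column names (user_id, event_type) are hypothetical placeholders, not this team's data.

```python
# Minimal PySpark batch-processing sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

# Extract: read a day of raw JSON events (placeholder path).
events = spark.read.json("s3://example-bucket/events/2024-01-01/")

# Transform: count events per user and event type.
daily_counts = (
    events.groupBy("user_id", "event_type")
          .agg(F.count("*").alias("event_count"))
)

# Load: write the aggregate back out as Parquet (placeholder path).
daily_counts.write.mode("overwrite").parquet("s3://example-bucket/agg/daily_counts/")

spark.stop()
```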

 


Additional Information:

All your information will be kept confidential according to EEO guidelines.


Remote Work:

Yes


Employment Type:

Full-time
