Data Engineer

J Lee Engineering

Job Location: Charlotte, VT - USA

Monthly Salary: Not Disclosed
Posted on: 19 days ago
Vacancies: 1

Job Summary

We are seeking a skilled, business-minded Data Engineer to design, build, and maintain robust data pipelines and platforms that power analytics, machine learning, and operational reporting across the organization. You will work closely with data scientists, analysts, and engineering teams to ensure high-quality, scalable, and reliable data solutions that drive strategic decision-making.

Key Responsibilities

Architect, develop, and optimize batch and real-time data pipelines using modern ETL/ELT tools and cloud services (e.g., AWS, Azure, or GCP).

Model and maintain data warehouses/lakes using best-practice dimensional or Data Vault techniques.

Implement data quality, lineage, and observability solutions to uphold data integrity and reliability.

Collaborate with cross-functional partners to translate business requirements into technical specifications.

Automate ingestion, transformation, and deployment processes using CI/CD, infrastructure-as-code, and containerization tooling (e.g., Docker, Terraform, Kubernetes).

Monitor, troubleshoot, and tune data workflows for performance, cost efficiency, and scalability.

Champion data governance, security, and compliance policies (including HIPAA, SOC 2, or similar, as applicable).

Mentor junior engineers and contribute to engineering standards, documentation, and best practices.

Required Skills & Competencies

Proficiency in Python, SQL, and at least one compiled or JVM language (e.g., Java, Scala, Go).

Hands-on expertise with distributed processing frameworks (e.g., Spark, Flink, Beam) and orchestration tools (e.g., Airflow, Prefect, Dagster).

Deep knowledge of relational and NoSQL databases (e.g., Postgres, Snowflake, BigQuery, Redshift, DynamoDB).

Experience designing, deploying, and operating data platforms in a cloud environment (AWS, Azure, or GCP).

Familiarity with DevOps practices: Git, CI/CD pipelines, and infrastructure-as-code.

Strong problem-solving, communication, documentation, and stakeholder-management skills.

Education & Experience

Bachelor's degree (or higher) in Computer Science, Engineering, Information Systems, Mathematics, or a related technical field, or equivalent practical experience.

3 years of professional experience in data engineering, software engineering, or a closely related discipline, building production-grade data systems.

Annual Pay

Base salary range: USD $120,000 to $160,000, commensurate with location, skills, and experience.

Compensation & Benefits

Medical, dental, and vision insurance (company-subsidized).

401(k) plan with company match.

Equity/stock options program.

Generous paid time off (vacation, sick leave, and 12 paid holidays).

Paid parental leave and family-care support.

Flexible work arrangements: remote or hybrid within the United States, with company-sponsored coworking stipends.

Learning & development budget, industry conference access, and internal mentorship programs.

Employee wellness initiatives (mental-health coverage, fitness reimbursement, and employee resource groups).

Work Authorization & Location Requirement

This position is open only to candidates currently authorized to work in the United States and residing within U.S. time zones. Visa sponsorship is not available at this time.

Equal Employment Opportunity

We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, veteran status, or any other legally protected status.

Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala