Data/Cloud Engineer

Tanu Infotech Inc


Job Location:

Seattle, WA - USA

Monthly Salary: Not Disclosed
Posted on: 12 hours ago
Vacancies: 1 Vacancy

Job Summary

Hi,

I hope you're doing well.

This is Ahamad from Tanu Infotech. We are currently hiring for the following role and would like to share the details with you. If you are interested, please feel free to reach out to me.

Role: Data Engineer

Location: Seattle, WA (Hybrid)

Visa: USC & GC

Role Summary:
The Data/Cloud Engineer is responsible for designing, building, testing, and deploying end-to-end data ingestion connectors and ETL/ELT pipelines on the Boeing-provided framework. Working in two-person pods, each pod will deliver one data source to production per month across a variety of ingestion patterns (batch, streaming, CDC). This role is the core delivery engine of the project.

Key Responsibilities:

  • Design and build connectors for prioritized data sources, including SFTP, REST APIs, RDBMS (CDC), Kafka, S3 file drops, and mainframe extracts.
  • Define source-specific ingestion patterns (batch windows, CDC, streaming) and map data to canonical landing zones in the lakehouse architecture.
  • Implement reusable ETL/ELT pipelines on the IT-provided framework (e.g., AWS Glue, Spark, dbt) across raw, curated, and consumption layers.
  • Develop transformation logic, handle schema evolution, implement partitioning strategies, and capture metadata for lineage tracking.
  • Embed data quality checks (completeness, schema conformance, record counts, freshness) with fail/alert behavior within pipelines.
  • Write unit, integration, and end-to-end tests; validate pipelines in CI/CD and staging environments prior to production promotion.
  • Produce connector runbooks, data contracts, transformation specs, and onboarding guides.
  • Collaborate with source system owners to obtain access, sample data, and schema/contract details.
  • Participate in 2-week Agile sprints under Boeing's sprint planning and task assignment process.
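To give candidates a sense of the "data quality checks with fail/alert behavior" responsibility above, here is a minimal pure-Python sketch. The field names, thresholds, and function name are illustrative assumptions, not part of the Boeing-provided framework:

```python
from datetime import datetime, timedelta, timezone

def check_batch_quality(records, required_fields, min_count, max_age):
    """Run completeness, record-count, and freshness checks on a batch.

    Returns a list of failure messages; an empty list means the batch
    passed. In a real pipeline, a non-empty list would trigger the
    fail/alert path instead of promoting the batch downstream.
    """
    failures = []

    # Record-count check: fail if the batch is suspiciously small.
    if len(records) < min_count:
        failures.append(f"record count {len(records)} below minimum {min_count}")

    # Completeness check: every record must populate the required fields.
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            failures.append(f"record {i} missing fields: {missing}")

    # Freshness check: the newest record must be recent enough.
    now = datetime.now(timezone.utc)
    timestamps = [rec["updated_at"] for rec in records if "updated_at" in rec]
    if timestamps and now - max(timestamps) > max_age:
        failures.append("batch is stale: newest record older than threshold")

    return failures


if __name__ == "__main__":
    batch = [
        {"id": 1, "name": "a", "updated_at": datetime.now(timezone.utc)},
        {"id": 2, "name": "", "updated_at": datetime.now(timezone.utc)},
    ]
    problems = check_batch_quality(batch, ["id", "name"], min_count=1,
                                   max_age=timedelta(hours=24))
    for p in problems:
        print(p)  # flags the empty "name" on record 1
```

In production such checks would typically be wired into the framework's pipeline stages (e.g., as Spark or dbt tests) rather than run standalone.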

Required Skills & Qualifications:

  • 5-8 years of hands-on experience in data engineering, cloud data platforms, and ETL/ELT pipeline development.
  • Strong proficiency in Python, SQL, and Spark (PySpark or Scala).
  • Hands-on experience with AWS data services: Glue, S3, Kinesis, Lambda, Redshift, Athena, or equivalent.
  • Experience building ingestion pipelines for diverse source types: SFTP, REST APIs, RDBMS (JDBC/CDC), Kafka/streaming, and flat-file processing.
  • Working knowledge of lakehouse architectures (Delta Lake, Iceberg, or Hudi).
  • Experience with dbt or similar transformation frameworks.
  • Familiarity with CI/CD pipelines for data workloads (e.g., GitHub Actions, CodePipeline, Jenkins).
  • Understanding of data quality frameworks and schema evolution handling.
  • Strong documentation skills for runbooks, data contracts, and technical specifications.
  • Experience working in Agile/Scrum delivery models.
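For context on the "schema evolution handling" requirement above, one common additive-merge policy (accept new columns, reject type changes) can be sketched in plain Python. This is a simplified illustration, not how any specific lakehouse engine implements it:

```python
def merge_schema(current, incoming):
    """Additive schema evolution: accept new columns, reject type changes
    to existing columns (a common lakehouse compatibility policy).

    Schemas are dicts mapping column name -> type string.
    Returns the evolved schema, or raises on an incompatible change.
    """
    evolved = dict(current)
    for col, typ in incoming.items():
        if col not in evolved:
            evolved[col] = typ          # new column: add it to the schema
        elif evolved[col] != typ:       # existing column changed type
            raise ValueError(f"incompatible type change for {col}: "
                             f"{evolved[col]} -> {typ}")
    return evolved
```

Engines such as Delta Lake expose a similar policy via options like schema merging on write; the sketch only shows the core compatibility decision.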

Preferred Skills:

  • Experience with mainframe data extraction and integration.
  • Familiarity with Apache Kafka (producers, consumers, Kafka Connect, schema registry).
  • Exposure to data cataloging and lineage tools (e.g., AWS Glue Catalog, Apache Atlas, DataHub).