Hiring: W2 Candidates Only
Visa: Open to any visa type with valid work authorization in the USA
Key Responsibilities:
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement process improvements, including automation, data delivery optimization, and infrastructure redesign for scalability.
- Lead and deliver data-driven solutions across multiple languages, tools, and technologies.
- Contribute to architecture discussions, solution design, and strategic technology adoption.
- Build and optimize highly scalable data pipelines, incorporating complex transformations and efficient code.
- Design and develop new source system integrations from varied formats (files, database extracts, APIs).
- Design and implement solutions for delivering data that meets SLA requirements.
- Work with operations teams to resolve production issues related to the platform.
- Apply best practices such as Agile methodologies, design thinking, and continuous deployment.
- Develop tooling and automation to make deployments and production monitoring more repeatable.
- Collaborate with business and technology partners, providing leadership, best practices, and coaching.
- Mentor peers and junior engineers; educate colleagues on emerging industry trends and technologies.
Required Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, or a related field, or equivalent experience
- 7 years of data engineering/development experience, including Python or Scala, SQL, and relational/non-relational data storage (ETL frameworks, big data processing, NoSQL)
- 3 years of experience in distributed data processing (Spark) and container orchestration (Kubernetes)
- Proficiency in data streaming with Kafka and Kubernetes
- Experience with cloud platforms - Azure preferred; AWS or Google Cloud Platform also considered.
- Solid understanding of CI/CD principles and tools
- Familiarity with big data technologies such as Hadoop, Hive, HBase, Object Storage (ADLS/S3), and Event Queues.
- Strong understanding of performance optimization techniques such as partitioning, clustering, and caching
- Proficiency with SQL, key-value datastores, and document stores
- Familiarity with data architecture and modeling concepts to support efficient data consumption
- Strong collaboration and communication skills; ability to work across multiple teams and disciplines.
Preferred Qualifications:
- Master's degree in Computer Science, Software Engineering, or a related field
- Knowledge of data governance, metadata management, or data quality/observability
- Familiarity with schema design and data contracts
- Experience handling various file formats (video, audio, image)
- Experience with Databricks, Snowflake, or similar platforms
- Experience designing and implementing robust data ingestion frameworks for heterogeneous data sources (structured/unstructured files, external suppliers, supply chain systems).