Data Engineer


Job Location:

Kyiv - Ukraine

Monthly Salary: Not Disclosed
Posted on: 6 hours ago
Vacancies: 1 Vacancy

Job Summary

  • Design and develop scalable, reliable data platforms and pipelines for analytics and operations
  • Implement data processing workflows using distributed frameworks (Apache Spark/PySpark, Databricks)
  • Evolve the organization's cloud data platform leveraging Azure technologies (Data Factory, Synapse, Event Hub, Azure Data Lake Storage)
  • Model data to support analytics, reporting, and downstream consumption
  • Integrate and process data from multiple sources, ensuring quality and consistency
  • Monitor and optimize pipeline performance, ensuring reliability and scalability
  • Collaborate with analysts, scientists, and engineers to translate requirements into solutions
  • Improve data engineering standards, testing, and operational reliability
  • Evaluate and introduce new tools and technologies in the data engineering ecosystem
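The multi-source integration responsibility above can be sketched in miniature. In production this kind of step would typically run on Spark/Databricks; the plain-Python sketch below (all record and field names are hypothetical) only shows the shape of an ingest-normalize-deduplicate pass:

```python
from datetime import datetime

def merge_sources(source_a, source_b):
    """Combine records from two hypothetical sources, normalize field
    types, and deduplicate by id, keeping the newest version of each record."""
    latest = {}
    for record in list(source_a) + list(source_b):
        rec = {
            "id": record["id"],
            "amount": float(record["amount"]),  # enforce a consistent numeric type
            "updated_at": datetime.fromisoformat(record["updated_at"]),
        }
        prev = latest.get(rec["id"])
        if prev is None or rec["updated_at"] > prev["updated_at"]:
            latest[rec["id"]] = rec  # newer record wins the conflict
    return sorted(latest.values(), key=lambda r: r["id"])

# Hypothetical source systems with one overlapping record (id 1).
crm = [{"id": 1, "amount": "10.5", "updated_at": "2024-01-01T00:00:00"}]
billing = [
    {"id": 1, "amount": "12.0", "updated_at": "2024-02-01T00:00:00"},
    {"id": 2, "amount": "7.0", "updated_at": "2024-01-15T00:00:00"},
]
rows = merge_sources(crm, billing)
```

The same last-write-wins merge maps naturally onto a Spark `groupBy` over the key with a window ordered by `updated_at`.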

Qualifications:

  • 3 years of hands-on experience in data engineering or building data processing platforms
  • Strong SQL skills and solid Python experience for pipeline development
  • Experience with distributed processing frameworks (Apache Spark/PySpark; Databricks preferred)
  • Practical experience designing and implementing pipelines in cloud environments (Azure preferred)
  • Experience with production-scale analytical systems and data modeling
  • Understanding of ETL/ELT design, dimensional modeling, and data warehousing principles
  • Experience with modern data lake architectures and formats (Parquet, JSON)
  • Familiarity with workflow orchestration tools (Airflow)
  • Experience with database development and optimization
  • Ability to design scalable solutions with minimal supervision
  • Strong collaboration and communication skills
  • Professional proficiency in English
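The dimensional-modeling qualification above can be illustrated with a minimal star-schema split: denormalized event rows are separated into a dimension table with surrogate keys and a fact table that references it. This is a sketch only; the table and column names are hypothetical:

```python
def to_star_schema(events):
    """Split denormalized event rows into a customer dimension and a
    sales fact table, assigning surrogate keys to the dimension."""
    dim_customer = {}  # natural key (email) -> surrogate key
    dim_rows, fact_rows = [], []
    for ev in events:
        nk = ev["customer_email"]
        if nk not in dim_customer:
            sk = len(dim_customer) + 1  # simple monotonically increasing surrogate key
            dim_customer[nk] = sk
            dim_rows.append({"customer_sk": sk, "email": nk, "name": ev["customer_name"]})
        fact_rows.append({
            "customer_sk": dim_customer[nk],  # foreign key into the dimension
            "order_date": ev["order_date"],
            "amount": ev["amount"],
        })
    return dim_rows, fact_rows

# Hypothetical denormalized events; "Ann" appears twice but should yield one dimension row.
events = [
    {"customer_email": "a@x.com", "customer_name": "Ann", "order_date": "2024-03-01", "amount": 20.0},
    {"customer_email": "a@x.com", "customer_name": "Ann", "order_date": "2024-03-05", "amount": 15.0},
    {"customer_email": "b@x.com", "customer_name": "Bob", "order_date": "2024-03-02", "amount": 9.5},
]
dims, facts = to_star_schema(events)
```

Keeping descriptive attributes in the dimension and only keys plus measures in the fact table is the core idea the ETL/ELT bullet points at.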

Remote Work:

Yes


Employment Type:

Full-time


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala

About Company


At Sigma Software, we work alongside the client's team to contribute to the design and development of a technical solution for their tokenized domain reservation platform. We started by assigning a software architect to design the smart contracts and integrate blockchain into the s ...
