Description
SEON is the command center for fraud prevention and AML compliance, helping thousands of companies worldwide stop fraud, reduce risk, and protect revenue. Powered by 900 real-time, first-party data signals, SEON enriches customer profiles, flags suspicious behavior, and streamlines compliance workflows - all from one place. SEON provides richer data, more flexible and transparent analysis, and faster time to value than any other provider on the market. We've helped companies reduce fraud by 95% and achieve 32x ROI, and we're growing fast thanks to our partnerships with some of the world's most ambitious digital brands, like Revolut, Wise, and Bilt.
We are currently looking for a skilled Data Engineer to join the Dataverse team. This role reports to the Data Engineering Lead and can be remote, ideally based in the EU.
What You'll Do:
- Build, maintain, and optimize scalable ETL/ELT pipelines to process structured and unstructured data.
- Develop in one or more of the following programming languages: Python, Java, Scala.
- Automate workflows to ingest, transform, and store data in real-time or batch processing frameworks.
- Architect and maintain cloud-based data platforms such as Snowflake, BigQuery, or Redshift.
- Manage and optimize relational and NoSQL databases for analytics and operational needs.
- Partner with data scientists, analysts, and software engineers to deliver end-to-end data solutions.
- Translate business requirements into data models, workflows, and engineering tasks.
- Implement systems for data validation, lineage tracking, and quality monitoring.
- Streaming knowledge is a plus (e.g., Apache Flink, Spark Streaming).
- Ensure data security, compliance, and privacy standards are met (e.g., GDPR, HIPAA, SOC 2).
- Monitor and enhance the performance of data systems to handle increasing scale and complexity.
- Leverage big data tools (e.g., Apache Spark, Kafka) for processing and analytics.
- Evaluate and implement emerging technologies to improve efficiency and performance.
- Contribute to the establishment of best practices and reusable frameworks.
What You'll Bring:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 3 years of experience in data engineering, software development, or related roles.
- Demonstrated success in building and maintaining scalable data pipelines and systems.
- Proficiency in SQL and at least one programming language (e.g., Python, Scala, Java).
- Hands-on experience with big data tools like Spark, Hadoop, or Kafka.
- Expertise with cloud platforms (AWS, Azure, GCP) and services such as S3, Lambda, and Dataflow.
- Strong understanding of data modeling, warehousing, and architecture principles.
- Ability to work with large datasets and solve complex data challenges.
- Familiarity with BI tools like Tableau, Looker, or Power BI is a plus.
- Strong problem-solving and critical-thinking abilities.
- Excellent collaboration and communication skills for working across teams.
- Fluency in English.
Amazing If You Also Have:
- Experience with real-time data processing systems and event-driven architectures.
- Familiarity with machine learning pipelines and model deployment.
- Knowledge of DevOps and CI/CD practices for data engineering.