WHO WE ARE
Optiver is a global market maker founded in Amsterdam, with offices in London, Chicago, Austin, New York, Sydney, Shanghai, Hong Kong, Singapore, Taipei and Mumbai. Established in 1986, today we are a leading liquidity provider with close to 2,000 employees in offices around the world, united in our commitment to improve the market through competitive pricing, execution and risk management. By providing liquidity on multiple exchanges across the world in various financial instruments, we participate in the safeguarding of healthy and efficient markets. We provide liquidity to financial markets using our own capital at our own risk, trading a wide range of products: listed derivatives, cash equities, ETFs, bonds and foreign currencies.
Optiver's Sydney office is one of the primary players within Asian markets, trading a range of products. Established in 1996, we're an active participant on the Hong Kong, Korea, Singapore, Taiwan and Japan exchanges, and act as Optiver's APAC head office.
WHAT YOU'LL DO
We're looking for a skilled Data Engineer with strong Kafka experience to help design, build and maintain our real-time data pipelines. You'll work closely with researchers and engineers to ensure the performance, reliability and scalability of the data systems that support our trading and research platforms.
This is a hands-on role with a focus on Kafka infrastructure, stream processing and integration with our broader data architecture. You'll play a key role in shaping our data platform and ensuring we meet the growing demands of our business.
Design and manage scalable Kafka-based data pipelines.
Build and maintain Kafka producer/consumer libraries.
Monitor and optimise Kafka performance, latency and reliability.
Collaborate with teams to integrate streaming pipelines into research workflows.
Implement access control authentication and encryption for Kafka.
Support disaster recovery planning and production stability.
Deliver data to S3 and support lakehouse ingestion.
Document systems and share Kafka best practices across teams.
Stay up to date with emerging technologies and recommend improvements.
WHO YOU ARE
3 years of experience in data engineering.
Strong hands-on experience with Apache Kafka in production.
Solid understanding of Kafka architecture and components.
Experience deploying, scaling and monitoring Kafka clusters.
Proficient in Python, Java, Scala or Bash.
Comfortable working with C.
Experience in Linux environments.
Good understanding of networking and security best practices.
Experience with Kafka Streams or Kafka Connect.
Familiarity with tools like Spark Flink or Beam.
Exposure to cloud-based Kafka (e.g. AWS MSK, Confluent Cloud).
Knowledge of IaC tools (e.g. Terraform, Ansible).
Interest in trading systems or financial data.
WHAT YOU'LL GET
You'll join a culture of collaboration and excellence, where you'll be surrounded by curious thinkers and creative problem solvers. Driven by a passion for continuous improvement, you'll thrive in a supportive, high-performing environment alongside talented colleagues, working collectively to tackle some of the most complex problems in the financial markets.
In return for your expertise, you will be offered a competitive salary package as well as access to a range of Optiver perks, including:
The chance to work alongside best-in-class professionals
Competitive remuneration, including an attractive bonus structure and additional leave entitlements
Training mentorship and personal development opportunities
Gym membership plus weekly in-house chair massages
Daily breakfast, lunch and an in-house barista
Regular social events, including an annual company trip
Optiver is committed to diversity and inclusion, and it is hardwired through every stage of our hiring process. We encourage applications from candidates of any and all backgrounds, and we welcome requests for reasonable adjustments during the process to ensure that you can best demonstrate your abilities.
Full Time