While the herd is moving to AI APIs, there is a need for engineers who can build the infrastructure underneath.
GPUs, and even TPUs, have fundamental constraints when a system must scale to global deployment of modern generative AI models. The general-purpose architectures that served us well while Dennard scaling held are giving way to more specialized silicon that delivers greater power efficiency for application-specific workloads. AI systems such as transformer-based models are particularly well suited to specialist hardware.
Advanced systems require advanced software to run them, and we have several roles available in Sydney to work on a truly leading-edge AI device. The role would suit an engineer with strong experience in multi-threaded, multi-core systems, synchronization methods, and pipelined RISC architectures. Bootloader and device driver experience would also be an advantage.
Degree in computer science or similar
Experience with RTOS or embedded Linux in multi-core systems
Ability to interpret hardware-centric datasheets and register definitions
Experience with FPGA-based development and system emulation
Knowledge of assembly language programming for pipelined processors
Able to work collaboratively with hardware engineers to improve interfaces
Generous salary and option packages are available. Training can be provided in specialist areas such as RISC-V processors, modern operating systems, and deep learning systems.
Required Experience:
Staff IC