Cyber Security researcher
Job Location

London - UK

Salary

£ 125000 - 135000

Vacancy

1 Vacancy

Job Description

About the AI Security Institute

The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, how it can be used to carry out cyber-attacks and enable crimes such as fraud, and the possibility of loss of control.

The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.

About the Team

As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities also form common bottlenecks in scenarios across other AI risk areas, such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is to conduct robust empirical tests of AI systems, so we can better understand how capable they currently are at performing cyber security tasks.

The AI Security Institute's Cyber Evaluations Team is developing first-of-its-kind, government-run infrastructure to benchmark the progress of advanced AI capabilities in the domain of cyber security. Our goal is to carry out and publish scientific research supporting a global effort to understand the risks and improve the safety of advanced AI systems. Our current focus is on building difficult cyber tasks against which we can measure the performance of AI agents.

We are building a cross-functional team of cybersecurity researchers, machine learning researchers, research engineers and infrastructure engineers to help us create new kinds of capability and safety evaluations, and to scale up our capacity to evaluate frontier AI systems as they are released.


We are also open to hiring technical generalists with a background spanning many of these areas, as well as threat intelligence experts with a focus on researching novel cyber security risks from advanced AI systems.

RESPONSIBILITIES

As a Cyber Security Researcher at AISI, your role will range from helping design our overall research strategy and threat model, to working with research and infrastructure engineers to build environments and challenges against which to benchmark the capabilities of AI systems. You may also be involved in coordinating teams of internal and external cyber security experts for open-ended probing exercises to explore the capabilities of AI systems, or in exploring the interactions between narrow cyber automation tools and general-purpose AI systems.

Your day-to-day responsibilities could include:

  • Designing CTF-style challenges and other methods for automatically grading the performance of AI systems on cyber-security tasks.
  • Advising ML research scientists on how to analyse and interpret results of cyber capability evaluations.
  • Writing reports, research papers and blog posts to share our research with stakeholders.
  • Helping to evaluate the performance of general-purpose models when they are augmented with narrow red-teaming automation tools such as Wireshark, Metasploit and Ghidra.
  • Keeping up-to-date with related research taking place in other organisations.
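The first responsibility above — automatically grading AI systems on cyber-security tasks — can be sketched minimally as a flag-based check of the kind used in CTFs. This is a hypothetical illustration only: the flag value, function names and scoring scheme are assumptions, not AISI's actual evaluation infrastructure.

```python
import hashlib

# Hypothetical CTF-style grading sketch. Storing only a hash of the flag
# keeps the answer itself out of the agent's context and the task config.
FLAG_SHA256 = hashlib.sha256(b"flag{example}").hexdigest()  # assumed flag format

def grade_submission(submission: str) -> bool:
    """Return True if the agent's submitted flag matches the task's flag."""
    return hashlib.sha256(submission.strip().encode()).hexdigest() == FLAG_SHA256

def score_runs(submissions: list[str]) -> float:
    """Fraction of independent runs in which the agent recovered the flag."""
    if not submissions:
        return 0.0
    return sum(grade_submission(s) for s in submissions) / len(submissions)
```

In practice a harness like this would sit alongside the challenge environment itself; the point of automatic grading is that success is machine-checkable, so large numbers of agent runs can be scored without manual review.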

PERSON SPECIFICATION

You will need experience in at least one of the following areas:

  • Proven experience related to cyber-security red-teaming, such as:
    • Penetration testing
    • Cyber range design
    • Competing in or designing CTFs
    • Developing automated security testing tools
    • Bug bounties, vulnerability research, or exploit discovery and patching
  • Communicating the outcomes of cyber security research to a range of technical and non-technical audiences.
  • Familiarity with cybersecurity tools and platforms such as Wireshark, Metasploit or Ghidra.
  • Software skills in one or more relevant domains, such as network engineering, secure application development or binary analysis.

This role might be a great fit if:

  • You have a strong interest in helping improve the safety of AI systems.
  • You are active in the cyber security community and enjoy keeping up to date with new research in this field.
  • You have previous experience building or measuring the impact of new automation tools on cyber red-teaming workflows.

Core requirements

  • You should be able to spend at least 4 days per week working with us
  • You should be able to join us for at least 24 months
  • You should be able to work from our office in London (Whitehall) for parts of the week, though we provide flexibility for remote work


Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary plus a technical allowance, plus additional benefits as detailed on this page.

  • Level 3 - Total Package £65000 - £75000, inclusive of a base salary of £35720 plus an additional technical talent allowance of between £29280 - £39280
  • Level 4 - Total Package £85000 - £95000, inclusive of a base salary of £42495 plus an additional technical talent allowance of between £42505 - £52505
  • Level 5 - Total Package £105000 - £115000, inclusive of a base salary of £55805 plus an additional technical talent allowance of between £49195 - £59195
  • Level 6 - Total Package £125000 - £135000, inclusive of a base salary of £68770 plus an additional technical talent allowance of between £56230 - £66230
  • Level 7 - Total Package £145000, inclusive of a base salary of £68770 plus an additional technical talent allowance of £76230

This role sits outside of the DDaT pay framework, given that the scope of this role requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Government Digital and Data Profession Capability Framework

There are a range of pension options available which can be found through the Civil Service website.

Additional Information

Internal Fraud Database

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters attempt to reapply for roles in the civil service. In this way the policy is enforced and the repetition of internal fraud prevented. For more information, please see the Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.

Employment Type

Full Time
