
Thursday 27 April 2023 | Digital event

AI for cybersecurity: Leveraging emerging technologies in the public and private sector

A digital panel discussion with Nick Reese, Deputy Director of Emerging Technologies at the US Department of Homeland Security; Chenxi Wang, Managing General Partner at Rain Capital; and Jon Garvie, International Public Policy Practice Director at Global Counsel (moderator), examining how advances in AI are shaping national cybersecurity policy, what innovative solutions industry can provide, and how government authorities plan to exploit them.

The key discussion points from the event include:

  • AI has the potential to alleviate some of the strain caused by the cyber skills gap, but deploying AI also entails risks. In the short term, AI will bring immense value by taking over smaller, tedious tasks, relieving pressure on experts in Security Operations Centres. By automating the easier tasks, AI frees experts to focus on the more sophisticated parts of their job, and companies are already building generative AI into their decision-making processes. However, both panellists agreed that deploying AI could create entirely new vulnerabilities, and current use cases still rely on humans in the loop to manage the risks.
  • The rapid pace of innovation adds a sense of urgency for defenders, as attackers will use the same models that defenders use for vulnerability detection. While better access to data on the defensive side may give defenders a short-lived advantage, this is not expected to last longer than six months. Nation-state attackers in particular will have access to vast amounts of data to feed into algorithms, and attacks targeting the intellectual property of leading AI companies are expected to increase.
  • Public-private partnerships are the only solution to the knowledge gap that exists between government and industry. Venues for collaboration already exist, for example between the Department of Homeland Security and critical infrastructure providers in the US. These venues could be improved by being more strategic and specific about what data the partnerships need to share, and about which stakeholders should meet, how often, and for what purposes. Additionally, while the government is increasingly looking to collaborate with small, innovative companies, such companies may not always have the resources to engage with the government.
  • Governments must also play a role as a trusted mediator and facilitator between companies. While an individual defender may see only the threats directed at them, and may face challenges in sharing their data widely, the government can gather data about attack models centrally and disseminate this information back to industry. Building on existing information-sharing programmes, a future programme could share AI defence models and attributes of attack models, thus providing industry with necessary data, establishing trust, and facilitating innovation.
  • If companies fail to innovate in AI, they will face existential risk; the same is true for governments. The panellists concluded that if the US wants to stay at the frontier of AI, it needs to create transparency around AI models and support the development of models that benefit the wider public. An AI moon-shot project would offer the exciting possibility of leveraging the best talent, computing power, and capabilities for the sake of national security. For such projects to work, the government must improve its willingness to accept the risk of failure and create a viable business case for companies collaborating and innovating for the greater benefit of national security.

Event playback

The views expressed in this event can be attributed to the named author(s) only.