Monday 20 March 2023
Digital event

Generative AI: anticipating the political and regulatory response

Digital panel discussion with Bea Longworth, EMEA Government Relations at Nvidia; Dr Cosmina Dorobantu, Co-Director of the Public Policy Programme at the Alan Turing Institute; Rt Hon Greg Clark MP, Chair of the Commons’ Science and Technology Select Committee; and Tim Gordon, founder of BestPractice.ai and Trustee at Full Fact, discussing the market for generative AI, its impact on business and society, and the regulatory and governance questions that follow.

The key discussion points from the event include:

  • Generative AI applications such as ChatGPT or Google’s Bard have reinvigorated the debate about AI and AI governance. Panellists pointed out that generative AI amplifies and accelerates issues that already existed in other forms of AI, such as questions around data protection, fairness and accuracy. However, generative AI’s ability to create new content (rather than make predictions) does represent an inflection point; for example, it challenges the assumption that creativity is inherently human, and it raises new concerns around issues such as intellectual property.
  • The accessibility of AI tools - and their rapid rate of adoption - shows that there is clear and broad consumer demand. Current use cases revolve around integrations in productivity suites such as Microsoft’s and Google’s, which already enable billions of users to interact with the technology. But generative AI is expected to be taken up much more broadly across sectors including law, education and healthcare, which will bring with it new regulatory questions.
  • While panellists were keen to see a fully democratised AI ecosystem, there was an acknowledgement that barriers to entry exist for smaller companies (including compliance burdens and access to large datasets). Bigger companies necessarily have more data available to train their models on, and data access is also unequally distributed across sectors. In the healthcare sector, for example, where there are additional data protection concerns, models are being developed on the basis of synthetic datasets. Digital sovereignty also has a role to play, as governments are keen to support their own homegrown companies (as in the case of Sweden’s national AI community).
  • Generative AI’s risks to security should be addressed, such as its potential to amplify misinformation. AI models in their current form frequently “hallucinate”, making up false facts or citing dubious sources. The information sphere may soon be filled with AI-written articles that incorporate false or misleading information without proper editorial oversight. Panellists agreed that there should be greater investment in education to address these concerns, but also pointed to the need to assess security concerns around AI models more broadly; foreign models, for example, might share sensitive, classified data. To properly assess quality and safety, the primary concern should therefore be to establish transparency.
  • Panellists were broadly supportive of the UK’s sectoral approach to regulating AI, and called on the government to ‘make use of what we’ve already got’. That said, there was acknowledgement that some sectoral regulators are much better resourced than others, and that companies would inevitably keep a close eye on the EU AI Act given the size of the European market. As such, close collaboration with the UK’s international partners and organisations such as the OECD will be critical in ensuring that the UK remains both a safe and an attractive market for AI development.

Event playback

The views expressed in this event can be attributed to the named author(s) only.