
Alondra Nelson, architect of the AI Bill of Rights, on the Biden Administration’s AI executive order and efforts to pass an AI bill.


This week on Top in Tech, Conan D'Arcy is joined by Alondra Nelson, the former White House adviser who helped shape the AI Bill of Rights. 

They discuss the Biden Administration’s approach to AI regulation, the role of federal agencies and the prospect of Congressional legislation, and the durability of AI policies under a potential future administration.

Conan D'Arcy:

So, Alondra, thank you very much for joining me today. As ever with AI policy, there's a lot of ground we could cover, so I wanted to zero in on three areas. First, I'd like to get your view on where things stand with the federal government under President Biden: an assessment of the measures proposed, but also exploring their durability in the event of a Republican victory in November. The second set of issues would be to pivot over to the Hill and get your views on where things stand, not only with AI legislation but also privacy bills in Congress, and how you see their prospects.

And then finally, it'd be great to look ahead to the elections in November. I'd like to get a sense from you about what interventions, if any, you think the government should be making to regulate content shared online leading up to the election. So, if that sounds good to you, it'd be great to jump in on that first bucket, the government, the federal scene.

Alondra Nelson:

It sounds great, I'm delighted to be with you and I'm looking forward to the conversation.

Conan D'Arcy:

Perfect, and this really ties in with your experience in the White House. If we look at how US policy towards AI has evolved over recent years, we've gone from two voluntary, guiding frameworks, the Blueprint for an AI Bill of Rights, which you were personally involved in, as well as NIST's AI Risk Management Framework. So, we had those two pieces, but then those were, superseded might not be the right word, complemented perhaps, by the White House's executive order on AI, which came in the fall of last year and builds upon these frameworks but also sets out very tangible requirements for federal agencies and AI developers.

So, as someone who's been intimately involved with at least part of this process, it'd be great if you could briefly walk us through how we got here. And as a supplement to that, I'm interested in whether you feel this was a deliberately planned process, moving from the voluntary rules that you helped design through to mandatory rules, or whether the pace of industry developments forced a change of approach from the administration.

Alondra Nelson:

These are great questions. I started on day one of the Biden-Harris administration, and the President had, from the very beginning, a tech accountability agenda. There were concerns and conversations during the transition, concerns and conversations that were part of the Biden-Harris campaign, that we brought walking in the door, though we did the transition mostly on Zoom and Skype and conference calls, so not quite walking in the door for many of us. This was a big part of the conversation, and we had a lot of conversations with stakeholders in industry, civil society and academia in that first year. And in the first State of the Union address, the President began to articulate some of this in his unity agenda. Part of the unity agenda included addressing his concerns about social media harms, about big tech power inequities, and beginning a national conversation about that.

So, by the time you get to the end of the first year, under the President's direction, we had worked on something called a tech accountability agenda, developed in consultation with stakeholders from civil society, academia and industry, and it included a few key pillars. A concern about data privacy and privacy more generally, particularly as it pertains to children and young people. A concern with competition, how to keep both innovation and competition active in the marketplace amid worries about the growing consolidation of big tech power. Issues around transparency and accountability, how government can hold big tech accountable. And then growing concerns around algorithmic discrimination and algorithmic bias.

In January of 2023, the President wrote an op-ed in the Wall Street Journal in which he asked Republicans and Democrats to come together around these four issues. The unity agenda, for him, was these common-sense issues and concerns around big tech that cut across ideological perspectives, that he thought were really important and that were important for US leadership in this space. That's to say that, from the very beginning of the administration, issues around algorithmic amplification, AI and big tech were always part of the conversation, and it is in that first year that we began the work, almost a year of work, that would lead up to the Blueprint for an AI Bill of Rights. It's also in that first year that NIST, which is a sub-agency of the US Department of Commerce, would begin the work on the AI Risk Management Framework.

So, part of this was the work of creating a Biden-Harris administration strategy: what are our true norths, what are our anchors for the work we're going to do, and then how do we begin to think tactically about what that looks like. In the AI Bill of Rights, we often use the phrase "moving from principles to practice". Those who follow tech policy know that, over the last five years, there have been innumerable AI principles, AI ethics principles, responsible AI principles, but really getting those to have traction and land has been the next challenge we faced. And so, both the NIST AI Risk Management Framework and the AI Bill of Rights, with examples distilled from best practices in industry, civil society and the research community, began the more tactical work of raising awareness of how we might begin, at scale, to do things like red teaming, risk assessment of AI models and systems, and the like.

So, you might think of these three years of the Biden-Harris administration this way: the AI Bill of Rights and the Risk Management Framework were the strategy, and the President's extraordinary executive order from October of last year began to map out the tactics. One way of thinking about the relationship, Conan, between the voluntary policy white papers, the Bill of Rights and the AI Risk Management Framework, is to see them as creating the cornerstones that would then be enacted in the executive order and in the Office of Management and Budget memo that would be released later.

Conan D'Arcy:

Well, thanks for taking us through that. When I put the question to you, it was almost: was this continuity in policy, or a bit of a break forced by ChatGPT and other apps breaking through at the end of 2022 and into 2023? It sounds like you're actually describing a process of continuity, where the strategy you set out was always going to lead to a much more tactical application at some point under the Biden administration, and that's what the executive order does.

Alondra Nelson:

Sure. It's definitely continuity but, let's be clear, this was the introduction of these tools to consumers for the first time. AI, including Generative AI, has been in our world for many years now, but it has really been in the background of our lives. The general public hadn't had to think of it as a policy issue; it was the thing that helped you unlock your iPhone with your face, or the annoying, for some of us, autocorrect when you're trying to send a text message to a friend. But the arrival of ChatGPT in November of 2022 really put the power and ubiquity of these tools in front of the public, the global public, for the first time. And it doesn't matter if you're talking about tech policy or housing policy, et cetera: when a policy issue comes to public awareness in that way, all at once, it is going to change the political calculation.

So, there was certainly continuity. As I said, the President was beginning to map out his tech accountability agenda in his first State of the Union addresses, and we were working very hard at the White House, having lots of stakeholder meetings; there are a few readouts and early documents of this. But it was certainly the case that the fall of 2022 became a policy accelerant, because the public was saying, "What is this, what are we going to do about it, is this a good thing or a bad thing?" And so, it gave a tailwind to that continuity, so that things moved more quickly, but also, I think, in a more concerted effort.

It wasn't just the few agencies that work on technology, like OSTP in the White House and NIST in the Department of Commerce. I think it's very much the case that the President's and Vice President's whole-of-government strategy that you see manifested in the AI executive order really owes a lot to the attention that was brought to this policy space. So, in social sciences, we would say it's an interaction effect, Conan.

Conan D'Arcy:

Yeah, I totally agree with everything you've said there, Alondra. It really was a stark moment, not just for policymakers but also for the media and journalists, to see AI in action in a very immediate, consumer-interactive way when ChatGPT broke through. But the other striking thing that followed ChatGPT, probably in contrast to previous waves of technological change like search or e-commerce or smartphones or social media, is that at those times there weren't really, at least as far as I can recall, voices from industry publicly calling, not just for regulation, but for regulation now. Whereas with AI, there's a whole plurality of views within the tech sector and more broadly within industry, but there are very prominent voices who, from the start, have been saying we need government, or indeed global governments, to come and regulate this now. And that slightly changes the picture for policymakers when they're weighing whether to act now or to wait on regulatory standards.

But if we can pivot on from that, Alondra: you've talked about continuity, with ChatGPT bringing not so much a break in policy as added urgency. If we could go into what came out of that, the executive order, which arrived around the time of the Safety Summit in the UK, I think at the end of October or start of November last year, one of the elements there, and you referenced it, was the set of obligations on the Office of Management and Budget to set standards around federal policies on public procurement and tendering processes for AI: essentially, the way in which the government can buy AI, or buy services that are powered by AI.

The first policies were published maybe at the end of last month, and it would be interesting to understand what you think the OMB is attempting to address here. I know in the past you have written about how government possesses the power to shape markets and industry behaviour by setting the rules for procurement of AI systems and demanding transparency from AI creators. So, given that's something you have talked about before, do you see what's happening here as an example of the government trying to do exactly that?

Alondra Nelson:

Yes. I think the headline is that it's really government trying to lead by example, both in shaping markets and in trying to figure out in real time how to use these technologies in government. So, let's go back. The AI executive order is interesting in the context of American policymaking because it's a massive document, 111 pages, arguably the longest executive order in American history. And it takes an ecosystem approach to AI regulation and governance. There are issues around safety, around economic and national security, innovation and competition; there's a section about workers, because some of the hue and cry from the public, and from some folks in industry talking about big shifts and transformations in society, was really about the future of work, and the Biden EO weighs in here; and there are issues around civil rights, civil liberties, consumer protection, privacy. It's a massive governance document.

And certainly a piece of that, and the question I think the OMB memo answers, to go right to the heart of your query, is: if government is going to use new AI tools and systems, or AI tools and systems that exist and are coming online, how should government be doing it, what practically should government be doing? And I think this is important in a lot of different ways. In the smaller, day-to-day way, we had, for example, in the US House of Representatives, Congressman Ted Lieu, who has a computer science background, leaning really forward, saying his staff were going to use ChatGPT whenever they wanted and issuing memos to that effect. That was probably a year ago. But in the last month, we've had US security officials say there are Generative AI applications that federal employees are not permitted to use because they pose a security risk; in particular, Microsoft Copilot was called out.

So, there's a lot of concern around data leakage with Generative AI models, meaning: if you use, particularly, a free version and you put sensitive information into it, does that become part of the training data, and does that make your personal or sensitive information vulnerable to being revealed to other people, by design or by accident? That's a consideration for government, and government can get more traction on it than a lot of civil society organisations and researchers, and even policymakers, who have been worried about this for a long time. Because, if you have been watching the marketplace, while it has been important for the brand of OpenAI and ChatGPT for the public to know what it is and to be excited, mystified, awed, horrified, all of the above about it, where the real capital and money in this space is moving is in enterprise, business-to-business engagements. And government, with its contracting and procurement, is part of that larger enterprise market; that is where the huge billion-dollar, multi-year contracts take place.

So, we might think about 25 or 30 years ago, when governments around the world were first starting to buy personal computers en masse, appreciating that, ultimately, every civil servant was going to need one; that happened in waves, maybe over the course of a decade. What you see with Generative AI tools and systems is that this might be happening all at once. These are new contracts; some of those two-decade-old procurement contracts with computer companies will still obtain, but there's a whole new wave of federal contracting that's just about to happen.

And so, the OMB memo is important because agencies are being told: to the extent that you can do it in a responsible, ethical way that preserves the privacy, rights and safety of the American public, if there are efficiencies or new ways of doing work to be found, your agency is encouraged to pursue them. At the same time, the government is going to set the table for how this procurement should take place. So, it is an extraordinary opportunity, I think for all governments, but particularly in the context of the United States, where we've had real challenges getting actual tech legislation passed, for the executive branch to be innovative in the policy space, recognising that it has a window, in some cases a one-time window, before these contracts get really baked in and become enduring relationships, to set the terms of these purchases.

And companies want these large contracts. They're not necessarily going to open up all of their data to the federal government but, if the cost of doing business is more transparency that allows for more accountability, if the cost of doing business is certifications around the public's privacy and around not selling government tools that might exacerbate things like bias and discrimination, then I think companies are going to do that. So, it's a tremendous lever that exists at the beginning of a new marketplace, and I think the OMB memo partly pulls it.

But I would also say, on the bureaucratic mission of it, to your question: in DC, we ask what mail the memo is answering, what it's replying to. I think it's replying to a sense of confusion; it's an attempt to really create coordination around the use of AI tools and systems. When I came into the Biden-Harris White House in January of 2021, there were some existing policy guidelines for federal agencies that required them to disclose the AI tools and systems they were currently using and that they were thinking about procuring. Within the Office of Science and Technology Policy, we have an entity called the National AI Initiative Office, which was tasked with effectively getting this government-wide census of the use of AI tools and systems. And what we found is that it was hard to get compliance, and the data we got was very thin; people in the policy space have written critically about this.

So, part of what the memo also does is say: you have got to answer the mail on this. You're being asked how you are using these systems, how you're planning to use them, and how you're thinking strategically about protecting the public, their rights and their safety in the use of these systems. So, it's also an attempt at coordination, which is always a challenge when you are the world's largest employer.

Conan D'Arcy:

Yes, I can well imagine. Alondra, one thing you alighted on in that answer was the way in which government in the US is having to be innovative with the policy tools it uses. Clearly, federal purchasing power is an important lever in shaping the behaviour of large sellers to government, and the hope, I guess, would be that it permeates down to smaller firms as well through the de facto standard-setting of procurement tenders. One thing, though, that gets a lot of attention in Europe, less so on AI, particularly when we look at things like the EU-US Data Privacy Framework and previous agreements between the EU and the US on data transfers, like Safe Harbour and the Privacy Shield: a lot of the attention and criticism you get, particularly in places like Brussels, concerns the way in which these arrangements essentially rely on executive orders that can be easily overturned by a subsequent president.

So, I'm interested to get your views. We've talked about the EO on AI more broadly and very specifically about some of the proposals related to the OMB but, should the Republicans win the White House in November, how durable do you think some of these quite innovative proposals will be? Do you think they would last under a potential Trump 2.0 presidency?

Alondra Nelson:

So, many of them, yes. The particular nature of the American political system means this is an issue that has been faced for over a hundred years: new administrations come in, they have different priorities, and things come and go. So, I would first say, and I think it's worth saying, that this is not a particular characteristic of this moment or of tech policy in general; it is an enduring characteristic of American political culture. And I think there are ways that this gets dealt with. There are a lot of policy flows happening at once when an administration comes in; you're faced with a lot of work and you have to prioritise: what are the things you want to actively advance, what do you let lie fallow, where there's no real harm or foul and you just leave it as it is, and what do you actively want to overturn. Administrations are busy, and you have to quickly create an algorithm to prioritise these things.

So, the sheer volume of the work will mean that some things just don't get attention. Whether that will be AI policy, data transfer policy or data privacy issues, I don't know. The other thing that gives me a little bit of faith about some continuity across administrations is that, if we go back to the initial tech accountability agenda that the President framed up, these are bipartisan concerns: there is a general concern about data privacy, particularly for young people, and a general concern about competition and innovation. So, it is a space where there can be a Venn diagram; that intersecting area might be very small, but there are issues around tech policy that are shared.

As an example of this, I mentioned earlier the National AI Initiative Office, an entity that sits within the Office of Science and Technology Policy, where I used to be in leadership, and that emerged out of a piece of legislation from the Trump administration. So, there was some AI policy that was passed into law at the tail end of the Trump administration, and in some instances these came online literally days before the transition. And coming into office, when you have a policy that's been done, that is big, either legislation or a whole policy process that's gone through all the rigours a policy process needs to take, you can look at that policy and say, "Well, this is maybe not exactly what the Biden administration would have done, but there are ways we can implement it that are useful for how we want to proceed."

So, it is so hard to get policy done, particularly actual formal legislation and laws, that, when you come into a new policy space, you often think, "Can we work with this? Can this get us even halfway, three quarters of the way to where we're trying to go, and can we use this existing tool?" There is also just an instrumentality about what you have on the table. That said, one might anticipate that a new Trump administration would certainly be less concerned with issues of rights, perhaps less concerned with wanting to prioritise the protection of safety over companies, more of a laissez-faire approach, and I think that would certainly create a bit more daylight with the EU approach on some of these things.

But for things like data privacy and data transfer, we might think about, Conan, the debate about TikTok that's happening in the United States, and I know also in the UK and EU; there is wide bipartisan support for that. And to the extent that the conversation about TikTok is about where Americans' data is going, who has access to it, and who can leverage it to create algorithms that serve people certain information and not other information, that's an opportunity for bipartisanship and should offer some comfort, I think, to colleagues in Europe. Obviously, a break between a Trump administration and a Biden administration would potentially be more extreme than you might have seen between, for example, Clinton and Bush, but I think some things can endure.

The last thing I will say is that something like the OMB memo on responsible and trustworthy AI, which begins to create pathways for the work of government, becomes a bit more sticky. Once government creates processes for procurement, they're often very hard to undo, and they become part of the muscle memory of how people do their work. And part of the executive order was creating a new chief AI officer role in various federal departments and agencies; many of these people are civil servants, and they will continue to do their work even as there's a transition in the executive branch.

So, I think it's mixed. What gives me hope is the fact that we've always had these transitions, sometimes with less radical changes in direction, but it's always been the case; and that there is a reasonable amount, small and fragile as it is, of bipartisan concern around some of the fundamental issues that the Biden-Harris AI policy addresses.

Conan D'Arcy:

And a lot of that resonates with the situation in the UK, where we're doing a lot of thinking at the moment about, if there is a general election this year and the centre-left Labour party wins, which looks likely at the moment, the extent to which you will see policy change. A little like you've alluded to, there are some areas of clear and obvious disruption. For example, they have a slew of policies to bring in new employment and labour rights for workers; that's pretty detailed and, clearly, there's going to be a lot of activity there that marks a break with what the current Conservative government has been doing. But in a lot of other areas, where there isn't very clear policy from the Labour side or where there is consensus between the two parties, the assumption has to be, unless there's a very big reason why not, that there will be a lot of continuity, because the same civil servants will be working on it, in the same way they will on both sides of the Atlantic.

So, it sounds like a relatively similar situation, although I would wager that, should Trump come in, the level of disruption in other areas will be greater for you guys than perhaps we'll see should Labour take power. But if we can-

Alondra Nelson:

No, there'll be disruption all around. To use the Silicon Valley framing, when you talk to people in business school, they talk about the disruption business model. I think you might think of the prior Trump administration as a disruption governance model; that's the point, and so there will certainly be disruption. You mentioned the UK. One thing that's been interesting is that I've been serving as a senior advisor for the UK AI Safety Institute as they build out the international scientific report that was one of the requested outputs of the initial AI Safety Summit. And I had the great honour of being at day two of the UK AI Safety Summit, and what I said in the few minutes I got, sitting in a room with Prime Minister Sunak and Vice President Harris and some industry leaders, was that my hope for the AI Safety Institutes, in the UK, now in the US and a few other places, would be that we can imagine them as democratic institutions.

And so, what does it mean to think about the work of a UK Safety Institute as figuring out tactically how to manifest democratic values in the way we're going to use AI tools and systems, and to think about where and when it is appropriate to engage the broader public in setting the priorities of these Safety Institutes? Well, I would say this. I think the difference between the US and the UK Safety Institutes is that, in my sense, the UK's is more formally tactical, on a very narrow set of issues. What was interesting in the Vice President's speech at the US Embassy in London, at the time of the AI Safety Summit, was that she offered a much broader understanding of safety: more democratic in that sense, more ecumenical, in the sense of really taking up President Biden's tech accountability agenda, those pillars, and thinking of them as the work of a US AI Safety Institute.

So, I'm incredibly encouraged by the Safety Institutes, by the creation of new institutions for this very important work, but I think it matters who's in office, to your point, for how these will be built out. You could imagine a new Labour government in the UK using some of the technical firepower of a Safety Institute to think about labour and employment issues as part of the remit of the work, for example. So, it'll be interesting to see how it evolves.

Conan D'Arcy:

Yeah, I think that's totally right. You've had this slightly strange segmentation in the UK: the AI Safety Institute has been getting a lot of attention from very senior people within the UK government but is focused, as you say, on a certain set of issues around safety, particularly existential risk. And then you have the more, maybe that's not the right way of phrasing it, day-to-day or near-term harms of AI that we are already seeing, which are caught up in the UK's AI white paper, which is a little bit forgotten and, although the process is moving forward, certainly doesn't seem to have the political buy-in and momentum that the work of the Safety Summit does. And you're right, I think the Labour party would certainly pay a lot more attention to issues around employment and labour rights with regards to AI, but issues around discrimination and bias will probably also resonate with them more than with the current government.

But Alondra, I'm keen to get your views on what's happening in Congress as well as within the administration. You made reference earlier to the fact that it's pretty hard to get tech legislation passed at a federal level; there's lots going on at the state level, obviously, but at a federal level it's pretty difficult. That said, there's been a lot going on: a flurry of bills introduced, the AI Insight Forum, the bipartisan AI task force. But no legislation has yet passed and it seems a little bit stuck, perhaps, and I'd be interested to get your view on whether that's correct. You've also written your own views on the right way to regulate AI, and you talk about the need for an agile AI governance system.

So, I wondered whether you might give your views on where you see these various initiatives within Congress, whether you think there's much prospect of anything getting passed before the elections later this year, and whether the models being proposed resonate with the blueprint you set out in that piece I referenced.

Alondra Nelson:

Yes. Well, one of the things I learned in my two-years-plus stint in Washington is that everything looks bleak and nothing seems possible and then, all of a sudden, it happens. During my time in the Biden-Harris administration, we passed the Bipartisan Infrastructure Law as well as the CHIPS and Science Act. And for both of those, there were so many fits and starts: you just keep working on it, refining it, having conversations with folks on the Hill. At the Office of Science and Technology Policy, where I was on the staff, we do something called TA, technical analysis or technical assistance, for various pieces of legislation; we are asked to weigh in on their technical rigour. And so, that's just your workaday thing, and you hope that things are going to get passed and then they don't.

So, we had a slightly different bipartisan chessboard before the midterm elections than we have now, so things have gotten a bit harder for passing legislation that the President would be supportive of. And with the current Congress, we are facing a probable historic milestone which may be, given how things are going, the least legislation ever passed in a Congress. So, the prospects are not great, but I think those of us who work in the policy space, who have worked in government, do so because we are eternal optimists, always looking for that pocket of innovation or a way to move forward. As you've said, the public awareness that came with ChatGPT coming onto the market, and, as I think you rightly said, the sometimes tongue-in-cheek but sometimes earnest cries from industry to be regulated, "Please, please regulate us," did get people moving, including legislators.

So, there is indeed the AI Insight Forum that Majority Leader Senator Chuck Schumer put forward, and those have been a series of meetings. I participated in the second one, on innovation; Marc Andreessen was at that one as well, and it was a fascinating conversation. I think it was unfortunate that those conversations were closed to the public, and I still don't quite understand why that was the case in a body that's intended to be the people's house, but that is an editorial aside. We expect the framework to come from that, I would think, in the next few weeks, so there will be something on the table. There have been some initial conversations suggesting it's going to be very innovation-centric and is not actually going to take up issues of rights and safety in the way that many of us would want. There's also the bipartisan AI task force, as you've mentioned; that work is getting underway.

What's been more encouraging in the last month, I would say, is the introduction of a draft bipartisan proposal called the American Privacy Rights Act. Part of what I was trying to suggest in the piece you referenced in Foreign Affairs, entitled The Right Way to Regulate AI, is that we need to think in different ways about AI governance and move away from a model that is about regulating an object. We are going to regulate telephone spectrum, we are going to regulate aeroplanes; AI is not like that, and so we need lots of different ways to approach it. One way is looking at the technology stack that leads to AI tools and systems, which begins with a few foundational things: computational power, access to the cloud, access to semiconductors and GPUs, and data, flows of data.

So, I think what's encouraging about the American Privacy Rights Act, APRA as we're calling it, is that it gets at one of those foundational pillars of what creates the AI ecosystem, and in a way that can be enduring. A federal data privacy law, should we get one, will be important and enduring even as technology ebbs and flows. We can safely say that we are living in a data-centric world and that there will be lots of applications of data, in social media and Generative AI and things we have not even anticipated, so having a foundational principle and law that says people should have a modicum of data privacy, which is indeed also one of the principles of the AI Bill of Rights, is just foundationally important. And you see how useful and agile something like that can be when you think about the introduction of ChatGPT in the fall of 2022 and the ability of an imperfect GDPR, the General Data Protection Regulation, in the EU space to at least allow countries like Italy to say, we're going to put a pause on this in our marketplace until we better understand it.

So, having regulation that's more foundational and not about the object itself, not about regulating ChatGPT or regulating Claude 3, allows you to have a bit more agility and durability when new technologies come online, because we can't anticipate them. I am a social scientist who works in the space of new and emerging technology, and a lot of my work has been about human genetics and human genetics data; a federal data privacy act would apply to that as well. So, you can do kinds of regulation that operate at a level of breadth and abstraction that allow you a lot more pivots in the regulatory space, and to be more agile as the disruptive business model in Silicon Valley and elsewhere does its work of churning things up.

Part of what the disruption model does is look for small regulatory gaps that can be got around. In the case of a business like Uber, in a city like New York, they were saying, "Well, do we need to abide by this tradition of having these expensive taxi medallions in order to drive customers around the city of New York, or can we just do this other thing?" That kind of disruption is much more difficult when you have something like a federal data privacy law; that's a bit more binary. Is your company, business plan or startup violating federal privacy law or not?

And so, these were the kinds of strategies I was trying to get at in my essays in Foreign Affairs, which were much more about: what is the outcome we're trying to accomplish, often anchored in fundamental values of American society; what are the legislative ways we might achieve it; and, in cases where we might never get a legislative window, what else might we do in government to achieve some of these things. I think the AI executive order is a great example of that.

Conan D'Arcy:

So, Alondra, building on that, there's one question I'd love to conclude on. We've skirted around the elections throughout this conversation but, to your point about fundamental rights, in the US and, of course, in other western democracies, to free and fair elections, a big concern, you can barely escape it in AI policy conversations this year, is the impact of AI-generated content on electoral campaigns. Now, this builds on previous concerns we've seen around mis- and disinformation in previous electoral cycles; it also builds on the very visceral debate we've seen in the US about things like deplatforming and who has the right, whether that's platforms or others, to decide what content should and shouldn't be allowed online.

But clearly, that has been supercharged by the growth of Generative AI and the prospect that it can help malign actors quickly produce highly authentic-looking content at scale. And that's the difference, I guess, from five, six, seven years ago, when you had chatbot farms and bot farms that were not particularly sophisticated even if they could pump out content at scale. So, just to wrap up, it would be interesting to get your sense of how grave you think that threat is this year. And although you've been pretty clear it's quite hard to pass federal legislation, in an ideal world, what would you like to see the government do? Would you like to see the US federal government bring in, say, bans on deepfakes during electoral periods? I know similar laws have cropped up in certain states within the US. So, I'm very interested in your views on that as our concluding point.

Alondra Nelson:

Yeah, so I think we know that this is a very consequential election year, and it's consequential because there are a lot of elections taking place, many of them in some of the world's largest democracies, the UK, the EU, Indonesia, India, the United States, and it's our first Generative AI election. So, I think the grave concerns you reference are not unwarranted, but we also just don't know. What we do know is that we've had malign actors interfering in elections in recent years with less sophisticated technology, so it's not a stretch, it's not radical speculation, to say some of these same tools might be used to interfere and intervene in this election. And because Generative AI tools expand both the ease of creating these kinds of interventions and their scale, we can expect that malign actors who want to disrupt elections will use them.

So, it's not unwarranted, it's not being alarmist, to be worried given what we already know. It's also important to realise that part of what we are facing here is an interaction. AI tools and systems make it easy to create voice clones and, in some instances, deepfakes, and I mean easier with DALL-E or one of the other image-generating tools relative to using Adobe, but that content still needs a dissemination vehicle. And so, we are still left with some of the same challenges we have in the social media space and in messaging spaces like WhatsApp and the like. We have new challenges of scale and breadth and speed, and also just a lower threshold of use: you don't have to be a sophisticated graphic designer to create a visual political meme, for example. But we also have a lot of the same old problems that we have not resolved.

And so, I really had hoped there would be a lot more legislative urgency around some of this at the federal level, and I don't think that's happening. What's been encouraging and admirable in the US context is that, more than a year ago, anticipating this, legislators like Yvette Clarke, the representative from New York, and Amy Klobuchar, the senator from Minnesota, introduced bicameral legislation precisely to outlaw deepfakes in elections, and these bills have gone, sadly, nowhere. So, part of the multipart strategy we need for policy innovation around AI is actually looking at the laws we already have. It is the case at the federal level in the United States that the Federal Communications Commission has the power to outlaw robocalls, whether or not they're AI-generated. And this is actually what happened within three weeks of there being a robocall of President Biden's voice in the New Hampshire Democratic presidential primary discouraging voters from going to the polls, which turned out to be the work of an operative on the campaign of Dean Phillips, who was competing against President Biden in that primary.

So, soon after, the FCC outlawed AI-generated robocalls. But we've also got the Federal Election Commission, which has some interesting authorities around elections and is still actively trying to figure out whether it can act in this space. And one thing Congress could do, without even passing legislation, is signal verbally, or through letters to agencies: we think you should use the full spectrum of your authorities in this space and, of course, those authorities cover AI. There's a lot of hand-wringing, as if AI has introduced some whole new system or dynamic that means we need to rethink every law or reimagine the social compact. And I think we really, in the policy space and in the public, need to disabuse people of the sense that AI is such a new category, which it is not, it is fundamentally statistics, hardware and software, that the existing rules don't apply.

So, those are things we might do around disinformation. There are obviously technological measures; there's work afoot on watermarking and digital fingerprinting, but these things can be undone by actors who want to get around them. I've also done some work recently with a journalist, Julia Angwin, in a project we call the AI Democracy Projects, where we tested AI chatbots on just basic election information, with bipartisan election administrators and officials from across the United States as collaborators. One of the things we found stunning and worrisome was that the election officials and AI experts rated more than half of the information the chatbots put out as inaccurate. And so, as much as we face the risks and harms of malign actors explicitly seeking to intervene in elections, we also have degraded, inaccurate or partially accurate information about elections.

In one instance, a chatbot said there was no polling site in a certain zip code. In another, several of the chatbots said you could vote by text message, which is not legal in any state in the United States. So, there's also this death by a thousand cuts of misinformation, of simple inaccuracies, that erodes people's confidence in the information they're getting and could dampen their enthusiasm to get to the polls because they're being told to go to a URL that literally doesn't exist. As much as we worry about the fancier deepfakes and voice clones, that erosion of foundational confidence and shared information, the misinformation generated by text-based Generative AI, is also deeply worrying.

Conan D'Arcy:

Well, I think, Alondra, in this area, even if not in others, there is a parallel between the UK and the US: despite the UK having just passed the Online Safety Act, it isn't entirely clear exactly how AI-generated misinformation might be tackled during the course of an electoral campaign, which, as we said, is probably going to happen this year, though the date isn't set. And we have similar challenges here around where the power to regulate this lies. The Electoral Commission here doesn't really have much power around that; some of it may end up resting with Ofcom, but there are questions about exactly how that legislation will apply to AI-generated content, and there certainly isn't a huge amount within that act, beyond setting up a committee, focused on misinformation. So, it's less clear than, say, the EU, which has the Digital Services Act, with very clear rules and obligations that platforms will have to follow to protect electoral integrity; we've already seen enforcement action from the European Commission targeted at X, formerly Twitter, though I'm not sure that was necessarily AI-focused so much as more broadly about electoral content.

So, thank you very much for sharing your views and your experience from having worked in Washington and in AI and social science policy circles for several years, and for bouncing from the legislature to the executive, to what's going on in the OMB, all the way through to what's happening with the elections. It's been hugely illuminating, and I'm sure everyone who's listened to today's episode will have learned a great deal. So, thank you very much for joining me today.

If you want to get more detail about some of these issues or to take a look at more of the work that Global Counsel and other colleagues in our US office are doing in and around the issues that I've talked about with Alondra today, just take a look in the podcast notes or indeed go on Global Counsel's website at www.global-counsel.com. Thanks very much for joining us today, bye-bye.

Alondra Nelson:

Thanks, Conan.

 

    The views expressed in this podcast can be attributed to the named author(s) only.