Weird AI

AI & Regulation

Episode Summary

Should AI be exempt from legal scrutiny? In this episode, we talk about the regulatory frameworks in the USA & EU regarding artificial intelligence, which technologies are banned, and what the future will look like from a legal perspective.

Episode Notes

Should AI be exempt from legal scrutiny? In this episode, we talk about the regulatory frameworks in the USA & EU regarding artificial intelligence, which technologies are banned, and what the future will look like from a legal perspective.

References:

EU AI Regulations: https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682
USA AI Regulations: https://www.bakermckenzie.com/-/media/files/people/chae-yoon/rail-us-ai-regulation-guide.pdf

Questions?

Adrian Spataru LinkedIn: https://www.linkedin.com/in/spataru/
Bohdan Andrusyak LinkedIn: https://www.linkedin.com/in/bandrusyak/

 

Episode Transcription

---- | Adrian | 00:00:11 ----

Welcome to the Weird AI podcast. We are your hosts: Adrian Spataru.


 

---- | Bohdan | 00:00:18 ----

And Bohdan Andrusyak.

 

---- | Adrian | 00:00:20 ----

And in this episode, we're gonna talk about AI. 


 

---- | Bohdan | 00:00:22 ----

And regulations.


 

So, Adrian. First of all, I want to congratulate you on our 10th episode of this podcast. Yay! And I think it's a good time to talk more about the ethics of AI and about how AI is regulated in the EU and in the USA.

 


 

---- | Adrian | 00:00:52 ----

So I think we can maybe talk about the EU first. On the 21st of April, the EU announced a new regulatory framework on AI. Let's break it down, Bohdan: what does it really mean? What is this whole thing?


 

---- | Bohdan | 00:01:09 ----

So, first of all, the best part of this proposal, and its biggest advantage, is that it creates risk levels for AI, ranging from unacceptable risk down to minimal risk. And I was really glad to read this, because the EU stated that it's unacceptable for member countries to create any type of social scoring.

 


 

---- | Adrian | 00:01:40 ----

Yes, and that's one of the unacceptable risks, which are banned outright. So, I'm also very happy with that.


 

---- | Bohdan | 00:01:47 ----

So, basically, any artificial intelligence system that can affect the lives and livelihoods of people will be banned immediately.


 

---- | Adrian | 00:02:00 ----

What I also like is more regulatory control over high-risk AI systems, and the EU defines them under certain categories.

These are things like critical infrastructure, for example transport, where AI could endanger a person's life or health. This would be, for example, self-driving cars, right? The EU wants a very strict regulatory framework for that. But it's not only that: areas like law enforcement, the administration of justice and democratic processes will also have strict obligations regarding risk assessment, logging of data, detailed documentation and so on.

I'm very happy about it, because it's not like, "Okay, I have a self-driving car, we don't supervise it in any way, you're free to do whatever." No, it's going to be strictly regulated.


 

---- | Bohdan | 00:02:56 ----

Also very interesting: the document provided together with this proposal answers additional questions. They clearly state that even if an AI system has an accuracy of 99.9%, that remaining 0.1% can still be thousands of people, especially in the context of migration and asylum seeking. It can affect the livelihoods of thousands and thousands of people, even if it's just 0.1%.
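To put that 0.1% into perspective, here is a back-of-the-envelope calculation; the applicant volume below is an assumed illustration, not a figure from the EU document:

```python
# Back-of-the-envelope: error rate vs. people affected.
# The 99.9% accuracy figure comes from the EU Q&A discussed above;
# the applicant count is an assumed example value.
accuracy = 0.999
applicants = 2_000_000  # hypothetical yearly case volume

misclassified = applicants * (1 - accuracy)
print(f"{misclassified:,.0f} people affected")  # 2,000 people
```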

 


 

---- | Adrian | 00:03:28 ----

So again, why do we even need to regulate Artificial Intelligence?


 

---- | Bohdan | 00:03:33 ----

I think it's for the same reason we regulate any other technology. Like, why do we have seatbelts in cars? Because it's for our safety and for the safety of all the people around us.


 

---- | Adrian | 00:03:51 ----

And even though most AI systems pose a low risk to society, those AI technologies which carry a high risk can do disastrous damage to society as a whole. To avoid this, we really need to implement these frameworks. So, we talked about the high-risk and the unacceptable categories. Just to be clear, anything related to biometric identification is also considered high risk by the European Union.

So, it seems like everything related to privacy falls under those high-risk rules whenever an AI application is applied to that data.


 

---- | Bohdan | 00:04:35 ----

And high risk doesn't mean that it will be banned or something; it just means that the companies that want to use an application in one of these spheres will have to go through, you know, harder controls. They will have to show that they don't have bias in their data sets and, basically, make their solutions as transparent as possible. But they also stated that, for purposes of security and policing, authorities would have access to face recognition and, basically, biometric identification. So, for those applications, they said it will be possible, but not en masse. It would be case by case, where they need to get permission from, I guess, a court or other…


 

---- | Adrian | 00:05:33 ----

Legal mechanisms which give you the authorisation for that sort of thing.

---- | Bohdan | 00:05:38 ----

So, it won't be like passive surveillance that just happens by default.


 

---- | Adrian | 00:05:43 ----

So, one example would be cameras in public. By default there would be no cameras anywhere in the city. However, for big public spaces, agencies can apply: "Hey, I want a camera here for the safety of the people in that specific area."

So, that law will be applied on a very limited basis.


 

---- | Bohdan | 00:06:05 ----

And I also think the EU will be able to enforce such policies, because, for example, the GDPR data protection regulation was effectively enforced, and it pushed all companies to first check their security and to stop sharing data with third parties. And I think it benefited consumers in the end.


 

---- | Adrian | 00:06:33 ----

And they're also using a similar system when it comes to penalising infringements of these laws. So, if for whatever reason the EU concludes that you didn't respect the obligations required for your high-risk application, then the EU can levy fines of up to €30 million or 6% of your total worldwide annual turnover for the preceding financial year, and whichever is higher is what you're gonna pay.

So, it's a very similar penalty system here as in GDPR.
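As a rough sketch of that "whichever is higher" rule, using the €30 million and 6% figures from the proposal (the turnover values are made-up examples):

```python
# Sketch of the penalty ceiling: €30 million or 6% of worldwide
# annual turnover, whichever is higher. Turnover inputs are hypothetical.
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(30_000_000.0, 0.06 * annual_turnover_eur)

print(f"€{max_fine_eur(2_000_000_000):,.0f}")  # €120,000,000 for €2B turnover
print(f"€{max_fine_eur(100_000_000):,.0f}")    # €30,000,000 floor applies
```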

---- | Bohdan | 00:07:12 ----

Exactly. And this annual percentage is so well designed that big companies cannot just ignore it and be like, "Okay, we will pay like 100,000 euros and just continue misusing our systems." It means that if they misuse AI, they will pay accordingly.

But the EU would not just put these regulations in place without thinking about how to support the people doing AI. The EU doesn't want to hinder the development of artificial intelligence, machine learning and other technologies.

So, together with the proposal, they made suggestions on how to create enough resources to make sure that there will be people who can help deploy AI systems. They put budget towards research and PhDs for people who work in AI. And, yeah, they allocate resources to develop the workforce and skill set, to ensure that there are enough people who can maintain and audit new and complicated AI systems.


 

---- | Adrian | 00:08:28 ----

Well, what will also make it easier for companies to follow these policies is technical standards. And this is not only happening in the EU; this is actually happening at the global level. So, you might have heard about these ISO standards.

90%, I think, of global trade works through these standards. So, if you trade something from the EU to the US, or from the US to China, or whatever other country, and your products respect the ISO norms, then you can trade. This also ensures consistency, that everybody is following the same rules, and it makes it even easier to trade technologies. Because you don't want to respect only the local laws, okay, safe for the EU; you want to respect all the international regulatory rules, whatever they are, depending on the country. So, the standards allow you to build a technology which respects all of those. For example, there's a committee at the International Organization for Standardization where a lot of the member bodies sit, even Austria with its local standards body here.

And they've already published seven ISO standards, and the first one was regarding terminology and vocabulary, which doesn't seem like a big deal. But this is important, especially when we talk about AI.

Like, is it this thing which thinks by itself, like Sophia? No, it's not. There's an actual formal definition of every component of AI, which allows companies to clarify exactly what they want from each other and what their products are doing.

How can I, as a company, comply with your laws if there's no standard for how to respect them? Right? This is thankfully also moving along, and it's maybe more relevant to bigger companies, of course. But with these standards, we have a way to not only export and import AI technology, but to actually follow the laws.


 

---- | Bohdan | 00:10:34 ----

And now, moving to US regulation of AI. For me it was a little bit complicated. In some ways it's very similar to the EU, because you have, like, federal laws that apply to all the states, and then you have, like, state laws. And there's already a bunch of laws accepted and in place, and there are also a lot of proposals that should be passed in the next years, or are still in the works.

But, similar to the EU, they are also sorting out which applications should be allowed or not. So, when I was reading about US regulations, there were a lot of similarities between what the EU is doing and what the US is doing in terms of what is acceptable and what is not. Now, in the US there are no such risk levels, but they are also very strict on biometric identification.

Facial identification, yeah. And they're more lenient on smaller applications. But what was very interesting for me is that they have regulations about bots. I think it's because of their experience with elections. They said that when somebody is chatting and it's chatting with a bot, the bot needs to state that it's not a real person talking. So, if an application uses a chatbot without stating that it's a chatbot, it's illegal.
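A minimal sketch of what that disclosure duty could look like in code; the function names and the wording are hypothetical illustrations, not requirements quoted from any statute:

```python
# Hypothetical chatbot that discloses it is automated before replying,
# in the spirit of bot-disclosure rules such as California's B.O.T. Act.
DISCLOSURE = "Note: I'm an automated assistant, not a human."

def handle(message: str) -> str:
    # Placeholder for the bot's real response logic.
    return f"You said: {message}"

def reply(user_message: str, first_turn: bool) -> list[str]:
    parts = [DISCLOSURE] if first_turn else []  # disclose up front
    parts.append(handle(user_message))
    return parts

print(reply("Who am I talking to?", first_turn=True))
```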

 


 

---- | Adrian | 00:12:26 ----

I think, overall, there will be some consensus between both places and the EU regulation, as we saw with GDPR.

There are some states which have an equivalent, like the California Consumer Privacy Act, which is very similar to what we have here in Europe. So, I think it will kind of converge, especially since with the ISO standards the US, Europe and all the other agencies define the standards together. I think the regulatory frameworks will end up very similar.


 

---- | Bohdan | 00:12:59 ----

But from reading about US regulations, I fear that they're written in a way where a company just needs to notify you that they're using, like, facial recognition or whatever. And that can mean that you just have a licence agreement that nobody reads, and you click "Yes, I agree".

And again, your data is used in the same way it was before the regulation, because the company just added a paragraph to comply with it.


 

---- | Adrian | 00:13:32 ----

Yes, it's basically bypassed: "Oh yeah, on paper it looks nice." But in the actual enforcement of this law, oh, there's a simple loophole, and therefore companies don't need to respect it. Yeah.


 

---- | Bohdan | 00:13:44 ----

And that's why I really like that the EU stated that social scoring will be banned, and didn't dance around it.


 

---- | Adrian | 00:13:57 ----

That said, it's also quite tricky: how do you define social scoring? Because if you think about credit cards in the US, you have a credit score that can be considered a proxy for a social score.

Basically, these financial companies have created credit scores and other kinds of scores to measure the performance, or the trustworthiness, of a person. So what is a social score?


 

---- | Bohdan | 00:14:23 ----

So, I think here it means it cannot be an AI-created social score. Of course, we have credit scores and stuff, but they are not created by AI; they are created by formulas that somebody invented and wrote down. And they're clearly explainable, based on some known factors.
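To illustrate the distinction, a hand-written score where every factor and weight is visible might look like the sketch below; the factors and weights are invented for illustration, not a real credit formula:

```python
# Hypothetical hand-written score: fully explainable because every
# factor and weight is written down, unlike an opaque learned model.
def simple_score(on_time_payments: int, missed_payments: int,
                 years_of_history: float) -> float:
    # Invented weights, purely illustrative.
    return (10 * on_time_payments
            - 50 * missed_payments
            + 5 * years_of_history)

print(simple_score(on_time_payments=24, missed_payments=1,
                   years_of_history=3.0))  # 205.0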


 

---- | Adrian | 00:14:49 ----

So, explainable AI versus non-explainable AI. Although you could argue, "Well, the current method is also AI."


 

---- | Bohdan | 00:14:56 ----

But I think what they mean by social scoring is the extremes, where your actions are evaluated and put together into one score, like what exists now in China: if you help in the neighbourhood, you get a plus on your score, and if you don't help, you get a minus, stuff like that.


 

---- | Adrian | 00:15:22 ----

I like that in the US, and also in Europe, they're pushing this algorithmic bias enforcement. So, not enforcing bias, but verifying whether there's bias in the algorithm.

More specifically, you want algorithms which explain themselves, in order to avoid these unintended biases when they're predicting. In our past episodes we've talked about so many examples of algorithmic biases, sometimes in funny ways and sometimes more serious. And I think the way these companies can keep using AI would then be to use explainable AI, so the system can justify whatever it is doing.


 

---- | Bohdan | 00:16:10 ----

And I also think a new business will emerge, because all of the certification and controlling will depend on independent auditors. So I think there will be a huge market for auditor companies that can check data sets for bias, check transparency, and whatever else the EU or US would require from companies to get to the market.
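One concrete check such an auditor might run is a demographic parity gap on a model's decisions. A minimal sketch, assuming a simple list of (group, decision) pairs; the 10% threshold is our own assumption, not anything from the EU or US texts:

```python
# Hypothetical audit: demographic parity difference across groups.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", round(gap, 2))  # flag the model if gap > 0.10 (assumed)
```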


 

---- | Adrian | 00:16:39 ----

And this is actually happening right now. There are already seven standards defined by ISO. Not all of them really need to be enforced; take vocabulary, for example.

That's something we can all just agree on; we don't need to check or audit it. But regarding fairness and the explainability of algorithms, those norms are being defined.

You're going to have something like in the automobile industry, where we have regulatory audits and they give you a certification for that norm. This is going to happen, and it's already starting: there are companies trying to be external auditors, enforcing or giving certificates for these AI standards.


 

---- | Bohdan | 00:17:24 ----

We should become auditors ourselves, but we just certify weird applications. Is it weird enough?


 

---- | Adrian | 00:17:33 ----

Are you weird? Yes. Okay, good.


 

---- | Bohdan | 00:17:38 ----

I think, for the end, I have a very interesting question. The EU report gives this example of a danger:

AI toys using voice assistance that encourage dangerous behaviour in minors.

And I couldn't find examples of such toys that use voice to tell kids, "Do something dangerous." Have you encountered such things?


 

---- | Adrian | 00:18:06 ----

No, but they must have. It's so specific that they might have had a particular case in mind.

Who knows? I would think maybe Alexa; you could consider that a toy you can roll around with.

 

---- | Bohdan | 00:18:25 ----

Yeah, exactly. It's quite interesting to see what they were thinking about, because everything else is general, but this one is specific and serious: toys using voice.


---- | Adrian | 00:18:41 ----

But if you guys have an idea of a toy which influences children like that, then write to us on LinkedIn or Twitter; check the links in the description. And we wish you a great day.

 

---- | Bohdan | 00:18:58 ----

We will hear you and see you at our next episode.