The EU's Artificial Intelligence Act came into force on August 1, 2024. It is the most comprehensive legislation on AI to date. What does it entail, who is affected, and how do you comply? Meeri Haataja, CEO and Co-Founder of Saidot, is one of Europe's leading experts on AI governance. This episode discusses policy, innovation, social impact, and human interest stories of one of humanity's most disruptive modern technologies.
Deep Pockets
Episode 5, Season 4
Meeri Haataja
EU’s AI Act
Petra: Welcome to Deep Pockets with Petra Söderling, the show about governments and innovation. With each episode we bring you a person and a topic that is part of this larger concept of how countries and regions can create economic advantage by investing in innovation. We're now in season 4.
Can you believe it? I call this season the random rendezvous. After organizing, scripting, interviewing, editing and marketing 27 episodes, I wanted to give myself a little slack. This season I will invite interesting people I meet online, at events, or through work. It will be an open mic approach: no scripting, no theme, just me and the guest talking about whatever we feel like, for as long as we feel like. Our theme song is by New Orleans jazz icon Leroy Jones.
I hope you enjoy this and other episodes. The European Union's Artificial Intelligence Act came into force on August 1st, 2024, after years of preparations, negotiations, and industry and interest group lobbying. What is this all about? I invited one of Europe's top experts on AI governance, and I'm going to let her explain what that term means, tell us why this act is needed, and how it impacts Europeans and also people and companies from other countries. Meeri Haataja is the CEO and co-founder of Saidot, that's S-A-I-D-O-T, a company providing a leading enterprise SaaS platform and service for AI governance and transparency. She helps enterprises develop and operationalize responsible AI strategies for positive economic and social impact. Welcome to Deep Pockets, Meeri.

Meeri: Great to be here. Thank you for inviting me.

Petra: Let's get warmed up first by talking about you. How did you become interested in technology in the first place, and how did you end up founding one of Europe's leading AI companies?
Meeri: Yeah, I think I got interested in technology for the first time when I got access to the internet during my high school years. It was IRC, Internet Relay Chat, which was the first social media, so I got really engaged with that and found new friends, even my husband, via it. That's probably my first contact with technology and the internet. I studied at the School of Economics in Finland, which is now part of Aalto University, and I majored in Quantitative Methods for Business, so I got interested in math and statistics, and that led me into analytics and data-related work when I entered working life after my studies. So I partially accidentally ended up in data-related work, but basically I've spent my whole professional career on data and analytics, and later on AI. Then, something like seven years ago, I started to engage more in compliance, regulation, and privacy-related matters during my work in the finance sector, and during those years my eyes opened to the impact of the technology that I and my colleagues were building. I figured that we were not necessarily seeing broadly enough the impact we were making in our society and in enterprises. So I got really interested in AI's impact, and also in how we can change the way we work so that we take responsibility and accountability for the wider impacts we are making with AI. That eventually led to founding Saidot in 2018.
Petra: Okay, 2018. That's really fascinating. I ask this question of all my guests, and every time the career path is different; it's so fascinating how people end up where they are. Also, ad break: if you're interested in this story, I interviewed Meeri in my book Government and Innovation, in the chapter about Finland and AI, because you also worked on Finland's national AI strategy. But the reason I invited you to Deep Pockets today is the European Union's Artificial Intelligence Act, which finally came into force just over a week ago. So, can you tell us what the act is, and what it does for European companies trying to develop or implement AI technologies?
Meeri: Yes, absolutely. It's a very exciting time, really, because this process didn't start last year or very recently. The first version of the regulation was put out for comments back in 2021, and even before that there were several years of preparation.

So it's been a really long process. The AI Act is the world's most comprehensive regulation specifically on AI. It's an EU regulation that applies to all EU member states in the same way, and it applies to organizations putting AI systems on the market in the EU region.
And as for the purpose of the regulation, there are actually a lot of purposes. It's targeted at improving the functioning of the internal AI market in the EU, promoting trustworthy or human-centric AI, and really protecting citizens and users against the risks to health, safety, and fundamental rights that concern the use of AI. But the Commission also has a target of supporting innovation with this regulation, by setting clear rules within which companies can operate. So there are really broad targets with the regulation, but importantly, it is there to protect people against AI-related risks.
As for what kind of regulation it is, it's really a risk-based regulation. It brings forward a risk categorization of AI systems and sets requirements for the organizations building or deploying those systems according to each system's level of risk. Some practices are completely prohibited from the European market. Then there is a set of AI systems that are considered high risk, and those face quite significant governance-related requirements to govern or mitigate the potential risks associated with them. And then there are limited-risk AI systems, and some general-purpose AI models, which get different kinds of requirements. So it's a really risk-based approach.

Petra: Can you give us examples? What's totally forbidden, and what is limited?
Meeri: Yeah, in the prohibited category there are, for example, social scoring types of systems. So in the EU, social scoring is basically not allowed.
Petra: Like this Black Mirror episode?
Meeri: Yes, yes. So that's completely prohibited in the EU. The most attention has been going to the high-risk systems, because those are the systems facing the biggest set of requirements from the AI Act. There are roughly two categories of these. The first is AI systems that already face safety-related regulation in the EU, for example AI systems that are medical devices. These are considered high risk and also face requirements from the EU AI Act. So safety-regulated AI systems are one category. And then there are what they call standalone high-risk AI systems. These are, for example, AI systems used in making recruiting-related decisions about people: basically, systems that make influential decisions about people's lives.
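The risk tiers Meeri walks through can be sketched as a simple lookup table. This is only an illustrative sketch, not a legal classification tool: the tier names follow the Act, the prohibited and high-risk examples come from the conversation, and the limited- and minimal-risk examples (chatbots, spam filters) are commonly cited illustrations rather than anything discussed here.

```python
# Illustrative sketch of the EU AI Act's risk-based tiers, as described above.
# Not legal advice: real classification depends on the Act's articles and annexes.

RISK_TIERS = {
    "prohibited": {
        "examples": ["social scoring systems"],
        "obligation": "may not be placed on the EU market",
    },
    "high_risk": {
        "examples": ["AI in medical devices", "recruiting decision systems"],
        "obligation": "significant governance requirements: risk management, documentation, oversight",
    },
    "limited_risk": {
        # Commonly cited example; not from the conversation.
        "examples": ["chatbots"],
        "obligation": "transparency duties, e.g. disclosing that the user is interacting with AI",
    },
    "minimal_risk": {
        # Commonly cited example; not from the conversation.
        "examples": ["spam filters"],
        "obligation": "no new obligations",
    },
}


def obligation_for(tier: str) -> str:
    """Return the headline obligation attached to a given risk tier."""
    return RISK_TIERS[tier]["obligation"]


print(obligation_for("prohibited"))  # prints: may not be placed on the EU market
```

The point of the tiering is exactly what Meeri describes: the heavier the potential impact on health, safety, or fundamental rights, the heavier the obligations on whoever builds or deploys the system.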
Petra: Yeah, we've seen bad examples of that. We've mentioned a few terms, so let's clarify those for our listeners, and for me, please. For example, responsible AI, and then you talk about AI governance. Can you give us a little bit of background on these dedicated terms?
Meeri: Yeah, this is really interesting. There has been a lot of iteration, and of course everyone uses these terms from their own perspective. But by responsible AI, we typically refer to AI that is ethical, technically robust or safe, and also lawful. In Europe, we often use trustworthy AI as a synonym for responsible AI. In the US, I think responsible AI, or RAI, is the term that is used. Then, on the other hand, by AI governance we typically refer to the formal processes and practices that help enterprises ensure that they develop and deploy AI in a responsible manner. So that's the operational level: how do you ensure that you operate responsibly in the context of AI?
Petra: Okay, so if responsible AI includes things like ethical AI, then how do you define ethical AI? What does that mean?
Meeri: Yeah, it's actually very often used as a synonym for responsible AI. But I think more precisely, when talking about ethical AI, we are focusing on ensuring that AI aligns with the ethical norms of the stakeholders: both the operators who are bringing it to the market and the stakeholders who are influenced by these systems. So it's really about sticking to the ethical norms of the context where the system is operating. But as I said, these terms are very often used as synonyms for responsible AI.
Petra: Okay, thanks. I think we're all on the same page. So, many of the big tech companies creating these AI tools are actually American companies. How does this EU legislation affect them? And does the US, by the way, have a similar law? And another question: some fear that Europe is once again regulating itself out of business. Go ahead, answer that please.
Meeri: Yeah, so the EU AI Act impacts everyone who is bringing AI systems to the European market, so putting their AI-based products on the market or into service in the EU. So it's definitely very relevant for US-based companies who are building AI-based products or AI models and taking them to the EU market.

From that perspective, what we are seeing is that it's an interesting regulation not only for companies operating in the EU, but generally for companies operating in a global marketplace. It provides the clearest regulative standard now on what is coming, and a lot of different countries are working on AI regulation. There is also a lot going on in the US in terms of AI regulation. There is no similar federal-level regulation in the US, but there is definitely a lot of activity at the state level. For example, in Colorado there is the Colorado Artificial Intelligence Act, and in New York City there is the automated employment decision tool law. So there is a lot going on at different levels in the US as well. And it's not only Europe and the US; we see similar kinds of actions in several Asian countries too.
Petra: It's similar to previous privacy laws, where Europe was probably stricter, or leading; I don't know. But you mentioned earlier that the EU's AI Act is also meant to enable innovation, yet some fear that Europe is once again regulating itself out of business, because American companies just do what they want. They create de facto standards simply by building and shipping their products, and we hear the saying that America innovates and Europe regulates. What's your feeling about that?
Meeri: Yeah, I totally recognize that saying, and there are definitely grounds for it. However, when we look at the AI market at the moment, there have been studies on, for example, how well companies have been able to leverage and realize the opportunities from generative AI. And practically, there's still a lot to do in that area. Generally, business leaders are not satisfied with how well companies have been able to leverage the opportunity that is now on the table. And when asked why they haven't yet been able to realize those opportunities or benefits from generative AI, one of the top reasons is a lack of clarity around AI safety and regulations, and around how to use this technology in a responsible manner.
This matches my experience when we discuss with a lot of companies. Practically, it is this lack of clarity about how to control the risks related to these models and AI systems that is already limiting or slowing down how companies are able to leverage and capture value from these technologies. I see that we are at a moment when any clarity is helpful for companies: finding a safe space in which to operate, knowing what that looks like, and knowing where we shouldn't engage because the risks are too high. So there are always both sides, and sometimes regulative clarity can actually be a driver of innovation and of being able to move on.

Petra: That does make a lot of sense. When you're the CEO, you want to be able to make longer-term strategies and to know and trust the regulatory environment. Okay, let's go into your company, Saidot. Tell us what it is that you do.
Meeri: Yeah, so we are really focused on helping enterprises who are building, developing, or using AI-based systems to operationalize high-quality AI governance in their work. There are a lot of principles, good practices, standards, and these regulations, which give a lot of guidance, but the question is what it all means operationally in the everyday work of the AI teams building these systems, or the business teams taking them into use.

So Saidot is really focused on helping these teams put it into practice. Practically, we do this by providing a SaaS platform for AI governance. Using our product, our customers build their AI inventories: one place where you can actually see, know, and follow what is going on with AI in your company, whether those are your own systems or third-party systems. Customers use it for doing AI risk management across their different AI systems, to understand and meet regulatory requirements, and then for collaboration. AI governance is always a collaborative effort between technical specialists, legal and compliance specialists, business users and owners, and other stakeholders. So it's really a place where these different stakeholders can come together and do good governance effectively.
Petra: Okay that's great and where can people find this tool?
Meeri: Saidot.ai is the easiest, and then also LinkedIn, of course.
Petra: Okay. I tend to get a little bit philosophical towards the end of these episodes. So how do you see the future of AI developing? What are the biggest technical, social, or regulatory challenges? And I wanted to go back to a book that I read in 2017 when it first came out, Life 3.0 by Max Tegmark. In 2017 he was laying out different scenarios of how it could play out, but that was seven years ago. So how do you see it: what are the biggest challenges, and where is AI headed?
Meeri: This is a really challenging question, because the pace is so fast that it's hard to predict. I've been saying that it's hard to predict even what will happen one year from now.

So looking even further ahead is really hard. No one really knows, but it's so interesting, and we will learn as we go. I'm really interested in AI agents: how we connect AI to automated workflows, and how AI-powered agents start to interact with other AI-powered agents and AI systems to conduct more processes and transactions independently. This is definitely going to be a huge driver of efficiency in a lot of different processes, but the whole interconnectedness and autonomy of these systems also makes it very interesting, and probably very challenging, from a governance perspective as we continue on this track.
So that's something pretty practical right now. Then there are topics that will definitely require a lot of thinking from policymakers around the world, many things that even the AI Act isn't really fully addressing. One of the most worrisome and difficult themes is the influence of AI on the creation and spread of disinformation and misinformation: the whole concept of truth, what is true and what is not, and how that changes our societies. I think it's a really difficult question and challenge, and it can be extremely influential in how our societies work and how our kids grow up; there are very interesting social impacts there. Also, the whole question of how AI impacts work is something we talk about way too little. There have been some very interesting examples. Generally, I really want to encourage companies to share publicly how they have used AI and how it has influenced their work.
Petra: Also positively, because there's so much fear mongering.

Meeri: Yeah, exactly, so that all of us together can understand more about the impact. So I think this sharing is really important, and I really want to encourage it. I think everyone should expect, from their own work's perspective, that AI will radically change the way our work is done. So it's really important, even though it's difficult to forecast exactly how that happens, that you recognize it and take action on keeping up with the development, and also on taking AI into use. The whole question of liabilities and redress, I think, is also one topic we still discuss relatively little. We try to proactively avoid risks or harms from happening, and that is of course very important, but the fact is that there will be harms created by AI or by AI-driven systems. So we should start talking more about liability- and redress-related issues: how do we fix a situation where the harm has already happened? So I'm hoping for more discussion about that. Yeah, a few things, just to name some of those topics.
Petra: Okay, the last question is about you. We're recording this in August, it's summer, so I want to ask you: what is the most interesting thing in your life at the moment? What makes you get up in the morning? What makes you smile?
Meeri: There are so many things, it's really hard to choose, but maybe at the top of the list I need to say the Saidot team. We're 20 people right now, incredibly talented, driven people, so I really enjoy every second with the team. But also, in this market situation: it's been a long way, and we've been expecting these regulations to take form, and now, of course, for us as a company there is an extremely interesting couple of years ahead, when companies start to take action on AI governance and really operationalize it. So really, working with all of those customers who want to find ways of doing good governance effectively, and having the privilege of supporting them on this journey, gives me a lot of reasons to smile.
Petra: This has been Meeri Haataja, CEO of Saidot.ai. Thank you for visiting Deep Pockets.

Meeri: Thanks for having me, it was fun.

Petra: You've listened to Deep Pockets with Petra Söderling. To subscribe to content, please go to PetraSöderling.com. The wonderful music you heard is by Leroy Jones, an iconic New Orleans Jazz Hall of Fame trumpeter.
You can find this and other Leroy Jones tunes at your favorite online or offline music store. Thanks for listening and be sure to subscribe, like, rate and share our episodes. It means a lot to me and to my guests. Thank you.