In this episode of Deep Pockets, Petra sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the delicate balance between trust, power, and technology. Chris unpacks his research on the risks and benefits of trusting relationships, and the moral obligations that come with endorsing others.
The conversation ranges from the global “grievance” mood captured by the Edelman Trust Barometer to the complexities of AI governance and responsible tech. Chris shares why organizational culture is key to ethical AI, how incentives shape behavior, and why trust isn’t always the goal; sometimes warranted distrust is healthier. It’s a thoughtful deep dive into ethics, responsibility, and hope for a more accountable digital future.
Find Chris on LinkedIn https://www.linkedin.com/in/chris-mcclean/
Find the Edelman Trust Barometer 2025 https://www.edelman.com/trust/2025/trust-barometer
My guest today, Chris McClean, is a digital ethicist, responsible AI lead, and risk specialist currently working at Avanade, an IT services [00:01:00] and consulting company. As the global lead for digital ethics at Avanade, Chris leads their AI governance team and responsible AI policy implementation. He also leads Avanade's responsible AI and digital ethics advisory practice.
This includes delivering training and workshops, conducting system and program assessments, and guiding responsible tech program design. I have invited Chris here to Deep Pockets because I was told his PhD work is looking at, quote, the risks and benefits of trusting relationships and the moral obligations we carry when placing our trust in parties in a position of power.
So I thought this is extremely interesting and I wanted to have Chris here. Uh, Chris is also, uh, something called a corporate citizenship champion, which I'm sure he will tell us what that [00:02:00] means. Mm-hmm. And he's a former industry analyst and research director at Forrester. Welcome to Deep Pockets, Chris.
Chris: Thank you, Petra. It's great to be here.
Petra: Okay, so let me hand the mic right back to you to properly introduce yourself. Who is Chris McClean and why should we trust him?
Chris: Uh, sure. Yeah. Well, first of all, thanks for the, the time and, uh, really great to be here. Um, so my background is actually, I, I got started in, in marketing of all things about 25 years ago.
Uh, that was my undergrad degree, and I spent about five years, uh, in marketing and, and market research. Really at that time, focused almost entirely by accident on, uh, technology, and specifically technology related to security, privacy, and risk management. And I very quickly realized that the part of that that I really liked was the, the market research.
You know, understanding big trends, you know, what's going on in the world, how is that affecting business and society, and just how we, we live and treat each other. [00:03:00] So I spent a lot of time around, uh, information security and privacy, but very quickly moved into market research around enterprise risk management, corporate compliance.
I did a decent amount around, uh, corporate social responsibility and sustainability. Really just looking at how, again, how companies behave and how we behave within these big organizations. And along the way, I got a master's in business ethics. Uh, basically looking at ways to build, um, compliance or ethics programs within large companies.
Again, spending a lot of time around, um, enterprise risk management as well. And then, uh, just about six years ago now, uh, I started with Avanade, and now I focus my entire time on, on ethics, you know, specifically, uh, around technology, uh, digital technologies. And, you know, in that role, as you mentioned, I, I spend a, a good deal of time internally on our policies and practices, you know, helping with things like training and awareness.
Basically, how do we make sure that the things that we are building as a company [00:04:00] are as ethically sound and responsible as possible? And then a lot of my job is helping our clients build their own programs around responsible tech. You know, things like AI governance and responsible AI specifically over the last three years.
And then as you mentioned, I started a PhD program, um, just about four years in now, uh, with the University of Leeds, um, IDEA Centre. And my focus for the dissertation is really how we go about trusting: you know, what does it mean to trust another person, another party, a system, or an institution?
You know, what are the kind of benefits that we get, but even more so, what are the, the risks that we take? You know, how are we vulnerable when we trust? And very specifically, my dissertation is looking not just at how we are vulnerable when we trust, but how we potentially make other people around us more vulnerable based on the actions that we take, uh, in the, in the name of trust, or, or informed by the trust that we have.
So, yeah, lots going on there. I guess, um, the question of, of why should you trust me? Um, all I can say [00:05:00] is, you know, I try to be open and authentic, and, uh, I, I think I act with integrity and things like that, but I'm not sure I have a tremendous amount to say about why other people should trust me.
That's really up to them, right? All I can do is act with the best qualities that I have, but some people might trust me because they know me very well, because I've been a friend to them and I've come through in a, in a tough situation. Other people might say, oh, this guy has, you know, good ideas, and I think they're helpful ideas that I can implement.
Um, but you should really think about, you know, the amount of power that I have to influence how your life goes. And for most people, that's not very much, right? I don't, I don't have a whole lot of influence over most people's lives, so they can feel free to trust me with small pieces of advice or insight.
But the moment where I start to have power and influence and authority over your life, or how your life goes, that's when you should start to really do some due diligence and understand, you know, I guess get to know me better.
Petra: Yeah. Uh, that's so interesting. I actually, I hadn't thought that by [00:06:00] inviting you to my podcast, I might be making other people vulnerable.
Mm-hmm. Because I trust you. But now I'm putting them in a position that they should also trust you if they trust me. Is that how it works?
Chris: That's right. So you are giving me some sort of endorsement, right? Yeah. You are contributing to the power, the influence, or the authority, or at least the perceived authority that I might have.
Yeah. And other people might, if they trust you, they might take your endorsement to mean something.

Petra: Ooh, now I'm getting nervous.

Chris: I'd better get more trustworthy.
Petra: Yeah. Okay. Um, so you're actually, so I trust you because you're an expert. You're actually also very humble. In preparing, uh, for this interview, I listened to two of your previous talks. One of them, uh, was one that you gave me, but then I also found out that you've been on Bloomberg TV to talk about risk and stuff, and you never even mentioned any of these super high-profile news, um, media outlets where you've also been.
Um, but anyway, so the one that I listened to, uh, [00:07:00] uh, is a podcast episode from, uh, the Ethics Untangled podcast. Shout out to Ethics Untangled, where you discussed the philosophy of trust. And in that interview, you described trust as this widespread force that pervades our social interactions, our friendships, romantic relationships, and how we interact with public and private organizations.
So can you go, uh, into a little bit more detail about this, uh, philosophy of trust?
Chris: Yeah. Of, of course. Happy to. I spend a tremendous amount of time reading and, and writing about the philosophy of trust. So the, the way I take it, uh, I, I'm defining trust as a kind of stance that, that we take, that another party is a good steward of some power.
Uh, and, and I'll actually break that down a little bit because I think, you know, each of those words, uh, has, has a, a bit of important meaning. So when I say stance, uh, you know, this is really kind of like a position or an orientation we take towards somebody or toward a party. You can think of it like a [00:08:00] political stance.
You know, I have a, a stance that this party is a, a good, um, you know, steward of the power that they have. They are, uh, they're legitimate, they're competent, they're trustworthy. They, they act with integrity and things like that. So that stance is kind of a position, but that means it's a position that's open to being interrogated or challenged in some way. And the power is really about the discretionary power of this other party. So if I think about, you know, trusting somebody to watch my house, uh, they have some discretion over how they treat that responsibility, right? They could accidentally leave the door unlocked, or they could, you know, walk out with my television set or something like that.
They have kind of discretionary power within that, that realm in which I trust them. And the, the stewardship of power is basically my understanding of how I expect them to treat that, that power, right? So when we think of stewardship, that's usually in terms of environmental stewardship, or maybe financial stewardship over, like, a company or a financial interest, and the stewardship [00:09:00] implications are really, you know, that this person is going to act responsibly.
They're going to, you know, use this asset or this good, uh, in a way that I deem to be responsible, and they're going to be held, uh, accountable in, in some capacity. So again, if I trust somebody to watch my house, I hope that they would do the right thing. I don't know all the situations they might come across, but I think whatever situation they come across, they're probably going to do the right thing, but I'm gonna hold them accountable at some point to say, I didn't like what you did there.
Maybe I trust you a little less because of that. Or, you did a great job, you know, you, you saved my house from some harm, I'm gonna trust you more. So I have these kinds of decisions that I can make within that, um, holding them to account. So again, the, the point of my thesis is to say, you know, this underlying force that you mentioned is, is important because, you know, we can't do everything in the world that we want to, right?
We are vulnerable in certain ways, so in some cases we have to trust. If we want access to goods, let's say food or healthcare or [00:10:00] education or security, we, we kind of have to trust each other in, in certain ways. But also, if I want to achieve things in the world, if I wanna have great experiences, I might have to trust, you know, an airline pilot or a tour guide, or, you know, uh, a teacher if I want to learn things. So there are ways that you can accomplish more, and in so doing, you are making yourself more vulnerable. And my point is, just like in the way that you trust me and endorse me, when we trust another person or party, we are adding to their power.
We are giving them more power. We're supporting them in some way. And if we are supporting people in power that have power over other people, and they use that irresponsibly, we share some burden, we share some responsibility in how they treat other people. So my, my request, my, my hope is that we are just maybe a little bit more diligent when we place our trust in parties that have, uh, an outsized position of power over others.
Petra: For sure. Yeah. I mean, [00:11:00] just look at social media, how easily people share content without thinking that it's an endorsement.
Chris: And in some cases, that's okay. If it's a, you know, a silly meme, that's okay. But when you share content that's related to whether or not people should get vaccinated or wear masks or vote in certain ways, that has very real consequences. And so we bear a larger burden to do our due diligence and really understand the perspectives and incentives and, you know, maybe the, um, the credentials with which this person is giving this advice or, or taking this point of view.
Petra: Yeah. Let, let's talk about that social trust, uh, mood. Uh, there is something called the Edelman Trust Barometer that I learned about from the, um, uh, Ethics Untangled podcast.
Mm-hmm. Mm-hmm. Uh, it's, uh, I, I think it's like a PR firm that, uh, issues this barometer every year. So I looked at the Edelman Trust Barometer. Uh, they actually issued their 25th, um, barometer just this [00:12:00] year, 2025. And the results, I watched their video, they say that we have now descended into something called grievance.
Like, the word, the, the noun that the trust barometer uses is grievance. And the grievance is a result of the, the history from previous barometers, where first we started to demonstrate fear. And this is a global study, by the way. So first we started to demonstrate fear, and then we experienced polarization, and now we are in this feeling that things are unfair, that most of us feel like we are being treated unfairly. Uh, can you expand on the Edelman Barometer and, and grievance, and, like, the current mood of, uh, trust in, in politics and society?
Chris: I'll try. Yeah, I'll try. That's a, that's a big request. But yeah, the Trust Barometer is very famous.
Edel, Edelman's been doing this for a long time. Edelman's a, you know, very large and, and complex, you know, PR and communications firm. Uh, but this trust [00:13:00] barometer, I, I think, has done a really good job of highlighting some of the concerns that we have. And they look into how we trust as society and as individuals: how do we trust our institutions, uh, political or religious or, um, you know, educational, healthcare? Uh, how do we trust each other, the media? So, a very kind of in-depth study, and, and I agree, I think over the years that I've been tracking it, um, it does seem like things are just getting worse and worse every year. Mm-hmm. And this idea of grievance, it feels about right. You know, if you look at the news, or if you go on social media and just even talk to people, I do feel like there is a widespread concern that a lot of our institutions that historically we've relied upon, and that we've, we've built up in, in our own image, hopefully, and that should be in service to us, mm-hmm, are failing in that capacity. And I do think, you know, that these are important conversations. I do feel like it is important to understand why this lack of trust is not only pervasive, but, but worsening every year.
[00:14:00] But this idea of this, um, let's say crisis of trust that's been growing, I think might be framing the issue a little bit, uh, incorrectly. If you think about this definition of trust as a stance about stewardship of power, it might actually be that this distrust is warranted. You know, if we can make the argument that these institutions, you know, political parties and, and you know, uh, corporate entities and so forth are misusing or abusing their power.
That warrants distrust. That is, we should be more diligent about how these people and parties in positions of power are, uh, using their discretionary power. Yeah. How they're behaving in ways that impact, uh, our livelihoods, our wellbeing, our experiences. So if my definition is that trust is a stance about somebody's stewardship of power, we would make the argument that these entities are not using their power in the way that we would define as good stewardship.
Again, this [00:15:00] distrust would be warranted. And so instead of saying, let's fix our trust, let's look at how these political parties and institutions are communicating with us, and let's look at transparency and things like that, I think maybe those are the, the wrong directions to go. We should instead be looking at, you know, what kind of power do these different institutions wield?
What do we expect from them as far as good behavior, good stewardship? And then, how might we go about holding them to account? And that holding to account could mean our votes, our investments, our time, our energy. Um, you know, maybe it's more regulation, maybe it's more oversight in some capacity. I don't know if I, I can list out all the different ways that we mitigate that power or we, um, enforce better oversight.
But I think that could be the focal point rather than simply looking at whether or not we trust these different institutions or, or parties.
Petra: That is, uh, so interesting. Actually, yesterday I was being interviewed, uh, for another podcast [00:16:00] and, excuse me, it was a quantum technology podcast. I was talking about the role of government, uh, how, um, the interaction between government and private companies works. And I was asked the question, how could the government be more agile and more nimble and, and more, um, you know, more like companies? And my argument is that they shouldn't. Uh, and, uh, mm-hmm.
I was thinking about this, this reference, and I think this applies to trust. I don't know if this is a, a fair, um, reference, but the government to me is a little bit like parents: they set the rules of the house, like they own the house, and they, they set the rules, and they make sure that the children are able to get what they need.
But the children then have the freedom to play and, and, and be free and be agile and nimble and so forth. But you don't want the parents to be too agile and too nimble, because you want them to be responsible. So you want the children to trust the parents to make sure that they have food [00:17:00] and, and shelter and, uh, healthcare and so forth.
Um, I don't know.
Chris: I, I like that. Yeah. And, and obviously it goes both ways. So if you think about that, that metaphor, um, there are certain kids that probably warrant a little bit of distrust and some that warrant a little bit more trust. So if you say, you know, I, I trust you, uh, to go play in the backyard, and they behave for a couple hours and they come in and everything's fine, then maybe, uh, the next day you say, oh, well, maybe I trust you to go and play in the front yard, where maybe there's a little bit more harm that you can do or that you can come across. But, you know, you're demonstrating good behavior, so I'm willing to kind of expand your, um, your freedom, your discretionary power, in a lot of ways, based on your, your good behavior and the oversight that I've been able to, um, enact.
And then if you take that metaphor and take it back into the, the corporate world, you might say, well, as an industry, there are probably some, some players that are not behaving in a way that we would, [00:18:00] we would describe as good stewardship of discretionary power. Maybe we need to tighten the screws a little bit.
Maybe we need a little bit more oversight. Okay, let's bring you back into the backyard. Let's wait till you behave for a little bit longer, so you start exhibiting some good behavior. Yeah. And then we can talk about giving you a little bit more freedom and responsibility. I, I think that that metaphor actually plays out quite well.
Petra: Yeah. Yeah. Oh, oh my gosh. I'm already thinking about, uh, some companies. I'm not naming any names. Okay. Uh, that was really interesting. Thank you. Uh, so let's move on from, uh, you know, the political and, and social trust, and move on to technology. So your day job at, uh, Avanade is Senior Director of Digital Ethics.
So what is, or what are digital ethics and what do you actually do there?
Chris: Sure. So, um, my job is twofold. Um, I'll start with kind of defining digital ethics. So if I think about what the discipline of digital ethics is: primarily, it is looking at the lifecycle of a product or [00:19:00] technology and making ethically informed or values-informed decisions throughout that technology lifecycle.
So at the very earliest stages of conception, like, I have a business or social problem I'm trying to solve and I think technology is the answer, or I have a product in mind that I think I'd like to build and, and release to the world or to, you know, these employees. From that early stage of conception through design, development, testing, implementation, operation, use, and eventually, you know, off-boarding, at each point I'm making ethically informed decisions. Um, the first step is usually, I wanna understand what the impact of that technology could be. What are the potential harms, but also what are the potential benefits, to all the various stakeholders: the users, affected stakeholders, society, the environment, you know, our institutions, what have you.
And each of those impacts could be positive or negative. So I'm trying to reduce the, the negative impacts, and I'm trying to improve or, or [00:20:00] accentuate the positive impacts. And to do that, I am putting some controls in place, putting some processes in place, based on ethically informed values.
Things like, you know, I'm trying to accomplish accessibility, inclusivity, fairness, maybe transparency, accountability, sustainability. You know, all these are, are values that could inform these decisions. So that's, that's kind of the big picture of what digital ethics is. Uh, for me, again, my, my job is twofold. So I own our internal responsible AI policy and then the practices around that. So again, kind of step by step, how are we designing, developing, implementing, and operating AI in a way that's as responsible as possible? And then the other part of my job is working with our clients to build out responsible AI or AI governance programs.
Sometimes that's workshops with executives. Sometimes it's actually helping them write policies and, and principles and all the kind of process diagrams and flow charts and RACI charts to say, okay, these are basically the [00:21:00] things that we do to make sure our technology practice is responsible. Hmm. And then in, in some cases, we actually help them implement, uh, technology that facilitates these processes, you know, facilitates risk and impact assessments and control assignments, and all the kind of documentation that goes into that.
Some of this is, uh, driven by regulation, you know, the EU AI Act, uh, you know, some of the standards like ISO/IEC 42001. Some of it is just, frankly, a risk management exercise. You know, we wanna reduce the likelihood that we are going to cause harm, we're gonna have reputational damage, we're going to get sued, or there's gonna be some regulatory enforcement action.
Um, but a lot of this is still driven by ethics, right? These companies that I work with, actually most organizations, have some kind of articulated principles, values that they say, you know, these are the values that we live by. And I think a lot of the time, my job is to help people understand how we help them translate these values or these principles into their technology decisions and [00:22:00] practices. So that's, that's basically what I, what I do for a living.
Petra: Yeah. So on that note, um, thank you. I mentioned, I watched two of your interviews before, um, this, this session today. And the second one was this conversation at the IT GRC Forum. GRC stands for governance, risk, and compliance. So, um, at this seminar you talked about how success with AI, and especially the GRC part of it, requires a focus on organizational culture before anything else. So tell us more about the organizational culture. What does it have to do with ethical AI?
Chris: Sure. Yeah. And, and the, the GRC, um, discipline goes back, you know, 20 years or so. There's a lot of kind of rigor and, and, uh, research and very impressive programs across all the companies that, that I work with. Uh, this AI governance space is a little bit new, so it's basically trying to figure out how do we fold AI [00:23:00] into these existing governance, risk, and compliance practices.
Although, there are a lot of new nuances in AI, around kind of ethical challenges, responsible AI, that we really just haven't dealt with before. And I think that, uh, that isn't itself the reason, it doesn't on its own argue for having a good culture, but it certainly accentuates the issue. And the reason it does that is because AI is just moving so quickly that we are never going to be able to stay ahead of all the different kind of nuances around the, the capabilities, the risks, the, the data that's being used, the, the practices around AI. I think there is some really good, um, progress being made around technical controls. You know, things like automated fairness testing or content moderation or things like that.
But because things are moving so quickly, we will never get to the point of automating these systems to just behave responsibly every [00:24:00] time. So we need people that are engaging with these systems, that are designing or developing or implementing or operating or using these systems, to understand their impact, to understand what it means to engage with AI and to basically set it, uh, in a place where it's going to be making decisions.
You know, this kind of discretionary power that I mentioned earlier. What does it mean to, uh, have AI that's making decisions around who we hire, who we promote, you know, what vendors we use? Like, all of these are decisions that, that involve discretionary power. So if we're going to trust them, we have to understand how they work.
And these are humans making decisions about when we trust 'em enough to put them in these positions of power. And you can't do that unless there's a culture of responsibility. Hmm. Understanding what these technologies are capable of, what they can do well, what they can't do well, where they might have risks or, uh, harms, um, that are, that are affecting the people around us.
All of this [00:25:00] centers around a culture of being informed and being kind of actively involved in those decisions. Again, uh, I think there are a, a fair number of people that think you can just automate these systems to behave responsibly. And I think that's never going to be the case.
Petra: Okay. Wow. Fantastic.
So fascinating. Um, my last question, mm-hmm, is: is there anything that makes you hopeful? 'Cause all of this is a little bit pessimistic. So, anything that makes you hopeful or optimistic about ethics, trust, and technology?
Chris: Yeah, I appreciate that question. So I do end up feeling like I am the lone pessimist in the room fairly often. I work around, you know, 60,000 people, and most of them are technologists and are very excited about, you know, all these new AI systems and agents, of course, and multi-agent systems, and, and that's great.
I'm, I'm very glad, and I think we're doing some really cool and interesting things with AI and a bunch of other technologies. And I am often the one that says, wait a minute now, have we thought about this potential harm or this potential risk or [00:26:00] this regulatory, uh, requirement that we have to meet? Um, so there's, in all fairness, a decent amount to be pessimistic about.
I think in a lot of ways the industry is moving very quickly and, and maybe not paying enough attention to the harms that, as an industry, we are bringing to the world. But I'm optimistic, because most of the people that I talk to, especially the people that are building, you know, designing and, and developing these systems, they want to do the right thing.
So I feel like, you know, that kind of energy, the, um, eagerness to get involved in making more ethically sound and responsible decisions, is there. I think there are two things that we really need to work on. Uh, so one is this kind of awareness that everybody involved in technology has a part to play in creating technology that is currently causing harm and is, you know, potentially risky to the people around us. But that also means that they can make better decisions on a day-to-day basis to improve things. So that awareness is key, and I think it's [00:27:00] going well. I think there's a lot of progress there, and a whole lot more progress to be made. But the second thing that I think we really have to work on is the incentive structures.
You know, if you have a thousand engineers that all want to do the right thing, but their incentives are based on how quickly they can get these new features out to market, and they have to move fast and they have to meet these deadlines, then even if they know exactly what to do and they, they want to do the right thing, they don't have the space. They don't have the, the room or, you know, the incentive structure to do the right thing. Um, they know that if they try to pause things or slow things down, uh, they'll get let go, and somebody else will jump into their place gladly and push things forward. So I think that incentive structure needs to be part of the discussion, and, and maybe part of the solution.
And that could be more regulation. It could be market pressures, it could be competitive pressures. There's a lot of different ways to, to, um, kind of move around that incentive structure. But those are the two things that I feel like need work. But I'm still optimistic. I, I wouldn't be doing this job if I didn't think that we can make progress along those lines.
I [00:28:00] do keep having these conversations and keep doing the, the work that I do because I, I am optimistic that things can get better.
Petra: Terrific. This has been Chris McClean, Digital Ethicist at Avanade. You can find him on LinkedIn, and do follow him. He shares a lot of really cool content on trust and digital ethics.
Uh, thank you for coming to Deep Pockets, Chris.
Chris: Thank you. Great to be here.