Our Digital Life Part 5: Artificial Intelligence

Michael speaks with Østfold University College's Henrik Saetra about the role of Artificial Intelligence in sustainable development.

(00:01):
Well, I'm in over my head. No one told me trying to keep my footprint small was harder than I thought it could be. I'm in over my head. What do I really need? Trying to save the planet, oh, will someone please save me? Trying to save the planet, oh, will someone please save me?

(00:24):
Welcome to In Over My Head. I'm Michael Bartz. My guest today is Henrik Saetra. Henrik is an associate professor at Østfold University College and head of the R&D research area The Digital Society, which investigates the interplay between technology, individuals, and society. Henrik uses political science, psychology, and the philosophy of technology to analyze the implications of big data, artificial intelligence, and social robots. He has worked on environmental ethics, looking at technology's role in humans' moral and physical environment, and how technology can enable or inhibit sustainable development. Henrik regularly holds lectures, speeches, and seminars both nationally and internationally. Welcome to In Over My Head, Henrik.

(01:05):
Thank you very much, Michael. Nice to be here.

(01:07):
So in talking about our digital life, one technology that seems relevant is artificial intelligence. Although this concept has been around for a very long time, in recent years AI has become embedded into every facet of our lives. This has brought us many advances, but that's not the whole story. I'm sure it could be argued that any technology is a double-edged sword; however, the stakes seem bigger when it comes to AI. There are deeply ethical and moral questions at play, and I'm looking forward to discussing some of these with you today. In your recent book, AI for the Sustainable Development Goals, you look at how artificial intelligence affects the UN's 17 Sustainable Development Goals, which tackle everything from the environment and social justice to the economy, politics, health, and work. We won't go through them one by one, but generally, what is meant by sustainable development in this context?

(01:49):
Sure. I, and the Sustainable Development Goals' authors I guess, rely heavily on the concept of sustainable development as it was developed in the 1987 report from the Brundtland Commission, Our Common Future, which talks about sustainable development as consisting of three interlocking dimensions: social sustainability, economic sustainability, and environmental sustainability. That kind of shows that there are really deep tensions to be tackled when it comes to dealing with sustainable development. But yeah, it's necessary to see these as interdependent dimensions that involve some really, really deep and difficult questions related to stuff such as economic growth and inequality, for example. And the role of technology is interesting in that aspect.

(02:36):
Yeah, absolutely. And it seems to me that it's definitely a challenging thing to balance that economic growth with environmental sustainability, because sometimes those things are in conflict. So I find that very interesting. And so in your book, obviously it's about artificial intelligence. It might be helpful to just define what we mean by artificial intelligence in this context.

(02:56):
Yeah, and that could be a book in itself, right? There's a lot of debate about what AI is and what it isn't. And I won't really be that technical in this respect, and I'm not in the book, because few authors are. I rely on some popular definitions that kind of resemble what they use in the national strategies on AI, these kinds of things, and also what's been written in a paper, for example, by Vinuesa and other authors in 2020, that's quite influential, where they say that AI is software with at least one of the following capabilities: perception, decision making, prediction, knowledge extraction, pattern recognition, interactive communication, and logical reasoning. They define AI as software capable of doing stuff that humans used to be required to perform. And this includes stuff like different forms of machine learning and logical AI and different technical aspects. But I think it's good to work with this kind of common-sense definition of AI as really advanced software.

(03:50):
And so in your book you talk about how AI can help address climate action. So tell me a bit about that.

(03:58):
Yes, that takes us to... yeah, because in the book, I go through each of the goals pretty much, but I go through them in the three dimensions, the social, economic, and environmental. And this takes us straight to SDG 13, which is climate action. But I think it's useful to also remember that broader framework. What can AI do for promoting climate action in terms of large-scale societal effects? And then we have those kinds of intermediate effects that I call meso effects, on regions or companies, organizations, groups of people, but also micro-level effects on you and me. Can AI help on that level as well? So if we go through each of them very briefly: on the micro level, for example, AI can help you and me to understand our carbon footprint, how our behaviour affects greenhouse gas emissions. It can analyze our patterns, it can provide us with information, and it can change our behaviour, which is good, right?

(04:52):
That can help. And that also takes us straight to SDG 12, which is about responsible consumption. So these goals are interrelated, because responsible consumption is also a part of climate action. So it's these kinds of effects. On the meso level, I think we have reduction of emissions through optimization of processes in industry, the optimization of energy grids, increased resilience of energy grids. AI is used in these sorts of contexts. So we can make processes a bit more efficient, some would say a lot more efficient, but at least a bit more efficient, and energy grids a bit more efficient, so we won't need as much energy. So it's these sorts of things. AI could in theory also be used to help us create new forms of materials, for example, and do the kind of radical innovation that cuts emissions and makes processes more environmentally friendly, which is also something I place at the company level, at least at first.

(05:46):
But on the macro level as well, we get these large-scale insights, optimization across regions and systems, optimization of policies. For example, AI might be able to deduce patterns and help us make better policies, make better regulations, but also just being able to analyze massive amounts of data and help us, through research and through other processes, to optimize processes. In the book I'm portraying it not as a solution that helps us immediately get to the Paris Agreement goals, right? But it could have these at least incremental effects, at least the partial effects of optimizing processes. But I think we need a lot more. That's our main mitigation effort, right? Cutting emissions. But in terms of adaptation, it's also used to understand where extreme weather events will hit, for example, and what we must do in the future in order to be more resilient, in order to adapt to the unavoidable changes. So it's these things as well. Not very specific, but there are a lot of different initiatives related to this. It's not a very mature technology, though. So it's interesting, because there's a lot of hype around AI, and there are a lot of theoretical papers on how AI can optimize this and do that and do all sorts of different things. But the kind of real radical great impact hasn't really been demonstrated yet, I would say. So it's still partially hypothetical, but of course there are some effects demonstrated.

(07:04):
Oh, absolutely. For sure. And so within those different levels, as it is right now, where do you see the biggest impact with artificial intelligence?

(07:12):
I would say on the meso level. Those who have the infrastructure, those who have the competencies required, which are hard to come by in certain places, also the computing infrastructure and energy infrastructure and all these kinds of different things, and the data and knowledge, they are able to most effectively use AI to optimize their processes. And that means optimizing energy usage. It also means optimizing and reducing waste, for example, in their processes, by using AI to predict how much demand there will be tomorrow, so you won't just produce unnecessary goods, for example, and you'll reduce waste in products and materials and in energy and all these different things. But I think that's on the meso level mainly these days.

(07:55):
Yeah. And if I remember correctly, in the book you also talk about how it can affect the ocean or the land. You wanna talk a bit about that?

(08:02):
Yeah, sure. That's the next goal, right? If we go from 13 to 14, AI can be used to benefit life below water. And the targets of the SDGs are relatively specific. But we could do the sustainable management of marine and coastal ecosystems, for example; that's one. You could use AI to track fish stocks, for example, to know where they are, how many fish there are, how healthy these stocks are. Should we protect them or not? Should we adjust our extraction of these resources? So that's just one potential way to use this. But what I talk about in the book, and you mentioned in the beginning, is that it's a double-edged sword as well, right? Those who want to protect these fish stocks can track them, but so can others, right? If they have access to the imagery they need and the equipment they need and the sensors and the data they need, they could also use AI to hunt and extract these resources more effectively. So that's kind of the double-edged nature of AI.

(08:51):
Yeah, that's definitely a recurring theme. And the thing that came to mind for me first was that yes, these technologies are good, but what is the downside? And you touched a bit on the companies and tech, so that's something that interested me. Obviously, the big tech companies are investing heavily in artificial intelligence, and I have to maybe assume that their interests are more economic than environmental. Do you see a conflict with this?

(09:15):
Well, some wouldn't. I guess it depends a bit on your perception of political economy and the beneficial nature of markets. I guess that's kind of an old debate. I really see this as a valid concern. You asked me before how AI can help climate action. Yes, it can have these beneficial effects, because that's what you asked me about. But it also has a climate cost, right? Training these large-scale models that just give us fancy pictures and texts and all these things, for example, has a carbon cost, and it costs energy to use AI as well. So that's one potential downside. But definitely, their goal isn't to save the earth or save the environment, right? They have various stakeholders, but I think no one is naive enough to think that the big tech companies are in it to save the world.

(09:56):
And I think that's kind of a problem as well, because that goes to what I also want to stress, which is the role and the importance of getting politics and regulation on the ball and getting it in play. There's a lot of talk about AI ethics and sustainable AI and all these kinds of different initiatives from the industry itself, because they would very much like not to be regulated, right? Not to have politicians and regulators interfere in their workings. So I think that's definitely a concern. And I think it's definitely an important thing to keep in mind that these companies aren't really developing tech for good as their main purpose. They might try it, and it might be beneficial and it might be profitable, but still, that's not their goal, right? So we need to make sure that this technology actually contributes to reaching the goals we find important.

(10:38):
So do you see government intervention as the most effective way to mitigate that?

(10:42):
I think it's necessary. It might not be the ideal solution, but in the absence of any other good solutions, good ways to regulate this, I think we've seen what big tech does if it's allowed to run relatively free. And I think there are some consequences that we need to tackle. And yes, that means saying no to certain advantages and certain comforts and certain things, but if we are serious about social or sustainable development in all three dimensions, I think it's necessary. And that involves saying no to some things. So.

(11:12):
What sort of things would you be saying no to?

(11:15):
I'd say, for example, a lot of different Internet of Things appliances are not really necessary for me. They're just producing data and using more energy. They're making my devices last a shorter time than they otherwise would. I'm not using Siri, for example. I'm not using these different AI-based services that are nice and a bit fun but don't really provide me with anything good. So I think, yes, we might say no to some of the most advanced features on social media and some of these different things. We might have to abstain from some such things, but nothing really radical, in my opinion. Nothing really detrimental to my life, at least.

(11:52):
Sure. And if you don't mind me asking, like are you abstaining from those things because of data privacy for instance?

(11:58):
Yes, partially. Partially because of that, and partially because I'm not really very happy with what it does to me and to my relations with other people. So I think there's an argument there, but it's become infrastructure in a sense, right? It's become really difficult to opt out of these things; it has a really high social cost. So that also points to the problem of putting this on individuals, as opposed to putting it on regulators and society at large to say: what sort of data do we want gathered, and what sort of uses of this data do we allow and want to allow? I think that's a necessary step in order to fix some of these problems related to the costs of private social infrastructure, which AI and data-based services have become.

(12:41):
Yeah, that's definitely where I'm at. These technologies are good in that we should use technology to advance society, but what is the cost? And you touched a bit on that social cost. Can you tell me a bit more about that?

(12:52):
The social cost of technology? Well, yeah, of course with data we have all these well-known aspects related to discrimination and bias in data and services, for example, that apply to people exposed to AI used as a decision maker or decision support tool. You can get biased and discriminatory outcomes that are opaque, right? We can't really understand them; we won't necessarily even uncover that it's biased, but it is. So that's one aspect of it. And I think one more serious aspect, which I like to foreground a bit more in terms of the Sustainable Development Goals, is the local and regional and global differences between those who have access to technology and the infrastructure required and those who don't, right? Because at least here in Norway, around me, everyone has what's required to use AI and the services we need in a relatively fair and straightforward way. That's not the case everywhere, right? So I think there are social costs related to who gets the benefits of these new AI-powered services and tools, for example. They're quite expensive, a lot of them, they require expensive equipment, and they're not really accessible without relatively advanced societal infrastructure. So in the global sense, you also have this other issue related to inequality that's important: discrimination and bias within societies, but also these huge differences between societies.

(14:13):
Yeah. If I remember correctly, in the book you touch on the new colonialism, how technology could be the thing that separates those people, and obviously it causes inequality. And I even remember you talk about, this is less related to the environment, but more about work, how employers might use that to track their employees, and how it's benefiting them more than it is their employees, right?

(14:32):
Yeah, definitely. AI shifts power in general. At work, for example, you get more control. You can use AI for monitoring and controlling and getting the upper hand on your employees if you want to, because knowledge is power. If you assume that this is right, then the knowledge derived from increased data gathering and analysis is powerful, and it provides people with power. And the data colonialism thing goes to who owns the data gathered throughout the world from social networks, for example. It's not really the local nations, in Africa for example. It's usually the big tech companies in the West and China that own the data, gather it and use it, and then apply it again in these same societies without them really owning or developing or deriving the benefit from it themselves. And that's part of the data colonialism term, which I haven't coined, but others have. There's also the fact that they buy up every startup before it gets successful, right? They're bought up and integrated into some other sort of constellation. So it's these sorts of issues.

(15:30):
No, yeah, for sure. And I think, like you said, that opacity is the thing: we don't know who's using it and for what purpose, right? So I think for me, that's the biggest problem. And with AI, as you talked about, we don't know how these systems are learning. It just advances so much faster than we can even comprehend it sometimes. So I think for me, that's the biggest concern with AI. It's different than any other technology.

(15:54):
Yeah. Yeah, I think that's definitely a huge part of the problem, because first of all, we might agree to give away our data for a particular purpose at some point in time, some service where we think, yes, giving away my data for this is good, I accept this. But this data is repackaged and repurposed and shared and sold, right? And put together in so many different ways that we have no idea about, no insight into. So saying that we have agreed once and that's fine is very problematic. And also, yes, I agreed to data being gathered for this purpose, but in the future you get new possibilities with the analysis of old data, for example. So you might accept something at a certain point in time, but then it's used for something completely different, and perhaps more sinister in our perception, later on. So there are a lot of problems related to data which are not really tackled by the Sustainable Development Goals in that sense, but still, yeah.

(16:45):
Yeah. You did talk about how there are limits, for sure. And one interesting thing you said earlier was how it's not benefiting the people in those communities, right? Various countries and such. So it's benefiting the big tech companies, but not developing countries, for instance. Do you see that changing in the future?

(17:02):
If they manage to do what they say they will in the Sustainable Development Goals, then... Goal number 17 is partnership to reach the goals, and there's a lot of talk about technology transfer, a lot of talk about local development and promoting local and regional development, and supporting developing countries in their efforts to build their own infrastructure and the industries related to these things. I don't really see it happening yet, but they have said that that's the goal by 2030, right? So it's really clear in their written statements, but it's not really happening at the pace I think it needs to be happening yet. So that's a really crucial point, I think, this kind of technology transfer, and also dealing with how we break up these sorts of monopoly infrastructures, which is really difficult for a government. The government can't just say, okay, we want you, private company, to transfer your technology here, for example. That's difficult, right? As long as it's purely private and market-based.

(17:58):
Yeah, I mean we touched a bit on government intervention. Are there any other solutions you could see to solving that problem?

(18:05):
I don't really see the market fixing this itself, but the techno-optimists tend to believe that as soon as the problem becomes pressing enough, the market will price a solution high enough to make it happen, right? So some people hope that as soon as the climate crisis becomes serious enough, and I wonder if they think it is serious enough already, that might drive prices for the solutions up, and that diverts a lot of capital towards solutions, right? So the market has some sort of mechanism, but I think it's too slow. We can't really just wait for that to happen. So I think government intervention is really necessary. But I think we also need to perceive government intervention not as intervention by some sinister external entity; this requires us to rethink politics and our role in politics a bit more. Democratic politics, transparent and participatory institutions, these are also part of the SDGs, in SDG 16, which is about effective institutions at all levels. That requires politics to be about what you and I want, and about how you and I and everyone else involve ourselves in politics to make sure that technology does what we want as a society. I still think that politics is what we need, and I don't think politics has to be something bad. So I think we need to work on that as well.

(19:21):
And you touched on it briefly before: there are limits to what the Sustainable Development Goals can do when it comes to artificial intelligence. So what are those limits?

(19:29):
I think part of the limits here is that if you talk about privacy, for example, if you really value privacy and you are very opposed to surveillance, the kind of surveillance you'll find in a really smart city, for example, based on digitalizing and using AI and data to improve all kinds of services, if you're wary of those kinds of things, that's not really discussed in the SDGs. The need for privacy, the right to privacy, those kinds of issues aren't really problematized. Data and technology are perceived as something good, as is growth. But you could say that, yes, Agenda 2030 and all the goals are based on human rights, so in a sense it's indirectly there, but it's not really discussed. So I think that's one problem. I think another problem is that this is a global framework, right?

(20:14):
Everybody has to sign up for this to come into effect, and that means, for example, that democracy is not mentioned once in the actual goals or targets. So there are a lot of roundabout ways they try to approach this without saying the words that some would object to. And the same with LGBTQ people, for example. Diversity and inclusion, those kinds of issues that are problematic in some parts of the world, are not discussed. You have this one goal, gender equality, which is really heavily focused on, right? But you don't have these other aspects of diversity and inclusion that I and others here, for example, think are also important in order to fight discrimination and all these things. You could say that they're indirectly covered, but those are some of the limitations, I'd say.

(20:56):
Let's touch on those a little bit. So I guess that kind of ties into more of the larger ethical questions around AI and our data and privacy. So tell me a bit about some of those areas that weren't covered and what you think about them.

(21:09):
Yeah, that's really important. And that brings in the need to discuss what's usually discussed in the AI ethics world, which is also a sort of problematic academic area, right? Because just as there is hype in the AI industry, there's also a lot of hype in the ethics industry, if you want, right? So everyone is proposing new ethical frameworks and new AI ethics rules and guidelines and principles. There's a lot of hype, and there's a lot of really old and fundamental insight just being forgotten or ignored, because everyone is scrambling to produce some new AI ethics when we already have computer ethics, and we already have a lot of technology ethics and science ethics. A lot of these things are really covered in that basic stuff. My approach would be an interdisciplinary one here: it's really important that engineers and developers meet social scientists and people from other sorts of disciplines and get together.

(22:02):
I think that's crucial for promoting understanding of the effects of AI, and then getting more people involved and knowledgeable about what's going on, and then promoting action through knowledge. I think that brings me to how politics plays a role here. If you raise awareness of what's at stake, what's the cost here, I think more people would care. Not everyone, of course; democracy and relying on those solutions is problematic. So there's no really easy solution, but I think AI ethics, the true and meaningful form of AI ethics, is really important, and it's a highly interdisciplinary research and industry field that's important but hasn't really landed.

(22:39):
Do you feel like those conversations aren't happening with the various groups and stakeholders that need to be in the room?

(22:46):
I think it's happening more and more, but I also think the amount of different initiatives and efforts from different non-governmental institutions, governmental institutions, and academic institutions is crowding out getting to grips with what would be, for example, a good global solution to these issues. Because these aren't really solved at a national level. They aren't really solved by individuals alone either. So that brings us to another point of politics: the need to see the global challenge here, and the need to cooperate and make these rules relatively uniform, related to how we regulate the use of data and the use of AI. It's natural to talk about climate change in this context, as a kind of global public good, right? I think it's sort of the same with data and AI. And we have this talk about the global AI war and the cold AI war, for example, right? Between China and the US: who controls the data, who controls the infrastructure, who has the best models, who rules this world of AI. So it's, yeah, interesting. Yeah.

(23:43):
No, let's talk a bit more about that. It's a very interesting topic. So yeah, what are your thoughts on that?

(23:48):
The cold war of AI ethics? There are definitely attempts here to derive the benefits you could have from AI. If, again, knowledge is power, then getting people's data, for China, for example, to have data on individuals from other societies, would be greatly beneficial, right? It already has data on its own citizens, and it's using it to control them in ways we would call horrendous, or at least controversial. But also, having data on people in other societies is deeply valuable; that's the kind of intelligence that the intelligence agencies of old could only dream of. So if you have this control over data, mainly data, I'd say, that would be the key part. It's really crucial to get the data you want, because a lot of the actual algorithms and the computing solutions are relatively well known and published in academic fora, right?

(24:42):
So the knowledge of the algorithms is less important; it's more about building the infrastructure, the supercomputers, and using the data, making robust infrastructures for making use of the data. So I think, yes, there is this fight for AI, and Putin even said that whoever leads in AI will rule the world, right? And you can use it in terms of intelligent weapons, for example. You get it in all these different applications of AI as well, autonomous weapons for example. That changes war a bit if people start using those kinds of tools and mechanisms in battle. In Ukraine, for example, they now have more advanced weapons than the Russians do, and that makes a difference. So yes, AI really is powerful, in a good sense and in a bad sense. So there is a scramble to gain control, and the EU, for example, its goal is to regulate AI, but also to use and develop AI as a competitive advantage for growth and for positioning itself on a global scale. And that goes for most of the large regional actors in politics, I think.

(25:44):
And like you talk about in your work, it's a social lens, and it's about the people as well. So I think it's easy with AI to look at the analytics and the data and the technical side of things, but what is the role of people when it comes to AI in the future?

(26:01):
Yeah, it's a broad question, but that's interesting. If you go to the basic philosophy of technology, we have this notion that technology is political in a sense, that technology embodies certain values. I think there is a danger that when we start using increasingly advanced AI systems, we make these sorts of systems our ideal, and we start to optimize everything. We see that we can, in theory, optimize how humans behave and act and how social systems act and behave, and we potentially get this sort of development away from the nice messiness of human nature, right? I think it's important to say no to extreme optimization and extreme rationality, and preserve some areas for playing around and doing the things that provide at least my life with more value than doing what is perfect and optimal and most efficient at all points in time, right?

(26:54):
And that's not really what a human life is about, to me. So I think it's important that we keep that in mind, and that we resist this tendency to use AI to control people, manipulate people, optimize people, as we optimize tools and cyber-physical systems and these sorts of things. Because when we have knowledge of people, we also have the potential to guide them, right? And steer them, not necessarily forcefully, but through manipulation, for example. So I think it's important to both resist the urge to see these sorts of optimal solutions as best, but also to definitely resist the urge to apply AI in this sense, to control and reduce this human irrationality. Because I think irrationality is often quite valuable.

(27:36):
Absolutely. And I think it's also about where it is most applicable, right? So if it is very useful in certain contexts, that's great, but maybe AI doesn't need to be in every facet of our lives and used to collect every bit of data. So I really appreciate that. You talked a bit about abstaining, like you don't use certain of those technologies. Is that the best way to have control in your life when it comes to artificial intelligence?

(28:01):
Yes, that's definitely an important first step, the most obvious step, almost the only one we can take. It won't be fully effective, because data about me is also collected through my relations with other people, for example, right? And I can't really abstain from being outside, where there are cameras and other sorts of sensors and all these sorts of things. The smart meter in my house is required by law here. So there are all these different things I can't opt out of, but the things I can opt out of, I think it's good to opt out of. I prefer not having a microphone in my room at home, for example. I don't use Amazon Echo or Siri or these sorts of things. I like this notion of having some control over who has access to at least parts of my life. So yes, abstaining would be the most effective solution for preserving those aspects of a personal life. But it won't really fix the large-scale problems unless we also do something with raising awareness and gaining some momentum.

(29:00):
Yeah, I think that awareness point is really key as well, cuz maybe you want more privacy and someone else doesn't, and I think just making informed choices is probably helpful as well, right? So it's not just a total ban on technology or totally embracing it, but trying to do it in the most intelligent way, I would say.

(29:19):
Yeah, I think that's really important. But that's also one of the things I've written about before: privacy as a public good. If I say that I value my privacy, and if I have a right to privacy in some sense, then it's really problematic if you have the freedom to say, well, I don't care about privacy, so I'll let them collect all my data, I'll allow them to have everything, including all my data about my relations with Henrik, who I meet every day and hang out with, right? If we allow every individual to make this decision, then we're bound to have some suboptimal outcomes. That goes to some political theory, the harm principle, for example: if you use your freedom in that sense, you are sort of harming me as well, unless we figure out how to break this relational privacy aspect.

(30:02):
So in that sense, I guess I'm relatively pro-government intervention, more so than those most optimistic about market-based solutions to privacy and informed consent and these sorts of approaches. So I think the EU is doing something right. For example, with the GDPR they're preventing certain forms of data gathering and usage practices, and with the AI Act, certain ways of applying AI in high-risk areas such as schools with children, for example. There we won't allow manipulative uses of AI. So in certain settings we need to consider this a public good, some sort of public infrastructure that we need to control in some sense. I think we need to accept that there's a role for government to play here, and that these are crucial infrastructures that impact our lives in very many different ways. So I think it's legitimate to say that we want some political control over this technology.

(30:49):
Oh absolutely. And yeah, that's a good point you made about just even interacting with the world, right? Like you can make those individual decisions, but yeah, I gotta go outside sometime and-

(30:59):
Yeah, at least I'd like to, right?

(31:00):
Right. Yeah. And I think that's where my mind goes, where, you know, I don't want to constantly be opting out of things, or this website has cookies, and oh, should I use this certain browser? And I think that's where that larger change comes in, cuz on the individual scale, you only have so much bandwidth to make those decisions, and eventually you just say, ah, okay, fine, I accept all, just accept everything. So yeah, it's interesting that the European Union is spearheading those initiatives, especially to protect children and stuff. That's really, really good.

(31:29):
Yeah, it is, because I think it's been demonstrated that a notice-and-choice approach, where you are provided with a notice about cookies and you can make your own choice, because that's the ideal liberal solution, you're informed and you make your own decision, right, that notice-and-choice regime is being largely replaced in Europe by a bit more prohibitive approach to certain practices that they deem unnecessary for producing valuable things and potentially conducive to harms. They want to just prevent those, and not give people a choice that they won't really understand, because it is an impossible choice. We talked about data being repurposed and repackaged and spread, so when you say yes to a cookie being gathered, you have no idea about what really happens. So it's not an informed choice either. It's just a superficially informed choice.

(32:14):
Yeah, and I think it goes back to that inequality too, cuz if you want to use that certain app or you want to access that website, that social media, whatever it might be, you have to give up some of your privacy and some of your data in order to even just be part of the world, right?

(32:29):
Yeah. I have a son, for example, right? And at least previously, his sports team had this group on Facebook. So yes, I have a choice, I could say I don't want a Facebook account, but it's sort of a false choice if the costs associated with saying no become too high. So I think that's really an important point to keep in mind, and also an important point for everyone like you and I: when we set up groups on different platforms, we should consider that this drives the need to be there for people that might not want to be there. So finding different solutions might be a good idea at times.

(33:00):
Yeah, and I know in some other reading I've done, it talked about how it's challenging, let's say with social media, because it's the place where everyone is meeting. You can't just say, hey, I'm over here. If no one's there, then it's not effective, cuz it's not social. It's really difficult to have everyone shift over all at once to maybe a more sustainable platform or a better business model.

(33:20):
Yeah, it's really difficult, and it generates this impossible dilemma for all of us. Academic researchers like me, for example: I might be critical of social media and big tech companies, but I'm using Twitter, right? I have to use Twitter, because that's how I get ahead in this job, because I need attention, right? So you always feel like a hypocrite, right? You're trying to change the system, but you also have to play along and strengthen the system. So this dilemma also plays out in people's lives. I think you might want to abstain from certain things, but they're required, so you just kind of run around feeling bad about using things that you have to use.

(33:56):
And it might then be kind of problematic for me to say that I want the government to fix this so I won't feel bad or have to make those tough choices. For me, it's based on thinking this would be good for society at large and for individuals in general. So yeah, that's what politics is about, discussing these things. So...

(34:10):
That makes me think just generally, with artificial intelligence, we've talked a bit about privacy and government and things like that. This show is about empowering citizens to take action on the climate crisis. So when it comes to artificial intelligence, what can people do?

(34:26):
Yeah, I think it's probably come out through this conversation that it is difficult for individuals to do this. I think it's important to keep this in mind when we choose our politicians, and if we choose to get involved in politics, for example. That's the thing I've been saying here; that's a slow and uncertain way of enacting change, right? But in general, reject unsustainable practices when you see them. In services, you could turn off different sorts of tracking and cookies and different sorts of AI services, personalization. You can turn all those things off. So there are all these small things you can do. You could engage in obfuscation, putting bad data into these sorts of data sets, for example, though that's also really strenuous, and it's demanding to expect people to do that. So I think in general, figure out what you need in order to be happy, and that, to me at least, involves saying, well, I don't really need this, to a lot of new high-tech services based on AI and data-gathering practices.

(35:20):
So for me at least, that's opting out of what is unnecessary. And that's not just services, that's a lot of products as well, which I think makes sense. And I think there's a lot of new and good legislation and offers out there in terms of more reusable and recyclable and more circular products, and options that are more robust, last longer, and can be repaired, these sorts of things. Making those kinds of choices, I think, is good. And I think AI doesn't really play that much of a factor in these kinds of products, maybe in producing them and doing innovations, but that's a good thing, if so.

(35:52):
That's very helpful, thanks for that information. This has been a very interesting conversation, so thanks so much for coming on the show, Henrik.

(36:00):
Thank you very much, Michael. I look forward to future episodes; it's an interesting podcast.

(36:06):
Well, that was my conversation with Henrik. Clearly, as with many of these topics, larger societal change is what we need to focus on, and I'm glad that he also talked about abstaining from certain things, cuz that's where I'm at, and it's nice to hear that other people are doing that too. Well, that's all for me. I'm Michael Bartz. Here's to feeling a little less in over our heads when it comes to saving the planet. We'll see you again soon. In Over My Head was produced and hosted by Michael Bartz. Original theme song by Gabriel Thaine. If you would like to get in touch with us, email info@inovermyheadpodcast.com. Special thanks to Telus STORYHIVE for making this show possible.

(36:45):
I'm trying to save the planet. Oh, will someone please save me?
