50. Privacy and the end of the data economy with Carissa Véliz

In this episode we're chatting with Carissa Véliz about the transformative nature of power, how personal data is toxic, the end of the data economy, the dangers of privacy violations, differential privacy, what you can do to help, ethics committees and more...
Date: 31st of December 2020
Podcast authors: Ben Byford with Carissa Véliz
Audio duration: 45:37 | Website plays & downloads: 342
Tags: Author, Academic, Privacy, Personal data, Data ethics, Data economy, Social media, Business models | Playlists: Philosophy

Carissa Véliz is an Associate Professor in Philosophy at the Institute for Ethics in AI, and a Fellow at Hertford College at the University of Oxford. She works on privacy, technology, moral and political philosophy, and public policy. Véliz has published articles in media such as the Guardian, the New York Times, New Statesman, and the Independent. Her academic work has been published in The Harvard Business Review, Nature Electronics, Nature Energy, and The American Journal of Bioethics, among other journals. She is the author of Privacy Is Power (Bantam Press) and the editor of the forthcoming Oxford Handbook of Digital Ethics.


Transcription:

Ben Byford[0:00:04] Hello and welcome to the 50th episode of the Machine Ethics podcast. This month, in our first episode for 2021, we're talking to Carissa Veliz, author and academic. This episode was recorded early, back in October 2020. And we chat about AI as a moving target, transformative nature of power, how data is toxic, the business of social media and the end of the data economy, dangers of privacy violations, differential privacy, Carissa's new book, 'Privacy is Power', and much more.

Ben Byford[0:00:36] If you enjoy this episode, find more episodes at machine-ethics.net. If you would like to contact us, you can do so at hello@machine-ethics.net. You can follow us on Twitter at machine_ethics or on Instagram at machineethicspodcast. You can also support us at patreon.com/machineethics. Thank you so much and hope you enjoy.

Ben Byford[0:01:01] Welcome to the podcast, Carissa. If you could just briefly introduce yourself and who you are and what you do.

Carissa Veliz[0:01:08] Thank you, Ben. My name is Carissa Veliz. I'm an associate professor at the University of Oxford at the Institute for Ethics in AI. And I'm a fellow at Hertford College. And I work on privacy and digital ethics, more generally, in particular, the ethics of AI.

Ben Byford[0:01:27] Awesome. So we have spoken previously before. So the first question we always ask, Carissa, on the podcast is about the nature of AI. So to you, what is AI?

Carissa Veliz[0:01:40] AI is difficult to define because it's a moving target. So famously, whenever something becomes part of the mainstream, what used to seem almost like magic suddenly becomes just an everyday thing, and it's not considered AI anymore.

Carissa Veliz[0:01:57] But generally, AI is the effort to make intelligence with computers. So to make computers intelligent, as intelligent as human beings.

Ben Byford[0:02:08] Yes, so I guess it's a relative thing. Because you get this sticky question of what is intelligence? But it's the general intelligence of you and me doing things in the world, making actions, that sort of thing. But then artificially creating that.

Carissa Veliz[0:02:25] Exactly. And it is really tricky because we've already created some kinds of intelligence that are much smarter than human beings. So for instance, typically a computer can add and subtract and do all kinds of computational tasks much better and much faster than human beings. But at the same time, it can't do things that, for us, are really simple, but they're actually quite complex. Like tying your shoelaces.

Ben Byford[0:02:50] Yeah, exactly. That's a good example. Because I wouldn't be able to calculate a prime number up to the hundredth one very, very quickly. But I can damn right do my shoelaces up pretty quickly. So that's one for team human then.

Carissa Veliz[0:03:09] Exactly.

Ben Byford[0:03:10] So you have a book out which I have mostly read. I have to say that I've not read the conclusion, which is embarrassing, and one other chapter. Because I had to speedread it to get to this interview. Full disclosure. But I read pretty much everything else and made some notes. So check out 'Privacy is Power: why and how you should take back control of your data'. So Carissa, is privacy power?

Carissa Veliz[0:03:38] Definitely. One of the things that I was very impressed by recently in my research was this insight by Bertrand Russell, the philosopher Bertrand Russell, who argues that we should think about power like energy. One of the characteristics of energy is that it can transform itself from one kind into another. And so does power.

Carissa Veliz[0:03:58] So if you have enough economic power, you can buy yourself political power through, for instance, lobbying very hard or buying politicians and so on. If you have enough political power, then you can transform that into economic power by making sure people hire you. Or you can transform it into military power.

Carissa Veliz[0:04:19] So one of the things that we have been surprised by in the digital age is a new kind of power that is born out of data and data analysis. And this power is related to the ability to predict and influence behaviour. And just like the other kinds of power, it can transform itself from one kind into another.

Carissa Veliz[0:04:39] So if you have enough of this kind of predictive power through data or data power, you can transform it into economic power, because you can either sell the data or you can sell access to people and access to people's attention. Or you can transform it into political power because you're so big, like Google and Facebook, that you have really big lobbying muscles, and so on.

Carissa Veliz[0:05:03] And because we weren't so familiar with this kind of power, it crept up on us. And we didn't identify, for instance, monopolies when they were growing, because we were too used to economic power. So we thought that the litmus test for whether something was a monopoly was whether it could raise its prices and not lose any customers. But it turns out that companies like Facebook and Google are supposedly free, in that you don't pay money, but you pay something else. You pay your data.

Carissa Veliz[0:05:34] And it didn't occur to us that our old litmus test was not going to work for new kinds of power. So really, the litmus test should be something more general, like a company being able to have exploitative policies, whether it's too high a price or collecting too much of your data or other kinds of bad policies without it losing any users or customers.

Ben Byford[0:05:58] Yeah. So there's this overarching idea that we should take back control of our privacy. So it's in the title. We should value the data that we trail behind us, but also the data we actively produce and actively put in places. And one of the really nice overarching ideas in the book is that data is toxic. So I wondered if you could just go into that briefly.

Carissa Veliz[0:06:31] Yes. One of the things that I'm trying to change is the mentality that privacy is only a personal choice or an individual matter. And I argue that, no, it's actually a political one. And it's much more of a collective problem than it is an individual one.

Carissa Veliz[0:06:46] So another thing about power that we were just discussing is that, if we give too much of our data to companies, it shouldn't surprise us when they're designing the rules of society and they're basically leading our lives. If we give too much of our data to governments, then we risk sliding into authoritarianism. We can only maintain a healthy and strong democracy if the bulk of power and the bulk of data stays with citizens.

Carissa Veliz[0:07:15] And one way to think about this is that data is a kind of toxic asset. And I use the analogy to asbestos. Asbestos is a very interesting material because it's very cheap, it's very easy to mine and it's incredibly helpful. Among many of its qualities is that it's very hard for it to catch fire and it's also very durable. So we have been using it in buildings, in cars, in plumbing, in all kinds of structures. And it turns out that it's incredibly toxic.

Carissa Veliz[0:07:56] There's no safe threshold of exposure. And hundreds of thousands of people die every year from cancer because of asbestos exposure.

Carissa Veliz[0:08:07] In the same way, personal data is very cheap, it's very easy to mine and it's very helpful. But it turns out it's toxic. It's really dangerous stuff. And it's toxic for individuals, it's toxic for institutions and especially toxic for society.

Carissa Veliz[0:08:22] For individuals, it's exposing us to all kinds of harm. So it's exposing us to harms like data theft, like public humiliation, even extortion. Many times identity theft. So people stealing your credit card and using it or stealing your identity and committing crimes in your name. And it is exposing us to discrimination, to all kinds of harms that are making our lives worse off.

Carissa Veliz[0:08:52] Regarding institutions, data is toxic because it's a liability. Every piece of personal data is a potential leak, a potential lawsuit. And we're seeing it now with, for example, companies like British Airways, which was fined £183 million, before the Covid pandemic, for losing data in a data breach. The most recent one was, I believe, Hilton Hotels, which again got fined a huge amount of money because of a data breach. And of course, Cambridge Analytica had to close down, and so on.

Carissa Veliz[0:09:33] And sometimes it's not only that it's very pricey but also it's a huge loss of face.

Carissa Veliz[0:09:38] And then finally, data is toxic for society. It's undermining equality and it's undermining democracy. So you and I are not being treated as equal citizens. We are being shown different opportunities online. We are paying different prices for the same product. We're not being treated as equals. And it's undermining democracy because it's fuelling fake news, it's polarising society, it's fragmenting the public sphere. It's having all these kinds of consequences that are so toxic that we would be much better off without personal data being bought and sold.

Ben Byford[0:10:13] Yeah. And I guess from a technical side, if you simply didn't have the information or some sort of marker or just category of person coming to your website, then you wouldn't be able to tailor those adverts. So you wouldn't be able to tailor that information to those different types of people because you just wouldn't have that information to be able to make those systems work at that point. So taking back control of that privacy or just not doing those techniques means that, hopefully, we get to a more equitable situation, right?

Carissa Veliz[0:10:53] Exactly. Just like you don't tell your prospective employer certain things, like what religion you profess or whether you have small kids or whether you're planning to have kids. Or all kinds of things that could count against you unfairly because they shouldn't count. Well, in the same way, we shouldn't tell hundreds of companies our most intimate details, like what we buy, where we live, who are our friends, who's our family, our political tendencies, our sexual tendencies. They don't need to know these things to treat us the way we deserve to be treated.

Ben Byford[0:11:25] Yeah. So my main problem with this conversation that we're going to have today is that I mostly totally agree with you. So I almost have to take the devil's advocate position. Because I read the book and I'm going, yeah, yeah, of course. And nodding my head as I'm reading about it. So it's difficult for me to challenge you on these ones.

Ben Byford[0:11:53] Because I somewhat do believe in the fact that it is unnecessary to, one, do a lot of these practices. But secondly, just to treat people in this way, basically. It really comes down to how we are, as groups of people in organisations, treating hundreds of millions of people. And it's fascinating that, just within that abstraction of a private organisation, we can do some really horrible and evil things.

Carissa Veliz[0:12:30] It's really appalling. When you look at the categories that, for instance, data brokers use or even ad marketing companies use, it's horrible. So people who have been abused, people who have been the victims of rape, people who are HIV positive, people who suffer from premature ejaculation or impotence. These are the categories that these data vultures are using to target us and to exploit our vulnerabilities. It's really quite appalling.

Ben Byford[0:12:59] Yeah. And there's no accountability there because, of course, it's really difficult to then see that stuff in the real world. Because we are getting presented with different information everywhere.

Carissa Veliz[0:13:10] Exactly. And you don't know what data they have on you. It might be inaccurate. And very important decisions about your life are being made on the basis of this data. Whether you get a loan, whether you get a job, whether you get an apartment. How long you wait on a particular waiting list, including just for customer service. Your whole life is being affected by this data that you can't even access.

Ben Byford[0:13:35] Yeah. And I don't know if you've ever done this but, since GDPR came in and it became somewhat easier to request your information, I requested my Facebook information. And it is massive. You get this zip file. Obviously, the actual reason that I wanted to do that was so that I could preserve some of the images and things that I'd shared on there before not using the service that much anymore. But the trove of information that they have on you is astounding when you have it and you look at it and understand it.

Ben Byford[0:14:15] It's bizarre.

Carissa Veliz[0:14:17] Yeah, it is really. I have done it and it did freak me out as well. And it freaked me out even more because I suspect that that's not everything they have. I suspect they have a lot of extra data analysis from that raw data that they don't even share.

Ben Byford[0:14:33] Yeah, that's a really good point, actually. Extrapolations and extra categories and all these sorts of things that might be useful for them within the advertising context, I guess.

Ben Byford[0:14:46] But also there have been historical research projects done on us, as individuals, within the platform. Not just Facebook, but all sorts of different platforms, where they are experimenting on thousands of users just because they are able to.

Ben Byford[0:15:07] It's strange because an experiment could be just changing a colour. In the developer world, it might be an A/B test where we're testing whether people will enjoy this shade of blue slightly better than the blue we originally used. And we can see how they react to it, essentially.

Ben Byford[0:15:27] But you can also do things like withholding likes and then producing likes later on, which was an example from Instagram, one of the experiments they did. And on Facebook, equally, they did something around promoting negative emotions through the information that they showed you within your feed, to see if you would post more negatively, whether the sentiment analysis would be more negative, manipulating people's behaviour that way.

Carissa Veliz[0:15:59] And this study in particular that you're citing was incredibly unethical because they didn't take into account... For instance, they didn't exclude people who might be suffering from depression or bipolar disorders. People who are really having a bad time. And you might be tipping the balance even worse for them, if you were encouraging all these negative emotions. That was absolutely terrible.

Ben Byford[0:16:23] Yeah. And it's just such a shame. Because you could make people's lives so much better as well. So it's interesting that, even if you were to do an experiment like this and you thought that that was a good thing to do and you considered some of the things you were saying about types of different people who might be grossly affected by it in a detrimental way, why wouldn't you just try and produce positive sentiment from people? That would be my obvious counterpoint.

Carissa Veliz[0:16:55] Yeah.

Ben Byford[0:16:56] Why were you making people's lives worse?

Carissa Veliz[0:16:58] There are many reasons why that happens. I think one of them is just the economic incentive. Because there's an economic incentive to collect as much data as possible, then that leads to all kinds of bad consequences in which data just gets collected and sold to the highest bidder. So that's one example.

Carissa Veliz[0:17:17] But also it turns out that negative emotions are more effective at hooking people to social media than positive emotions. And that's why we see fake news being more viral than true news.

Ben Byford[0:17:32] It's a shame because it's a fair comment and something that we are psychologically aware of. Or some people are psychologically aware of. But it's a shame that we still can't use the positive. Given that fact, we need to fight against that stuff with positive information, positive ways of working, something more useful... It's a shame that we are always fighting the economics of the situation, which actually works against us. So do you think social media and privacy are incompatible in that way?

Carissa Veliz[0:18:11] No, not necessarily. I think social media, as it's designed at the moment, is incompatible with privacy, certainly with a business model based on personal data. But that's just a business model. It could be very different. So if we paid Facebook $10 a year or a month or whatever it was, then we could design a very different platform in which different kinds of interactions are encouraged and in which nobody is profiting from your data. So nobody is collecting it and nobody wants it.

Carissa Veliz[0:18:39] And you made a good point about how it's a shame that personal data that could maybe be used for good is being used in ways that are so destructive. But I think that's just the nature of personal data. Even when it's used for something good, if it's kept, sooner or later somebody will abuse it.

Carissa Veliz[0:18:59] So I found it interesting that you agreed with the book so much. Because many people find it very radical for me to call for the end of the data economy. This is the first book that actually says that. It says this is so toxic that we should just end it. But really that's only surprising because we are seeing it from the perspective of the status quo, in which we are already used to this kind of trading.

Carissa Veliz[0:19:24] But if we were to ask somebody from the 1950s, hey, we have this idea and we have this business model and this is how it's going to work. Do you think that's a good idea? They would probably feel absolutely horrified. It's like, no. What do you mean? You're going to sell lists of the people who have been victims of rape? And that's going to be the basis of a business model? That's awful. People shouldn't profit from that.

Carissa Veliz[0:19:48] So what's radical and what's really extreme is having a business model that depends on the massive and systematic violation of rights. Privacy is a right for a very good reason. And that is because personal data ends up getting abused.

Carissa Veliz[0:20:02] So imagine we argued that it would be better if all our houses didn't have doors and didn't have locks. Because that way people could come in and they could share their stories and they could leave you with nice food and they could do nice things for you. And of course, some neighbours might do that. But eventually, at some point, somebody will misuse it and somebody will enter your home and rob something. And we need to have locks in place.

Carissa Veliz[0:20:29] And then we need to unlock it for the right people and the right circumstances and the right purposes.

Ben Byford[0:20:35] I like one of the examples you give in the book of one of these practices, which is reading through people's mail. Having a postman read through your letters would be a criminal offence. But you have all your emails with Google or whoever, and they are able just to look at that information verbatim. There is no way you can not let them do that. If you are sending information to someone who has a Google account or a similar account, or you have one yourself, that is just what you are literally doing.

Ben Byford[0:21:14] Yet we don't send these people to jail or hold them to a standard that allows us to have that privacy, to lock those doors, essentially, to withhold some information that we don't want to give to a service.

Ben Byford[0:21:31] But we actually do want the service itself. I would like to pay for email which doesn't have spam and that's it. I don't need you to do all the other stuff. I'll give you some money. Fine.

Carissa Veliz[0:21:43] Yeah, exactly. And at some point in the book I argue that, in 2013, Google was already an incredibly big and very successful company and earning lots of money. And journalists in Forbes magazine figured that, if users were to give Google just $10 a year, Google would earn the same thing as it was earning that year. And I think most Google users who are not in a position of extreme poverty would be very happy to pay $10 a year for Google Maps and Gmail and Google search and all of these things. And we don't need to be sold for that.

Carissa Veliz[0:22:22] And our democracies don't need to be eroded. And all these bad consequences are really so unnecessary.

Carissa Veliz[0:22:28] And some people think, well, maybe you're just exaggerating. Maybe the consequences of personal data aren't so bad. But if you look at history, privacy breaches have indirectly killed more people than other threats like terrorism. So I tell the story in the book about the Second World War. And one of the first things that the Nazis did when they invaded a city was to visit the registry because that's where the data was held and they were looking for the Jewish population.

Carissa Veliz[0:22:59] So there was a study that compared the European country that had the most data on people, which was the Netherlands, because there was this person called Lentz, who was a fan of statistics and who wanted to design a system that would follow people from cradle to grave, versus the country in Europe which had consciously collected the least amount of data, for privacy reasons. And that was France. Since 1872, they had made the decision not to collect data about religious ancestry or affiliation.

Carissa Veliz[0:23:34] And the result was that the Nazis were able to find about 73 percent of the Jewish population in the Netherlands and kill them. And in France, they found 25 percent of the Jewish population and killed them. The difference between those two countries was hundreds of thousands of lives.

Carissa Veliz[0:23:52] And there's this particular story about the registry in Amsterdam. The Dutch realised that registries were very dangerous. So in 1943, a resistance cell tried to destroy the records in the registry. They went into the building. They set fire to the records and they had a deal with the firemen because they knew some firemen who were sympathetic to the resistance. And the deal was that the firemen would try to get there as late as possible to allow the fire to do its job. And they would use much more water than necessary to destroy as many records as possible. Unfortunately, they were very unsuccessful.

Carissa Veliz[0:24:35] Not only were they found and killed, but also they only managed to destroy about 15 percent of records. And 70,000 Jews were found and killed.

Carissa Veliz[0:24:49] And the Dutch had made two mistakes. One is that they had collected too much data. And the second one was that they didn't have an easy way to delete data quickly in the event of an emergency. And we are making both of those mistakes at a scale never seen before.

Ben Byford[0:25:07] Yeah. Those are shocking accounts, as is everything from that period, really, to be honest. It's funny because it is one of those arguments that comes up: the "if you've got nothing to hide, then what's the problem?" situation. But as you've stipulated, it might not be now. It might be in the future. It might be somewhere else. It might be some organisation that you haven't met yet and it might be your next boss. And all these individuals, all these organisations, all these governments, might be able to do something to you based on the things that you have put out there in the world.

Ben Byford[0:25:49] That digital exhaust that you've created. And it might not be something that affects you right now.

Ben Byford[0:25:55] And I think in the book you mentioned, you might be some white guy and you don't really think about these things. But there are people out there who are thinking very seriously about these things and it affects their lives greatly already, right now. And it might affect yours at any time in the future. And indeed, your children and their children.

Carissa Veliz[0:26:19] Exactly. And you might have something to hide and you just don't know about it. So for instance, you maybe develop Parkinson's or Alzheimer's. And companies can be trying to infer this just by the way you swipe your finger on your phone. And you might not get your job because a company might suspect that you will develop Parkinson's disease in the next 10 years. And that information might actually be wrong. You're still going to suffer the consequences of that privacy loss.

Ben Byford[0:26:49] So we talk a lot about AI on this podcast, the Machine Ethics podcast. Hello, if you're just tuning in. It feels like a radio station. It's definitely not. And I wonder, because obviously a lot of the machine learning techniques rely on huge amounts of data.

Ben Byford[0:27:07] And obviously data is all sorts of things and doesn't necessarily have to be people's data. It could be weather data and atmospheric data and pressure and how many drops of rain there are and all sorts of different types of data. How many rats are in this area. I don't know. There are different types of data, obviously.

Ben Byford[0:27:28] But machine learning often relies on lots of data. So is there some issue with a lot of these techniques that we're using at the moment and the requirement for there to be data to use in order for them to be successful and accurate?

Carissa Veliz[0:27:49] That's a great question. So there are many responses. One response is that most of the data that we have and that we need and that is most useful is actually not personal data. So it's data about the weather, data about the quality of the air, data about all kinds of things that don't have to do with us. And recently somebody told me that their business calculated that only six percent of data is personal data. I haven't actually verified this. But this would be something interesting to look at.

Carissa Veliz[0:28:21] What exactly is the proportion? So that's one thing.

Carissa Veliz[0:28:24] And in fact, if you look at the most incredible advances in machine learning, they haven't been done with personal data. So there are things like AlphaGo Zero, which actually used its own data, playing itself millions of times to get so good at the game Go. Or things like trying to find a possible antibiotic by looking at molecules.

Carissa Veliz[0:28:49] And it might not be a coincidence that the most incredible advances haven't been with personal data. Because personal data sometimes expires very quickly. So for instance, how many times have you liked something on Facebook and then a year later you actually don't like it anymore? But you never go back and click unlike. In the same way, you move places. You change in all kinds of ways that make personal data very quickly out of date.

Carissa Veliz[0:29:16] Another answer is that, even with personal data, we can use it in ways that are safe. Or in fact, we can work with what we would otherwise treat as personal data without it ever being personal data. So what I'm referring to here is a technique like differential privacy, in which you never collect the personal data in the first place. If you collect data with a differentially private algorithm, there's no original database that can be breached or leaked. You insert mathematical noise into the database from the start so that no individual can suffer privacy losses.

Ben Byford[0:29:59] Yeah. And just to clarify, that technique is about making inferences from lots of data. So you couldn't necessarily make an inference about a particular person, but you could make an inference about these million people, without them being exposed, if you like, to having their personal information instantly available or whatever.

Carissa Veliz[0:30:22] Exactly. So a nice way to explain this, and I got this example from Aaron Roth, is to say you wanted to figure out how many people voted for Trump or for Brexit in a particular city. And normally you would just call up a few thousand people and ask them directly. But then you have a phone number and an answer. So that's very dangerous.

Carissa Veliz[0:30:44] If you do it in a differentially private way, you call them and you ask them to flip a coin. If it lands on heads, they tell you the truth. If it lands on tails, they flip a coin again. If it lands on heads, they tell you the truth and if it lands on tails, they tell you a lie. So that means that 75 percent of your answers are true, 25 percent of your answers are a lie. And you can statistically adjust that such that you get very accurate statistical responses to your questions.

Carissa Veliz[0:31:14] And the person never tells you how the coin lands. They just give you an answer. So every person has plausible deniability. Every person can say, no, I didn't vote for Brexit or I didn't vote for Trump.
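
For readers who want to see the coin-flip mechanism in code, here is a minimal Python sketch of randomised response (the local form of differential privacy Carissa describes), together with the statistical adjustment that recovers the aggregate answer. The population size and the 40 percent true "yes" rate are illustrative assumptions for the simulation, not figures from the episode or the book.

```python
import random

P_TRUTH = 0.75  # with the coin-flip scheme, 3 out of 4 reported answers are truthful

def randomized_response(true_answer: bool) -> bool:
    """Report an answer using the coin-flip scheme Carissa describes."""
    if random.random() < 0.5:      # first flip lands heads: tell the truth
        return true_answer
    # first flip lands tails: flip again; heads -> truth, tails -> lie
    return true_answer if random.random() < 0.5 else not true_answer

def estimate_true_rate(reports: list) -> float:
    """Undo the noise: E[observed] = P_TRUTH * p + (1 - P_TRUTH) * (1 - p), solved for p."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - P_TRUTH)) / (2 * P_TRUTH - 1)

# Simulate 100,000 respondents, 40% of whom would truthfully answer "yes"
# (both numbers are made up for the demo).
truths = [random.random() < 0.40 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]

print(f"observed 'yes' rate:  {sum(reports) / len(reports):.3f}")  # roughly 0.45
print(f"estimated true rate:  {estimate_true_rate(reports):.3f}")  # roughly 0.40
```

Because the noise is applied to each individual answer before anything is collected, no single report reveals how anyone actually voted; accuracy only comes back in aggregate, which is exactly the trade-off discussed next.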

Ben Byford[0:31:27] Yeah. Unless you're also tracking them with a tracking device in their house or on their phone at the same time.

Carissa Veliz[0:31:37] Yeah. Well, even then, if nobody knows how the coin landed except that person, you never collect that data in the first place.

Ben Byford[0:31:46] Yeah, I know. I'm just being silly. So in that case, you get somewhat less accurate data, but if you've got lots of it, it doesn't really matter. Like you're saying, it levels itself out, essentially. Because you already know what the probability space is in those answers.

Carissa Veliz[0:32:04] Exactly. And then the final response to the concern about AI needing lots of data is that it's conceivable that, in the future, AI will need less data. So what we want to build with AI is something that's actually intelligent, not that only mimics intelligence. And if you take a child, it's incredible how quickly they learn. So if you teach them something, you only have to teach them sometimes once or maybe twice, and they can immediately reproduce that.

Carissa Veliz[0:32:35] And not only that, but they can apply it to other circumstances and new circumstances.

Carissa Veliz[0:32:39] And machine learning can't do that or it needs a lot of data. Instead of one or two data points, it needs thousands or hundreds of thousands or millions. And there's an argument to be made that there are algorithms that are so rudimentary that, no matter how much data you throw at them, it's not going to solve the problem. It's not going to make them intelligent, just the amount of data. And as we build better algorithms that are actually more intelligent, they will need less data.

Ben Byford[0:33:09] Yeah. That is the frontier of AI at the moment, I guess, is doing more with less.

Carissa Veliz[0:33:17] Exactly.

Ben Byford[0:33:18] So in the book, there's a chapter on what we can do. The types of things that you would advise a person, an average person, who has devices and is concerned with these things. And there's a whole list of things that you recommend to do practically. But also about communicating with people and talking to them about your concerns and giving back those people that autonomy to make those decisions, which is really, really useful. So we can go into that.

Ben Byford[0:33:55] But I'd just like to say, go and read the book.

Ben Byford[0:33:59] But also there isn't actually specific guidance for people designing this stuff. So I was wondering, did you have a secondary list which would be: when making technology, when designing, when developing, what kinds of things should you be thinking about in order to, hopefully, avoid some of this toxic data issue and preserve the right to privacy?

Carissa Veliz[0:34:24] Yeah. So I do have a small section on what you can do if you're in tech. And part of it is just thinking differently. So many times people in tech think about what they want to build and then they think, well, once it's built, I'll add privacy concerns and ethics into it. And that's just not the way it works. Because ethics and privacy have to be baked in from the start. Otherwise, if you just try to add them on later, it just leads to a disaster.

Carissa Veliz[0:34:54] So for instance, if you have the wrong business model, then no matter how much ethics you try to add on to it, it's just going to fall apart.

Carissa Veliz[0:35:02] So you have to think about these things from the start and take into account all stakeholders, try to imagine possible bad consequences. And then very specifically, I recommend that people talk to either an ethics consultancy or academics. And I recommend some academics that they can follow.

Carissa Veliz[0:35:24] Or talk to an ethics committee. There are places in which ethics committees offer their services to start-ups. So one place... Full disclosure, I'm working on that ethics committee... is Digital Catapult. And this is a UK initiative in which start-ups get all kinds of help, like computational power and all kinds of other perks. But they also get to talk with an ethics committee who tries to help them build a more ethical product.

Carissa Veliz[0:35:52] So find opportunities for ethics. Don't leave it as something that you will do at the end.

Ben Byford[0:36:00] Yeah. And just a plug for myself, I guess. We also have Ethical by Design, which is the consultancy that I run, which also works with groups of people, like yourself, to make those recommendations to organisations, hopefully from inception through the making process. So that we can cover all those ethical questions. That's my hope. And we have different people from different backgrounds, like yourself. So use those things, go and talk to those people, do some research, read some books. And make better products.

Ben Byford[0:36:42] I guess we're just saying that your book is quite stark about the digital economy. And I was just wondering if there is light at the end of the tunnel. If there is a vision that you have where things could be better, and what that would look like.

Carissa Veliz[0:37:02] Yeah. So one more thing for people who work in tech is, don't have a business model that depends on personal data. That's just the past. It's going to end and your business is not going to thrive in the long run. It's like opening a business that is based on oil right now. That's the past. And if you look at, for instance, Scandinavian oil companies, they're veering towards clean sources of energy. And that's the smart thing to do because that's the future. So the future is really more privacy, not less.

Carissa Veliz[0:37:35] So don't have a business model that depends on personalised ads or personal data.

Carissa Veliz[0:37:41] And yes, I definitely see light at the end of the tunnel. I think we are going through a civilising process, similar to the one we went through in the offline world, in which we made rules to make life more liveable and more easily bearable. So we don't have to go to the supermarket and ask ourselves whether the food we're eating is going to poison us. And we don't have to get on an aeroplane and wonder whether the security is good enough. We have standards for that.

Carissa Veliz[0:38:14] And just like we have standards for that, we should have standards for cybersecurity and for privacy. So we need regulation. There's no other way around it. We need to ban trades in personal data. We need to ban personalised content. We need to implement fiduciary duties so that our data can only be used for our benefit and never against us.

Carissa Veliz[0:38:34] But for that to happen, we need people to pressure and to rebel against the data economy. And so what ordinary people can do is find opportunities for privacy. Talk about privacy. Read about privacy. Tweet about privacy. Don't expose others. So don't take pictures and then upload them without asking for people's permission. Whenever you receive a message that is clearly violating someone's privacy, because it's making fun of them, or there's an image of them that they don't want to be out there, or it exposes their private messages or something, don't forward it, and express how much you disagree with that action.

Carissa Veliz[0:39:16] And choose privacy-friendly options. So instead of using Google search, use DuckDuckGo. It's great and it doesn't track you. Instead of using WhatsApp, use Signal. It's fantastic and it's not based on a personalised data model. Instead of using Gmail, use ProtonMail. There are many alternatives out there that are just as easy, just as free and just as good and that are not based on your personal data.

Ben Byford[0:39:47] Great. So if you want to contact me, it's benjaminbyford@protonbipm.me, I think. So a quick plug for them because, yeah, we just need people to get on board with the privacy economy, I guess. This new era.

Carissa Veliz[0:40:04] Yeah. Something else that people can do is to ask companies for your data and ask them to delete your data. It's very important because, if rights are only on paper and never come to life, then it's as good as if they weren't there.

Carissa Veliz[0:40:21] So when you ask a company for data, you show them that you care about it, you make them work for it. And if they don't comply with your request, you create a paper trail such that, when regulators look at that company, they'll see that they're not using people's data with their consent. And they can get fined and they can pay the consequences for that. So it's really important to ask companies, demand companies, that they respect your right to privacy.

Ben Byford[0:40:48] So the last question we usually have on the podcast is, Carissa, what excites you and what scares you about this technological future? And I guess we've covered some of that already. But I was just wondering, to give you the opportunity to paint the fearful and the really exciting and glorious things that you can see coming through.

Carissa Veliz[0:41:12] What scares me is that I can see that we're building the perfect architecture for the perfect totalitarian state. A totalitarian state that cannot be challenged. Because as soon as you start to even think about organising or rebelling against it, they know it before even you do maybe. And they squash it. So we are in a very dangerous moment in which, if we continue to walk in this direction, we will end up with a very scary circumstance in which an authoritarian regime can take over and be extremely powerful.

Carissa Veliz[0:41:45] More powerful than anything we've seen in the past, by far. So that's the scary circumstance.

Ben Byford[0:41:52] That's pretty good. That's pretty scary.

Carissa Veliz[0:41:55] The optimistic view is that we're still in time to take back control of our data. Much of the world is still not digitised. Rules are changing in the right direction. People are becoming more aware. And I can see a future in which we have cutting-edge technology, but we have technology that we own, that works for us, that we don't work for it. That we're not slaves of it. That when I use my car, I want my car to take me somewhere.

Carissa Veliz[0:42:23] And I want it to do what I want it to do. I don't want it to spy on my conversations, to see what music I listen to, to track me and sell the data to people who will use it against me.

Carissa Veliz[0:42:35] And I think that future, in which we own the things that we use and they work for us, we don't work for them, is very realistic and it's very possible to bring it into reality. We just have to be very clear about what we want and how we get there.

Ben Byford[0:42:51] Awesome. So thank you very much for talking to me. If people want to follow you, find out about you, buy your book, how can they do that?

Carissa Veliz[0:43:01] They can follow me on Twitter. My handle is carissaveliz. I also have a website with just my name, carissaveliz.com. And they can find the book in their favourite bookshop. Blackwells is my favourite bookshop, so I would start looking there. But it's also on Waterstones, Amazon and all the rest.

Ben Byford[0:43:20] Yeah. Awesome. So thanks very much and I'll see you in the private future.

Carissa Veliz[0:43:27] Thank you, Ben.

Ben Byford[0:43:29] Thanks again to Carissa. This is the end of the podcast.

Ben Byford[0:43:32] I'm sorry about the bad audio issues. We're having this problem where I haven't found the perfect solution for recording people remotely when the Internet is troublesome or they haven't got a good mike or there's too much compression, that sort of thing. In this episode, the audio coming through from the Internet was quite bad, quite bitty. So really amazingly, Carissa was able to record on her iPhone. But then, at some point, the iPhone was moved or something happened. So it gradually got quieter and I had to boost the signal.

Ben Byford[0:44:03] So it starts degrading over the episode a little bit. I tried to do my best.

Ben Byford[0:44:09] I really enjoyed Carissa's book. There's only one thing, really, that I took issue with. Working in some of these companies, I've had experience with security. And in the book she calls out that maybe tech companies could do more about security and securing data. And obviously they can. But she says that maybe they're purposely not doing enough. And I don't think that's necessarily the case, having worked in those environments. We try and do our best.

Ben Byford[0:44:37] Personal data as toxic is a really, really interesting concept, and it really hits home what this is all about and what we should not be doing, essentially. And I think that's a really great way of labelling that stuff. Be careful with it. It's toxic.

Ben Byford[0:44:55] And I really find it fascinating that there's this digital economy, this data economy, and that maybe it is coming to an end. Calling out these large corporations, or the practices of passing around data and selling data and those sorts of horrible practices. And maybe we can find a way to get past that, move on and not take people for granted and be nice. So that would be good, wouldn't it?

Ben Byford[0:45:23] So thanks again for listening. If you'd like to contact us, go to machine-ethics.net and stay tuned for the next episode.

Ben Byford[0:45:30] Thank you very much.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant, code, design and data science teacher, and freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford