94. Diversity in the AI life-cycle with Caitlin Kraft-Buchman
Caitlin Kraft-Buchman is CEO/Founder of Women at the Table – a gender equality & systems change CSO based in Switzerland – and Co-Founder/Leader of the <A+> Alliance for Inclusive Algorithms – a global multidisciplinary coalition of academics, activists, and technologists prototyping the future of artificial intelligence and automated decision-making to accelerate gender equality with technology and innovation.
The <A+> Alliance is a leader of the UN's Generation Equality Action Coalition on Technology & Innovation for Gender Equality. Caitlin was co-chair of the Expert Group for the UN Commission on the Status of Women (CSW67) in 2023, with its first-ever priority theme of Technology & Innovation.
Caitlin leads the <AI & Equality> Human Rights Toolbox initiative, an educational platform that supports a global community working for a human rights-based approach to AI - with equity & inclusion at the core of the code.
Women at the Table is a leader of the f<a+i>r Feminist AI Research Network, with hubs in Latin America & the Caribbean, the Middle East & North Africa, and Southeast Asia, and a sister network in Africa, and serves as Civil Society lead for the World Benchmarking Alliance's Collective Impact Coalition for Ethical AI.
Caitlin is co-founder of the International Gender Champions (IGC) – with hubs in Geneva, New York, Vienna, Nairobi, The Hague & Paris – bringing together female & male heads of organizations, including the UN Secretary-General, to break down gender barriers. She serves on the IGC Global Board and co-leads the new IGC Impact Group on Digital and New Emerging Technologies with Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union. Caitlin is one of the Network of Experts for the UN Secretary-General's AI Advisory Body; a member of UNESCO's Women4Ethical AI working group; a member of the Gender Advisory Group for the AI Action Summit to be held by the government of France in February 2025; and Co-Chair of the UN Commission on Science & Technology for Development (CSTD)'s Gender Advisory Board.
Transcription:
Ben Byford:[00:00:00]
This episode was recorded on the second of October, 2024. Caitlin and I chat about gender and AI, how technology isn't neutral, diversity in creation and exploitation, lived experience expertise, co-creating technologies, the importance of success metrics, and various international treaties on AI. If you'd like to find more episodes like this, you can go to machine-ethics.net. You can contact us at hello@machine-ethics.net. You can follow us on X at Machine_Ethics, on Instagram at Machine Ethics Podcast, and on YouTube at machine-ethics. And if you can, you can support us on Patreon at patreon.com/machineethics. Thanks very much, and I hope you enjoy.
Ben Byford:[00:01:05]
Hi. Welcome, Caitlin. Thanks for coming on the show. If you could please introduce yourself, who you are, and what you do.
Caitlin Kraft-Buchman:[00:01:12]
I'm sitting here in Geneva. I run Women at the Table, which is actually a feminist systems change organisation and is the mother of an organisation called the A+ Alliance for Inclusive Algorithms, which we founded in 2019. The A+ Alliance is in turn the mother of the Feminist AI Research Network, a network that really looks to create prosocial AI. It's actually proof-of-concept work. In addition, we're also the founders and runners of the <AI & Equality> Human Rights Toolbox network, which began in 2017. But I'm sure we'll go into all of that as we go forward together.
Ben Byford:[00:02:02]
Thank you very much. There's a lot of stuff in there. So what we normally do on the podcast, before we dive into some of those things, is briefly ask the question: what is AI? Or maybe even, what is the technology we're talking about, and what is AI to you? What are the things that you're thinking about when you're dealing with those organisations?
Caitlin Kraft-Buchman:[00:02:28]
Well, when we say AI, we understand that there are very, very many different permutations of it. So we're mostly concerned with impact, and we're taking a human rights-based approach. So we're saying, at the end of the day, how we arrive and which technologies we're using are less relevant to our particular conversation than the impact on humans and the impact on human dignity, which is, we'll say, the base of the whole human rights movement. So that's one part. On the other part, when we talk about feminist AI (I suppose others could also call it humanist AI), it is inclusion at the core of the code. So at the very base, at the base of the data collection, the base of the objective, the base of the model, the idea is: has the community been engaged, and have the people who are impacted really been consulted and co-created the technology?
Ben Byford:[00:03:39]
Perfect. I've got so many things already. So obviously you've been in this world for a little while, and from my understanding, you've been working in gender equality and related areas for a long time now. How did you arrive at the A+ Alliance and the things that you were seeing? Because obviously the AI, let's say, "revolution" (I'm doing air quotes here)... AI has been around for a while, but the recent revolution in AI has only been the last, let's say, 12 years. So at what point did you get excited, or scared, about what's going on and start to pull those things together and try and do something about it?
Caitlin Kraft-Buchman:[00:04:27]
So, Women at the Table had always, somewhat anecdotally, had four pillars of work that focused on traditionally male-dominated sectors: the economy, democratic governance, sustainability, and technology. We had different work streams for each of these very, very important areas of work. We started in 2015 with the International Gender Champions, and we were drivers of something called the Buenos Aires Declaration on Trade and Women's Economic Empowerment, which was the first time the word "woman" had ever been used in an official WTO or GATT document. We were the drivers of the Gender Responsive Standards Initiative, which now every international standards maker of the world has signed. And in each of these worlds, we started to see... I didn't know a lot about trade, but I understood that it's the hidden operating structure of, certainly, the world economy. That was fascinating, and I saw how women were not part of that decision-making process. That led us to the world of standards, which are also the hidden interstices of the world: all these standards, starting with the built environment, but also a lot of more abstract standards coming out in technology. Not abstract, really, but unseen.
Caitlin Kraft-Buchman:[00:06:07]
And that's another operating system. And that kind of led us to algorithms as another series of operating systems that are at large in the world, but that we are largely unaware of as citizens. So in 2017, we decided to convene a lunch, in concert with the Office of the High Commissioner for Human Rights' Women's Rights Division, on gender and AI, to say: what is it? Do these things have anything to do with each other at all? We invited a series of professors from EPFL and ETH Zurich and Geneva. Here we are in Geneva, so we invited our Swiss colleagues to talk about it. And we were very, very lucky that we had two really fabulous researchers from EPFL: Elisa Celis, who's now at Yale, and Nisheeth Vishnoi, also now at Yale, but who were running a lot of things at EPFL. My favourite part of that luncheon, where they gave this extraordinary presentation, was when one of the other professors stood up and said, "I am so glad that I am a professor of mathematics. I have no gender, because numbers are pure. Numbers have no bias." And then one of my colleagues turned around and said, "But, Herr Professor, who picks the X and who picks the three?"
Caitlin Kraft-Buchman:[00:07:42]
So really, in any equation, of course, there's human ingenuity, there's human input, and therefore there's bias, and there's no such thing as anything being neutral.
Ben Byford:[00:07:57]
A hundred percent.
Caitlin Kraft-Buchman:[00:07:58]
Yeah. Yeah, that started us on a quest. There you go; it all came from that.
Ben Byford:[00:08:04]
That tickles me, that stuff, because I feel like that's the old world to me, where we had this aspirational thing where science and maths and things like that were pure, and they were incorruptible by social biases and things in the world. And it's like, if only that was true, guys.
Caitlin Kraft-Buchman:[00:08:26]
Exactly. We did a thing recently for this AI & Equality Toolbox, which we'll talk about. We had asked a PhD cohort to take some of our questions and map their research to the questions. And they all came back in the first cut of the survey to say that, of course, there was bias in their research, but it was statistical; not one of this particular cohort saw that there was societal bias in their data. And that was also really interesting to us, because these systems are socio-technical. Do you know what I mean? We understand those things, but how we understand what bias means, how we understand how those things interrelate, really depends on the educational system, the context, all that.
Ben Byford:[00:09:17]
Yeah, definitely. And those two ideas about how these things actually operate, do you think we've come a long way from that, to where we have an appreciation? Do you see, maybe in the way people talk about it, or in the way you've talked to people in industry, that there is a change in how people are talking about this stuff?
Caitlin Kraft-Buchman:[00:09:43]
Sadly, I would say there's vague awareness, or just very shallow awareness, so that's something. But I don't really see that there's the fascination and excitement about making the changes that we need to make, which is the way that I choose to look at it. It's new science.
Ben Byford:[00:10:12]
It's just depressing, isn't it?
Caitlin Kraft-Buchman:[00:10:15]
That's why I try to say it's exciting, right? Yes. Because we had this experience, actually, with some of the guys from the IPCC who originally didn't see a lot about gender and climate change. And it was after a series of conversations that, for one of the head scientists, a light went on. He said, my God, this is an area of science that we have never pursued. This is a chance to be totally excellent. This is a chance to expand the boundaries of the investigation and the science that we're doing. And it was an exciting prospect for him. And I think that we need to change the narrative from purely receiving the technology, from getting into our P(doom) quotients and existential risk, which I think make us passive. Basically, we are being acted upon by the technology, as opposed to being the people who are creating the technology and can use it and direct it for good, rather than having it enacted upon us for maybe-not-so-good. And that's an emotional paradigm shift that we need to make together.
Ben Byford:[00:11:40]
And part of that equation, like you were saying before, is getting women at the table, right? So is part of the alliance, the A+ Alliance, and those things that you're doing, getting diversity at all levels? Because obviously there are people in the higher echelons of companies, or working maybe in public orgs, who are making the strategic decisions. But there are also people just making stuff, right? There are people collecting data, and data scientists and engineers who are making stuff. Is that your process, to get everyone into this? Equality really is an across-the-board thing.
Caitlin Kraft-Buchman:[00:12:32]
Yes, and. So clearly, yes, we need more women. Clearly, yes, we need more socioeconomic diversity. We need more of what we call intersectional diversity: gender, race, class, caste, all of that, because everybody's point of view is only going to add to the incredible body of knowledge. The way that we represent that, though, is that we say that communities, the people who are impacted, should be engaged, must be engaged, in the co-creation, and that we're trying to expand the definitions of what expertise is. You have technical expertise, but you also have societal expertise. You have lived experience expertise. And we have a tendency to create hierarchies out of what that means, and we're trying to erase those boundaries. So you need technologists, clearly. But the technologists also need the people who are going to be living the experience, as well as the people who have been educated in the anthropological or the medical or whatever that sector is, in addition to the people who are going to be using it and then having it used on them.
Caitlin Kraft-Buchman:[00:13:58]
That's part of it. And I think that just "add women and stir" isn't going to solve the problem the way that we need to have the problem solved. Because, first of all, all women are not gender experts. All women don't care about pro-social issues. And that framing is also a little gendered; it's like, oh, women are good and they will do goodness for other good people. I just don't buy that.
Ben Byford:[00:14:25]
I've got a thesis, though, and this is going to sound a bit trite, so I apologise now. I feel like there are very few women waging war right now.
Caitlin Kraft-Buchman:[00:14:40]
I would like to think that, and probably just statistically, I'm sure that you're correct. But we just had the General Assembly: there were 175 or more world leaders who spoke, nine of whom were women. So there are only nine women leading countries. So it may also be that women statistically haven't had a chance to...
Ben Byford:[00:15:07]
Yeah.
Caitlin Kraft-Buchman:[00:15:08]
Wage war. I have often said, it's not that women are less corrupt; I think women have probably had fewer chances to get to the levers of corruption and the high levels of leadership that lead to those temptations. So I'm a little bit more: not "women good", but more "different lived experience, interesting, and change the dynamic".
Ben Byford:[00:15:36]
I love that.
Caitlin Kraft-Buchman:[00:15:38]
Thank you. That was a slightly worrying idea of mine.
Ben Byford:[00:15:43]
Totally horrible. But, like, hey: Margaret Thatcher. There are examples, yes, for sure.
Ben Byford:[00:15:52]
I love the expertise levels, because that really struck me as you were talking about it previously: that there's expertise in a certain area, whether it's medicine or technology or humanities or whatever it is. Then there's a lived, social expertise in where you're coming from. But also there are the people it's happening to, with their lived experience expertise. And I think that's really nice; that's a really nice framing of these different things. So traditionally in user research, you might go and talk to people, right? But maybe you wouldn't qualify it in that way, and you wouldn't hold it in high regard in that way. And I would suggest, from my experience, less and less so these days, because user experience is being twisted a little bit, I would suggest, for the capitalist engine. But yeah, I like your framing of those different levels of expertise that we need to be bringing to the table.
Caitlin Kraft-Buchman:[00:17:03]
That's it. Yeah, exactly. And to co-create. I think that you're going to have much more robust, much more resilient things. We use an example a lot; the New England Journal of Medicine has actually chronicled this. There was a hospital waiting room system that was put in place. Great: we want to have people be able to access medical care in emergency rooms. But the model used the number of doctor visits you had had over the past year as a proxy: more visits would show how sick you were, and would mean that you would come to the front of the line in the emergency room. The only problem with this was that this was in the United States. Obviously, whoever built it had never met a poor person or read a newspaper article, because people without health insurance who were impoverished used the emergency room as their first port of call. And what this system inadvertently did (a system which on its face makes some sense) was put all the poor people at the back of the queue, and in this case also all the people of colour at the back of this particular queue.
Caitlin Kraft-Buchman:[00:18:20]
And I'm sure they were very well intentioned, whoever did this model. But had they thought about it? Okay, as we're creating whatever this algorithm is going to be, we need around the table the emergency room nurses who are doing the intakes, the doctors who are doing the intakes, the people who are sitting in the emergency room, as well as the hospital administrators who, one would presume, would be paying for this algorithm. And also the medical anthropologists asking why this is happening. Sitting and figuring it out together, I'm sure you would have come up with a very just and useful system, as opposed to a bunch of tech guys going, okay, that's great, sicker people need to get in sooner, so what we'll do is just take X for doctor visits, and that is the right way. And I think, yeah, that's a very painful and really interesting example.
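[Editor's note: to make the proxy failure Caitlin describes concrete, here is a minimal sketch in Python with invented numbers. It shows how ranking patients by past doctor visits can push an underserved group to the back of the queue even when both groups are equally sick. It is only an illustration of the mechanism, not the actual hospital model.]

```python
import random

random.seed(0)

def simulate_patient(insured: bool):
    """Return (true_severity, visit_count, insured) for one synthetic patient."""
    sickness = random.uniform(0, 1)  # true severity, which the model never sees
    # At the same severity, insured patients rack up far more doctor visits.
    visits = sickness * (10 if insured else 2) + random.uniform(0, 1)
    return sickness, visits, insured

# Half the synthetic population is insured, half is not.
patients = [simulate_patient(insured=(i % 2 == 0)) for i in range(1000)]

# Triage by the proxy: most past visits go to the front of the queue.
queue = sorted(patients, key=lambda p: p[1], reverse=True)
front = queue[:100]

# Uninsured patients are 50% of the population but almost absent up front.
share = sum(1 for p in front if not p[2]) / len(front)
print(f"uninsured share of the front of the queue: {share:.0%}")
```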
Ben Byford:[00:19:19]
Yeah, I'm sure there are many more like that as well; I can think of a few that we've definitely covered on this show before. So what's your current output? What are you excited about, and what things are you affecting at the moment?
Caitlin Kraft-Buchman:[00:19:36]
An outgrowth of that original meeting we had in 2017 became a series of workshops at EPFL, and that became somebody's master's thesis, and then more workshops and a course that's free, that sits on the Sorbonne Centre for AI website, and is now turning into these toolboxes. Several of them are use cases, because we're also seeing there are so many use cases, but you can't cram them all into one course and make any sense of it or have people absorb it. So we're now taking more of a sectoral approach. We have a community of more than 350 AI researchers and practitioners from around the world, from 57 countries, which is really pretty amazing. These people hang out on this Circle community, and I'm sure you'll put how to find us in the show notes. We're doing an African toolbox, with African use cases, with colleagues from the African Centre for Technology Studies. We're doing a Latin American, Spanish-language toolbox with this methodology (I'll explain the methodology in a minute) with the Chilean National Centre for Artificial Intelligence; that's being translated now. We're going to be doing a world-of-work toolbox that will focus on the recruitment process, which is well researched, with very many points in the funnel, as well as worker surveillance, with the ILO working with us as a knowledge partner. And most importantly to me, actually, in terms of a paradigm shift, a public sector toolbox, where we're working with the Turing Institute to take their human rights impact assessment and incorporate it into this methodology that we have, which is very contextual.
Caitlin Kraft-Buchman:[00:21:39]
I think everybody agrees there are basically six parts to the AI life cycle (some people say eight, some people say five). It's a life-cycle approach to technology development, starting with an objective as a North Star, really being able to articulate what it is that you're trying to create, and with that, the team that you need to create it, with all its multidisciplinarity, just as we've discussed. I'll give you an example of how we're working this out with the public sector in Geneva. Then it goes to data discovery, which clearly has lots of points of potential bias, but also a lot of points where you can really get it right; then to model selection, to deployment, to iteration. And these things are not linear: you can go back from model selection to data discovery as you're adding synthetic data or deciding to recollect data. But you really define a model fit that makes sense, instead of just picking a fairness metric and trying to shoehorn everything into one of these 21 different fairness metrics. You do real experiments with fairness metrics to see: are you always in line with the North Star of what your objective was and is? And then all the way, obviously, to deployment and audits.
Caitlin Kraft-Buchman:[00:23:07]
We're also putting forth this idea that a human rights impact assessment should be done ex ante, before you start, and that you should be checking yourself along the way (which is what the objective work is) to see: are you still hitting those beats? And what can you do if you're not?
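[Editor's note: as an illustration of the point about experimenting with fairness metrics rather than shoehorning everything into one, here is a minimal sketch in Python with invented toy data. The same predictions can look unfair under one metric and fair under another, which is why the metric has to be chosen against your objective. The data, group labels, and function names are all hypothetical.]

```python
def demographic_parity_gap(preds, groups):
    """Difference in positive-prediction rates between groups A and B."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return rate("A") - rate("B")

def equal_opportunity_gap(preds, labels, groups):
    """Difference in true-positive rates between groups A and B."""
    def tpr(g):
        hits = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(hits) / len(hits)
    return tpr("A") - tpr("B")

# Hypothetical predictions (1 = selected), true labels, and group membership.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
labels = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(preds, groups))         # 0.5: selection rates look skewed
print(equal_opportunity_gap(preds, labels, groups))  # 0.0: qualified people treated alike
```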
Ben Byford:[00:23:27]
I love this, but the thing that I always find hilarious... hilarious is probably the wrong word. Dubious? Is this intention part at the beginning. So you mentioned that there's the start of the project, where you want to put together a team and work out what you're trying to do, and then at those different beats, reassess: are you hitting those targets, and does it still make sense against what you were trying to achieve in the first place? But what if what you're trying to achieve is making the most money from people and tricking them into buying things? You know what I mean? That intention bit is the thing that you want to get right, but there's no one looking over your shoulder telling you off. Not necessarily.
Caitlin Kraft-Buchman:[00:24:16]
But has anybody actually said that? If you were forced to articulate your intention, would you say, "We're trying to make the most money by tricking people"? You would be going, "No, we want to have the best pizza delivery in the shortest amount of time." And perhaps if you just said that and went through the process, you'd go, oh, there is actually a human rights aspect to not protecting privacy or data. You might be looking at things a little bit differently.
Ben Byford:[00:24:53]
So hopefully, going through that process that you were outlining, you'd find that you hit against these, not barriers, but these ideas, which would cause you to reconsider.
Caitlin Kraft-Buchman:[00:25:06]
And also just to be at home with the critical analysis, and the fact that you're making decisions, and that the decisions you're making are conscious and that you own them: we have decided. Because most of these things are about allocation, actually. It's when somebody gets a pencil and somebody gets a pen that you should be able to own the fact of why certain people are getting pencils and why others are getting pens. And maybe you may not be proud of it, but I think being able to say it, to share it, to articulate it, instead of just sliding into it, might be a good first step.
Ben Byford:[00:25:46]
Yes, exactly.
Caitlin Kraft-Buchman:[00:25:48]
It's more inclusive and more useful for everyone.
Ben Byford:[00:25:53]
Yeah, and presumably, if you have those different people as part of that co-creation piece, then you're going to hear those different views and different experiences as part of that anyway. So ignoring them, or making certain decisions against those people's interests, is going to be highlighted in that process, hopefully.
Caitlin Kraft-Buchman:[00:26:14]
Yeah. And also, you're going to make a better experience for those people, and a better experience would probably translate into a better product as well.
Ben Byford:[00:26:24]
Yeah, definitely. So the hope in all of this is that, even if it's purely a private sector proposition, you're going to make a better product at the end of the day anyway. Yeah.
Caitlin Kraft-Buchman:[00:26:37]
I mean, I've got to say that one of the reasons why we decided to focus on the public sector is there's a paradigm shift that we're hoping to help everybody come to. Most of the automated decision-making that we have is all about finding fraud. And you know that if your core objective is to find fraud, that's all you're going to find; there's nothing else that's going to be there. As opposed to, what if... and believe it or not, the canton of Geneva has come to us and said, we want to know what would happen if we connect more citizens to services that they're actually not getting. We want to find people who are not receiving services that they are allowed to access. So immediately, that's the animating principle, that's your core objective. And of course, in the system's requirements, people who aren't allowed to access the services won't, because nobody wants fraud and nobody wants public money to be misspent. But it's basically about connecting people positively to something, and I think you're going to have a very different set of outcomes.
Caitlin Kraft-Buchman:[00:27:53]
And so we'd like to see this paradigm shift, with public sector people saying: how can we use this technology to improve quality of life, to bring people to things that they actually need to be more effective citizens and participants in the democratic process? By having those public services constructed with that thinking, as opposed to "how can we exclude people", we might change things a little bit. And also use the technology the way that we all think it could be used, and why we love it.
Ben Byford:[00:28:33]
Yeah, because there's so much promise. We're constantly sold the promise of it as well. It's like, give me the good stuff. Use it for medicine. And yeah, access to stuff.
Caitlin Kraft-Buchman:[00:28:47]
I mean, a little bit of it is about the history, right? Because we see it with smart cities, which are also a weird misnomer in my mind. In the beginning it was IBM selling folks in cities on having faster, better traffic lights and less traffic congestion, and that was the idea of what a really great city was. Which is very gendered, by the way; we could talk about how transport works, but I will leave that aside. Then cities started to say, well, we're spending a lot of money, we'd like to tell you what we need, which turned into the surveillance economy: first Rio and the favelas, and then other places with CCTV. And then I think the third version of cities that's coming up is: what do communities really feel the technology could be used for? To help create better lighting, say. Because, again, we're not against any of this automation, but you want to say, gee, it could be better-lit streets, since we know that that cuts down on crime and increases safety, particularly for vulnerable people. And so we're also transitioning in terms of the way AI is being used and deployed, at least in a public sector setting.
Ben Byford:[00:30:12]
Sounds great. So I was interested in your general thoughts on human rights: why is human rights specifically, and a rights-based way of working, a good way of framing these sorts of problems? You talked about dignity obviously being a big part of that, as well as impact. If we're talking pure philosophy, we could talk about utilitarianism and different aspects of that. But I'm interested in why rights is interesting to you and your organisations.
Caitlin Kraft-Buchman:[00:30:54]
Well, as you know, as we've all seen, there's a responsible AI movement, there's an ethical AI movement, and I suppose a rights-based movement. All of these things are fabulous, and they're all going towards the same goal. But there are, I think at last count (I'm sure there must be more now), 80-plus different sets of ethical guidelines for AI. Are they all exactly the same? They're very similar, but similar means they're also a bit à la carte, right? Because some touch more on this, and others touch on that. The beauty, in our opinion, of human rights and human rights frameworks is that it's agreed-upon law. It's settled, it's international. Everybody's agreed on it, every country of the world. And as a point of departure, it's a settled approach. Obviously, if there are holes in it, then you can add to it as we go along, because these were not legal frameworks that were made for this technology. But they're also beautiful because they're based on principles. So it doesn't really matter whether it's neurotechnology as opposed to machine learning as opposed to GenAI; it doesn't get into all that, because I don't think you're ever going to get ahead. Certainly with legislation, you're never going to get ahead of a technology, and that approach is only going to help people figure out what the loopholes are.
Caitlin Kraft-Buchman:[00:32:39]
Whereas an impact-based approach is based really on how humans are going to be affected, looking at three really core principles: nondiscrimination and equality, right there; participation and inclusion, which also gets us to this whole notion of bringing in so much of the world that's not been included in the data or included in the build; and then, of course, accountability and rule of law. Those three basic things are really great points of departure: well written, well established, well studied, but also really quite simple and quite aspirational at the same time. So it really works for us in that way. Yeah.
Ben Byford:[00:33:33]
Yeah. Because there are people I've talked to in the past who believe in rights, and I guess by abstraction the principles way of working, very much. And I always feel like there is something that gets missed. But as you pointed out, there is an opportunity to build on it, because these things weren't made 10 years ago or five years ago.
Caitlin Kraft-Buchman:[00:34:01]
And it's living law, right? Because we have all these treaties. You have CEDAW, the Convention on the Elimination of All Forms of Discrimination Against Women. You have CERD, the Convention on the Elimination of Racial Discrimination; CRPD, the Convention on the Rights of Persons with Disabilities; and, of course, the Convention on the Rights of the Child. And those treaties, which everybody has signed up to, are also living treaties that at least every two years, if not every year, issue a general recommendation, expanding the notions of what this means and investigating different sectors. And they've all begun to take on some AI, which is also really interesting. It's living, breathing law that has impact. And very interestingly, with each of those treaty bodies, we've been discussing this in relation to the Global Digital Compact, which has just been signed, and the Pact for the Future at the UN General Assembly, and how advocacy groups such as ours can work with treaty bodies so that the treaty bodies can ask individual countries what they're doing to help bridge this rather large global digital gender gap, which is massive. I'm very, very interested, although it's not pointed to with as much intensity as I would have liked, in what we're doing about gathering sex-disaggregated data, but also gathering new data that actually includes women, because we know just from the health gap perspective that it's massive.
Caitlin Kraft-Buchman:[00:35:51]
If we're going to pretend that there'll be personalised medicine and AI-supercharged medical apps that will bring better medicine, we need to actually have the data on women to begin with, which we have never had. That's a big accident waiting to happen. So how can all these different mechanisms that already exist, at international and national levels, work to help bring all this forward, so we all benefit? Which is the point.
Ben Byford:[00:36:20]
Yeah. And is that compact mostly to do with the missing data, the missing people in that equation, or is it to do with, again, the people who are making those decisions, or other things? Is it mostly to do with the inherent gender bias in those situations?
Caitlin Kraft-Buchman:[00:36:43]
Well, the Global Digital Compact deals with everything. We're very lucky that we have a little paragraph on gender, but it deals with access on all levels: access to the Internet itself, because we know that there are billions of people not on the Internet, as well as access to compute on the other end, which is also a problem for everybody at this point, except for several large companies. So it tries to look at the whole ecosystem, in a lovely way. It's not very prescriptive, being a document created by negotiation between member states.
Ben Byford:[00:37:32]
And that's going through at the moment?
Caitlin Kraft-Buchman:[00:37:34]
That has. That did. It didn't go through until the last minute, and then it was approved; it was signed at the General Assembly. So it's really relatively new, and I think people are pretty excited about it. I saw that the Inter-Parliamentary Union is now taking it up in their assembly. The larger pact is the Pact for the Future, and the Global Digital Compact is a codicil to it that just deals with the digital world. Within the bigger pact, there's a science and innovation chapter.
Ben Byford:[00:38:11]
So you mentioned your toolbox work earlier, and it would be great if you could explain a bit more about that: how can people access it and use it, and how does it work?
Caitlin Kraft-Buchman:[00:38:22]
It works on two levels. We have a course, which everyone is more than invited to take. It sits on the Sorbonne Centre for AI website. You have to register, but it's free, and you get a certificate at the end of it if you take the test. But more, and most, importantly, we found that people who take it are really interested in the conversation and what happens afterwards. And what we're really interested in is having this community of practice. We're very excited to have these 59 different countries, every continent there: AI researchers from all over the world, clearly, but also some anthropologists, some activists, some legal scholars. It's very, very varied, with people really bringing their particular points of view, regional and sectoral. And that I find super exciting. We have open studios where people come and present their work, and then it's discussed. So coming up we have a climate change activist from Myanmar, now sitting in Thailand, who's going to be talking about climate change and gender and what's really happening in the Mekong with water resources.
Caitlin Kraft-Buchman:[00:39:46]
We just had our colleague, Emma Kalina, who is in the third year of her PhD at Cambridge, presenting her PhD work on stakeholder engagement and what that means, which was very UK-based. So there's a real variety of people from around the world with different kinds of perspectives. We have book talks, and we've started doing consultations. It's a consultation world, right? So UNESCO opened up its guidelines for consultation, and we did a joint piece on that; as the AI & Equality network, we put in our piece. We did one for the US, for the NTIA, that we thought was really very good. And we're going to start doing more and more of those. Those are fascinating working groups, because we almost always have somebody from Africa, from Europe, from Latin America, and from the US in the core team. It just happens organically. And the kinds of perspectives that people bring really add a richness to the text and make it easy and fun. Those are the kinds of things that we're doing that we're really excited about.
Ben Byford:[00:41:08]
Sweet. Great. So people can access that, and obviously do the course, but then find out more and maybe get involved if they're in a place where they're doing stuff and want to do stuff.
Caitlin Kraft-Buchman:[00:41:23]
They don't have to do the course. They can just come straight into the community, too. I mean, it's not a prerequisite at all.
Ben Byford:[00:41:30]
Great. Sweet. Well, thank you very much for your time. The last question we always ask on the podcast is: what excites you and what scares you about our AI-mediated future? It sounds more airy-fairy every time I say it.
Caitlin Kraft-Buchman:[00:41:49]
No, no, no. Well, do we want to end with the scary part? What excites me is this idea of a paradigm shift: that we can really think about how, and be intentional, and say, how can the technology we make connect people to a better quality of life? Even if it's a delivery app, and especially if it's a municipal service, that is really, really wonderful. So I'm excited about flipping the entire way of thinking about it. And I think that it's actually pretty simple, and it will give us agency, and it will give us power, and bring us back to why we were excited in the first place. What scares me is that maybe we're moving so fast that we can't take that moment to be intentional, and that we haven't thought through, in particular, this data about women's health. Because, just very briefly, as I'm sure you know, it was only in 1993 that women were allowed to be in clinical trials. Still, to this day, the majority of trials, even trials for female sexual dysfunction, have men in them. But also, we're doing all the research on male mice.
Caitlin Kraft-Buchman:[00:43:23]
We haven't moved to doing it on female mice, because there were thoughts that the estrous cycle was too erratic, although it turns out the testosterone cycle is even more erratic than estrous, at which every woman I know laughs uproariously and says, well, we could have told you that. But anyway, it needed research to show that the male mice were more erratic than the female mice. And we know that every cell in a woman's body is different from a man's. Not just reproductive cells, but heart cells, lung cells, kidney cells. And that means that what we don't know is massive. And the diseases that are for the most part women's diseases, but that men also have, like autoimmune diseases, where 80% of those affected are women, those are the most under-resourced, under-financed diseases. Okay, so we know nothing, and we're not stopping to say, what do we know? And we're going to start doing supercharged AI medicine based on bad science. And that really scares me, because I think that that's a big accident waiting to happen, especially for women of the global south, who are really absent from these data, but for all women in general.
Caitlin Kraft-Buchman:[00:44:48]
I could also flip that on the other side and say that's also a really exciting opportunity for medicine to be changed and really live up to its potential in the 21st century.
Ben Byford:[00:44:59]
Thank you so much. I really appreciate your time, and I really love the outlook. I think flipping things on their head should be what we do. That's something to take away, and that's extremely positive, so thank you for that. How do people follow you, get in contact with you, all that sort of thing?
Caitlin Kraft-Buchman:[00:45:19]
Well, we're at aiequalitytoolbox.com; you can find us there. Also, Women at the Table has all of these different initiatives, at womenatthetable.net. If you find us, you'll find the A+ Alliance and the f<a+i>r Feminist AI Research Network, but mostly the AI & Equality Toolbox. We welcome everybody. We really want to have all these different disciplines, all these different geographies, together, thinking together; asking questions and sharing insights is just how we're going to get where we need to go.
Ben Byford:[00:45:58]
Cool. Thank you, Caitlin.
Caitlin Kraft-Buchman:[00:45:59]
Thank you so much, Ben. Thank you for inviting me. Thanks.
Ben Byford:[00:46:05]
Hi, and welcome to the end of the podcast. Thanks again to Caitlin for joining us. I felt like I had a few risqué opinions in there, so it was really great to go toe to toe with Caitlin on some of those misconceptions and personal biases, and really great to hear about the chatter that's going on internationally in this area of diversity, especially as it pertains to AI. Super awesome and really interesting stuff.
Ben Byford:[00:46:36]
Thanks again for listening. If you'd like to find more episodes, you can go to machine-ethics.net. And if you can, you can support us on Patreon at patreon.com/machineethics. Thanks again, and I'll see you next time.