Monetized for their economic benefit. And so that's tricky. I'm not sure this issue of bias can be solved, because we're constantly being tracked and put into categories, classified into certain groups, as part of the crude logic that is the foundation of how these systems work. So I don't really know if that can be solved. Just like Daniel Santos: he got judged a bad teacher when he wasn't a bad teacher. You could stand ten feet away from this man and know what a committed, impassioned, dedicated teacher he was. Or the young woman, LaTonya Myers, who's judged as high risk for probation, and she is literally a model citizen trying to change her life, reform herself, and redeem herself. And yet this algorithm has put her in this category. I don't know if you can get that perfect.

So I think the bigger question is that when designers design, it's about asking: for whom could this system fail? How could this go wrong? What are the stakes if it fails? How bad is it? Is someone going to do jail time if the system is wrong? Is someone not going to get health care or health insurance? Is someone going to be denied a credit line that they really need if the system fails? We have to understand what the stakes are and whom it can fail for.

And then the other thing is: how easy is it to identify that it failed? How easy is it to understand that this system made a mistake? A lot of times, we don't even know that AI was involved. You just don't get the interview for the job, and you don't know that some sorting system weeded you out because you are a woman, or didn't have an Ivy League school highlighted in your keywords, or something. You don't even know why you didn't get the interview. Like that Amazon hiring system that I show in the film, where the actual programmers had the best of intentions. They wanted to create a system that was actually fair. And in spite of that, the AI system was picking up on who got hired, who got retained over a number of years, and who got promoted, and then, on its own, it started to discriminate against anyone it could tell was a woman.

Steve Wozniak and his wife both applied for the same Apple credit card, and she got a lower score. And Steve Wozniak is like, wait a minute, we have all the same assets; we look identical on paper. How is it that my wife is getting a lower credit score? And it could be that the system is picking up on the fact that women have had a shorter history of getting credit in this country, a shorter history of access to mortgages in this country. And I assume that Steve Wozniak's wife is all right with credit; if she gets denied credit, she's probably fine. But for the rest of us, we don't even know why we got that score. We don't even know that an AI decided that.

So I think understanding these things, and then the last part of that equation, is understanding how easy it is to know there was a mistake and how easy it is to correct it. And that means that sometimes you need a human in the loop. You know, a lot of unemployment benefits were automated, and Virginia Eubanks did a study in a book called Automating Inequality. When an automated system replaced social workers, a million people who were qualified for these benefits didn't get them. And someone lost their life because they couldn't get access to the health care that they needed.
And so the other thing is: how easy is it to get a human in the loop to correct that error? To me, that's what I mean when I say it's not about the perfect algorithm. It's about creating systems that are more humane. That means a complete system, and that means sometimes there needs to be a human being in there somewhere, trying to figure out how these systems work. And the other thing is just what we trust these systems to do because we call them intelligent. I would really question applying the word intelligence to an entity that does not have the capacity for morals or ethics, or for compassion. I really believe deeply that that's what makes us intelligent as human beings, and we can't leave that out of the design of these systems. So thank you.

Thank you. Probably time for another question, too, Shalini.

I have one quick question. Thanks, Melanie. You talk about literacy, which is critical here. Do you know of any universities trying to address this? Because it's so multidisciplinary. If I'm in Philosophy, I'm trying to teach people about what is human. If I'm in Operations Management, I'm teaching people what is efficient. If I'm in Computer Science, I'm teaching people how to write great code and even more sophisticated code. So we have these little silos in education, but this is a multidisciplinary challenge. How do we teach literacy in this area, and do you know of anyone doing it?

You hit the nail on the head. You are absolutely right. I think that computer science students are learning in a silo, and one of the things that I am hopeful about is that I now see this new generation of computer scientists saying: I think I might need a women's studies class. I think I might need a Black studies class. I think I might need an ethics course, because I can't actually program for society if I don't know anything about society. And I think that you are exactly right that we need to create interdisciplinary education around artificial intelligence and how these systems are deployed, education that includes everyone. That hasn't happened yet; we are just on the forefront of it.

And as was said earlier, we need more inclusion in those rooms. When you have an industry that is less than 14 percent women, and I couldn't even get the statistics on people of color, half the genius is missing from the rooms where these decisions are being made. And one of the things that I have compassion for is that bias is not just something that white men have. It's actually in all of us. We all have bias. It's an inherent human condition, and for most of us, it's unconscious. And that actually means that, by necessity, we need each other in those rooms to shine a light into the places that we can't see, and to be vigilant about controlling for bias as an innate human condition.

So the heart of it is that I don't think that's happened yet. What I'd like to see is, one, a massive investment campaign in women and people of color in STEM and in artificial intelligence specifically, because I think that the cast of Coded Bias is actually very representative of the people who are speaking out against bias and in support of greater ethics in AI, you know, even considering the fact that three Black women graduate students found bias in a technology that Microsoft, Amazon, and IBM missed.
And two, at colleges and universities, we need to create interdisciplinary spaces to talk about these issues. And that has not yet been done.

Thank you. Sounds like an opportunity for our university. Absolutely. I think we're very timely in bringing this topic within Lerner; we can start something, Diane. Before we go, there was one question I saw from a student. I think it was Carly George. Carly, do you want to give your question?

Yes, please. So first off, the movie was absolutely wonderful. I really enjoyed it. And my question, which was in the back of my head while I was watching this, was: this is a very big problem, and there are a lot of exploitative initiatives in place taking data from what we're doing. What can we do as a society to kind of slow this down so we can work towards a solution?

It's such a good question. I think I named some of the tools during our talk: literacy, changing your university education, local action, supporting a great group, writing to your legislator. These are all tools I think we should have in our toolbox. I don't know if we can slow it down, but it's my hope that we the people can speed up. I don't know if we can slow the pace at which technology is moving forward, but I hope that we can keep our ethics and our morals in concert, keeping pace with it. And I just want to underscore, as we come to a close, that I make documentaries because they remind me that everyday people change the world all the time, and that the superheroes among us don't always wear capes. And I'm saying that in a very unromantic way. I've literally seen people like Miranda today, who didn't just keep her landlord from installing facial recognition, but inspired the first legislation in the state of New York that would protect other residents from the same. So I think the most important thing is that we all do something. If everyone at this event tonight did one thing, took one small step... We each have what I call our garden patch of influence, and I think if all of us do one thing, it can really move the dial. So the biggest thing is that we all do something, just one thing, towards algorithmic justice.

Wonderful. Thank you very much. Okay, thank you, Shalini. I was just wondering, before our time is up, if you can give us a hint of what your next project is about, since you mentioned something about it. Is it about TikTok?

Yeah, it's about TikTok. I'm making a film about TikTok, and it should be out in spring of next year.

Awesome. Then we all look forward to it. Stay tuned for spring of next year. I think our time is up. Is that right? Yeah, it's 49 past. Thank you so much, Shalini Kantayya, for joining us. Thank you, LDC members and the Lerner College for helping us make this event happen, and our guests for joining us. We're honored to have you here, and thank you so much for answering all the questions. I wish we had more time. I see our chat is blowing up as we speak, so thank you, everyone. I hope you enjoyed the talk. Thank you again for joining us. Oops, go ahead, Shalini.

It was an honor. Thanks so much for having me.

Thank you so much, everyone. On May 12th, we're having our next Lerner Diversity Council event.
We're having a lunch-and-learn at 12, so we hope that you'll join us at 12 on May 12th for our next event. Thank you all so much for being here tonight. Have a great night. Bye, bye.