Welcome everyone. My name is Kathy Wu. I'm director of the Data Science Institute. It is so good to see you all at this year's Data Science Symposium, whether you join us online through Zoom or in person here in the beautiful Audion on the UD campus. A warm welcome. We have an exciting program today, thanks to the amazing planning committee co-chaired by Dr. Buzzy and Dr. Bianco, coordinator Andrea, and all the committee members: the faculty, students, and postdoctoral members on the committee from across the UD campus and our partner institutions, including Lincoln University. So let's give a round of applause to the planning committee and all the volunteers who are making this symposium possible. By our latest count, we have more than 300 registrations from over 40 academic units and partner organizations. We are particularly excited to have a large number of graduate students and postdocs, who are going to be giving presentations as well. We'll be hearing from two keynote speakers about their inspiring work. We're going to have three panels, an ethics panel, an industry panel, and an education panel, which are bound to promote stimulating discussions. We are going to have a session of student and postdoc lightning talks. We have faculty talks and posters, which will foster ideas for research collaborations and team science. I hope you will really enjoy the event. So now it's my most distinct pleasure to introduce Dr. Razdan, Associate Vice President for Research Development, who was a key driver who made the launching of the Data Science Institute possible. You see, Dr. Razdan had the foresight to host the first-ever Data Science Symposium back in May 2017, bringing together the campus community to explore the future of data science at the University of Delaware. That symposium led the campus community to develop a white paper in October of 2017, co-authored by the symposium organizers.
That white paper provided a key recommendation to the university administration: that we ought to launch a Data Science Institute to foster coordination and collaboration of data science activities across campus and across partner institutions. And that's where we are today. The Data Science Institute was officially launched in fall of 2018, and since then Dr. Razdan has continued to provide amazing support and advice from the upper administration that contributed to the tremendous success and growth of the data science initiative, not just on the UD campus but across our partner institutions. So before I turn over to Dr. Razdan to open the symposium, I'd like to give a brief bio about him. As the Associate Vice President for Research Development, Dr. Razdan works closely with faculty leaders to develop competitive proposals and partnerships to establish successful multi-disciplinary research grants and programs. He assumed his role in fall of 2016, after serving 20 years at Arizona State University as a tenured professor in the Fulton Schools of Engineering. In his current role, his emphasis is on growth of UD's research enterprise in alignment with its mission and focus, forging new and innovative partnerships and pursuing broader areas of sponsored research. He's a computer scientist with a distinguished record of interdisciplinary research in the areas of geometric design, computer graphics, document exploitation, and geospatial visualization and analysis. He has also served as PI and collaborator on several grants from a number of federal agencies. So without further ado, Dr. Razdan.

Thank you, Kathy. It's really good to see you in person. Let me emphasize, I represent the lower division of the upper administration, and I go by AR. Kathy was very kind to address me as "doctor" several times, but I prefer AR. And thank you for inviting me to give these remarks. I guess your first choice, President Biden, was busy, so you get what you pay for.
So we'll see how this goes. I'm going to divide my remarks into two segments. First of all, it's really nice to be physically in the presence of all these colleagues, and of the Zoom colleagues who are watching this. This is really a treat after the events of the last 18 months, and I really want to thank Vice President Charlie Riordan, Provost Morgan, and President Assanis for taking us through sometimes very depressing and tough times, to be able to come back and get together again and have intellectual discussions in person. I want to start on maybe a slightly more sobering note, as to the gravity of the discipline and the burden that you should all feel on your shoulders. I was not a computer scientist by birth. In the late 80s I was a mechanical engineering graduate when I started to use C and Prolog, talk about an oxymoron, C and Prolog, to create a rule-based algorithm to divide 3D CAD objects into simpler pieces. So I fell into it, and then I fell in love with this topic, and the rest is history. But what I mean by the burden and the gravity is that you are the single most important influencers of society today. But let's not get arrogant about it and think this is the most important time in humanity; people have used data science going back in history. In the late 19th century, the US Census Bureau struggled to complete the census. And if we go back even further, to show how important data science is in decision-making, if we go back 3,500 years, there was a sage, Rishi Parashara, from India, who studied the movement of the planets and the satellites and stars and all the astrophysics. And then he studied the lives and the births and what happened during the lifetimes of people, and he tried to do the first correlation, in my view. And the science of astrology was born. And why is that important? Because for all these years, kings went to war based on astrology.
Traders traded north-south, east-west and opened their businesses based on astrological predictions. And even marriages were blessed or rejected based on the horoscopes, the horror-scopes, I sometimes say, matching or not matching, based on a sort of predictive analysis of whether this was a good match or not. And believe me, I am a product of that, because I'm sure my forefathers and foremothers were in their unions because their horoscopes were matched. And maybe you can say, looking at me, AR, there's a good reason why we needed better predictive algorithms back in those times. Even today that practice is happening in India, for sure. So you can see the social, economic, and political impact of data science. Okay? So I like to use a modified quote from Dunkels, who said that it's so easy to lie with data science, but it's so hard to tell the truth without it. Right? And so that's what is important, especially today; keep that in mind. It's not the new deep learning algorithm, whether to use MLPs or radial basis function neural networks; it's the impact you're going to have on society and on the economic and political outcomes which really determine the daily life of a person. So here's another quote. Somebody said that data is not information, information is not knowledge, and knowledge is not wisdom. So you have to connect, at the end of the day, what you learn from it to better decision-making. That's what it all comes down to, or as I like to say, data leads to information, to knowledge, and to better decision-making. So that's one part. And maybe you're thinking, what a depressing set of opening remarks. But let me tell you why I'm so excited. Kathy shared a little bit of the history of the Data Science Institute, but let me go back past five years. When I came to UD, I felt myself surrounded by a lot of chemists and chemical engineers. And that's true today too. And, like a calf lost in a foreign pasture,
I sought out colleagues in data science, mathematics, statistics, and so on and so forth, and came to realize that there was so much strength at UD; we were just not organized. And like every good story, it starts with an early crash and burn. We got together for the NSF TRIPODS call, the first call for data science centers, and we crashed and burned. But out of that came what Kathy shared with you: the Data Science Symposium, which led to the white paper, and we applied to form a unit. Well, I feel like since that symposium we have been like a basketball team in the zone: we just throw the ball and it goes whoosh through the net. We got funded to establish the Data Science Institute. And then it was not a matter of who should lead the Data Science Institute; the harder part was convincing that person. And I'm so delighted it was Kathy Wu, Professor Wu. She had the credentials, the leadership, the passion, the drive. My God, I can tell you, one summer she was in Taiwan, and at six in the morning Taiwan time, nine o'clock in the evening our time, I don't know how many time zones apart we were, she was as alert in the meetings, so passionate. So Kathy, this still would not be what it is without your leadership. And one part of that white paper led to Provost Morgan and President Assanis supporting the cluster hire. And I was so excited that we were going to have a lot of new faculty positions and would be hiring people. And then I realized all those positions were junior faculty positions. I said, oh no, it's going to take so much time for the maturation of those faculty, when they start getting grants and establish themselves, and how are we going to really take this Data Science Institute forward? But I was completely wrong. I'm so delighted that faculty like Federica Bianco, Austin Brockmeier, Greg Dobler, Pinki Mondal, and the other cluster hires are there, along with the senior leaders and folks in mathematics and computer science.
They have contributed so much, and they have really, really turned it around and made a huge amount of difference in the launch and success of the Data Science Institute. So please join me in thanking Kathy and all the Young Turks, as I say, for making this successful. Thank you very much. Congratulations. Enjoy the keynote talks and the rest of the sessions. I'm really delighted that Tom Powers is going to conduct a session on ethics, which is so very important to data science. Have a great rest of the day. Thank you so much.

Thank you so much. Those were insightful and interesting remarks. I'm just going to open the stage with a few reminders about the event. As Kathy mentioned, we have an incredible lineup of speakers, and we're really thrilled about the people that agreed to speak to us today. Some of them will join us remotely; both our keynote speakers will join us remotely. The first talk, by Professor Ben Shneiderman, will be right after I speak, so I'm going to keep it short. On the right side, you see where those things happen in the schedule. The schedule is, of course, on our website. We have three panels: an ethics in data science panel, an industry and agency relationships panel, and an education panel, with incredible speakers from nearby and far away. We also have contributed material: 18 contributed faculty and researcher lightning talks in two sessions, those will be short, four-minute lightning talks, and 29 posters that will be presented on the virtual and in-person platforms. All of the posters, virtual and in-person alike, are viewable digitally on our poster gallery, which you can find on our website. And we encourage you to check them out and support our junior scientists and researchers. This is also the only portion of the workshop where our online and in-person audiences will split and have different content and different experiences.
After the lightning talks come the posters; the in-person posters are available for both the online and in-person participants. The coffee breaks are organized such that the in-person participants can stroll around the room and talk to the presenters in person; the poster presenters will be standing right by their posters. The online participants have a different Zoom link, which you can see there and which you received by email, of course, where we set up breakout rooms for online participants to interact directly with the presenters of the virtual posters. We are also so thrilled to support our junior researchers that we set up a poster prize: there will be a cash prize for both the best in-person and the best online posters. And in addition to a panel of experts that will judge the posters, the audience will also count, so you have a digital portal to indicate which one was your favorite poster. You can grab the QR code, if you're really fast before I switch the slide, or the URL, or just go on our website; all the information is there. We use Slack principally for communication, except for in-person communication for the in-person participants. What that means is that you received a link to join our Slack channel. Slack is an app that supports rapid communication and interaction on the digital platform. It's fairly intuitive; if you have any questions, of course, come and ask, but please join the Slack, because we are set up to take questions for our panels and for our talks on the Slack channels rather than in the Zoom chat. We also will of course allow in-person questions, but by organizing them on the Slack channels, we can keep a record of them, and we can organize them by talk and moderate them more effectively. So join the Slack. You will be on the general channel.
For every talk, there will be a specific channel where you can ask questions and make comments, and the name of that channel will be posted on general before every session. Finally, we have over 300 registered participants, with 120 attendees in person; that is the maximum capacity of the Audion under socially distanced, responsible interaction. And we're super thrilled about the mix and the breadth of the participants' affiliations, memberships, and identities. While the University of Delaware constitutes the majority of our participants, we have industry and academic partners. Lincoln University and Delaware State University, our close partners, are prominently displayed there, as well as a number of other local and non-local organizations. And as far as academic disciplines, we have really reached a breadth of identities and disciplines that includes not only STEM but also the arts and sciences. We have participants from the health sciences, philosophy, the Disaster Research Center, and many more. We also have a really nice breakdown of gender, as identified by the pronouns that you chose when you signed up. Data science, being a STEM discipline, tends to be male dominated, so we are thrilled to see diversity in gender and diversity in academic roles. The yellow slices on the doughnut chart on the right are PhD students; they constitute the single largest group of participants. And if I split the participants between junior and not junior, putting students and postdocs on one side and everybody else on the other, students and postdocs, starting from high school all the way to postdocs, constitute over 50 percent of our attendees and participants. And we're really thrilled about that. Particularly because of this diversity of identities and roles, it's very important that we behave compassionately, that we behave trustworthily, and that we behave kindly towards each other.
And we asked you to sign a code of conduct. We put a lot of effort into thinking about what shared values we believe we should hold, and principles of engagement. Please remind yourself of the code of conduct; it's on our webpage. It's organized in such a way that you can hopefully easily find what you need, including what to do if you were to experience violations of the code of conduct. If you were to inadvertently violate the code of conduct and hurt one of the participants, just realize that what you do and say has the effect that it has regardless of the intention that you had; apologize and move forward. If you were to experience violations of the code of conduct, you can reach out to any of the organizers. Keep in mind that the organizers who have academic roles are mandatory Title IX reporters, so they can't necessarily guarantee confidentiality. But we're extremely grateful to Andrea, who, in addition to being the indispensable event coordinator, is also serving as an ombudsperson for this event. So if you want to ensure confidentiality in your report, reach out to her by email, on Slack, or in person if you're here. Without further ado, I'll introduce my colleague, Matt Mauriello, who will introduce our first speaker.

Thank you for that. Hello, I'm Matt Mauriello. I'm an Assistant Professor in Computer and Information Sciences, and today it's my pleasure to introduce our first keynote speaker and my mentor, Dr. Ben Shneiderman. Dr. Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science, the founding director of the Human-Computer Interaction Lab, and a member of the Institute for Advanced Computer Studies at the University of Maryland. Among many honors, he is an ACM, IEEE, and Visualization Academy Fellow, as well as a member of the US National Academy of Engineering.
His widely used contributions include clickable highlighted web links, high-precision touchscreen keyboards for mobile devices, and information visualization innovations such as the development of treemaps for viewing hierarchical data, novel network visualizations, and event sequence analysis for electronic health records, in addition to being the lead author of Designing the User Interface: Strategies for Effective Human-Computer Interaction. In this morning's keynote, Ben will discuss the emerging integration of AI technologies with HCI to produce human-centered AI, which seeks to amplify, augment, and enhance human capabilities to empower people, build their self-efficacy, support creativity, recognize responsibility, and promote social connection. He will discuss how researchers, developers, business leaders, policymakers, and others are extending the scope of artificial intelligence to not only focus on algorithm development but also embrace human-centered perspectives that can shape the future of AI-powered technology so as to better serve human needs and further human values, rights, justice, and dignity, while also building reliable, safe, and trustworthy systems. And with that said, please welcome to the Data Science Symposium Dr. Ben Shneiderman.

Thank you, Matt. Thank you to the organizers and the University of Delaware. Good morning, Delaware. I'm very pleased to be the opening keynote speaker for this conference on data science. This is home base for me; this is the topic that drives me forward every day. I'm standing, electronically, in front of the Brendan Iribe Center for Computer Science and Engineering at the University of Maryland, a beautiful new building. I hope you'll come and join us some time, and I hope I can come and join you at Delaware sometime as well. Let's get on with the show. Thanks for that wonderful welcome and introduction. There we go. You should be able to see my screen here.
I'm pleased to talk about these topics; Matt's given me a very strong introduction, so we can get right on. And there we go. I'm always happy to represent the Human-Computer Interaction Lab at the University of Maryland, an interdisciplinary community led by Computer Science and the College of Information Studies, with partnerships around campus including the Maryland Institute for Technology in the Humanities, MITH. Visit our webpage, with more than 1,000 reports, 200 projects, 200 videos, and lots more to learn about. As mentioned, my book Designing the User Interface is, I hope, something that's familiar to some of you, and it covers the territory that has been my passion for all these many years at the University of Maryland. Matt's already mentioned some of my contributions. For the highlighted selectable links, here I show "the University of Delaware is a public land-grant research university," and you can click on that; this is the Wikipedia entry for the University of Delaware. The idea of those links came in a wonderful moment in 1984 and was built by graduate student Dan Ostroff and tested by many other students. We tested different colors like red; there are more design variables than you would have thought. And Tim Berners-Lee adopted our work, based on what he saw, in his spring '89 manifesto for the web. We at Maryland are also responsible for the small touchscreen keyboards. At the time, in 1988-89, the reviewers did not believe that we could make such a small keyboard that was touchable and selectable, and we demonstrated with a video that showed the reviewers it was possible. I also hold a patent for photo tagging. These are some small ideas which have had very strong influence, and that's an important message to remember: small ideas that you develop can have powerful impact.
Most of my work has been on visualization. Here is Spotfire, which became a success; it came from our work in 1994, the company was formed in 1997, and it has grown successfully. It was purchased by TIBCO in 2007 and remains a leading tool for visualization. Of course, the colleagues who developed Tableau have had still greater success in making visualization a widely used data science tool. Treemaps were another development of our work, in 1991. I show yesterday's grab of the stock market showing the one-month performance; each of the 500 stocks is shown with an area equivalent to its market capitalization, and the color indicates how much it rose during the past month. You can see generally good news in green, but you can see the red ones, which stand out as those that have gone negative during the past month. So these tools are widely used, and it's my pleasure when I get up and see that the front page of The Washington Post or The New York Times has a treemap, recently showing Biden's infrastructure bill and how the moneys would be used, as well as COP26 reports about the efforts of each of 190 countries. So treemaps get widely used and have been developed by a company called Visual Action, which calls them flatmaps, and I work with them. There are hundreds of companies and hundreds of free versions of treemaps. Matt also mentioned the work on network visualization with NodeXL. The book on that is now in its second edition, and it's the most widely used tool for teaching network visualization and analysis in business, sociology, and many other fields. EventFlow is our work on patient histories. So that's a sample of where we've come from. But today is a different story. Today is about human-centered AI, and that's a phrase that's growing in importance.
It's important to understand what it is. What it is, is a set of processes, the way we develop systems: by studying what users' needs are, by working with stakeholders, by making prototypes, by testing them, by retesting them, and by making a systematic process to disseminate successful products. It's also products that are designed to enable people to have comprehensible, predictable, and controllable experiences, that they can operate, that they are in control of. And we're going to talk about how to do that. As Matt has already said, the goal of human-centered AI is to amplify, augment, empower, and enhance people. AI has had huge breakthroughs in the last decade, and machine learning and deep learning are important steps. To make them successful, we bring the strategies of human-centered AI. This talk is about how to think about that and how to bring it about, how to design such systems. There are three parts to this talk. The first one is about the HCAI framework, which is a change in thinking. The images I show here show people working together, supported by technology. That's what we're all about: people working together, supported by technology. However, it wasn't easy to get there. Forty years ago, Tom Sheridan, a professor at MIT, had put forward the levels-of-automation idea: that there could be ten levels of automation, from full human control to full automation. And that idea was widely believed, widely accepted, by me as well. In the first edition of my book, in 1986, I had a section called "balancing automation and human control" that expressed exactly this idea: that there was a single dimension, and that you had to go from human control to automation; that it was a zero-sum game, the more automation, the less human control. And I believed that idea. But as the years went by, I and others came to question this assumption of a single dimension in design.
As hard as it was, I began to break free from that, and so by the later editions of the book, that chapter became titled "ensuring human control while increasing automation." Ensuring human control while increasing automation: that may seem like a puzzle at first. As I began to explore how to make that happen, I began by understanding that there were really two dimensions: you can have low and high degrees of human control, and low and high degrees of automation. Those were two choices, and so we went from a one-dimensional to a two-dimensional space. I like to represent it this way: you have an x-axis with computer automation and a y-axis with human control, and for simplicity we'll look at it as four quadrants. The most common assumption is that we're going to have high levels of computer control, computer automation. We see that in important applications such as the pacemaker embedded in your chest so as to regulate your heartbeat, or the airbag in your automobile that has to deploy within 200 milliseconds. These are highly automated processes. However, there are also highly autonomous human actions. When we ride a bicycle, when we play a piano, when we act as parents, we want to be fully in charge. We don't want to be controlled by an autonomous agent and a computer; we want to take our own action, develop our own mastery of bicycle riding or piano playing or parenting. The direction that much technology is moving towards is what I call reliable, safe, and trustworthy, and we see familiar examples like the elevator or the digital camera. The elevator: you press the button, the elevator arrives, the doors open, you step inside, you press the sixth floor, the doors close, and you watch as it goes 1, 2, 3, 4, 5, 6; the doors open and you get out. You are in charge of the things that matter to you.
Yet there's a high degree of automation that controls the elevators, that does all the things that make sure that you get what you want, that ensure your safety. The digital camera is my favorite example. There's a high degree of AI machine learning embedded in the digital cameras that we use, that billions of people use every day. Those cameras have AI that sets the aperture and the focus, that reduces hand jitter, and much more. Yet I, as the photographer, point the camera where I want, compose the image, zoom in if I like, and then click for my decisive moment. It's my picture. I took the picture. I'm in charge. I'm in control of the parts that are important to me. It's my creative experience. And yes, I've used a lot of AI and a high degree of automation, but yet I am in control. So that's where we're going, and I'll give more examples as we go on today. Now, there are dangers. There are dangers of excessive automation, on the far right here, manifest maybe most dramatically in the two crashes of the Boeing 737 MAX. Those tragedies, which resulted in 346 deaths, were entirely because of excessive automation. The designers believed that they could build the MCAS system in such an effective and perfect way that the pilots were not even informed of the existence of this autonomous system inside the airplane. And so when that system went wrong and started pointing the nose of the plane down shortly after takeoff, the pilots pulled back on the stick 20 times, and yet they could not prevent it. They did not realize they could turn it off, because they didn't know that it existed. And those are the tragedies. Excessive autonomy is a danger; excessive automation is a danger. We want automation, but we want the right levels that ensure human control. Up at the top, we see that excessive human control is also a danger that we wish to avoid, because people can make mistakes.
So well-designed systems have guards, interlocks, and safety systems that prevent human mistakes. For example, your home self-cleaning oven: if you turn on the self-cleaning and the temperature gets above 600 degrees Fahrenheit, you cannot open the door, because opening the door would have very severe consequences. And increasingly, designers of systems have come to recognize that those kinds of guards, to prevent systems from allowing humans to make those kinds of mistakes, are really essential. These are not easy to do; sometimes they are complicated design rules. Should automobiles prevent drivers who have high levels of breath alcohol from driving? Should they prevent cars from driving above the speed limit? I'm not sure I would buy that kind of car; sometimes you need to do that if you're rushing to a hospital. That may be an important reason. So this framework gives us a way of thinking about the many quadrants of design. There may be times you want low automation and low human control. There may be times we want high levels of human control and high levels of computer control. But the sweet spot will be reliable, safe, and trustworthy systems, where designers have parceled out the features that make a system effective as a human-controlled device and effective as a highly automated device. Let's look at some more examples. Pain control systems: the early World War II-era morphine drip bag was a plastic bag that dripped morphine at a regular rate. This had two serious problems: it could give too much morphine and thereby kill the patient, or it could give too little morphine and thereby not relieve the pain. The movement towards an automatic dispenser, which measured human heart rate and breathing rates, avoided the danger of excessive morphine, but it didn't deal with the problem of pain, since it's not possible to build a sensor for pain. And so human-guided dispensers, where users got a trigger that they could pull to get more morphine, appeared.
These became a nice complementary solution. However, those systems had interlocks to prevent excessive morphine, so we're getting to the right kind of design. The current and future designs of pain control systems are patient guided and clinician monitored: the central control center of the hospital will monitor 50 or more of these devices to see how they're going and understand what works and what doesn't work, and thereby collect data to continuously improve the algorithms and provide more effective automatic dispensing. One final example would be the design of wheelchairs. The old-fashioned push chair of 100 years ago was so heavy it required a caretaker to push it along. Robotic wheelchairs navigated automatically to a destination, but giving users control by making lightweight, hand-powered, user-guided devices opened up many possibilities. It respected the desire of patients for their independence and their self-efficacy, and it also led to some creative ideas like wheelchair basketball games and races. So it opens up new possibilities when you give people creative control. And the future of wheelchair design is motorized, joystick controlled, teleoperated, and programmable. And lots of these things are happening now, with ways to prevent excessive automation and prevent excessive human control, thereby making wheelchairs safer and more effective. So that's the strategy that I encourage you to think about: break out of the idea of a one-dimensional model and see the two dimensions. This is most important in the movement towards self-driving cars, for which the Society of Automotive Engineers put out six levels of automation, going from human control to full automation. But there again, they were thinking one-dimensionally; open it up to two dimensions. There are many ways that advanced driver assistance systems can improve safety and retain user control. That's what's happening. So that's the HCAI framework.
The second message I have for you today is to change the design metaphors, and that begins with changing the philosophy. The old philosophy was about intelligent agents and teammates, partners, or collaborators, and about autonomous systems and social robots. Those, to me, seem like archaic ideas; they lead down the wrong path. The new metaphors I encourage you to think about are AI-infused supertools, telebots, control centers, and active appliances. Let's take a look at them. But first let's talk about the attitude shift that's important, stated by two very nice quotes that I strongly support: robots are simply not people, and we just have to remember that people are different, that people are special in many ways; and humans, not robots, are responsible agents. Those are the important messages. So my first principle is responsibility: only humans are legally and morally liable. Every robot should come with a sign that says, "If I screw up, it's your fault." Once you accept that, the design changes to lead you toward advanced automation that remains under human control: yes, we will have a high level of automation, but yes, we will retain human control. The second principle is to appreciate the distinctive capabilities of computers: sophisticated algorithms, huge databases, superhuman sensors, information-abundant displays, and powerful effectors. If you adopt a model based on mimicking human style, you may forget these differences. The last principle is to remember the distinctive capabilities of people. Humans are creative; they have passion, empathy, humility, and intuition. Those are special things that make people different from machines. People are not computers. Computers are not people.
That's the central distinction, and the category error of some research, which suggests that computers are becoming like people, to me leads down the wrong path. So let's take a look at the supertools I've already described. Digital cameras give users an enormous degree of control before, during, and after they take the picture: they can edit, they can make many different kinds of photographs. And yet there's a very great deal of AI and machine learning in the background, taking care of making sure there's a beautiful photo. That's really the good model. Another huge success story is digital navigation based on GPS. Here, I asked for a trip from the White House, eight miles to College Park and the University of Maryland. The system gives three choices, and I select the one I want. Machine learning gives a predictive model of how long each of these trips will take and makes a recommendation, and then it's up to the user to choose. They may want the more scenic route, or want to avoid certain parts of town or too many traffic lights. It's their choice; it's their trip. Similar successful applications of AI are text and search auto-completion. I start typing "University of Maryland" and I get a variety of choices, which remind me of ideas or guide me in certain ways, but it's up to me to choose. Similarly, spelling correction is not done automatically; it's done under user control. A recommendation is made, which the user is free to accept or reject. That's what supertools are about. Another supertool is the Bloomberg Terminal: a vast amount of information in an information-abundant display, tiled in a non-overlapping style, in a spatially stable arrangement where users get to know exactly what's in each place. They can make changes and set it up the way they want, but they get the information they want; it's their display. At the same time, there's a huge amount of AI going on.
Bloomberg ingests 1.6 million news stories every day and uses AI and natural language processing to prepare the information, selecting and organizing it in ways that give the decision-maker what they want. This success story means that more than 300,000 people spend about $20,000 a year to have this kind of system on their desks. That's what I call a supertool, and some people use the phrase "a Bloomberg Terminal for" medical displays, for industrial automation, and for many other domains. So that's what we're looking for: information-abundant displays where AI systems are put to work to provide the information the user wants. Your home appliances are becoming AI-infused too, from Google's Nest, which controls the heating and cooling in your house, to iRobot's Roomba; nearly every device, from dishwashers to clothes washers, is rapidly becoming infused with AI. That is the way we're moving forward: these are in your control, you operate them, you can stop them; they're comprehensible, predictable, and under your control. The pacemaker, which used to be autonomous, is becoming user-controlled; that's an active appliance. The patient gets to control the pacemaker through their phone, and the clinicians and Medtronic get to monitor tens of thousands of pacemakers so as to improve the algorithms, helping more people all the time. That's what I call an active appliance, and we will see more and more of them come into play as clever designers find ways to add human control to what were formerly thought to be autonomous systems. Telebots are another category. You'll hear many people talk about the Mars rovers as being autonomous, and yes, they are in certain ways: they can move so as to avoid obstacles and precipices, and they can adjust the position of the antenna and the solar panels to improve performance as much as possible.
But there's a whole control room with 80 people at NASA's JPL to operate them, to plan the missions that are carried out, to take advantage of special opportunities that may emerge, and to repair problems that inevitably arise. So that balance, autonomy for some actions but control for others, is the vital idea for the future of successful commercial products. Similarly with surgical robots: while journalists love to tell stories about robots doing better than human surgeons, the developer of the da Vinci Surgical System explains that robots don't perform surgery; your surgeon performs surgery with da Vinci by using instruments that he or she guides from a console. That is again the amplification model. This kind of robotic surgery tool enables the surgeon to move very precisely, deep within the cavities of the human body, and carry out actions that would be very difficult to do any other way, and to do it with small incisions, which makes it easier for the patient and makes recovery quicker. That's the kind of design we want, and da Vinci is a success because its developers understood the right ways to ensure human control while increasing the level of automation. We see this in control centers in hospitals, airports, and industrial automation, where a high degree of automation is carried forward but the control center allows people to deal with special opportunities, with failures, and with unexpected events. We see it in many places: people working together with information-abundant displays, providing the kind of services and support that are needed. So that's the second idea. The third idea is a larger one about governance structures, and it takes us past the boundaries of computer science to larger issues of social, political, and administrative design. I summarize the fifteen recommendations that I've put forward with this oval diagram.
The central core is the software engineering ideas for reliable systems: audit trails, software engineering workflows, verification, bias testing, and explainable user interfaces. I don't have time to say more than a few words about those, so I'll just speak briefly here. Safety culture is an important idea that has emerged in recent decades, especially in medical areas and in life-critical applications such as transportation. It requires leadership commitment, which must come from the top of the organization, and then the right kind of hiring and training, careful monitoring of failures and near misses, internal reviews, and adherence to industry standards. Moving to the outer circle of industries, each of which may contain many organizations, and each organization many teams: in each industry we'll see auditing firms, such as KPMG, Deloitte, Ernst & Young, and PricewaterhouseCoopers, which are moving toward developing independent oversight for AI systems. They already do independent auditing for financial systems, and it would be a valuable contribution to have them audit AI systems. Another strategy is insurance companies, which have been beneficial in transportation, medical care, and building construction. The insurance companies come to understand how to reduce the dangers of these applications, how to make safer buildings, and then they insure them. What's happening now is the challenge of automobiles and safe self-driving cars: insurance companies are struggling to decide whether they should charge higher premiums or lower premiums for self-driving cars, because we just don't know enough about how safe they are. For example, there's a website called tesladeaths.com. Last time I looked, it listed more than 200 deaths involving Tesla cars. Now, we don't know how many of those cars were operating on Autopilot; we just don't have the evidence.
Tesla tells us that their cars are safer, but the insurance companies don't have data they've been able to examine. The National Highway Traffic Safety Administration moved in August to open an investigation of eleven crashes of Tesla cars on Autopilot that resulted in one death and many injuries, when the Tesla cars plowed into police cars, fire trucks, and other vehicles that were at the side of the road, or in the road, responding to an emergency. From that investigation we will begin to understand why that may have happened. A big story three weeks ago was that NHTSA selected Missy Cummings, a Duke University professor who has studied Tesla car safety, and it produced a storm of reaction from the Tesla community objecting to her appointment. I think that rejection is the problem; I think it's great that Missy Cummings will be joining NHTSA, and she may bring wisdom to that area as well. Government regulation remains another possibility. Industry rejects it, saying it will limit innovation, but the evidence does not support that view: we know from the history of automobile safety and automobile fuel efficiency that government regulation dramatically increased innovation. We see that also in the GDPR requirement for explainable AI systems, which triggered a wave of innovation, with 10,000 papers from the AI community about explainable AI. So let's look just briefly at some of the notions here. Within the team, the five recommendations include the idea of audit trails, which have made civil aviation so safe, together with the analysis tools to understand the flight data recorder data that is collected for each flight. This is important: every AI tool, every AI robot should have a flight data recorder that allows retrospective forensic analysis of accidents. The other principles I could discuss at length, but I want to focus on explainable user interfaces.
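The flight-data-recorder idea for AI tools can be sketched very simply: an append-only log of every decision, kept so an accident can be analyzed after the fact. This is a hypothetical illustration, not any standard; the `FlightDataRecorder` class and its field names are assumptions.

```python
import json
import time
from pathlib import Path

class FlightDataRecorder:
    """Hypothetical audit trail for an AI tool: every decision is appended
    to a JSON-lines file so incidents can be analyzed retrospectively."""

    def __init__(self, path="decisions.log"):
        self.path = Path(path)

    def record(self, inputs, model_version, output):
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        }
        # Append-only: earlier entries are never rewritten.
        with self.path.open("a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

    def replay(self):
        """Yield entries in order, for forensic analysis."""
        with self.path.open() as f:
            for line in f:
                yield json.loads(line)
```

The essential properties are the ones civil aviation relies on: the record is made at decision time, it cannot be edited afterward, and it captures enough context (inputs and model version) to reconstruct what the system saw.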
Explainable user interfaces have become a very hot topic and a very important part of human-centered design, and they are something every software engineering team can practice. The general literature focuses on retrospective explanations: the system makes a decision, and then the user, if confused, can ask for an explanation. The major discussion is whether that explanation is local, with specific reference to the application, or global, an explanation of how the machine learning and neural networks work. I would say 98 percent of the work has been on retrospective explanations, but I've come to believe there's a better goal: rather than repairing the confusion, let's prevent confusion and surprise. What I call a prospective user interface is not retrospective but prospective: interactive, visual, and exploratory. Let me give you an example. Here is the typical case of a mortgage-granting application with a post hoc, or retrospective, explanation. You put in the mortgage amount requested, $375,000; household monthly income, $7,000; liquid assets, $48,000. You click Submit and you get an explanation: "We're sorry, your mortgage loan was not approved. You might be approved if you reduce the mortgage amount requested, increase your household monthly income, or increase your liquid assets." That's generally seen as a pretty good explanation, but not by me. I would say it leaves a lot of questions open. Which is more important? Should I reduce the mortgage amount, increase my monthly income (it might be hard for me to do that), or increase my liquid assets by borrowing from a family member or a friend? I think a better way to do this would be a prospective, exploratory interface in which you can adjust sliders. You can move them around and change the amount of the mortgage; you can see that if you reduce the amount of mortgage requested, your score will go up, and if you increase your household monthly income, your score will go up.
If you increase your liquid assets, your score will go up. By following this kind of exploratory user interface, you can see which factors matter the most and which are most attainable for you in order to reach the score you need for approval. This simple idea, I think, can be put to work in many cases, and we're beginning to see it in banks and other applications. Maybe the most enjoyable one I found is from a group of British librarians who put up a website with an AI-based recommender of novels. You can move sliders, up to four of them: from funny to serious, from beautiful to disgusting, from no sexual content to explicit sexual content, from optimistic to bleak. As you move these sliders, the cover images change to make different recommendations, and you understand why you're getting each recommendation. Now we've seen dozens of these. For example, in a newspaper recommender there are three sliders: politics, sports, and entertainment. As you move the politics slider, you get more stories about politics; as you move the sports slider, you get more or fewer stories about sports. Another is music recommendation: here five of Spotify's fourteen attributes let the user select acousticness, instrumentalness, danceability, valence, or energy, and so you get different recommendations. The OECD's Better Life Index also comes with a rich set of sliders. This is becoming a common approach, and I think it could do much more. We need better control panels for the Facebook News Feed; there are large controversies over whether it should retain the old-fashioned chronological ordering or the algorithmic one, and we've seen the recent controversies over Facebook's choices, which favor its business model rather than giving users control. That's what we need to change. Okay, that's my story, so let's have a quick summary.
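Before the summary, the prospective mortgage explorer described above can be reduced to a small sketch. Everything here is a hypothetical illustration: the linear scoring function and its weights are invented for the example, not any lender's real model. The point is the interaction pattern: each slider move is evaluated before submitting, so the user sees which change matters most.

```python
def mortgage_score(amount, monthly_income, liquid_assets):
    """Hypothetical linear approval score (illustrative weights only):
    a lower requested amount and higher income/assets raise the score."""
    return (-0.08 * amount + 25.0 * monthly_income + 0.6 * liquid_assets) / 1000.0

def slider_effects(base, moves):
    """For each candidate slider move, report the resulting change in score,
    so the user can compare options prospectively instead of being told
    post hoc why the application failed."""
    s0 = mortgage_score(**base)
    return {
        name: round(mortgage_score(**{**base, name: base[name] + delta}) - s0, 2)
        for name, delta in moves.items()
    }
```

Running it with the talk's figures ($375,000 requested, $7,000 monthly income, $48,000 liquid assets) and three candidate moves shows at a glance which adjustment raises the score most, which is exactly what the retrospective message leaves unanswered.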
Remember, there are three parts. First, the HCAI framework, which says there can be high levels of human control and high levels of computer automation. Second, the design metaphors, which shift us to AI-infused supertools, telebots, control centers, and active appliances. And third, the governance structures, which allow for innovative approaches that bring people into independent oversight of what's going on and shift the balance of control, enabling much improved clarity about what works and what doesn't. These are the three papers these ideas were based on, and my website covers them; I'll give the slides to the conference organizers and we will post them, so you can follow these references and visit my human-centered AI page at hcil.umd.edu. I'm pleased to announce that my book on human-centered AI, from Oxford University Press, will be appearing in January in the UK and in February in the US. I offered the conference organizers a discount coupon from Oxford, which will allow you to get a pre-publication discount on the book. I treasure the very nice, warm comments sent to me by colleagues who read the book. Alan Mackworth of the University of British Columbia says the book is well structured and lively (thank you, Alan), and that the coverage is comprehensive but will be controversial. Yes: the idea of human-centered AI is not going to be easy to put forward, and it will require effort by many people. But it is happening; we are seeing more and more of it, and I'm really pleased that the process is moving forward. The book is structured around these principles: supporting human values, meaning human rights, social justice, and individual dignity; individual goals of self-efficacy, creativity, responsibility, and social connections; design aspirations to make reliable, safe, and trustworthy systems; and my four levels of analysis: the team, the organization, the industry, and government.
Those are the three ideas. At the bottom, we have to remember there are a variety of stakeholders who have different needs: the researchers have one set of philosophies; the developers, business leaders, and policymakers have others; and the large number of users are also important. At every point we have to remember there are threats to this process; this is not easy. There are malicious actors who would use these technologies to bring harm, the cybercriminals and terrorists, the oppressive political actors, and we need to do our best to limit them. We have to make sure that bias in the training data and in our designs is limited, and we have to protect individual and minority needs. There may be flawed software as well. So these are the ways I think about it, and that's where the focus is. For me, the future is human-centered. Please join our Google group, with more than 800 members; if you join in the next few hours, you'll get the weekly note, which goes out later today. You can join at groups.google.com under human-centered-ai. Join us, follow us on Twitter, and visit the website for more. And that's the story: I believe the future is human-centered, and I hope you'll join me in acting on that conviction. Thank you very much, and I'm glad to take questions if we have time. All right, thank you very much. We have some time for some brief questions; we're going to go right to our virtual participants. This first question is for Ben: Facebook has been shown to push misinformation to its users in order to keep them hooked to the screen and show more ads. In the process, Facebook damages the norms of our democracy by injecting mistrust into our government institutions. What governance do you propose to protect our democracy, and what legal standing do we have to introduce such regulations, given that the US Constitution aims to protect individuals from the power of the government?
Wow, what a question, focused right on a difficult issue. Yes, I think raising public awareness of what Facebook's failures have been is the starting point, and recent weeks have brought us even more compelling evidence of how Facebook acted for its business benefit rather than on the needs and values of its users. I know that Facebook works to try to limit hate speech and advocacy of violence; they need to do more. It is possible for them to change. It is possible for them to stop and limit the intrusive and harmful bots and systems that are spreading these kinds of messages. Increased human control through better design is also a possibility. Yes, I know that Facebook believes this would reduce its business efficacy, but if many people, including me, withdraw from Facebook, it will begin to suffer, because Facebook is no longer a trusted source of information. So yes, you've raised a difficult issue. I do believe in government regulation, and even Facebook's advocates argue for government regulation; I think Congress is moving toward that. It's taken much too long, but that's what we need to do. You've asked the right question; the answers are slower in coming than I would like them to be. Technology plays a role, but human control is really what we want. Maybe I'll take a moment and mention one idea, proposed by myself with Marc Smith of the Social Media Research Foundation: allow editors. You could appoint an editor for your feed; you might appoint, say, the American Civil Liberties Union to block certain sources. If you could appoint an editor, you wouldn't have to do the whole job yourself.
Those organizations might have a better opportunity to do the job of limiting harmful sources; parental organizations or other social groups could take on the role, and you could choose your editors to edit the flow of what you get from Facebook, Twitter, and other social media platforms. Thank you for a tough question; I don't have a full answer, but I hope the strategies that are being tried will be put to work. Yeah, we started with a real ringer of a question. Next we'll go to a question from our in-person audience; I'll just repeat it for quickness: Ben, could you please elaborate on the "humans are not robots" quotes? Well, the two that I put up were from Margaret Boden and Joanna Bryson; they're cited in the papers that I mentioned, and you can find them easily by searching on the net, or contact me and I'll point you to the exact sources. It's a central principle for me and others that, if not now, not ever, will we see robots becoming like people; the social robot movement is petering out. The failures of Jibo, Anki's Cozmo, Kuri, Asimo, and most recently Pepper show that social robots, while compelling and fun, certainly entertaining, and able to support some educational purposes, are not going to be in our future. We will have wonderful services from cameras and digital navigation and many, many more applications that are AI-embedded but have excellent human control. Human control is a competitive advantage: it gives people what they want and gives them a sense of mastery and self-efficacy. Thank you so much, Ben, and thank you all for listening. To keep us on schedule, we'll move forward, but thank you again so much for joining us. Thank you. Bravo! The future is, as Ben said, human-centered.
2021 DSI Symposium Welcome and Keynote Shneiderman (1 of 7)
From Julie Cowart December 06, 2021