2024 "Uncover AI 4 Students" Special Virtual Event
This year's unique "Uncover AI 4 Students" expert webinar series is designed to inspire and empower students of all ages to embrace artificial intelligence and leverage it to supercharge their future learning and careers. We are thrilled to have several world-class AI experts share their valuable insights on the future development and emerging trends of Artificial Intelligence, and on how we can best prepare ourselves for this unique AI era of exponential technology growth.
Kasia Chmielinski is the Co-Founder of the Data Nutrition Project, an initiative that builds tools to mitigate bias in artificial intelligence, and an advisor at the Centre for Humanitarian Data and Global Pulse (UN OCHA). Kasia is also an affiliate at Stanford University and Harvard University focused on building responsible data systems. Previously, they held positions at the US Digital Service (Executive Office of the President), MIT Media Lab, McKinsey & Company, and Google. Kasia has a degree in physics from Harvard University.
Moshe Y. Vardi is University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University. His research focuses on the interface of mathematical logic and computation -- including database theory, hardware/software design and verification, multi-agent systems, and constraint satisfaction. He is the recipient of numerous awards, including the ACM SIGACT Goedel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Knuth Prize, the IEEE Computer Society Goode Award, and the EATCS Distinguished Achievements Award. He is the author or co-author of over 750 papers, as well as two books. He is a Guggenheim Fellow as well as a fellow of several societies, and a member of several academies, including the US National Academy of Engineering, the National Academy of Sciences, the American Academy of Arts and Sciences, and the Royal Society of London. He holds nine honorary titles. He is a Senior Editor of the Communications of the ACM, the premier publication in computing.
---------------------------------------------------------------------------
Sage Wang is a junior at Clements High School. She is the co-founder and President of Content Development of Kid Teach Kid. Sage is a 4-time Gold Key Winner for the Scholastic Art and Writing Competition, a 2-time AIME Qualifier, and a USACO Silver Division Qualifier. Outside of school, Sage also enjoys playing the cello and playing frisbee.
---------------------------------------------------------------------------
In this talk, Chmielinski delves into the concept of "AI safety by design," advocating for the integration of a safety mindset into the product development cycle from its inception. The discussion encompasses a range of topics, including maintaining dataset integrity and transparency, adopting trust and safety protocols for generative AI, and highlighting tangible examples of user-centered design in deployment—moving beyond theoretical discourse to actionable strategies. Discover how proactively addressing potential harms and implementing care-driven practices can pave the way for building better tech that truly serves everyone.
In this talk, Professor Vardi dissected the debate in which two schools of economic thinking have been engaged for many years about the potential effects of automation on jobs: will new technology spawn mass unemployment, as the robots take jobs away from humans? Or will the jobs robots take over create demand for new human jobs? He presented data demonstrating that the concerns about automation are valid. In fact, technology has been hurting working-class people for the past 40 years. Discussions about humans, machines, and work tend to be about some undetermined point in the far future. But it is time to face reality. The future is now.
Transcript of "AI Safety By Design" presentation by Kasia Chmielinski
Sage:
Hi everyone, thanks so much for joining our event today. We're just gonna give it a few more minutes to give everyone a chance to get in. And hi, Kasia. Thanks so much for joining us on a holiday weekend. We really appreciate you taking the time to discuss such a fascinating topic with us.
Kasia:
Yeah, thanks for having me. Happy holiday weekend.
Sage:
All right.
I think hopefully everyone should have had a chance to get in. So without further ado, thank you all so much for joining us today. And Kasia, on behalf of all of us, a big thank you for being willing to share such profound insights on AI safety by design.
So welcome everyone to our Uncover AI for Students special event series presented by Kid Teach Kid. Here at Kid Teach Kid, we're dedicated to providing quality,
free educational resources that have reached thousands of students worldwide. This year's event series is all about empowering students of all ages to harness the incredible potential of artificial intelligence.
Today, we're honored to have Kasia Chmielinski joining us, a world-class AI expert. I would love to introduce Kasia, who is a co-founder of the Data Nutrition Project, an initiative dedicated to minimizing bias in AI by building responsible data systems.
Kasia's affiliations with prestigious institutions such as Stanford and Harvard University demonstrate their commitment to developing safe and fair systems. Kasia's impressive career has included roles at the U.S.
Digital Service, the MIT Media Lab, McKinsey & Company, and Google. They are a pioneer dedicated to the principle of AI safety by design. In today's talk,
Kasia will be guiding us through the essential practices of embedding safety in AI from the very start of the development cycle. They'll dive into topics such as maintaining data integrity, adopting strong safety protocols,
and implementing user-centered design to create technology that benefits everyone. And at the end of their talk, we'll have a 10-minute live Q&A session, where all of you in the audience will get the opportunity to ask Kasia any questions you might have.
So without further ado, please welcome Kasia, who will be teaching us about the vital topic of AI safety. The floor is yours, Kasia.
Kasia:
Super. All right,
can everybody hear me? I guess I can't really see the participants, but hopefully you all can hear me and you'll stop me if you can't. So thank you so much for having me. I'm going to walk through a little bit of a presentation.
Hopefully it will be fairly engaging. I have some, everyone loves a live demo, so I'll try to do something live and we'll see if everything breaks. Then I will open up to questions. If that sounds all right,
I'm going to go ahead and share my screen, and hopefully you can thumbs up if you can see. Thumbs up?
All right. Fantastic. I'm going to focus today on better AI through better data. Don't worry, I'm going to go through it step by step. So hopefully this will not be difficult for anyone to follow.
My name is Kasia, as I was already introduced. I'm a technologist and I'm currently focused on building responsible data systems. I've done this in a number of different parts of the ecosystem.
And I'm also a very enthusiastic cyclist and a birdwatcher. I love birdwatching. I even have my binoculars right here so I can look out the window. So you can also, of course,
ask me any questions about birdwatching. Sometimes people ask me, "Well, how did you end up in the career that you ended up in, or how did you get where you are today?" And so by way of just kind of a quick exploration of what a career can look like,
I like to say it's non -linear, but my mom says that I'm scattered. So you can decide which one of those you prefer. I actually graduated after studying physics about a million years ago.
And at the time, I had a few options, like continuing in physics and getting a master's and a PhD, or I could go out in the world and try to apply my brain to something that seemed like it would be more immediately relevant.
And so I think some people go into the sciences and that's awesome. But for me, I decided to go and join this company that was big, but not so big. So this was in 2006 or seven.
So it was like almost 20 years ago. And I joined this little company called Google. And so I worked at Google for four years. I ended up in the UK, so I worked out of London. And then ultimately decided to leave Google and go start a company.
So I did that with some folks on the West Coast. So I went back to LA and founded a company with some ex-Googlers. That was around financial tech. So this was building AI to build credit scores.
So it tried to tell whether someone would pay us back or not, and then we would use that score and sell it to banks and other places. And the idea was that we wanted to make credit more available to everyone, regardless of what their kind of official credit score would be.
So after that I went and I joined a team at the MIT Media Lab called Scratch, and some of you might know it. It is a learning platform and a language. You can make animations and videos and things like that using this kind of LEGO-like block-based coding.
So, I was on that team and I led their product for four years. And then I went into the government and I worked in the White House at the U .S. Digital Service. After that, I really wanted to be located in New York.
So, I was looking for jobs that were in New York. I ended up joining a consulting firm called McKinsey, where I worked on healthcare analytics projects. So again, lots of data, some AI,
some just advanced statistical systems. And around the same time, I also launched the Data Nutrition Project, which I will talk more about, which is a nonprofit. So I ended up quitting all the things. And now I do some of the stuff at the bottom.
So I advise organizations. I am a researcher at a few institutions, and I also like to tell stories, and I do that through film or through podcasts and other kinds of things like that.
So I'm kind of all over the place. But I guess you could say I started with a little bit of a technical background. I ended up building products. I started to understand the way these products and these systems might not work for everybody and decided that I really wanted to pivot my career and think more about safety and responsibility when it comes to technology.
And now I do that as a freelancer, basically, as somebody who works for themselves. Also, I was just like a total nerd. I'm not saying that you're all nerds, but you can see this is me as part of math club when I was in high school.
So I've always really enjoyed technical subjects. I've always understood numbers more than I understand people. And you can see my little grin there. I was like, very enthusiastically into math club.
So let's talk about AI, let's talk about bias in AI. Well, first of all, when we talk about AI, there are three different kinds of AI that we've been talking about for years.
AI is not new. So the term artificial intelligence actually was coined in, I think, 1955. So it's a really old term, and we've been building on this over time where these systems have really grown up,
and every time something new comes out, we call it AI, and we all get excited, but it really is just advanced statistical systems. And it's gone through many different names, like big data and machine learning and now AI and generative AI and these kinds of things.
But what we're really talking about is, generally speaking, narrow AI, which is that we've trained a machine to do a very specific or particular thing. So they're dedicated to assisting us in just specific tasks.
You know, drive this... car, get us from point A to point B on Google Maps, that's AI, you know, recommend what product I might want to buy, that's shopping recommendations, that's AI. Now, people have started talking about general AI,
which is different from generative AI. So, generative AI is like GPT and these kinds of foundation models that are able to tell you words or, you know, make new images or create sentences.
That's still narrow, right? General AI is basically an AI that can do many, many, many different kinds of things. So this would be like your robot that you could say,
could you drive me to school and then go to the store and get groceries for dinner and then come back and cook the groceries, make the dinner, set the table, and then pick me up from school and then feed me, right?
That would be like an AI that could do many, many, many things. That would be general AI; we're not there yet. And then there's super AI, which is, we don't even know, machines that are just orders of magnitude smarter than humans, just way smarter than humans.
And we don't really know what this is. This is kind of what fantasy looks like. So we're very much still in this narrow AI, that first bucket. Some people think that we're starting to move towards the second bucket. But up till now,
that's really where we live. And I think it's important to remember that we're really training specific machines to do specific things right now. So the golden rule about AI is that you are what you eat.
And this is when it gets really important to think about how we actually make the AI systems. I just like cute things, especially cute images of food. So in this case,
we've got an AI that looks like a wheel of cheese. And the AI has been fed all of these really adorable things on the left-- pizza, and the pears, and the popsicles. And so then in the future,
when it sees pizza, or a pear, or a popsicle, it's like, I know what that is because I've seen it before. And that's how we train the system. We tell the system, start recognizing patterns. When you see this, it's this. When you see this, it's this. But then if it ever identifies a radish,
it won't know what to do because it's never seen a radish before. And that's when the AI doesn't work well. The AI won't work well if we don't train it with the data that's representative of what it's going to see in real life,
right? So this is a silly example about food, but AI is only gonna work for the people who are included in the training data. So it's not just about food,
it's about people too. And so you have real world problems here where you have machines, AI, that are getting deployed, they're launched out in the public, but they were based on incomplete datasets.
And so they don't work for all the people. And this is kind of the problem that we have with AI systems. So, for example, on the left here is the MIT Technology Review story about predictive policing algorithms. Well, let's say that you want to train a machine to tell you where crimes are going to be committed. You need a lot of historical data to train that system so it will
predict in the future where people are going to be committing crimes. And so you can use that historical data, and maybe you think, well, I'll just use all of the arrests data that has ever occurred in my city.
And the arrests are kind of like the crimes. And so then I'll just train it. But the problem is that arrests are not the same thing as crimes. We know that some folks get arrested more than others.
And some areas have more police than others. So what you end up having actually is a data set of people who were arrested for crimes,
but maybe it's because they were in the wrong place at the wrong time and there are more police around and they were, you know, racially profiled by the police and picked up there. So your data is actually not telling you about crimes,
it's telling you about arrests, and arrests are telling you about bias in the world, not all of them, but some of them. And so when you then build an AI that says, please tell me where the crimes are going to occur, the AI points to the places where most of the arrests happened,
and you just have this cycle now where you're going to send more police into areas that already have more police, and you're going to arrest more people potentially on the basis of something racist, right? And the other two are examples as well.
The one on the right I'll also call out. So Amazon created a tool that was supposed to help companies hire. And so what they did was they looked at all the resumes that they had ever seen before, and they knew which ones got hired and which ones didn't.
They said, "If you see resumes that look like these ones that got hired, then you should also recommend that they get hired. And if you see resumes that look like these ones that didn't get hired, then you should recommend that they not get hired." And well,
it turns out that in the past, there were a lot of people who applied for jobs, but men got the jobs more than women, or maybe there weren't that many women who applied for jobs. And so what the AI learned was that maybe women shouldn't be given jobs.
And so once they trained and deployed this AI, it started discriminating against women. So this is the kind of issue that we have with AI. If you don't train it on the right data, it's not going to make unbiased decisions.
This is another example here. This was a few years ago, but Amazon Prime was trying to figure out where they could deliver. This is before they could deliver everywhere. They still can't deliver everywhere. And so what they did was they just looked at neighborhoods where people already had memberships.
And it turns out that a lot of folks don't have memberships if they live in communities where they're poorer or they don't have access to the internet. So they can't be buying things off the internet, right?
They don't have this good connectivity, these kinds of things. And so Amazon was excluding neighborhoods where people didn't have Amazon memberships. And that ended up being most of the areas in Boston where black and brown people live.
So you had an AI that was excluding people. Again, not necessarily on the basis of race, but because their data was not representative, it ends up being discriminatory. So why did I get involved in this?
Well, I have my own story -- that's me, by the way, as a cartoon. You know, when I was taught how to build these systems, the idea was you can't build for everyone all at once. And usually you're under a tremendous amount of pressure to launch something very quickly.
And so generally what you say is, all right, build for your ideal user or your target user, and then over time we'll start to build for other people too. And so what that meant was a lot of the training data sets,
the data that I used to build the algorithms on, were people who speak English; they included people who lived in the Western world, who had access to the most recent technology, and who had kind of normative characteristics.
And then as I started building these systems, I realized that sometimes they didn't even work for me because I wasn't my own ideal user. So, you know, I'm mixed race and I'm mixed Chinese and Polish.
I am non-binary, so I don't identify as one gender or the other or any, really. And then my hearing: I actually can't hear out of this ear at all. And so my data wasn't represented
in the data set. So I was building systems that didn't work for me. You know, once I built a system that classified me over and over again as a white Eastern European lady. And that sounds funny,
you know, that's like, wow, that's so, so off. It's so off. But actually, this system was supposed to be used in healthcare. And in healthcare, it really matters what your background is, because you have different levels of risk when it comes to different diseases or different things you need to be screened for.
So I kind of had this "aha" moment where I thought, "Oh my goodness, I'm building systems that don't work for me. Think of all the systems out there and how, because of the data that we put into them, they might not work for everybody." And that started to get me really interested in,
"Oh, how can I start to maybe build these systems that work better for everybody?" And so before I jump into some of the work that I've done, I thought, maybe we could have a real -world example. I'm going to try to do this live.
This is either a fantastic idea or a terrible idea, one or the other. So I'm going to, if my computer will let me, ooh, fun.
Okay, we're gonna go here, and you all can do this from your house later if you wanna try it once we're done with this demo. It's an open and public website. Okay,
and I'm gonna stop my video so that you can see this, 'cause you're gonna see my video again in a few seconds. All right, so we're gonna build a classifier and we're gonna give it a bunch of data and we're gonna tell it what this data is.
So the first set of data we're gonna have, turn this on here so you can see me again. Okay, you see me here. All right, this is gonna be Kasia. Let's see. And I'm going to get a bunch of training data.
My training data are all pictures. So pictures, okay? Move my head around. I have all these pictures. So with these pictures, we're going to train the AI to recognize me. That's Kasia. All right, class two.
We're going to do this now. Class two. Let's do... Well, actually, you know what? Let's call this one glasses. All right, and this is going to be just Kasia with glasses.
I can't actually see very much right now, but I'm just gonna trust that it's working, all right? Move my head around a bit, okay? You got me. Then we'll do another one. Add a class.
How about we call this one hat? Let's say I put a hat on top of this weird headset that I have on, okay, good enough. All right, there's my hat, cool. All right, and then,
'cause I really can't see, I've got a hat on, okay? We'll add one more here and we're going to call this Boba, and this is my adorable little Boba friend who's sitting here with me, and we're going to take pictures of my Boba friend as well.
Okay, so now we've got four different classes. We've got glasses, me without glasses, hat, and Boba, and we're going to train the model.
So now it's going to actually go and train the model, train it to recognize whatever it sees. Perhaps I took too many photos. Okay. Here we go. And then what we're going to do is I'm going to actually try to see how well it does at classifying. And then we're going to try combinations of things and see how well it does.
And if you have, let's see, I think that there's a chat. If you have some guesses as to what's going to happen, if you want to throw them in the chat, I'd love to see what people think is going to happen.
All right, so this is our model here, and you can all still see my screen, right? Can I get a thumbs up? Yeah? Okay, excellent. All right,
so it's seeing me. And it says that I'm 100% glasses. That's correct. Right. I turned my head up. Look at this. And I turned my head a little bit. Some weird stuff happens. Because I didn't actually train it on turning my head.
And if I take off my glasses. Oh, okay. It's kind of confused. But it thinks it's like Kasia and glasses. That's good. I put these back on. And then if I put on this hat, what does it think? Okay.
All right. Pretty good. It understands that it's a hat. Good. And then if I have my little boba guy, here we go. It's 100% boba. Cool. All right. It seems to work. Let's try some things then. I have some other pairs of glasses.
So let me take these guys off and put on these glasses here. It seems to be a little bit more confused now because these glasses, I mean, does anyone want to guess as to why it doesn't understand that these are glasses?
You can throw it in the chat if you'd like. Well, one thing is that maybe it's looking at the color of the lenses, right? 'Cause they're different glasses. We didn't actually train it to understand that these are also glasses.
How would it know that these are also glasses, right? I have another pair of glasses here too, 'cause I wear a lot of glasses. Again, it doesn't really understand, right? Exactly, different glasses. So we could actually probably go back here. We could say glasses,
we could maybe, do this a bit more. Maybe I could try these other glasses over here. Okay.
And then I could maybe retrain the model here. We could see if it does any better. So you guys are exactly right. It's only going to understand exactly what we showed it.
And even though we as humans understand, hey, one pair of glasses is just like another pair of glasses, it turns out we're way smarter than AI. If I put on this other pair of glasses here-- oh, you can't see me,
but I'll put them on in a second. Even though they're a different pair of glasses, you still know they're glasses. So how many different glasses do we need to train? Well, it's a really good question. There's no way that we could train it on every single pair of glasses in the entire world.
So here we go. We have the original ones, 100% glasses, cool. New ones, ah, now it understands it because we trained it, right? Now it understands that they're glasses. And then the white ones. Pretty good.
Pretty good. Still thinks it's a little bit of me. But, all right, so we updated our glasses. Now, what if we tried to do something like, I have my little boba guy. What happens if I go to the side?
Okay, this still kind of knows that it's him or them. I don't know what their gender is. What if I, like, put a hat on the boba?
What's it going to do? What if I like took my glasses and I put them on the boba? Interesting.
So I'm not going to keep doing this because I can do this for hours, literally. But I think you all, oh yeah, let me throw the, let me throw the link into the chat so you all can play with it.
Ideally you'd play with it later, but if you want to play with it now too, no one is going to stop you. There you go. Okay. And let me go back to my presentation. Oh, I should probably do this so it doesn't take all the bandwidth.
Yes, leave the site. And I can go back. Okay. So what did we learn there?
Well, it turns out you really have to be very specific with the things you train it on. And just because we as humans understand that glasses are all the same thing, that does not mean that the AI is going to understand that.
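For anyone who wants to recreate the spirit of this demo in code, here is a minimal sketch of the same idea, assuming Python with NumPy and scikit-learn. The live demo used a point-and-click web tool, not this code, and the synthetic "embeddings" and class names below are made up for illustration; the point is simply that a classifier only recognizes what its training data covered.

```python
# Toy sketch: a classifier only recognizes what its training data covered.
# The 2-D vectors below are stand-ins for image embeddings; nothing here
# is the actual demo tool, just the underlying idea.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def samples(center, n=30):
    # n noisy "pictures" (embeddings) clustered around a class center
    return center + rng.normal(scale=0.3, size=(n, 2))

# Four classes, mirroring the demo: plain face, (dark) glasses, hat, boba.
centers = {
    "kasia":   np.array([0.0, 0.0]),
    "glasses": np.array([3.0, 0.0]),   # trained only on the dark glasses
    "hat":     np.array([0.0, 3.0]),
    "boba":    np.array([3.0, 3.0]),
}
X = np.vstack([samples(c) for c in centers.values()])
y = [name for name in centers for _ in range(30)]

model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# A pair of glasses the model never saw: its embedding lands somewhere
# between the trained clusters, so the prediction is unreliable.
new_glasses = np.array([[1.4, 1.2]])
print(model.predict(new_glasses))
print(dict(zip(model.classes_, model.predict_proba(new_glasses)[0])))
```

In this toy setup the unfamiliar glasses land between the trained clusters, so the predicted probabilities are spread out, mirroring the confusion we saw on screen until more training examples were added.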
And we see this kind of an issue. For example, when you go to the doctor and they measure how much oxygen is in your blood, they put this little thing on your finger. They clip it on your finger, and then you kind of wait for a little while,
and they check to see your oxygen. And what they're doing is they're actually using lasers, and they're kind of bouncing light off of your skin and trying to understand what's underneath your skin. And it turns out that those systems were all tested on people who have skin that's my color or lighter; they didn't have a lot of people with darker skin in their data,
and so the AI just doesn't work for them, right? So they're going to the doctor and they're clipping on the thing and then the thing is giving them the wrong readings, and the doctors are saying, "Huh, this is interesting. We don't understand why it's giving the wrong readings." Well,
it's because it wasn't trained on all the different potential skin colors that humans can have, right? So this is a really, a very real problem, not just about, you know, Boba and,
you know, glasses and things like that, but all of these systems that are doing image recognition, facial recognition, that are determining things about us, our credit scores, giving us our medical tests. How were these actually trained?
You know, what was it contained in the data? So I had some questions, I'll just pose them. I think we've kind of maybe addressed some of them. So what potential biases did I introduce?
How did I introduce them, and how might I identify these, right? And some of the biases that we realized were: I'm wearing really dark glasses, but glasses can be different. And we didn't talk about,
we didn't actually train with multiple things at once. We only trained one thing at a time. So what happens once you have multiple things? These kinds of things start to get a little more complex and a little bit more confusing to the AI. How might I mitigate these biases?
Well, I could take lots and lots and lots of photos. The other thing that you can do is you can make sure that once you've deployed the AI, that you're testing it, and you're making sure that it's doing well. And then if you find a case in which it's not doing well,
then you go and you retrain it just like we did. You go and you add more pictures, you go and you add more words. You go back and you try to make it better. It's unlikely you're gonna launch something that's perfect, because it's just very hard to make something perfect.
There's no such thing as perfect AI, right? But what we can do is make sure that once it's launched, we keep going back and checking to make sure that it works, right? And if we see a mistake, we catch the mistake and we try to make it better. And then for real-world applications,
you can kind of drop other thoughts if you have them in the chat. I'd love to hear if there are situations where technology didn't work for you or someone that you know or someone that you love, maybe someone in your family, and why you think that AI might not have worked for you or your family.
I have one example, which is when I was at Google, I helped them build voice search. So that's you talk to your phone, and then it's like Siri, but this was like in 2008. You talk to your phone,
and it actually can go search Google for you. And what it's doing is it's listening to your voice, and then it's using that to turn your voice into text, and then the text is being searched on Google and they return your results.
But do you wanna guess whose voice was actually in the training data? It was all British people, 'cause I lived in the UK, and it was all people who had a very clear British accent.
And so it didn't work very well for me 'cause I have an American accent, and my mom is from Australia, and she speaks English with a Chinese accent. And so it didn't work at all for her, right?
It's because the training data didn't include people with different accents. It only had one kind of accent. So we did go back and retrain it and find lots more data in order to make it work for everybody. So what can we do about this?
I just talked very concretely about the ways that we can make sure we add more data. But this is something that I and others have been thinking about for a while. A few years ago, when I was a fellow at Harvard,
along with these wonderful folks, we decided to launch something called the Data Nutrition Project. Now the idea behind this is that we had been eating a lot of cookies. We had been sitting in a university room dreaming up ideas for things to launch and thinking about how we could make AI work better for people.
And we decided to focus on the data, like I've been talking about, you know, is the data representative? Do we have enough training data? Are we thinking about people of different races,
different ages, coming from different places in the world, with different kinds of use cases? And we're thinking about how we could make the data more healthy, and we're eating tons of cookies in this room at MIT. And we pick up the box of cookies and we're like,
hey, there are nutrition facts on the back of this cookie box. And that means that we can look inside it before we eat it. And it means that if we're sensitive to sugar or sensitive to different kinds of ingredients,
that we would know that this is dangerous for us before we eat it, so we can make a good choice. And it's really, really important that we have transparency into things so that we can have choice. If we don't know what's inside,
we can't make a good choice. The same is true of data. Right now, if you go and you buy a data set and you wanna build AI on top of it, it's very hard to know what's inside the data before you use it. There's just no standards out there,
which I know sounds crazy, but there just are no standards. So we thought, well, maybe we could make a nutrition label for a data set, or we could look inside before we eat it. And so that's kind of what we ended up doing.
All right, we first launched a label that was very quantitative. It had lots of information and it kind of looks like, sorry, it looks like a nutrition label as well. And then we were told,
hey, that's not so useful because we already know kind of, we run a whole bunch of statistics when we get the data set. Questions that we have about the data are things like who paid for it,
where did it come from? What data did they remove? How did they clean the data? And these kinds of things are very hard to see on a quantitative label. And so then we launched something that was a little bit more ambitious,
and we said this has racial bias, this has gender bias, and that ended up being pretty challenging, because it kind of depends what your definition of bias is.
And so we ended up launching the third version, which is the most recent version. And this one has a kind of a balance of the two. So it's a standardized way of understanding what's in the data. We say things like how you should or should not use the data.
And if you shouldn't use it, what we mean by that and why that's the case, maybe there's some licensing information or some other things that you need to know. And then underneath this, if we could scroll underneath this, you'd see a bunch of risks and mitigations that are related to the data.
So for example, if the data only came from one region in the world, you'd want to call that out. It doesn't mean that you shouldn't use it, it just means that you should only use it on a system that is gonna be used in that part of the world.
Now, for example, let's say that I built a risk system for diabetes. And I wanna be able to use this to assess if someone is at high or low risk of getting diabetes.
And I trained it on adult information from one region, maybe the Northeast, which is where I am. New York, right? Maybe just this region, adults in just this region.
And then I built my system and I deployed it in the emergency rooms and in the hospitals just in the New York region. Maybe it works really well because I have the data specifically from this region,
right? People who have these diets or are in this climate or whatever. But then let's say I take that same model and I use it in a pediatric setting with kids. Well, kids' and adults' bodies are really different.
Maybe it doesn't work very well on kids at all. Or maybe I take this model and I go to an ER that's in Tokyo, and people there eat different diets, the environment is different, and all of a sudden it doesn't work for them at all. And so the danger is if you don't match the data set to the use case. So we don't say to people, this is unhealthy, don't eat this. We just say, this is what you need to know about this data.
Don't eat this. We just say this is what you need to know about this data And once you have transparency into what is actually contained in the dataset, then you can make a decision based on what you're trying to do with it.
So I think I ended probably quite early, but I prefer questions anyway. So I'm hoping that all of you have lots of questions to ask me. And I am happy to answer anything, from career stuff about getting into AI to any of the topics
I covered, if you want me to go back and explain anything more deeply, or anything about birds or bicycles, which are also my passions. So I will stop sharing my screen here and I'd love to hear your questions.
All right, I see a question here. Well, there's one question that just says cookies. So I'm just going to say yes,
I like cookies a lot. Another question from Daniel G says, considering the importance of integrating ethics into technology education at school, could you propose specific curriculum changes in college or even high school
that could help bring more awareness to AI bias and the importance of data set transparency? Yes, absolutely. Just thinking,
it's my thinking face. It's like my dumb face is my thinking face. So I studied physics in university. I went to Harvard and I studied physics, and I would say that one of the biggest challenges that I faced with the curriculum was that everything is taught to you like there's no human bias. It's taught to you like it's totally objective, factual, scientific, and does not have any
space for human emotions or any kind of social underpinnings. It's seen as being almost divorced from social bearing at all.
And I think that that is really dangerous. And so, absolutely, I think we need to have additional courses about tech ethics,
just like in the medical field, there are many courses that you have to take about medical ethics. There should be courses about technical ethics and about the way that we build and deploy technology on and with communities.
I think it also, though, has to be built into the actual work that happens when you're learning math or learning science or learning computer science. It can't be seen as something that is separate. You know, we should be asking, why do we build these things this way?
You know, why do we identify one ideal user? Why are we thinking about the norm? Here's another example. When I was building technology more day to day,
and I think it's the same now, everything at some point moved from on-premise application servers and storage into the cloud.
Everything then went to being hosted in Amazon. And one of the big benefits of having everything hosted in Amazon was instantaneous scaling. So let's say that I was working on the Scratch project, and as soon as the school day starts,
you see a huge bump in the traffic. And so you have these little hills that happen based on the time zone and based on where people are coming from and based on the school day and whether it's a school day versus a weekend.
and if you have to maintain your own servers then you have to be able to scale them very quickly and sometimes you don't have enough capacity and everything falls down. So it was really fantastic to be able to use Amazon and just say instantaneously scale it.
Just throw a machine at it. I'll pay another five dollars an hour or whatever it is. And as soon as you get used to it,
it's really useful for scaling purposes, but the thing that you're not realizing is that it's probably not great for the environment, because you can always just throw another machine at it. You're not thinking about the actual cost of running a machine,
the actual environmental impact this might have in a data center somewhere. So we're constantly making decisions that actually have our values embedded in them.
And I think that these are the kinds of things that we need to bring into the curriculum. So, talking about this not only when you're building technology, but also having ethics courses alongside your regular curriculum. Also, the last thing I'll say about this, and then I will be quiet because I see lots of good questions coming in.
The other thing I'd say about this is that it's important for us to be interdisciplinary. So for example a lot of the data work that I and others have been doing has actually been happening for a very long time.
if you talk to archivists, people who build archives in libraries, right? It's the same kind of problem in a completely different domain. And if technologists are never talking to anybody else, if they're not talking to people who run libraries,
they're not talking to chefs who make food, they're not talking to educators who teach kids, then we're going to lose a lot of information, because a lot of these problems have actually been addressed or have been studied in different realms,
not just in technology. And so I think it's important to make sure that the curriculum is interdisciplinary in some sense, where technologists are working with people who are studying psychology and people who are studying medical applications and people who are studying music,
right? I think this kind of cross -disciplinary thing is also very important to build people who are gonna be ethically intelligent too, not just technically intelligent. Okay,
Daniel G, I hope that answers your question, and probably many, many more. Next: how can we make sure we have a comprehensive data set, which is such a big challenge? This is a really good question.
And I think it's not necessarily about having a comprehensive data set. It is about having the right data set for your purpose. So like I said,
if I was just going to build an AI to be deployed on one kind of population, a very specific population, I probably only need data that looks like that very specific population.
But if I want to use that AI on lots and lots of people across the entire world, then I need a much bigger data set. And so it really is about mapping what you're trying to do with the data set to the data set.
It's not about just having the biggest data set that you could find. It's about quality. It's about quality and applicability to the specific thing that you're trying to do. One example would be that I have,
like many of you I would assume, a cell phone that has many, many apps on it. And one of the apps that it has on it is called Merlin. And it is like a, I don't know if anyone uses it.
who's not my age, but it's like it can listen to the birds and it will tell you which birds are singing. Like I said, I'm a bird watcher. This is really helpful because sometimes you can't see the birds, especially right now in New York,
there's a lot of leaves on the trees, it's really hard to see the little critters in the trees. So if there's a bird singing, I can take my phone and I can just kind of go, okay, let's listen to it, and then the AI will tell me which bird it is.
Now, I don't need to know which bird is singing everywhere in the entire world. I don't need a comprehensive data set. I just need the birds that are going to be here.
Now does that mean that I might miss a bird or two that's traveling through? Yes. So there is obviously a trade-off between having a totally comprehensive data set and just what you need locally, and that's a decision you have to make: how far do I go and how close do I stay,
right? Now, if I went to Europe and I took out my phone and I tried to listen to the birds, the birds are totally different for the most part. And so I wouldn't recognize any of them. In that case, I had a great data set for New York,
but a terrible data set for the UK, right? So it's not so much about one single data set to rule them all. It's not about one single comprehensive data set. It's really about, do you have the correct data set for your particular use case, for this particular thing that you were trying to do.
Okay. I'm going to go to the next question here from Kevin. How do we avoid using people's opinions instead of scientific or proven facts to train the AI model?
I'm really sorry to tell you, but generally speaking, everything that we're using is probably written by people. And so it's less about trying to figure out what is true and clean and free from bias and what is biased,
and it's more about understanding what the bias is. There's no such thing as an unbiased data set. Somebody has made a decision somewhere. Someone decided to add that or to remove that. Someone decided what the data set would contain and what it would not contain.
And so it really is a question of what are the biases that are included in this? Now, for example, if I look at a textbook, okay, this is a good example, history classes.
I don't know how many of you are based in the US, based in other places in the world, but the history that we learn in the US is very different from the history that we learn in other parts of the world. And even sometimes, across the US,
the history that we learned is different. You know, so my mom grew up in Hong Kong; the history that she learned is really mostly about Chinese history. She learned the order of all the dynasties, and she learned the names of the poets, and she learned the particular wars that occurred over time, mostly in the Asian region. And I grew up in the US and I learned history from the perspective of the North, right? And even the history that we teach in the North is maybe different from how it's characterized in the South of the U.S.
So what is a fact? What is a proven or a scientific fact in that realm and what is not, right? Now there could be some things that do seem like, all right, this is more of a scientific fact that's maybe bias-free because,
you know, it's the speed of light, right? But we've also decided what a meter is and what a second is. And so these things are maybe more objective,
but they're still based in human decisions that were made. And so it really is a question of: can you identify and be comfortable with the biases that you have, and if you're not comfortable, make decisions to try to reduce those biases?
Okay, this is very strange, to answer questions and not have a back and forth. So if you have follow-up feelings about the way that I'm answering your questions, please feel free to jump into the chat. I'll go to the next one.
That's from Juan, who says, "How can high school and college students contribute to reducing bias in AI while we're still learning about technology and artificial intelligence?" Well,
I have a news flash for you, and that is that even people who are building AI right now are still learning about it. So there are a lot of things that are really new,
that are being found out now. So just because you work in the field does not mean that you know everything. Actually, because the field changes so quickly, you're pretty much guaranteed to not know everything all the time.
So going into AI or going into technology probably means you're going to always be learning things, which I think is really fun, but that's like a big warning for you if that's not your jam, if you want to do a thing and get good at it and that's the one thing you do.
Technology is definitely not the place to do that. And probably in most other domains too, you still have to learn a little at least. But how can folks contribute? I mean,
I think the first thing is to be aware that these are the kinds of problems that exist, and then don't be a dumb user, right?
So, okay, I have an example here. I use an app, I wear like a little ring, and it measures my sleep, and then it tells me if I slept well or not. I mean, I usually know if I didn't sleep well, because you wake up in the morning, you don't feel really good, and then you're like,
"Oh, well, I guess I didn't sleep well." But it's nice when this ring tells you you didn't sleep well and it can show you how you slept. Now, I realized that approximately once a month or so,
it was telling me I was going to be sick. It was saying, "Watch out, Kasia. You're going to be sick. You're getting sick. Your body is getting hotter. Watch out, you might be getting a cold." And then I realized,
oh, no, no, no, it's not that I'm getting sick, it's just that I'm about to get my period. I'm the kind of person who gets a period. So I thought about it. And I realized, hey, they probably have never tested this on people who get periods.
I bet their entire team are people who don't get periods. So I wrote to the company. And I said, this is a problem. You're telling everybody who gets periods that they're about to be sick.
You're probably scaring a lot of people. Plus, this is incredibly biased. You should do your homework and actually test this on people who are representative of the entire population. You're missing approximately 50 percent of the population. And they very quickly,
I'd say within a week or two, actually fixed the issue. So when you're asking what can a college student or a high school student do about this, well, the first thing is, don't just sit back and take what's given to you and assume that it's done well.
You know, it's not whether you're in high school or college. It's also me. I work in the industry, but I didn't work at that company. I have no idea how they're making their technology, but I am critical of it, because I'm somebody who understands that these kinds of issues can occur, and when they do, we need to reach out and we need to ask folks to fix the problem,
and we need to say, "Hey, look, I think you have a bias here. I'm seeing something that's wrong here," or we have to go reach out to other people who are using the system too and saying, "Are you seeing the same issues that I'm seeing?" You need to hold folks accountable,
right? Don't just be a reactive user, be a proactive user. That's one thing that you can definitely do, no matter what age you are. The other thing you can do for sure is,
if you're learning about these things at school, make sure that your curriculum has something about ethics and has something about, hey, these systems are not objective. They're not neutral, right?
They're built by people. And there's something that's almost even more dangerous when we have a system that's a machine system. People think that it's not biased because they think that people are biased,
but that machines are not. But they forget that people made the machines. So I think, if you're a part of this curriculum, if you're in CS or you're taking math classes or science classes or you're coding,
just keep your ears peeled, your eyes peeled, I don't know what that saying is, but peeled, and think about, hey, am I actually seeing ethics in this curriculum? Go to your professors and say,
could you add this, or can we talk about these issues? So don't be a reactive person, be proactive about this, and go and make sure that it gets embedded into your own curriculum. I think that's definitely what you can do if you're students.
And then if you're interested in it, go into this field. I don't think it's ever too early to start to poke around, see what you can build, see what other people are building.
And maybe that's where you want to go with your career, which would be awesome. We'd love to have more people for sure. Okay, next, Sonya. You're welcome.
I'm happy to share. Thanks for being here. AI regulation: what future regulatory measures do you foresee being necessary to keep up with these changes?
How might these regulations differ across global jurisdictions? Man, you all have good questions. These are really good questions. Okay. So we definitely are in an era where the AI and the technology has moved faster than the regulation.
And as a result, the companies have a leg up, because they've already done a bunch of stuff that I think would likely have been illegal if we had understood how it was going to be used in the first place.
But it hasn't been, because they've just done everything, for example, data collection practices, just scooping up all the data they can find across the internet. But now we're having some issues: hey, is that copyrighted? Is that private data?
Right? And this is because we have cultural norms now in the technology sector where these kinds of things are allowed. You can buy someone's data. You can source the data.
There's a whole market for just buying people's data. And it's legal. Right? So with this kind of stuff, I think, definitely regulation is slower than the technology has moved.
You're right. In terms of what we then need, not even to keep up with the changes, but maybe to start to address some of the issues and start to make some changes; I don't even know if we can keep up,
but start, start to move in the right direction. You can see some of the work that's coming out of the European Union. So there's actually the EU AI Act. The European Union Artificial Intelligence Act came out on like Tuesday of this week.
So really new. And the way that they're addressing this is to basically say AI systems of a certain risk category or risk level have to be audited.
We have to understand how they were trained, what the data was that trained them. The audits have to be done in a certain way. And so I think the regulation kind of sets up a future marketplace
for AI and data auditors. And that will be kind of these third parties that will go in and be able to investigate whether the AI works the way it's said it's going to work, whether it's harming people, and whether the data is appropriate for the use case.
So that, I think, is already happening. So I don't know that I'm saying I foresee that it's necessary, but a huge component of that is to grow a workforce of people
who are able to investigate AI and investigate data technically and can tell you when something has gone wrong. And that really doesn't exist right now. Most of the people who have the technical skills are building the AI in the companies.
And all the AI is kind of locked down, right? It's proprietary. So we need more people who have the skills to go into government. We need more technologists to go into nonprofits,
or into this kind of auditing sector that's being built so that we can start to hold companies accountable for the AI that they're building. All right,
next person. Justin: can I share an example of bias in AI that might affect today's high school students, and how can it be addressed? You all can have thoughts on this too, and you can drop them into the chat as well.
My belief is, my understanding is that, especially over the pandemic, a lot of educational tools went online. A lot of people went online and a lot of AI started to be deployed on students.
So things like Khan Academy or Codecademy are these kinds of teaching tools that are maybe personalized.
So you do one course and then it says, oh, you're a student who does well in this, I'm going to offer you this other course; based on the things that you did well or didn't do well, I'm now going to offer you this, right? That's all AI.
I think there are some dangers with this. So where are they getting the training data? Not sure. And what if it starts to recommend certain kinds of education for some people and other kinds of education for somebody else?
So I think that that is one potential source of bias. I don't know that we know very much about that industry. So I don't know whether this is actually harming people. But I do know that AI systems are getting deployed more widely in high schools and even lower grades than that, to kind of personalize curriculum or to help you have your own path,
partly also because teachers are overworked, I think. So I would actually be interested if I could turn that question around on you all: if there are examples of AI that you've been using in school and how you feel about them,
maybe you can drop those in the chat too, but that's one example. And I think you had a second part of the question, how can these biases be addressed? Well, it's very hard to identify when a bias occurs because we can only really understand that it's a bias when it happens to many,
many people. So we can see that the algorithm has been doing the same kind of thing to the same kind of person. It's very hard, if you have one person, to say, "Oh, this is a biased decision." You have to kind of see it over and over again and realize it's a pattern.
It's like something that's broken in the AI. So you'd have to understand that it's a bias at scale somehow. And in order to address that, I think we would have to reach out to those companies and say,
"Help us understand why your AI is making this decision, and explain to us how you trained this AI, what was the data that you used, and can you prove to us that it's not doing something that's biased?" And so,
unfortunately, a lot of the recourse in these cases, because the AI is private, it's proprietary, there's no regulation that they have to tell you what they're doing, right? So, a lot of what we can do in this case is we can go and demand that they give us information and we can maybe go to the authorities if we think they've broken some kind of law and we can have them investigated.
That's pretty much what we have. Or I think you could do the other way, which is social pressure, which is that you can publicly kind of shine a light on bad practices and put pressure through social media and through the news and these kinds of things on companies to be held accountable for things.
But generally speaking, it goes, number one, can we find the biases, right? It's hard, but can we find them? And if we do identify them, we need to reach out to the organizations and ask them how they built the AI.
And if they don't want to work with us nicely, then I think holding them accountable through the media, through social media, through going to the authorities, that kind of thing is probably the next step. Okay,
Okay, let's see. Amelia asks: can you share with us one or two of the biggest challenges or obstacles you've faced in real-world applications? Sure.
Sorry, let me read that again so that all of you can follow along. Can you share with us one or two of the biggest challenges or obstacles that you faced while implementing data transparency tools in real-world AI applications? Well,
generally speaking, people don't like to document things. They want to build, build, build. They don't want to tell you what they did, because it's not as fun as building. So I think that one of the biggest challenges that we've had on the nutrition label side is that the nutrition label takes work to make, and people don't want to do the work. And it's not required that anyone have a nutrition label
on their datasets. So the motivation isn't external; it has to be internal. The person has to want to do it out of the goodness of their heart, because they think it's the right thing. And some people do,
and so we engage with those folks and we love that community. But some people are like, I don't want to, no one's telling me I have to, and it takes a lot of time and work. And also, why would I tell you things about my dataset that make my dataset look less good,
right? Especially folks who have cut corners; they don't want to say they've cut corners. And so I think that that is a huge challenge: getting people to actually do these things when it's not required and not necessarily the norm.
That would be, I think, the biggest challenge. The other challenge that happens when I'm inside of companies and I'm helping to build is that there's often a kind of resource crunch.
There's not enough money, there's not enough people, and if you're a startup, you've got investors, and the investors are putting a lot of pressure on you to grow very quickly. And they're saying, "We gave you $10 million and we want you to return $100 million so that we can get a return on our investment.
That's why I invested in you because you said you're going to grow and we're going to make money. So you better grow fast." And so you have a handful of engineers and maybe machine learning folks or AI folks. You have a product person.
That's me. Maybe you have a designer, right? And you're just like, "Oh, we've got to build some stuff, and we've got to make it really good, and we've got to go really big." Then ethics or documentation or transparency just falls down the ladder,
because it's more important for you to get something out so that your investors don't yell at you. I think this is also a real-world issue around transparency tooling: the transparency tooling and the ethical components,
the responsible or safety components of the product, require taking people away from other things and putting them on that project.
'Cause generally speaking, it's the same people who will be building all of these things. So I think that's maybe the second obstacle. First, it takes a long time and people don't wanna do it, and they don't have to. And the second is, even if you wanna do it,
sometimes the market pressure, and the pressure of being in a business, means that you have to make trade-offs, you have to cut corners, and so you end up putting all of your effort into getting the product out and less into the safety side.
Unfortunately, it's not always the case, but I don't mean to depress you; those are some real challenges. Okay, next we've got Eric: "As some of us high school and college students have aspirations of pursuing a career in AI development,
what mindset shifts and adjustments should we make in order to fully embrace today's learning about AI bias and data transparency?" Great question. Well, first of all, I'd be thrilled if some of you want to work in AI development,
especially if you care about data transparency and reducing bias. Let's see. What kind of mindset?
I think this leads very well from the previous question, which was, you know, what were the obstacles. And I think if you can take a mindset with you that the most important thing is not to make $100 million,
but it's to make tools that work for people. And if you happen to make $100 million, good on you. But that the more important thing is that we use this technology to the benefit of humanity.
And the globe, honestly; I mean, the environment I think is huge. It's something that my generation, your generation, all the generations that are alive and the ones that are coming are going to have to contend with. And so I think that,
you know, when you jump into the startup world, assuming that's where you want to go or big, big company, big tech world. A lot of things are profit driven, they're driven by the market,
people want to make a ton of money. And that's not all there is to it, right? That's part of the game, for sure. But if we really want to use this technology to help people and to facilitate conversations across people,
to reduce conflict, to promote peace, and to start to address some of the ills in society and the environment that we've caused over the last many hundreds of years, I think that we can't just,
I know that we can't just think about profit. And we can't just think about ROI, return on investment for investors. And we can't just think about who's going to get the next big infusion of cash from the venture capitalists.
And so that would be the mindset shift that I would recommend is that you might be taught by older technologists like me, people who are older than me even, that the market is the way to do it.
And the way to play the game in the startup world is to kind of lie about a thing, have a dream, make a prototype, go big, and then fix it later, right? Just get out there and fix it later.
If you can go in with a different mindset, which is let's think about this a little and let's make sure we're building something that works for everybody. Let's make sure that we are testing that when the AI has been launched,
we're still watching to see whether it actually works, right? Monitor it, evaluate it, and fix it if there's something wrong with it. That's the mindset that we need going into the next generation of AI.
Mira asks: how do you decide when to use an LLM? Okay, great question. So for those who don't know what an LLM is, that's a large language model.
This is like the GPT models, the Llama family, Mistral, other models like DBRX; these kinds of foundation models are trained to give you the next thing in a sequence.
So if I say, I would like a glass of, it goes through all of the things that it knows and it comes back and it says: highest probability is water, and the second highest probability is milk.
And then after that is orange juice, right? For images it goes pixel by pixel; for text it goes word by word, generating new content on the basis of patterns it has seen in the training data it's been fed.
Because they've been trained on such massive datasets, really, really big datasets, they have picked up patterns and can sound and act very human.
And you've probably experienced this through ChatGPT. So hopefully everyone now understands what we mean when we say LLM. The question was how do you decide when, or why not, to use it. Just like with any technology,
we should be using a technology for what it's good for and leaving to people the parts that people are good at. So LLMs are good at sounding conversational.
In English, at least, because they're mostly trained on English, so in other languages, they might not work as well. Again, a data problem. So they might be able to, for example,
help you summarize a lot of text and make it sound human. What they're not good at is retrieving information, being accurate. And so what you have to do is make sure that, whatever your use case is,
an LLM is going to be the right technology for that thing, maybe a conversational, chatbot-like, human-like experience, and keep them away from things that they're not good at, like doing math.
They're not good at math, and they're not very good at anything that's fact-based, because they tend to make things up and just kind of hallucinate, is what we call it. I'm going to pause, because I think we're out of time, and maybe, Sage, if you want to jump in.
There are a few more questions I could try to answer very quickly, or I can just put a pin in it. Yeah, since we're running close to our time here, if you just want to finish off with any concluding thoughts on any of the final questions, please go ahead.
Okay, I'm gonna look at the ones that are remaining here. I see four of them; just give me a second to read them. Uh-huh. Uh-huh. Okay, cool.
Thank you all so much for joining the talk and asking amazing questions. To kind of answer some of these questions here about the Data Nutrition Project and where you can learn more,
we have a website, datanutrition.org. You can go there and start there. If you have more questions, feel free to email us. We're volunteer-driven, so we're a very small group, and we're very friendly.
In terms of our long-term vision, we have this tool that's out there. You should play around with it if you'd like; it's linked from the website. And we're in a lot of conversations with folks who are thinking about regulation in the space.
And we would love for our label and others too to be seen as potential standards as this becomes regulated. So it will probably be required at some point to have some kind of data documentation.
And it would make a lot of sense for that to be standardized, so that you could actually compare the documentation to each other. And we're hoping that DNP can help with that. And I think that the last question here is about courses that you should take if you wanna pursue AI development.
I don't have specific thoughts there. I think if other people have answers to that, you can drop them in the chat. But I would say, make sure that you are studying not only the technical
but also the socio-technical, meaning ethics, the impact on society, and just making sure that you are thinking about how and when you should or should not be deploying AI.
And I think that's probably it. Thank you so much for having me.
Sage:
Yeah, Kasia, thank you so much for such a fantastic presentation. Personally, I just want to say I really liked the
live image recognition demonstration you did on Google; it was a lot of fun. And also the analogy between bias and unhealthy food. I also really quickly want to thank everyone in the audience for all those thoughtful questions during our Q&A.
And finally, just please remember that a recording and transcript of Kasia's talk today will be available on our KTK website after this weekend. And please join us again for the second event of our Uncover AI
for Students series this Monday night at 7 p.m. Central. Have a great night, everyone.
Kasia:
Thank you, everyone. Enjoy your weekends. Bye.
Transcript of "The Future is Now" presentation by Dr. Moshe Vardi
Sage:
Thanks so much for joining. We'll be getting started in just a few minutes, just to give everyone a chance to trickle in. My algorithm for deciding when to start is to look at the number of participants and compute the second derivative, and when the second derivative is negative, it means it's time to start.
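For fun, that rule of thumb looks roughly like this in code; the participant counts are invented.

    # Invented participant counts, sampled once a minute before the talk.
    counts = [12, 25, 41, 52, 58, 61]

    # Discrete first and second differences of the count series.
    first = [b - a for a, b in zip(counts, counts[1:])]
    second = [b - a for a, b in zip(first, first[1:])]

    # When the second difference turns negative, arrivals are slowing down: time to start.
    print("time to start:", second[-1] < 0)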
Alright, I think we can go ahead and get started. So hello, everyone, and welcome to the Uncover AI for Students special event series presented by Kid Teach Kid. Our mission is to provide high-quality, free educational resources, and we're proud to have reached thousands of students worldwide.
This year, our series focuses on encouraging students of all ages to harness the incredible potential of artificial intelligence. Today, we're honored to have Dr. Moshe Vardi with us. Dr.
Vardi is a world-renowned mathematician and computer scientist, and he will be sharing his insights today on the evolving relationship between humans and AI. Let me quickly introduce Dr.
Vardi. He's an expert in automated reasoning and a distinguished professor of computer science at Rice University. He leads the Technology, Culture, and Society Initiative and is a member of many prestigious academies,
including the US National Academy of Engineering and the Royal Society of London. Additionally, he's a senior editor of the Communications of the ACM, the premier computing publication. In today's talk,
Dr. Vardi will discuss the relentless advancement of automation driven by technological progress over the past several decades. He will present data demonstrating the significant impact of AI and automation on the future of work and emphasize the importance of society embracing this reality, because the future is now.
So without further ado, please join me in welcoming Dr. Vardi who will teach us about the vital topic of humans, machines and work. Dr. Vardi, the floor is yours.
Dr. Vardi:
- Thank you very much for inviting me to address you tonight.
- All right. So, you see, it's not going to be really a technical talk. It will be a socio-economic, technical, political talk,
kind of trying to look at the issue of technology and society, which is an issue that's very dear to my heart these days. So whenever something
makes the cover of Time magazine, you know that something significant has happened. And this is from, I think, January of 2023.
It's a dialogue with ChatGPT. ChatGPT had just burst onto the scene in November of 2022. So let's take a bit of a look at the history of AI.
But you can see already that people are already incredibly concerned about what's going to happen with AI. What you see here is-- I love this cartoon.
You see a box labeled AI. You see a monster coming out of the box. And the man is looking at the shipping label, and he's thinking, who the heck is Pandora? OK.
There are a lot of concerns about AI. And in fact, the concerns are so great that sometimes it's hard to tell what is real and what is just fiction.
So in February of 2023, a Google engineer, Blake Lemoine, who had been fired from Google, said: my fears are coming true.
During my conversations with the chatbot, I came to the conclusion that AI could be sentient, due to the emotions that it expressed reliably in the right context.
I mean, I know serious people take this very seriously, but the fact that a Google engineer jumped to that conclusion tells us something about the situation.
Two months later, a very well-known computer scientist, Geoffrey Hinton. Let me see if I can get rid of this stuff at the top here; hide controls, what is this, hide, ah, better, okay, very good.
So, Geoffrey Hinton decided to leave Google; he was also at Google. The headline read: for half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT; now he worries it will cause serious harm. As companies improve their AI systems, he believes, they become increasingly dangerous. People are worried about dangerous AI.
And in May of last year, that's about a year ago, the Center for AI Safety issued a statement: mitigating the risk of extinction from AI, extinction meaning the end of human society, should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war. These are the people who are called the doomers,
people who think that AI brings extinction risk. So let's go back and take a little bit of history. As many people know, in some sense the beginning of AI is
a paper by Alan Turing in 1950. The paper was called "Computing Machinery and Intelligence".
It was made famous by the so-called imitation game, or the Turing test, and what you see is Benedict Cumberbatch from the movie "The Imitation Game". But the imitation game was actually a small part of the paper.
The paper was a philosophical examination of whether a machine can be intelligent.
And it concludes: I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
So the end of the century was 2000, and maybe you can see he was perhaps a bit too optimistic. I don't think by 2000 anybody was thinking that laptops were thinking. Even now we're debating it,
but definitely not at the end of the century. And this early optimism was typical of the early period. In 1958: within 10 years,
a digital computer will be the world chess champion. It actually took about 40 years. And: within a generation, the problem of artificial intelligence will be substantially solved.
It didn't happen within a generation; a generation is like 25, 30 years. It didn't happen. We still have not fully solved the problem of artificial intelligence. And because of this optimism,
AI was a field that over-promised and under-delivered for many years. And there were periods known as AI winters, where people had kind of given up on AI and there was very little funding, so little progress was made. There were two such winters, one in the 70s and another one in the late 80s and early 90s. There was a big hype about something called the Japanese Fifth Generation project, and Japan was going to take over AI,
and it didn't happen. Again, there were years without research funding, and without research funding, how do you do research and make progress? Well, things started turning around in the late 90s.
In 1997, IBM's Deep Blue beat Kasparov in chess. Kasparov was then the reigning chess champion. Here you see him walking away in disgust from the game table.
He lost the second game and he never recovered from that. IBM struck again in 2011 with a system called Watson, and Watson beat two great Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
Brad Radder and Ken Jennings by a significant margin. And it looked like we were making great progress. Then 2016,
this time it was Google or DeepMind, and this time they mechanized Go, and they were able to beat the reigning champion, Lee C. Dole,
in the game of Go. In Go, it's considered much harder to mechanize than chess, because the board is larger, and there are only two type of pieces, just there's only one type of black and white,
there are many, many more configurations, much harder to mechanize Go. And then in 2022, we had what people called the Gen AI revolution.
Suddenly we have, charge EPT came around and it can it can generate text and art and music. And suddenly people thought, you know, it can start do things that humans really do.
You know, many of you have played with with charge EPT and you can ask it to to write text You know there is a debate if you're saying I'm going to college There's a whole debate as a college SS still makes sense.
You can ask such EPT to write an essay So and then if we think can do all that stuff Can they take can they can they can we can the automate jobs that people would do and ultimately They will have a adverse impact on the job market.
So the issue is, maybe we don't have to worry too soon about extinction. Robots and machines are not there yet, ready to take over completely. People who worry about extinction maybe have watched too many science fiction movies,
Skynet, etc. But the issue is: what is going to be the societal impact of widespread automation? And that's the topic I will talk to you about: this prospect of widespread automation, and the fear that machines will take away jobs from people.
And indeed McKinsey, a consulting company, estimated about a year ago that close to 12 million workers will lose their jobs by 2030,
so in just about six years, okay? A huge impact on the workforce. So already there are predictions that GenAI will have a huge impact on the workforce.
But the whole issue of automation and work is very controversial among economists, okay? This is a debate that has been going on for a long time.
The neoclassical economists say no, no, no. Here is what Ken Rogoff from Harvard wrote in 2012: since the dawn of the industrial age, a recurrent fear has been that technological change will spawn mass unemployment.
Neoclassical economists predicted that this would not happen, because people would find other jobs, albeit possibly after a long period of painful adjustment. He doesn't tell us how long and how painful,
but we'll come back to that. By and large, that prediction has proven to be correct. So the idea is: yes, technology takes away jobs,
but it creates new jobs, and everything works out in the long term. But other economists, like Paul Krugman, a Nobel Prize winner, ask: can innovation and progress really hurt a large number of workers,
maybe workers in general? The truth is that it can, and serious economists have been aware of this possibility for almost two centuries. So distinguished economists are in disagreement.
Some of these people call themselves the new Luddites. The Luddites were the workers who objected to the new machinery during the Industrial Revolution.
Now, when you come to predictions of what the impact of technology on jobs will be: in 2018, MIT Technology Review decided to try to find a consensus, and they put together a chart with the different predictions, and concluded that there are about as many opinions as there are experts.
So you see the predictions are all over the place, from minor impact to huge impact, okay? No consensus whatsoever.
So why are there so many different predictions? Well, there is a Danish proverb. Some people say predictions are hard;
the Danish proverb is: predictions are hard, especially about the future. I say, no, predictions are easy. What is hard is correct predictions.
Making predictions, anybody can make a prediction. Go ahead, each one of you can make a prediction. So I will not make predictions here, because I think the future is inherently unknown. I mean,
just think of what happened with ChatGPT, and imagine six months before that: who could have predicted it? So we're not able to look even ten years into the future. But what I'll try to do here is to look into the past and to see how much technology has impacted jobs in the past.
And remember what Ken Rogoff told us: oh, technology is not a problem, don't worry about technology and jobs. So let's take one important segment of the US economy: manufacturing.
Manufacturing is a huge sector of the US economy, and what you see here is what has happened to manufacturing since the early nineties.
On the top we see real manufacturing output. What is real manufacturing output? Real means adjusted for inflation. And you see that manufacturing output grew over a period of about 20 years by 40%.
But employment declined by about 30%. So we see growing output and declining employment.
So what's happening? How can you have growing output but declining employment? Well, the answer is we have become more productive. By using automation,
our workers today are more productive. They can produce more. And because of that,
because productivity has increased, you can see that between 1997 and 2010 productivity doubled, and because of that, fewer workers can manufacture more.
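As a back-of-the-envelope check, here is the arithmetic behind those rough figures from the slide; treat the numbers as illustrative.

    output_index = 1.40    # output up about 40% over the period
    workers_index = 0.70   # employment down about 30%

    # Productivity is output per worker; with these rough figures it roughly doubles,
    # which is how output can grow while employment shrinks.
    productivity_index = output_index / workers_index
    print(round(productivity_index, 2))   # -> 2.0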
So on the one hand you can have an economic boom, but it's not good for the workers, because the workers have lost jobs. So this is the picture that we are seeing when you look at what happened to working-class people.
Over the past 40 years, automation has already had a very harsh economic impact on middle- and working-class Americans. Now,
if you're coming from the educated class, the educated professionals, and I suspect that most of the people here on the talk,
if they're watching this talk, are probably coming from educated professional families, then you should know that educated professionals live in a bubble. Because if the parents are educated professionals,
there's a good chance that all their friends are educated professionals, and they will make sure the children become educated professionals too. So you only see what's happening inside your own community. But
what I will show you is that the last 40 years have been very unkind to working-class people. And that kind of explains what's happening in this country economically and politically, why we have such a polarized country.
I will now dig into that, and I will argue that this is a major societal challenge that we must address. So,
one of the things you can do is look at what happened to working-class people. The usual definition of working class is no college degree.
So here you see what happened to white men with no college degrees. Again, this is real income, and real means adjusted for inflation. And you see the median income,
the median is similar to the average, it's the income in the middle, and the median income has declined from about $45,000 a year to $37,000 a year.
So that's a significant drop; it's about a 20% drop. Now, you hear that unemployment is low.
And it's true, unemployment is very low right now. But what does unemployment mean? Unemployment means how many people that are looking for a job are not finding a job.
So unemployment can drop in one of two ways: more people finding jobs, or fewer people looking for jobs. So economists look at what they call labor force participation.
What percentage of the population is either working or looking for a job? They're in the labor market, it's called; they're in the labor force. And what you see here,
this looks at men, is that if you go back 60 or 70 years, over 85% of men used to be in the labor force.
And now you can see how this number has been declining, declining, declining over the past 60 years. So what happened? Men just gave up. And I'll come back to it in a minute.
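A small illustration of the difference between the two rates, with invented numbers: when discouraged workers stop looking, the unemployment rate can fall even though nobody new found a job.

    population = 1000

    def rates(employed, looking):
        labor_force = employed + looking
        unemployment = looking / labor_force        # share of the labor force without a job
        participation = labor_force / population    # share of the population in the labor force
        return round(unemployment, 2), round(participation, 2)

    print(rates(employed=600, looking=100))  # (0.14, 0.7)
    print(rates(employed=600, looking=50))   # (0.08, 0.65): unemployment falls, so does participation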
Why did they give up? They stopped looking for a job. You may remember seeing some campaign rallies, and you see angry men, and you may have asked yourself: what are they so angry about? Well, this is what they're angry about. People who lost out economically are unhappy, and that's why they're angry. Let me dig a little deeper into this.
So economists look at what they call the skill spectrum. They look at all the jobs and they divide them into low-skill, middle-skill, and high-skill. Low-skill jobs are the people who, maybe, mow your yard.
And high-skill are typically lawyers, engineers; these are all high-skilled jobs. And then in the middle there are the middle-skilled jobs.
And you can see that in 1983, the majority of the workers were in middle-skilled jobs: roughly 15% low-skilled, 26% high-skilled, and almost 60% middle-skilled.
And, for example, manufacturing was considered middle-skilled. Now what happened when you go forward 30 years? Low-skill doesn't change much. High-skill,
there are more high-skilled jobs, because the economy has become more sophisticated; many more jobs require college education. But the middle-skilled jobs have shrunk.
That means that middle-skilled people who lost their jobs cannot find other middle-skilled jobs. That's why they drop out of the labor market. So if you look at what happened to wages between 1973 and 2007, for example,
you see that people with advanced degrees have had their real wages increase, and college graduates increased too. If you had some college, maybe an associate degree, you barely kept even; with only high school you didn't even keep even;
and if you had less than high school, you lost ground. And to go into manufacturing, you could just finish high school. Now that's not good enough. And it's not even just about the money;
look at life expectancy. If we compare people with a bachelor's degree and without a bachelor's degree, you see a growing gap. There was a recent drop,
but you still see a gap between people with a bachelor's degree and people without a bachelor's degree. And the gap is not trivial: it's close to eight years of life.
This is huge. This is huge. So let's go back to the argument about the industrial revolution.
People say the industrial revolution happened, lots of jobs, you know, agricultural jobs, were by and large mechanized, and look at us, we are okay. But this is a very simplistic way of looking at the industrial revolution.
Yes, we have adapted to the industrial revolution; the modern social welfare state is our answer to the industrial revolution. But it took about 200 years.
The industrial revolution started in the late 18th century and the emergence of the modern social welfare state is after World War II.
It's almost 200 years. And it was not an easy adjustment. Consider, for example, the two communist revolutions, the Soviet revolution and the Chinese revolution: the combined death toll is close to 100 million people.
It was a very, very difficult adjustment. Even in this country, people have amnesia. This is the Pittsburgh railroad riot,
a strike. The police came to break the strike and opened fire. I mean, look at that; that looks like a revolution. We didn't have a revolution here, but it was bloody; it was not an easy adjustment.
It was not an easy adjustment So to say all the other revolution happened and we adjusted is historically naive and If we're going to repeat the same painful adjustment,
then we are stupid We need to learn from history. Those who do not learn from history are doomed to repeat it now there's technology destroying and and jobs?
Yes, technology destroys jobs. Does technology create jobs? Yes, technology creates new jobs, okay? But is it clear that the numbers will offset each other?
We don't know that. There's no guarantee that the number of created jobs will be exactly the number of destroyed jobs. What about the speed at which jobs are destroyed and jobs are created? What about the skill level?
Imagine you take a truck driver and you tell the truck driver, "We don't need you anymore to drive a truck," but you can become a programmer who writes software for autonomous trucks.
It's unlikely that you can take a truck driver and tell them to become a software developer for autonomous trucks. The skill level is very different, okay? A truck driver maybe has a high school education.
To write software for autonomous trucks, you need at least a bachelor's degree in computer science. What about where the jobs are? You know,
they talked to miners in West Virginia and told them there are jobs in California installing solar panels. Is that okay? No: I cannot move, my elderly parents are here, I take care of them, I cannot move to California. Human beings are not widgets; you cannot just move them around like widgets,
they're human beings.
they're human beings. Now people have studied this issue of mobility and how much we have technology creates new jobs.
And they study from Oxford. about 10 years ago, they have looked at jobs that, new jobs exist in 2010.
They did not exist in 2000. And the funny was a very, very, very small fraction. So between 2000 and 2010, of course, huge development in information technology.
Remember, Google was launched in 1998. iPhone was launched in 2008. iPhone was launched in 2008. between 2010 and 2010, very,
very small, not a huge deal, you know, yes, some new jobs, but we're not creating as many jobs we used to create. And here is the striking way of looking at it.
Let's go to Detroit in 1990 and take the big three automobile companies,
General Motors, Ford, and Chrysler: they had a market value of $70 billion and they employed 1.2 million workers.
Now let's go today to Silicon Valley and take the big techs of Silicon Valley: Alphabet, Microsoft, Facebook, Nvidia, and the rest.
They employ 2.6 million people, about twice as many as Detroit. But their market value is measured in the trillions of dollars, so they are many times bigger than Detroit was in 1990.
But they only employ twice as many workers. You can have huge economic growth, but without that many jobs created. And part of the issue is what kind of jobs they are.
So here is LinkedIn's list of emerging jobs, already in 2017: machine learning engineer, data scientist, sales development rep, customer success manager,
big data developer, full stack engineer, Unity developer, director of data science. None of these jobs will go to working-class people; all of these jobs require a college education.
Now, you have to remember, in this country only about a third of people will end up with a four-year college degree, only about one third. Two thirds will have less than a four-year college degree.
Now, some people say, okay, there will be other jobs for them. Actually, a colleague of mine already said in 2017: well, look, we have an aging population. Why don't the people who lost their jobs become caregivers? But this,
again, is someone who is in the educated professional bubble and knows nothing about caregiving. First of all, manufacturing jobs were union jobs.
They used to pay $20 to $30 an hour; that's $40,000 to $60,000 a year. That's a middle-class job. Caregivers today get paid much less than that.
So you're telling people: look, get a job, but lose at least half, if not more, of your income. But there's also a cultural barrier.
I mean, remember, human beings are not widgets. You cannot take a big, brawny man who used to work lifting heavy weights on a manufacturing line and say,
okay, go wash the old people. There are cultural barriers: he may not be willing to do it, or even if he's willing to do it,
they may not hire him to do it. Such jobs are called pink-collar jobs; they are typically taken by women, again for social reasons, and it's not so easy to change.
We can say, no, men should do it too, but people are not widgets. Now, we also have to consider that this is not like the Industrial Revolution,
where, some people say, jobs were destroyed and new jobs were created, so what's the big deal. We are now talking about machines
that may be able to outcompete us. The machines are getting better and better. I'm not worried about human extinction,
but there are more and more things they can do. Just think of what ChatGPT can do now: you can tell ChatGPT, write me a college essay based on the following experiences. And I
don't like the writing by ChatGPT; I find it too vanilla, not enough personality. But boy, the English is good. The English is probably better than mine.
So machines are now doing certain things better than people, and there's no reason to think that this will stop. What happens when machines can do everything that we can do? Nobody has come up with an argument for why this will not happen.
Nobody could come up with argument why this will not happen now good I like a good day I like to tell a kind of a parable the parable of the what's called the neoclassical horses and this was Formulated by an economist Vasily Leontief a Nobel Prize Prize winner in 1983.
He said, imagine two horses in 1900 talking about technology and horses. And they had about the Ford Model T,
that all of them on the manufacturing line in 1908. And one horse says, look, what horses-- one horse is worried, what will-- what will-- horses do once all these cars are around and and the other guy says come on technology always destroy a job for horses but it creates new jobs there will be new jobs for horses well we now know there are no new jobs for horses once cars took over there are few jobs for horses in
fact if you look at the equine purple it has declined. I mean, we used to have many, many in the United States, you know, like 22 million horses around because horses were best of burdens.
And now horses are essentially pets. Okay. Horses, no function. Nobody's using horses for work anymore. It's a pet. I like horses,
but they're pets. Now, unfortunately, there's very little discussion in our political world about this.
One politician who did talk about automation was Barack Obama, in 2016, when he was not running for re-election anymore;
he was finishing his second term. In an interview with Bloomberg News he said: as we move towards an economy where, because of automation, you need fewer and fewer people to make more and more stuff,
more and more of us are going to go to the service sector, which has traditionally been a low-wage sector. Because of automation, because of globalization, we're going to have to examine the social compact,
the same way we did early in the last century, and then again during and after the Great Depression. So what does Obama mean by the social compact? If you look at the way we organize our labor, our jobs, our work life, everything that we take for granted now developed over a long period of time:
the right to strike, the right to unionize, the abolition of child labor, the 40-hour work week, equal pay, health and safety laws. And really, since 1970 we have not made much progress when it comes to labor laws.
We've stagnated; we're still working 40 hours a week; we have not made any significant progress. There is an idea called universal basic income, UBI. You may have heard of it.
The idea is that everybody should get an allowance from the state, a basic allowance. It doesn't matter whether you're rich or poor, everybody should get one. So it's not going to be what we call means-tested;
it doesn't depend on how much money you have. Of course, for rich people, getting another, let's say, $25,000 a year won't make any difference, but for poor people it will make a huge difference. This is very controversial: people on the left and on the right advocate it, and people on the left and on the right object to it. Andrew Yang was a
presidential candidate in the previous election; he tried to bring it up, but I don't see it discussed much now. But Obama was right. We do need to renew the social compact.
And I'd like to quote a gentleman who was the chairman of the Council of Economic Advisers under Obama. He said: my worry is not that this time could be different when it comes to AI,
but that this time could be the same as what we have experienced. He meant: if you know the history, technology has had a huge impact on our work life, and we have done okay because we have adapted,
we have developed the right policies; but now we are not really developing new policies. What do we mean by policies? Education, taxation, trade, housing. We need to re-examine our socioeconomic life, given that the possible effects of automation are very, very significant.
For example, take education. In the early 20th century we had, so to speak, the first educational revolution: high school for all. We said, you know, the economy is more advanced now,
we need more educated people, everybody should finish high school. We made high school mandatory; everybody has to finish high school. After World War II, there was an attempt at college for all;
that was not so successful. As I said, only about 33% of people in the United States finish a four-year degree,
only one third. So what is next? Some people say the next one will be the lifelong-learning revolution. And why is that? Because the current model is that you are born and you spend the first, let's say, 22 years of your life studying to become a productive worker.
And the skills that you learn all the way through college are supposed to last you for your lifetime. But you finish college at about 22, and you may be working for another 40 years at least;
think how much change happens during those 40 years. So the skills you learned in school may no longer be relevant, may no longer be sufficient. So this idea that you just learn and then work is not good enough: you'll have to learn and work, learn and work, learn and work. And already some companies say, you know, we don't need degrees,
we need competencies and credentials. You can go to all kinds of bootcamps, for example for programming; they don't require a college degree. So on the one hand a degree isn't required, but on the other hand a college degree is also not sufficient.
But the way our higher education system is set up, it's like a camp: you finish high school, you spend four years on a college campus,
and then you go to work. But if you have to go to school again, it's more difficult: you now have a family, you are working. So part of what's probably going to happen,
the same way that I'm now talking to you remotely, is that much of education will become remote education. And we're already seeing it happening: more and more master's degrees are being offered only remotely.
But nevertheless, there is, to me, a very big philosophical question. What happens when machines can do everything that humans can do?
What will humans do? I don't think it's going to happen soon; I don't think it's happening in 2030. But how about in 20 years,
how about 2045 or 2055? What happens when we're able to build machines that are smarter and smarter and smarter?
What happens when they finally outcompete us? What will we do? Some people say: oh, we will write poetry, we will do art. This is kind of optimistic.
I have to say, I'm less sanguine about it. For example, some people have studied this. Remember, we saw that many men are not working. So sociologists asked:
okay, what do these men do? It turns out there is a big difference between women who are not working and men who are not working. Women who are not working are actually working;
they're just not getting paid. They're working by taking care of people: they take care of young ones, they take care of old ones, and very often they're not getting paid. What do men who are not working do?
Play video games. So, think about it: for the past million years, humanity has been working for a living, and suddenly, if we don't have to work, that is a huge adjustment for humanity.
Now, as for the legacy that my generation leaves you, I have to look in the mirror and admit:
we are leaving you a world that is not in very good shape. So I think, I hope, your generation will be a generation that makes the world a better place.
And if you want it to be a better place, remember the song. This is a Michael Jackson song, but I am not going to sing it for you; the song is called "Man in the Mirror". If you want to make the world a better place,
take a look at yourself and make a change. This is true for all of us: if we want to improve the world, we need to start with ourselves. You are very privileged; you can sit at home and learn about technology from top people in the field.
But remember, with great privilege comes great responsibility. And I love this cartoon, so I think this cartoon is a good way to end the talk.
Hey man, would you please take another robot box for me? Thank you very much. Thank you so much, Dr. Vardi. For now, we're going to go ahead and turn it over to all of you in the audience, so if you have any questions for Dr. Vardi, please leave them in the Q&A,
and then Katie and Vinci, if you guys want to help read out some questions, Dr. Vardi will help discuss them. So somebody in the chat asked:
"As AI is advancing rapidly, what are the youngsters supposed to be learning about?" So, one of the most important things is to learn about AI.
It doesn't mean that everybody has to go study computer science. In fact, there are new degrees coming along now. We will see, by the time you guys are in college,
there will be degrees in AI, and some of you may want to go and get a degree in AI. There are degrees in computer science; at Rice we are talking about starting a new Bachelor of Science in AI; and there are
degrees in data science. But these are not the only degrees that you can get. Whatever you study, I mean, you can decide: okay, I'm interested in human beings,
I want to be a psychologist, okay? And that's a good thing. If you want to be a psychologist, go be a psychologist. But whatever you're going to do,
AI will affect your life. Because, I mean, just imagine: we are now with AI where we were about 25 years ago
with the internet. The internet was just coming out. I mean, it existed before, but very few people knew about it. People in the research world knew about it,
but people in the wider world did not know about it. So I remember starting to get invitations: please give us a talk, what is this internet thingy? And of course,
there was a tremendous amount of hype about the internet. I'm sure you've read about the dot-com boom and the dot-com bust. There is a lot of hype about AI too.
There is an AI boom; I'm worried about an AI bust. But just as, after the internet bust, the internet changed the world, AI will change the world. So just as it would be inconceivable 25 years ago that you would become an educated person
and not know about the internet, the same thing will be true now of AI. So keep your eyes open, and you have to remember one thing about your work life.
The only job security that you have is between your ears. That's your job security; that's your most important instrument. Lifelong learning. Don't say: okay,
I'm pretty soon going to college, maybe I can get admitted to some elite school, I'll get a brand-name degree, and then I'm all settled. No. It's going to be a very rapidly changing world.
It has changed very much over the past 25 years, and I think over the next 25 years it will change even more. So never,
never rest on your laurels. There's a question from Krishna that says:
what is AI meant for, if we want it to help us but don't want our jobs taken? So, you know, if you don't want your job taken, be the one who uses AI; don't be the one that AI is replacing.
So you can look at the job. As I go around now, I look at different jobs, and the jobs that are going to be hard to automate are, in fact, low-skill jobs.
Okay, for example, I will tell you a job that will still be here in 25 years: cleaning tables in a restaurant. The job will still be here. Why? Because it's actually hard to build a robot to clean tables in a restaurant, and we pay people minimum wage.
So why build an expensive robot when you can get a teenager to do it? But when you look at another job, you need to think: okay, what is the future of this job? Now,
remember, you finish college, and you have to think about your career being, let's say, 40 or 45 years. No one can predict what the world will be like.
So you cannot say, "What should I do now so my job will be safe for the next 45 years?" No one can do that; nobody knows what is going to happen. Okay? But take a look; that's why I said you should learn about AI.
And if you look at a job and you say, okay, this job is still going to be here in 10 years, then go for it, okay? But you'll have to keep learning. That's why I said lifelong learning. And it's both:
I'm trying to convince my university that we need to move more in that direction, and I'm trying to tell you: don't think, I'll go to college and then I'm done. Even if you say, okay, I'll get a master's degree and then I'm done, you will never be done with education. There is a book called "The Race Against the Machine"; you will be racing against the machine all your life. That's the reality; that's the world we live in
today. What else? Tell me another question.
Yeah. Oh, sorry. Yeah. No, go ahead. Go ahead. No, no, go ahead. Oh, okay. So another question is: how has the computer science college curriculum changed due to advances in AI?
- So the answer is that, you know, very often the people who create a technology are just as surprised as other people.
There is a beautiful story I read about the British physicists in Cambridge who discovered the electron at the end of the 19th century.
They were very happy, a big scientific discovery, and they raised a toast to the electron, "which will never be of any use to anyone."
So they had no idea what the implications of their own discovery were. The thing is, you build technology step by step by step,
and suddenly, the same way that you have the proverbial straw that breaks the camel's back, it can also work in a positive direction: you build, you build, you build, and suddenly, poof,
it can do something that it couldn't have done before. So the people building these AI systems are just as surprised, because, you know, you add one piece and then another, and you never know when suddenly you'll have a breakthrough.
So we have to figure out: okay, now there are, for example, coding copilots, like GitHub Copilot. How should that affect how we teach programming?
And even the concept of giving people coding assignments: is it okay if they use a copilot to do the coding or not?
We are right now, we are in the middle of the same transformation and we are debating inside computer science, how does it affect us and we don't have all the answers yet.
You know, you can see people debating: is it okay for scientists to use ChatGPT to write scientific papers? Is it okay
to give ChatGPT a scientific paper and ask ChatGPT to write a review of the paper? I mean, suddenly we have a new entity that can talk. I have to say, I'm personally not too impressed with what ChatGPT writes,
because it's a bit too vanilla for me; I like writing with a bit more personality. But nevertheless, it writes stuff that a few years ago
we would have said: no way. If you've used, for example, Google Translate: just two years ago you got clunky English, and now suddenly you get beautiful English.
What are the implications of that? This is not just in computer science. Everyone in colleges is scratching their heads: what are the new rules of the game? Should we treat it as a calculator?
Nobody tells you now that you cannot use a calculator; everybody uses one freely. How do we teach writing in the era of ChatGPT?
All of these are questions that we're asking. We don't have answers yet. I see somebody is asking the same thing,
but about what will happen in K-12. I have to say I'm actually very worried about K-12. And I'm worried because the precedent that we have is not very good.
The precedent is a technology that came about just over 15 years ago. What is it? The smartphone, in 2007, so that's 17 years ago.
So for many of you, roughly speaking, you were born around the time the iPhone, the smartphone, came about. By the time you were a young teen, you probably already had a smartphone.
My speculation is that for the majority of you, somewhere around 13 or 14, everybody had a smartphone,
some maybe even younger than that. And more and more data is coming out saying this was a very, very bad idea. For one, you may have heard about the mental health crisis among young people, and that is very, very real. People have tried to figure out: okay, what is going on? Why? Where does it come from?
Okay, what is going on? Why? Where does it come from? And at the end of the day, the best answer is smartphones. It's the impact of smartphones.
There are discussions now: shall we basically have a rule of no smartphones at school? You come to school,
you must deposit your phone. And the objection to that is, parents say: I want my child to have a smartphone in case of a mass shooting.
And this is the saddest, most heartbreaking reason that I can imagine for why you need to have a smartphone. But the evidence is that smartphones have not been good for children.
Now, what AI will do is make the phone even smarter. Just imagine:
your friend is bullying you, and you want to know what to do. Should you ask your parents? Or would you just ask your smartphone: my friend is bullying me, what should I do? So what happens when you're carrying in your pocket this smart device that can answer any question?
What's your motivation to study if everything you need you can just ask your phone? So I think the technology is running way ahead of us. How does it affect us, and how does it affect,
especially, the next generation, people who are, let's say, just now entering elementary school? They're six or seven years old now.
What happens in about six years, when they are entering junior high and they're getting not a smartphone but a super-intelligent phone? What will that do to childhood?
I don't know. Nobody really has the answers. I'm guilty just like anyone else.
In my generation, the attitude has been: technology is good because it's convenient, it makes life easier for us, so more technology is better and more technology is best.
So let's just have more and more technology. And now we're realizing it's not so simple. Technology always has benefits, and it always has costs.
What is the most fundamental technology, the technology that shaped us as humans? It's fire, okay? Look at gorillas: they spend several hours a day chewing,
so they have a huge jaw, because they need to chew their food. We all have tiny, puny jaws. Look even at the jaw of a Neanderthal:
a big jaw for chewing raw meat; raw meat is not easy to chew. We have these tiny jaws because we have cooked food. So fire made us;
humanity discovered fire, and people still die from fire. And now, in fact, with climate change, we have more and more fires. So we cannot live without fire,
and fire is killing us at the same time. So technology is never free. The illusion was: oh well, technology is good because it's convenient. No,
it always has costs. It is never free of consequences. Just so people know,
if you're interested, you can go to YouTube and just put my name in. You'll find many talks, all different talks about technology and the different impacts of technology.
So you'll find many, many talks on YouTube. For example, there is a talk that I proposed to give here, but didn't, on the impact of technology on democracy.
You can find it: search for technology and democracy, and you can find it on YouTube. So somebody asked:
can I give an example of where AI impacts our daily life? Let me give you an example of how AI impacts your daily life. So,
one of the most important aspects of life in general is information processing. Every living creature, even a plant, senses and reacts;
it processes information. You know, a plant, a tree, finds where the water is: I'll send my roots to where the water is, okay? It senses and it reacts.
We have this huge head because we are better at information processing. We love information; we are suckers for information. That's why phones are so addictive to us: because they give us information.
Now, it used to be that you consumed information by going to the library and choosing a book, or you flipped through the newspaper. But more and more we consume information from devices:
phone, tablet, laptop. Now, on the web you can go to, for example, http://www.cs.rice.edu/~vardi and you will find my home page.
But most likely that's not what you're going to do. If you want to find my home page, you go to Google and type Moshe Vardi, and Google will find my home page. So today,
almost all access to information that we have, unless you went to a particular URL or you opened a newspaper, is mediated, and it's always mediated to you by AI.
AI decides what you're going to see. So, for example, let's suppose that we do a Google search on AI and automation. You do it,
I do it, Kaden will do it, and Vincia will do it. Each one of us will get different results. You realize that? Each one of us will get different results.
Because the most important thing for Google is not to give us the best information. The most important thing for Google is to make the most money.
Google decides: how do we make the most money? In fact, there used to be a big fight inside Google: what should we optimize for,
search quality or profits? And the money people wanted profits. So Google will show us different results depending on what Google thinks:
where are we more likely to click on advertisements, which is how they make money. On social media, what do you see?
You see what the algorithm wants you to see. All the information you received today was mediated, unless you went somewhere directly: I want to go to CNN.com and look at their top news item. But if you went to news.google.com, what you see and what I see will be different. So AI has been affecting us in a way that is completely invisible. It feels objective: I'm doing a Google search, it feels like I'm finding information that's out there. No. I'm asking for information and Google decides what to show me. That's not the same as finding information that's out there. It's very different from going to the card catalog in the library. With the card catalog in the library, everybody gets to see the same catalog. With a Google search, everybody gets to see different results. On Facebook, everybody gets to see something different. And that gives these technology companies huge power, and we don't know how they use it. We all think that we are free agents with free will, that we make our own decisions. We don't know how our decisions are influenced by tiny little nudges.
I'll show you this information item, then that information item; each one is like one grain of sand. One of the famous Greek paradoxes is called the sand-heap paradox, the sorites paradox. Put down one grain of sand: it's a grain of sand. Put down another one, and another one. Suppose you have a little sand that is not a heap and you add one grain; it doesn't become a heap. Now, of course, we know that if you put down enough grains of sand, eventually it becomes a heap. How does that happen, if adding one grain never makes it a heap? It's the same with influence. If I just say, "Okay, I'll show you one news item, then another news item, one post, then another post," you'd say, "None of them is going to change my opinions dramatically." But the answer is, each one has a tiny effect on you. What is the cumulative effect of all of them? We don't know.
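One way to picture this cumulative effect is a toy simulation. Assume, purely for illustration, that every item you are shown nudges an "opinion score" by an imperceptible 0.0005, and that the feed leans 60/40 toward one side; the scoring and all the numbers are hypothetical, invented only to show how negligible nudges can add up to a noticeable drift.

```python
# Toy illustration of the "grain of sand" argument: each nudge is negligible,
# but the accumulated drift is not. All numbers here are hypothetical.
import random

random.seed(0)

opinion = 0.0            # hypothetical opinion score; 0 = where you started
nudge_size = 0.0005      # effect of a single item: far too small to notice
bias = 0.6               # assume 60% of shown items lean one way, 40% the other

for _ in range(10_000):  # a long stretch of scrolling
    direction = 1 if random.random() < bias else -1
    opinion += direction * nudge_size

print(f"Effect of any single item: {nudge_size}")
print(f"Drift after 10,000 items:  {opinion:.3f}")
```

On average the drift works out to 10,000 × 0.0005 × (0.6 − 0.4) ≈ 1.0, which is two thousand times the effect of any single item, even though no individual grain ever "made the heap."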
By the way, people think that AI is new. In fact, Google, from the very beginning, used AI to decide which search results to show you. The problem was that in the '90s, you did a search and there were too many results. What should you be shown? So Google said, let's use AI. But at the beginning it was objective; everybody saw the same thing. And then they realized: if you want to make more money, don't show everyone the same thing. Optimize for the likelihood that people will click on advertisements.
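To make concrete what "optimize for the likelihood that people will click" means, here is a minimal sketch in Python. Everything in it is hypothetical: the result list, the per-user click probabilities, and the ad values are invented for illustration and do not describe any real search or ranking system. It simply contrasts sorting results by relevance with sorting them by expected ad revenue, and shows that two users then see the same results in different orders.

```python
# Illustrative sketch only: hypothetical results, click probabilities, and ad
# values; not a description of any real search or ranking system.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float   # how well the page matches the query (0..1)
    ad_value: float    # hypothetical revenue if the user clicks an ad there

results = [
    Result("Survey paper", relevance=0.95, ad_value=0.10),
    Result("Vendor blog",  relevance=0.60, ad_value=1.50),
    Result("News article", relevance=0.75, ad_value=0.80),
]

# Hypothetical per-user predicted probability of clicking an ad on each page.
click_prob = {
    "user_A": {"Survey paper": 0.02, "Vendor blog": 0.30, "News article": 0.10},
    "user_B": {"Survey paper": 0.05, "Vendor blog": 0.08, "News article": 0.40},
}

def rank_by_relevance(results):
    # "Objective" ranking: the same order for everyone.
    return sorted(results, key=lambda r: r.relevance, reverse=True)

def rank_by_expected_revenue(results, user):
    # Personalized ranking: expected revenue = P(click | user, page) * ad value.
    return sorted(results,
                  key=lambda r: click_prob[user][r.title] * r.ad_value,
                  reverse=True)

print([r.title for r in rank_by_relevance(results)])
print([r.title for r in rank_by_expected_revenue(results, "user_A")])
print([r.title for r in rank_by_expected_revenue(results, "user_B")])
```

Same query, same pages: the relevance ordering is identical for everyone, while the revenue-driven orderings differ from user to user and push the most relevant page down.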
For Facebook, what they said they want to do, they call it maximizing user engagement: how to keep you on the app. All social media do the same thing: how to keep you engaged as much as possible. The longer you're engaged, the more likely you are to click on an advertisement. The whole thing is a big advertising machine. So how much are we influenced? How much am I influenced? It's so hard to tell, because everything is one grain of sand. All right.
Sage:
Unfortunately, that's about all the time we have for today. Dr. Vardi, thank you so much for your fantastic presentation today. I personally really liked the comparisons you made to the rise of the internet and the Industrial Revolution as evidence that the current AI revolution will likely have similar impacts.
Also, I want to thank everyone in the audience for all your thoughtful questions during our Q&A session. And finally, please remember that a recording and transcript of Dr. Vardi's talk will be available on the Kid Teach Kid website in just a couple of days.
And we hope you'll join us again for our future educational events.
Have a great night, everyone!
Our vision is to inspire and encourage every kid to share their best knowledge and wisdom with other kids, so we can leverage everyone's strengths and raise each other to new heights. Our platform serves not only as a valuable complement to the school educational system, but also as an ideal training ground for students and instructors to develop articulate communication and essential leadership skills.