How AI Happens

Responsible AI Economics with Katya Klinova & The Partnership on AI

Episode Summary

In recent years, the focus of AI developers has been to implement technologies that replace basic human labor. Talking to us today about why this is the wrong application for AI is Katya Klinova, the Head of AI, Labor, and the Economy at The Partnership on AI. Tune in to find out why replacing human labor doesn't benefit the whole of humanity, and what our focus should be instead. We delve into the threat of "so-so technologies" and what the developer's role should be in approaching ethical vendors and looking after the workers supplying them with data. Join us to find out more about how AI can be used to better the whole of society if there’s a shift in the field’s current aims.


Tweetables:

“Creating AI that benefits all is actually a very large commitment and a statement, and I don't think many companies have really realized or thought through what they're actually saying in the economic terms when they're subscribing to something like that.” — @klinovakatya [0:09:45]


"It’s not that you want to avoid all kinds of automation, no matter what. Automation, at the end of the day, has been the force that lifted living conditions and incomes around the world, and has been around for much longer than AI." — @klinovakatya [0:11:28]


“We compensate people for the task or for their time, but we are not necessarily compensating them for the data that they generate that we use to train models that can displace their jobs in the future.” — @klinovakatya [0:14:49]


"Might we be automating too much for the kind of labor market needs that we have right now?" — @klinovakatya [0:23:14]


“It’s not the time to eliminate all of the jobs that we possibly can. It’s not the time to create machines that can match humans in everything that they do, but that’s what we are doing.” — @klinovakatya [0:24:50]


Links Mentioned in Today’s Episode:

Katya Klinova on LinkedIn

"Automation and New Tasks: How Technology Displaces and Reinstates Labor"

The Partnership on AI: Responsible Sourcing

Episode Transcription

KK: "We are actually using our best talent and best brains to economize on the world's most abundant resource, which is human labor."

[INTRO]

[00:00:11] RS: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers, as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we're about to learn how AI happens. 

[INTERVIEW]

[00:00:41] RS: Joining me today on How AI Happens is the Head of AI, Labor and the Economy over at The Partnership on AI, Katya Klinova. Katya, welcome to the podcast. How are you today?

[00:00:51] KK: I'm great, Rob. Thank you so much for having me. It's a pleasure to be here.

[00:00:56] RS: Yeah, I'm so pleased you're on the podcast. I have so many things I want to ask you. Thank you for broadcasting in from Utah, where you are doing like a half work, half vacation stint. Is that right?

[00:01:06] KK: Yeah. Still work but rolling into vacation starting next week, so almost there.

[00:01:12] RS: That's great. Then when you close your computer at the end of the day, you're in beautiful countryside of Utah. So that's got to be great, right? 

[00:01:18] KK: Beautiful mountains surround me. Just absolutely gorgeous here.

[00:01:22] RS: Well, great. I'm glad you took time away from that to record with me. I really do appreciate it and I have so many questions for you. Before we get too deep in the weeds though, would you mind telling the folks at home a little bit about your background? Then we can get into the organization, The Partnership on AI, and how you wound up there.

[00:01:37] KK: Yeah, sure. My undergrad was in computer science. I was definitely one of those very enthusiastic, newly minted college graduates who thought that technology is going to change the world, and giving access to computing and information is going to be one of the most democratizing forces around the world, equalizing, bringing opportunity to people, which brought me, very naturally, to work in Silicon Valley.

After about a decade there, I looked back thinking, "Wow, there's been a lot of progress in giving people access to computing and information, cheaper computational platforms. Has that changed a lot in terms of inequality?" What you see when you start digging into this question is that there are actually more concerns around inequality, including not only inequality between rich and poor countries, but also inequality within countries that previously have been seen as developmental successes, and a lot of polarization in the labor market, which is attributed in a lot of ways to technological trends and what kinds of technology is coming into the forefront.

So that brought me back to school and gave me motivation to study economics, find people who study poverty and inequality professionally, and find out what was wrong with my 20-year-old theory of change about what technology is going to do in the world, right? That then naturally led me to The Partnership on AI because it's a coalition of now over 100 organizations. It's [inaudible 00:03:12] technology companies. Over 60% are actually human rights and labor organizations, as well as academia, united in the pursuit of putting the responsible AI practices that many, many organizations have published by now into practice, into the day-to-day activities.

The partnership organizes its work by topic areas within responsible AI. So we have groups that work on bias, fairness, accountability, transparency. We have people who work on misinformation, media integrity, people who work on safety topics. My group works on everything that has to do with labor impacts and economic impact. It's been a very exciting journey so far.

[00:03:55] RS: It's almost a classic story, right? You come out of school, fresh-faced: I'm going to change the world. Or just in general, technology's going to change the world. Then after a decade in your case, a few years in the professional sector, you find it is changing the world, but perhaps not in the way you had hoped. For you, what was that gap in terms of what you hoped to see versus what you were seeing?

[00:04:19] KK: Yeah. I think being in San Francisco, where I was at the moment when I was having all of these turning points in my career, is really helpful because the city is just a quintessential illustration of what inequality can be like. You have a class of uber-successful technological elites, as we came to call them, and then people who truly are being left behind in the most cruel ways, I would say.

Then you realize that San Francisco is a microcosm of that, and actually, very often, the amazing productivity gains that technology enables are not being disseminated throughout the economy like they were before. So there is a small sliver of uber-profitable companies and their shareholders. Sometimes, their employees who are directly employed do pretty well. But that doesn't necessarily spread throughout the economy in a way that would benefit people as producers and not only as consumers.

Of course, the consumer story is something that is touted by technology companies very often, and deservedly so. Indeed, we have access to a lot of fantastic free resources, or resources that are seemingly free that we don't have to pay for. Computing became much cheaper. Smartphones became much cheaper in the last 10 years. But then what about benefiting people as producers, actually increasing their productivity and providing access to good jobs that are gainful, that are permanent, that are stable, not precarious?

Here, this is where a lot of questions arise, because jobs get displaced, or jobs get polarized into the very precarious kind of jobs that you really would not want for your kid or for your family member, and the very prestigious and highly paid ones. But those are really concentrated at the top.

[00:06:27] RS: The awareness of that discrepancy led you to go back to school, and then you wound up at The Partnership on AI. You shared a little bit about The Partnership on AI in terms of the different teams there, where you're all pointing your expertise. What would you say is kind of the overall mission of the organization?

[00:06:43] KK: The overall mission, and this is my rendering of it, is to make sure the responsible AI principles do not stay just beautiful words on the paper or on the web pages of the companies that proclaimed them, but actually find their way into practice, into the day-to-day operations of companies and organizations that are developing and deploying AI. And that this is done in a thoughtful way, and done necessarily with the involvement of communities that stand to be impacted and are being impacted already, with their voices not only being heard, but with them being brought to the table in an empowered way.

In my case, that means making sure we hear, we listen to, and we bring to the table the workers who stand to be impacted, and not only workers of technology companies, who right now are in a very privileged position, but also workers who very often get neglected and whose labor is hidden, who are often not employed directly by the technology companies because they are employed through a lot of subsidiaries. Or they are not at all employed in the technology sector in any shape or form, and yet those technologies enter their workspaces and impact the livelihoods and the earning prospects of people around the world who are being impacted but do not have a say in the direction that AI is taking.

[00:08:21] RS: So an organization's AI considerations are, as you said, merely flowery words on a website; they don't necessarily have a practical application. What does that look like? What are some of the ways that companies are typically deficient?

[00:08:34] KK: It really varies by company and also varies by the topic that we are talking about, because on some of the responsible AI topics, we are further ahead than on others. For example, there are now much more established fields of both academic inquiry and company investment when it comes to fairness and transparency, explainability of AI, and safety of AI. Here, definitely, internal practices within companies range, and it's not like there are no issues. There are issues for sure.

But still, it's way further ahead than when it comes to putting into practice some of the economic language that is implicitly or explicitly built into the responsibility principles. For example, when you read "AI should benefit all" or "we're building AI that is intended to benefit all," that is actually a very, very lofty economic statement. Because what you're saying is that we're not going to be generating losers from technological progress, we're only going to be generating winners, and it's going to be this Pareto improvement across the board.

This has rarely happened in economic history. Very often, when there is a change, a technological change or maybe a change to trade policy, there are winners and losers. There is a question of how you do not forget that you've generated losers, and how you help them adjust and transition. So creating AI that benefits all is actually a very large commitment and a statement. I don't think many companies have really realized or thought through what they're actually saying in economic terms when they're subscribing to something like that.

But we are not at all discouraging them from saying that. We believe that AI should be benefiting all. Why do we need technology if it's hurting more people than it's benefiting? So the question is what it means to act on this now.

[00:10:46] RS: When a company says that, what they're typically saying is, in a world where our product is so widespread that everyone's using it, the challenge we're setting out to solve will have been solved, and everyone will be better off. But, again, as you said earlier, that's focused on the consumer and not on the producer. So what are some of the downstream effects on the producer that companies probably don't take into account when they say that our goal is for AI to be helpful to all?

[00:11:17] KK: Yeah, it's an excellent question you're asking. The big thing that economists are worrying about, and there is actually a very nice term coined for it by Acemoglu and Restrepo in their work, is what they describe as the threat of "so-so technology" that is really proliferating. What is so-so technology? It's the kind of technology that automates human labor and human tasks but generates very little productivity gain to compensate for that.

So the way it should work in an idealistic scenario, it's not that you want to avoid all kinds of automation, no matter what. Automation, at the end of the day, has been the force that lifted living conditions and incomes around the world, and has been around for much longer than AI. But you want it to be really productive. So then what happens is, yes, you automate some jobs, but you've made the production of those products or services so much more efficient that the prices are now lower. Or you enable the creation of new products and services.

So people now have more money to spend elsewhere because they're saving here on this product or service. So there will be increased demand for these other, complementary products and services, and there will be additional employment there. People who were displaced in this area where you deployed new technology can now be employed somewhere else.

Now, suppose you've displaced people and, yes, as a capitalist, you've shaved some labor costs by doing that, but you've generated really very little productivity gain compared to when you were producing this same service with human labor. You didn't cut prices. You didn't improve quality very much. There are no freed-up incomes that people now have to spend elsewhere, so there is not going to be this increased employment somewhere else.

With this so-so technology, there is a worry that it is becoming more common. One example that is often given is the self-checkout kiosk. Before, you had the cashier who would check you out. Now, you have to do this work yourself as a consumer and spend your own time there. If it doesn't feel like a lot of time, well, think about the voice support agents and robots that you now need to talk with, sometimes for half an hour, before you get to resolve your problem or get to speak to an actual human.

This is real time that you could have been spending elsewhere, in your actual job, earning something and creating productive output in the economy. So this is kind of like a double impact: you're pushing what used to be paid labor onto the consumer, so it's now unpaid labor, but you're also hurting the consumer because you're taking their time away from productive activities, right?
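(Editor's note: a toy Python sketch of the distinction Katya draws here. Every number, and the reinstatement_rate parameter itself, is purely hypothetical; this illustrates the reasoning, not an actual economic model.)

def net_labor_demand_effect(labor_cost_saved, productivity_gain_passed_on,
                            reinstatement_rate=0.8):
    """Toy comparison: labor spending displaced by automation vs. new labor
    demand supported elsewhere when savings are passed on as lower prices."""
    displaced = labor_cost_saved
    reinstated = productivity_gain_passed_on * reinstatement_rate
    return reinstated - displaced

# "Brilliant" technology: a large productivity gain, mostly passed on.
print(net_labor_demand_effect(labor_cost_saved=1.0, productivity_gain_passed_on=2.0))  # positive

# "So-so" technology: labor costs shaved, almost nothing passed on.
print(net_labor_demand_effect(labor_cost_saved=1.0, productivity_gain_passed_on=0.1))  # negative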

So these kinds of trends are something to think about and really to pay close attention to, but there is a lot more to consider with the particular capabilities that AI is bringing into the workplace. For example, you can monitor your workers completely now, every click that they make, even every movement that they make. That data can be used to automate their work in the future. Now that you are stripping them of the know-how that they possess, of their knowledge, by observing everything that they do and training your models on that, doesn't this make the way we compensate labor outdated? Because we compensate people for the task or for their time, but we're not necessarily compensating them for the data that they generate, that we use to train models, that can displace their jobs in the future.

Should they be having a share of that? Should they become shareholders of that future product? This just doesn't exist right now and this is something that can come to the forefront, as more of these technologies get deployed, and more of the surveillance really is penetrating the workplace.

[00:15:43] RS: Won't there always be so-so technologies, so long as there are capitalists? The incentive there is just to make a little bit more money, right?

[00:15:52] KK: Yeah. No, completely. There will always be, I think, so-so technologies. The question is, can we make sure there are more of the brilliant technologies that are really bringing breakthroughs in terms of productivity gains? Here, there is a lot we can do, also in terms of the private sector and the entrepreneurs that are entering these sectors, the innovators: what do they aspire to do, what do they consider to be cool? What gains them bragging rights, and what do investors want to invest in because they see it as high potential? Is it these petty innovations that just save some labor costs? Or is it something transformational that really brings productivity gains across the board?

It's also a policy question, because if you continue, which is the case now, to tax labor way, way heavier than capital, then of course, you're incentivizing a lot of this kind of technology, where people think about every little possible way to employ fewer people and shave labor costs.

[00:17:01] RS: So on a more micro level, on the level of the individual AI practitioner, what are some questions they can ask themselves about their work? Perhaps when they go out to vendors to source data and that sort of thing, how can they ensure that they are not adversely affecting employment terms and that they're choosing vendors ethically?

[00:17:24] KK: Yeah, that's a great question. So outside of asking, "Am I creating a so-so technology or a transformational technology that's going to bring productivity gains and augment humans instead of replacing them?", there are also very important questions that come even before that, and these are questions about where am I sourcing my data from, who is labeling this data, and what are the working conditions. What are the payment terms that these professionals are facing? How am I influencing that?

These are questions that very often do not get asked, because what we observe is that practitioners usually think about it this way: "I need a labeled data set. How am I going to get it labeled?" They don't necessarily think about it as, "How am I going to hire people to label my data set?" But one way or another, they end up hiring them, directly or indirectly, whether they go to a platform and set up payments for tasks, or they work with a managed service provider. There are still very real people, on the other end, actually labeling this data.

So thinking about just the per-label price that I'm setting on the platform: how many of these labels do you actually need to produce per hour, per day, for that to add up to a living wage at the end of the day? Does that take into account taking a break for the bathroom, for lunch, being a human? Am I thinking about whether the instructions I'm writing are too long or too short, too detailed? Do people get paid for the time that they spend reading those instructions?

If I'm trying to save that time for them, am I making the instructions so short that I'm going to end up rejecting many of the labels and not paying for that work? That's actually not because the labeling professionals did a poor job; it's because I did a poor job writing instructions that are too short. So all of these questions really end up having an impact on the working conditions of the people on the other end, and we encourage practitioners to think about them. We tried to put together a guide for that, almost like a reminder of the points in your data pipeline work where you're actually making decisions that end up impacting data labeling professionals.

So we published that as a white paper earlier this year, and it's available to everyone who is interested at our website. I'm going to share the link with you for the show notes, but it is also partnershiponai.org/responsible-sourcing.
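(Editor's note: a minimal Python sketch of the per-label arithmetic Katya describes above. Every number, parameter name, and the $7.25 threshold is hypothetical; a real check would use the labeler's actual throughput and local living-wage figure.)

def effective_hourly_wage(price_per_label, labels_per_hour,
                          unpaid_break_minutes_per_hour=10.0,
                          unpaid_instruction_minutes=30.0,
                          paid_task_hours=8.0):
    """Estimate what a labeler actually earns per hour once unpaid breaks
    and unpaid instruction-reading time are counted in."""
    earnings = price_per_label * labels_per_hour * paid_task_hours
    total_hours = (paid_task_hours * (1 + unpaid_break_minutes_per_hour / 60.0)
                   + unpaid_instruction_minutes / 60.0)
    return earnings / total_hours

# $0.02 per label at 400 labels/hour looks like $8.00/hour on paper...
wage = effective_hourly_wage(price_per_label=0.02, labels_per_hour=400)
print(f"effective wage: ${wage:.2f}/hour")  # ~$6.51 once unpaid time is counted

LIVING_WAGE = 7.25  # placeholder threshold only
print("meets living wage:", wage >= LIVING_WAGE)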

[00:20:20] RS: That's great. Yeah, we'll definitely link to that in the show notes. It just seems like a great – almost like a checklist you can have as you go out to source vendors, when you think of the downstream effects of your work, just reminders of all of these questions you can ask. For example, I had no idea that when a data labeler has to get up to speed with a new client, or there's a new request, the time spent reading instructions may be uncompensated, which is totally unfair because it's labor, right? So apparently – I'm realizing right now – I need to ask, I need to be deliberate, to make sure that's not the case, so that you come by this data honestly, right?

[00:20:56] KK: Completely, yeah. I think there are definitely companies that do pay for it. But there are definitely those platforms where this would not be paid. So if a practitioner is not thinking about these aspects, there is actually, right now, very little chance that data labelers on their project would end up having a great experience. It doesn't happen automatically today, unfortunately, unless you're working with a really good provider. I'm really glad that somebody is asking these questions and thinking about this. Sama has been one of the very first partners we started working with when we were setting up the responsible sourcing project, and it's been a really great collaboration.

[00:21:39] RS: I love to hear that. For the folks at home, I promise, I didn't ask Katya to say that.

[00:21:43] KK: Yeah. You absolutely didn't, but we absolutely love working with Sama, one of our greatest partners. 

[00:21:49] RS: You are certainly – from what I know about you, Katya, you're not the kind of person to make that kind of thing up either, so I do appreciate you sharing that. I wanted to ask more about some of the downstream effects of all of this automation that's happening. I speak to people on this show, and a lot of the AI technology being developed is aimed at making people more efficient. Can we automate your low-leverage activities so you can focus only on the high-leverage activities?

At the individual professional level, I can see how that would be valuable. But as it gets applied at scale, you will see this replacement of traditionally human labor with technology and AI. I'm curious how you foresee this ongoing automation of human labor by AI affecting employment writ large. Does this shift represent a different change than we've seen historically from various economic trends?

[00:22:44] KK: Yeah. I think very reasonable people would disagree about it, and some would say, "Look, we've had automation for so long, prior to even the first conversations about AI, and it's just normal that some work gets automated." New professions come in their place, and there is this transition. That can be difficult in the moment, but it's ultimately a good thing, and I agree with that.

The question here is, might we be automating too much for the kind of labor market needs that we have right now? Because if you think about it, the main source of income for the majority of the human population around the world is their labor. They're selling their labor. This is how they earn a living. So it is a pretty important thing. We cannot just be automating all over the place without thinking about how people are going to be getting their income.

What recent research shows is that there is actually a change in the pace of automation and how it relates to the creation of new tasks for humans. Before, everyone who would tell you, "Well, this is just normal. Some jobs get destroyed. Some new ones get created. It balances out in the end," would be right. In the four decades following World War Two, that's exactly what happened. The volumes matched almost perfectly. Then something changed in the last three decades, and automation has really picked up.

Now, it's outpacing the creation of new tasks for humans by quite a bit. The research I'm referencing is a paper by Acemoglu and Restrepo that I can also share for the show notes. So what that means is that labor demand, the demand for human labor, can be declining overall and faster than we need. Because right now, we still have eight-plus billion humans in need of jobs around the world, right? The population hasn't peaked yet. So it's not the time to eliminate all of the jobs that we possibly can. It's not the time to create machines that can match humans in everything that they do.

But that's what we are doing. We are actually using our best talent and best brains to economize on the world's most abundant resource, which is human labor. That, if you ask me, is just wasted innovation. There are so many actual scarcities in the world that we need to economize on, that we need technological innovation for, when it comes to health care, climate change, so many things. But human labor is just not one of them. So why, you would ask, do we have the entire field of AI using human-parity benchmarks as the main goals to strive for: to create algorithms and machines that recognize images as well as humans, that understand speech as well as humans, that translate speech to text as well as humans?

These are just potentially very wrong goals for us to pursue, and we need to be thinking about how to redirect AI away from this excessive focus on automation and toward a focus on complementarity. How do we set targets around human-plus-AI, human-plus-machine teams' performance and their productivity?

[00:26:14] RS: Yes. You just blew my mind for the second time this episode, talking about how we're trying to recreate the most abundant resource we have, which is human labor. When you think about making sure that work is distributed and that these eight billion humans do have a job to do, perhaps it goes hand in hand with AI: rather than replacing jobs, it can allow individuals more economic access. Is that, in a microcosm, the goal of the Shared Prosperity Initiative you are undertaking at The Partnership on AI?

[00:26:45] KK: Yeah, completely. That is the Shared Prosperity Initiative. It tries to think about how we address, and hopefully prevent, the shocks to labor demand and the shocks to job availability, distribution, and quality that AI can bring. We really emphasize these questions around the distribution and quality of jobs. Because if you've created a bunch of jobs for [inaudible 00:27:11] but eliminated a lot of jobs in other areas, or you've eliminated a lot of jobs at lower skill levels that are more accessible to many people around the world and created more jobs for people with advanced degrees, that does not balance out, because these are not the same populations. These are not the same people who can just transition from one thing to another as easily.

So thinking about the unevenness of these impacts and how we can correct for them, how we can be creating good jobs with AI for the labor force that we actually have and the skills that we actually have, not the skills we wish we had in some amazing and wonderful but totally unattainable future in which everyone is reskilled very easily at no cost, is the big challenge, and something that we're striving for with the Shared Prosperity Initiative.

[00:28:11] RS: That's great. Katya, I have learned so much from you today. We are creeping up on optimal podcast length here, so I'm going to have to slide into home. At this point, I will just say thank you so much for sharing your expertise and all of your knowledge. I've learned a ton from you today, and I'm really pleased you joined the podcast. This has been a great episode.

[00:28:28] KK: Thank you so much for having me, Rob. It's been a pleasure.

[END OF INTERVIEW]

[00:28:35] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, e-commerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.

[END]