So to your first question, I think you’re right. Policymakers really should define the guardrails, but I don’t think they need to do that for everything. I think we need to pick the areas that are most sensitive. The EU has called them high-risk. And maybe we can take from that some model that helps us think about what is high-risk and where we should spend more time — and potentially, for policymakers, where we should spend that time together.
I’m a big fan of regulatory sandboxes when it comes to co-design and co-development. I have an article coming out in an Oxford University Press book about an incentive-based rating system that I can talk about in a moment. But on the other hand, I also think all of you have to take your reputational risk into account.
As we move into a much more digitally advanced society, it’s incumbent upon developers to do their due diligence too. You can’t afford as a company to go out and put out an algorithm, or an autonomous system, that you think is the best idea, and then land on the front page of the newspaper. Because that degrades consumer trust in your product.
And what I’m saying is, you know, on both sides I think it’s a conversation worth having. We have certain guardrails when it comes to facial recognition technology, because we don’t have the technical accuracy when it’s applied across all population groups. When it comes to disparate impact on financial products and services, there are great models that I’ve found in my work, in the banking industry, where they actually have triggers because they have regulatory bodies that help them understand which proxies actually produce disparate impact. And there are areas where we’re just now seeing this, like the housing and appraisal market, where AI is being used to replace subjective decision-making but is contributing more to the kind of discriminatory valuation of homes that we’re seeing. There are certain cases where we really need policymakers to impose guardrails, but more than that, to be proactive. I tell policymakers all the time, you can’t blame the data scientists. If the data is horrible.
Anthony Green: Right.
Nicole Turner Lee: Put more money into R&D. Help us create better data sets that are not over-represented in certain areas or under-representative of minority populations. The key is, it has to work in collaboration. I don’t think we’re going to have a good winning solution if policymakers are solely leading this, or if data scientists are leading it on their own in certain areas. I think you really need people working together and collaborating on what those principles are. We create these models. Computers don’t. We know what we’re doing with these models when we’re creating algorithms or autonomous systems or targeted ads. We know! We in this room cannot sit back and say we don’t understand why we use these technologies. We know, because they actually have precedent for how they’ve been expanded in our society. But we need some accountability. And that’s really what I’m trying to get at. Who’s holding us accountable for these systems we’re creating?
It’s interesting, Anthony, that over the last few weeks, like many of us, I’ve watched the conflict in Ukraine. My daughter, who’s only 15, has come to me with all sorts of TikToks and other things she sees, saying, “Hey Mom, did you know this is happening?” And I’ve had to sort of pull myself back, because I’ve gotten really engaged in the conversation, not realizing in some ways that once I go down that road with her, I’m going deeper and deeper into that well.
Anthony Green: Yes.