Implement a multidisciplinary strategy with embedded responsible AI
Accountability and monitoring must be ongoing, because AI models can change over time; indeed, the hype surrounding deep learning, as opposed to conventional data tools, is predicated on its flexibility to adapt and adjust in response to shifting data. But that flexibility can lead to problems like model drift, in which a model’s performance—its predictive accuracy, for example—degrades over time, or begins to exhibit flaws and biases, the longer it operates in production. Explainability techniques and human-in-the-loop oversight systems not only help data scientists and product owners build higher-quality AI models from the start, but can also be used through post-deployment monitoring systems to ensure models do not degrade in quality over time.
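To make the monitoring idea concrete, the minimal sketch below shows one way a team might track a deployed model’s rolling accuracy against its launch baseline and flag possible drift for human review. The class name, window size, and tolerance threshold here are hypothetical illustrations, not a description of any particular firm’s tooling.

```python
# Minimal sketch of post-deployment drift monitoring (illustrative only;
# the window size, tolerance, and alerting step are assumptions).
from collections import deque

class DriftMonitor:
    """Tracks rolling prediction accuracy and flags possible model drift."""

    def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy      # accuracy measured at deployment time
        self.tolerance = tolerance             # allowed drop before raising a flag
        self.outcomes = deque(maxlen=window)   # 1 = correct prediction, 0 = incorrect

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else self.baseline

    def drifted(self) -> bool:
        # Only judge once the window holds enough labeled outcomes.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return self.rolling_accuracy() < self.baseline - self.tolerance


monitor = DriftMonitor(baseline_accuracy=0.92)
# In production, each scored case whose true outcome later becomes known
# would be fed back in; a human reviewer is looped in when drift is flagged.
for prediction, actual in [(1, 1), (0, 1), (1, 1)]:
    monitor.record(prediction, actual)
if monitor.drifted():
    print("Rolling accuracy below baseline - route recent cases for human review.")
```

In practice, teams often supplement this kind of accuracy check with statistical tests on the input data distribution itself, since label feedback can arrive with a long delay.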
“We don’t just focus on training models or making sure our models are unbiased; we also focus on all aspects of the machine learning development lifecycle,” said Cukor. “It’s a challenge, but this is the future of AI. Everybody wants to see that level of discipline.”
Prioritize responsible AI
There is a clear consensus across the business world that responsible AI (RAI) is important, not just a nice-to-have. In PwC’s 2022 AI Business Survey, 98% of respondents said they have at least some plan to make AI accountable, through measures including improving AI governance, tracking and reporting on AI model performance, and ensuring that decisions are understandable and easy to explain.
Despite these aspirations, some companies have struggled to implement RAI. The PwC poll found that fewer than half of respondents had planned specific RAI actions. Another survey, by MIT Sloan Management Review and Boston Consulting Group, found that while most companies see RAI as a way to mitigate the technology’s risks—including risks related to safety, bias, fairness, and privacy—they admit to failing to prioritize it, with just 56% saying it is a top priority and only 25% having a fully mature program in place. Challenges can stem from organizational complexity and culture, a lack of consensus on ethical practices or tools, insufficient staff capacity or training, regulatory uncertainty, and integration with existing data and risk practices.
For Cukor, RAI is not optional, despite significant operational challenges. “For many people, investing in the safeguards and practices that enable responsible innovation at speed feels like a trade-off. JPMorgan Chase has an obligation to our customers to innovate responsibly, which means carefully weighing challenges across issues like sourcing, sustainability, privacy, power, explainability, and business impact.” He argues that investing early in proper controls and risk-management practices across all stages of the data-and-AI lifecycle will allow the company to accelerate innovation and ultimately serve as a competitive advantage.
For RAI initiatives to succeed, RAI needs to be embedded in the organization’s culture rather than bolted on as a technical checkbox. Implementing this cultural change requires the right skills and mindset. A poll by MIT Sloan Management Review and Boston Consulting Group found that 54% of respondents have difficulty finding RAI expertise and talent, with 53% citing a lack of training or knowledge among current staff.
Finding talent is easier said than done. RAI is a nascent field, and its practitioners have noted its distinctly multidisciplinary nature, with contributions coming from sociologists, data scientists, philosophers, designers, policy experts, and lawyers, to name just a few.
“Given this unique context and the novelty of our field, it is rare to find individuals with all three elements: technical AI/ML skills, ethics expertise, and expertise in the financial sector,” said Cukor. “This is why RAI in finance must be a multidisciplinary effort with collaboration at its core. To get the right mix of talents and perspectives, you need to hire experts across different fields so they can have the hard conversations and surface issues that others might overlook.”
This article is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting or regulatory advice. The opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported or cited findings is not the responsibility of JPMorgan Chase & Co.
This content is produced by Insights, the custom content arm of MIT Technology Review. It was not written by the editorial staff of the MIT Technology Review.