
AI might not steal your job, but it could change it

(This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.)

Advances in artificial intelligence tend to be accompanied by anxiety about jobs. This latest wave of AI models, like OpenAI’s ChatGPT and GPT-4, is no different. First came the launch of the systems. Now we’re seeing the predictions about automation.

In a report published this week, Goldman Sachs predicted that AI advances could cause 300 million jobs, representing roughly 18 percent of the global workforce, to be automated in some way. OpenAI also recently released a study of its own with the University of Pennsylvania, which claimed that ChatGPT could affect more than 80 percent of jobs in the US.

The numbers sound scary, but the wording of these reports can be frustratingly vague. “Affect” can mean a whole range of things, and the details are murky.

It’s no surprise that people whose jobs deal with language could be particularly affected by large language models like ChatGPT and GPT-4. Let’s take one example: lawyers. I’ve spent the past two weeks looking at the legal industry and how it might be affected by new AI models, and what I found is both encouraging and worrying.

The antiquated, slow-moving legal industry has been a candidate for technological disruption for some time. In an industry with a labor shortage and a need to deal with reams of complex documents, a technology that can quickly understand and summarize texts could be immensely useful. So how should we think about the impact these AI models might have on the legal industry?

First, recent AI advances are particularly well suited to legal work. GPT-4 recently passed the Uniform Bar Exam, the standardized test required to license lawyers. However, that doesn’t mean AI is ready to be a lawyer.

The model could have been trained on thousands of practice tests, which would make it an impressive test taker but not necessarily a good lawyer. (We don’t know much about GPT-4’s training data, because OpenAI hasn’t released that information.)

However, the system is very good at analyzing text, which is extremely important to lawyers.

“Language is the coin of the realm in the legal industry and in the field of law. All roads lead to a document. Either you have to read, consume, or produce a document … that’s really the currency that folks trade in,” says Daniel Katz, a law professor at Chicago-Kent College of Law who conducted GPT-4’s exam.

Second, legal work has lots of rote, repetitive tasks that could be automated, such as searching for applicable laws and cases and pulling together relevant evidence, according to Katz.

One of the researchers on the bar exam study, Pablo Arredondo, has been secretly working with OpenAI to use GPT-4 in its legal product, Casetext, since this fall. Casetext uses AI to conduct “document review, legal research memos, deposition preparation, and contract analysis,” according to its website.

Arredondo says he’s grown more and more excited about GPT-4’s potential to support attorneys the more he uses it. He says the technology is “unbelievable” and “nuanced”.

AI in law isn’t a new trend, though. It has already been used to review contracts and predict legal outcomes, and researchers have recently explored how AI could help get laws passed. Recently, the consumer rights company DoNotPay considered arguing a case in court using arguments written by AI, known as the “robot lawyer,” delivered through an earpiece. (DoNotPay did not go through with the stunt and is being sued for practicing law without a license.)

Despite these examples, these kinds of technologies still haven’t achieved widespread adoption in law firms. Could that change with these new large language models?

Third, lawyers are used to reviewing and editing work.

Large language models are far from perfect, and their output would have to be closely checked, which is burdensome. But lawyers are used to reviewing documents created by someone – or something – else. Many are trained in document review, meaning that greater use of AI, with a human in the loop, could be relatively easy and practical compared with adopting the technology in other industries.

The big question is whether lawyers can be convinced to trust a system rather than a junior lawyer who spent three years in law school.

Finally, there are limitations and risks. GPT-4 sometimes produces very convincing but incorrect text, and it will misuse source material. Once, Arredondo says, GPT-4 had him doubting the facts of a case he had worked on himself. “I told it, You’re wrong. I argued this case. And the AI said, You can sit there and brag about the cases you worked on, Pablo, but I’m right and here’s the proof. And then it gave a URL to nothing.” Arredondo adds, “It’s a little sociopath.”

Katz says it’s essential that humans stay in the loop when using AI systems, and he highlights lawyers’ professional obligation to be accurate: “You should not just take the outputs of these systems, not review them, and then give them to people.”

Others are even more skeptical. “This is not a tool that I would trust with making sure important legal analysis is updated and appropriate,” says Ben Winters, who leads the Electronic Privacy Information Center’s projects on AI and human rights. Winters characterizes the culture of generative AI in the legal field as “overconfident and unaccountable.” It’s also been well documented that AI is plagued by racial and gender bias.

There are also some high-level, long-term considerations. If lawyers get less practice doing legal research, what does that mean for expertise and oversight in the field?

But we’re a long way from that—for now.

This week, my colleague David Rotman, Tech Review’s editor at large, wrote a piece analyzing the new age of AI’s impact on the economy—especially jobs and productivity.

“The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth.”

What I’m reading this week

Several big names, including Elon Musk, Gary Marcus, Andrew Yang, and Steve Wozniak, along with more than 1,500 others, signed a letter sponsored by the Future of Life Institute calling for a moratorium on large AI experiments. Quite a few AI experts agree with the proposition, but the reasoning behind it (avoiding an AI armageddon) has drawn plenty of criticism.

The New York Times has said it won’t pay for Twitter verification. It’s another blow to Elon Musk’s plan to make Twitter profitable by charging for blue checkmarks.

On March 31, Italian regulators temporarily banned ChatGPT over privacy concerns. Specifically, the regulators are investigating whether the way OpenAI trained its models on user data violates GDPR.

Lately, I’ve been hooked on some longer culture stories. Here’s a sampling of my recent favorites:

  • My colleague Tanya Basu wrote a great story about people platonically sleeping together in VR. It’s part of a new age of virtual social behavior that she calls “cozy but creepy”.
  • In the New York Times, Steven Johnson has an endearing, if haunting, profile of Thomas Midgley Jr., who created two of the most climate-damaging inventions in history.
  • And Wired’s Jason Kehe spent months interviewing the most famous sci-fi author you’ve probably never heard of in this sharp and deep look into the mind of Brandon Sanderson.

What I learned this week

“Snacking” on the news, swiping through headlines or previews online, seems to be a pretty poor way to learn about current events and political news. A peer-reviewed study by researchers at the University of Amsterdam and the Macromedia University of Applied Sciences in Germany found that people who snack on news more than others gain little from their high levels of exposure, and that snacking leads to “significantly less learning” than more dedicated news consumption. In other words, how people consume information matters more than how much of it they see. The study builds on earlier research showing that while the number of people who encounter news each day is growing, the amount of time they spend with each encounter is shrinking. Neither trend is good for an informed public.



