
AI saves whales, steadies gaits and shifts traffic • TechCrunch

Research in machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers, particularly in but not limited to artificial intelligence, and explain why they matter.

Over the past few weeks, researchers at MIT have detailed their work on a system that tracks the progression of Parkinson's patients by continuously monitoring their gait speed. Elsewhere, Whale Safe, a project led by the Benioff Ocean Science Laboratory and partners, launched a buoy equipped with an AI-powered sensor in an experiment to prevent ships from striking whales. Other corners of ecology and academia have also seen advances powered by machine learning.

MIT's Parkinson's tracking effort aims to help clinicians overcome the challenges of treating an estimated 10 million people with the disease globally. Typically, the motor skills and cognitive functions of Parkinson's patients are assessed during physical exams, but these can be skewed by outside factors such as fatigue. Add to that the fact that traveling to a clinic is a daunting prospect for many patients, and their situation grows even more complicated.

As an alternative, the MIT team proposes an at-home device that gathers data using radio signals reflecting off patients' bodies as they move around their homes. About the size of a Wi-Fi router, the device runs all day, using an algorithm to pick out the patient's signal even when other people are moving through the room.

In a study published in the journal Science Translational Medicine, the MIT researchers showed that their device was able to effectively track the progression and severity of Parkinson's in dozens of participants in a pilot study. For example, they showed that the gait speed of people with Parkinson's declined almost twice as fast as that of people without the disease, and that daily fluctuations in a patient's walking speed corresponded to how well they were responding to their medication.
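The kind of trend the study tracked can be sketched as a simple least-squares fit over daily gait-speed readings. This is an illustrative stand-in, not MIT's actual analysis; the `gait_decline_rate` helper and its units are assumptions.

```python
import numpy as np

def gait_decline_rate(days, speeds):
    """Least-squares slope of gait speed over time (e.g., m/s per day).

    A steeper negative slope would suggest faster disease progression;
    day-to-day residuals around the fit could reflect medication response.
    """
    slope, _intercept = np.polyfit(days, speeds, 1)
    return slope
```

Feeding in one averaged speed per day would yield a per-day decline rate that could be compared across patients or against a healthy baseline.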

Moving from health care to the plight of whales, the Whale Safe project, with its stated mission of "utilizing best-in-class technology with best-practice conservation strategies to create solutions to reduce risk to whales," in late September deployed a buoy equipped with an onboard computer that records whale sounds using an underwater microphone. An AI system detects the calls of specific species and relays the results to researchers, so that the location of the animal (or animals) can be estimated by corroborating the audio with water conditions and local records of whale sightings. The whales' location is then communicated to nearby ships so they can reroute as necessary.
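As a rough illustration of how acoustic detection can begin, a first-pass filter might check how much of a recording's energy falls in a species' typical call band (large baleen whale calls, for instance, sit largely below 100 Hz). The buoy's real classifier is far more sophisticated; this function and its thresholds are purely hypothetical.

```python
import numpy as np

def band_energy_fraction(signal, sample_rate, low_hz, high_hz):
    # Fraction of spectral energy inside a frequency band -- a crude
    # stand-in for the first filtering stage of a call detector.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[band].sum() / spectrum.sum()
```

A recording whose energy concentrates in the expected band could then be passed to a heavier species classifier rather than being discarded.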

Ship collisions are a major cause of death for whales, many species of which are endangered. According to research conducted by the nonprofit Friend of the Sea, ship strikes kill more than 20,000 whales every year. That's devastating to local ecosystems, as whales play a significant role in capturing carbon from the atmosphere; a single great whale can sequester around 33 tons of carbon dioxide on average.


Image credits: Benioff Ocean Science Laboratory

Whale Safe currently has buoys deployed in the Santa Barbara Channel near the ports of Los Angeles and Long Beach. In the future, the project aims to place buoys in other coastal areas including Seattle, Vancouver and San Diego.

Forest conservation is another area where technology is being put to work. Aerial surveys of forestland using lidar are helpful in estimating growth rates and other metrics, but the data they generate isn't always easy to interpret. Lidar point clouds are undifferentiated elevation and distance maps; the forest reads as one large surface rather than a collection of individual trees. Turning the data into useful measurements still tends to require people on the ground.

Researchers at Purdue built an algorithm (not quite AI, but we'll allow it this time) that turns large volumes of 3D lidar data into individually segmented trees, allowing not just canopy and growth data to be collected but also a good estimate of the actual trunks. It works by calculating the most efficient path from a given point down to the ground, essentially the reverse of the route nutrients take up through a tree. The results are quite accurate (having been checked against live inventories) and could contribute to far better monitoring of forests and resources in the future.
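The shortest-path idea can be sketched as: build a nearest-neighbor graph over the point cloud, run Dijkstra's algorithm from candidate trunk-base points, and assign each point to the base it can reach most cheaply. This is a simplified sketch of the general technique, not Purdue's implementation; `segment_trees`, its parameters, and the assumption that base points are already known are all illustrative.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def segment_trees(points, base_indices, k=8):
    """Assign each lidar point to the trunk base it can reach most cheaply.

    points: (N, 3) array of x, y, z coordinates.
    base_indices: indices of candidate trunk-base (ground) points.
    """
    n = len(points)
    kdtree = cKDTree(points)
    dists, nbrs = kdtree.query(points, k=k + 1)  # first neighbor is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = nbrs[:, 1:].ravel()
    weights = dists[:, 1:].ravel()
    graph = coo_matrix((weights, (rows, cols)), shape=(n, n))
    # Shortest-path cost from every base to every point; each point joins
    # the base with the cheapest route, mimicking the reverse-nutrient-path intuition.
    cost = dijkstra(graph, directed=False, indices=base_indices)
    return np.argmin(cost, axis=0)
```

Because path cost accumulates through the cloud, points in one crown are "closer" (through their own trunk) to their own base than to a neighboring tree's, even where canopies touch.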

Self-driving cars are hitting our streets with more frequency these days, even if they're still essentially beta tests. As their numbers grow, how should policymakers and civil engineers accommodate them? Carnegie Mellon researchers released a policy brief that makes some interesting arguments.

Diagram showing how collaborative decision making, in which some cars opt for a longer route, actually makes traffic faster for most cars.

The key difference, they argue, is that autonomous vehicles can drive "altruistically," meaning they deliberately accommodate other drivers, for instance by always letting them merge ahead. While this kind of behavior can be exploited, the researchers argue that at a policy level it should be rewarded, and that AVs should be given access to things like toll roads and HOV and bus lanes, since they won't use them "selfishly."
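The intuition that a few altruistic reroutes can speed things up for everyone can be shown with a toy congestion model. The numbers and the model itself are entirely hypothetical, not taken from the CMU brief.

```python
def average_travel_time(n_cars, n_divert):
    # Toy model: a short route that slows down as it fills up, and a
    # long route with a fixed travel time. Diverting some cars to the
    # long route costs those cars time but lowers the overall average.
    n_short = n_cars - n_divert
    t_short = 10 + n_short / 10.0  # minutes, congestion-sensitive
    t_long = 20.0                  # minutes, fixed
    return (n_short * t_short + n_divert * t_long) / n_cars
```

With 100 cars all on the short route, everyone averages 20 minutes; diverting 50 of them drops the fleet average to 17.5 minutes, even though the diverted cars individually take longer.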

They also recommend that planning agencies take a holistic view when making decisions involving other modes of transportation, such as bikes and scooters, and look at how communication between AVs and between fleets could be improved. You can read the full 23-page report here (PDF).

Moving from traffic to translation, Meta last week announced a new system, Universal Speech Translator, that's designed to interpret unwritten languages like Hokkien. As an Engadget piece on the system notes, thousands of spoken languages have no written component, posing a problem for most machine learning translation systems, which typically need to convert speech to written words before translating to the new language and reverting the text back to speech.

To get around the lack of labeled examples of written language, Universal Speech Translator converts speech into "acoustic units" and then generates waveforms from them. Currently, the system is rather limited in what it can do; it allows speakers of Hokkien, a language commonly spoken in southeastern mainland China, to translate to English one full sentence at a time. But the Meta team behind Universal Speech Translator believes it will continue to improve.
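At its simplest, the acoustic-unit idea amounts to quantizing each frame of learned audio features against a codebook of centroids and collapsing repeats, yielding a discrete sequence a translation model can consume. This is a generic sketch of unit discretization, not Meta's pipeline; the feature extractor and codebook are assumed to exist elsewhere.

```python
import numpy as np

def features_to_units(features, codebook):
    # features: (T, D) per-frame audio feature vectors (e.g., from a
    # self-supervised speech encoder); codebook: (K, D) learned centroids.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    units = dists.argmin(axis=1)  # nearest centroid per frame
    # Collapse consecutive duplicates, as unit-based pipelines commonly do.
    keep = np.insert(np.diff(units) != 0, 0, True)
    return units[keep]
```

The resulting unit IDs play the role that written tokens play in a conventional pipeline, which is what lets the approach skip text entirely.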

Illustration for AlphaTensor

Elsewhere in the AI field, researchers at DeepMind detailed AlphaTensor, which the Alphabet-backed lab claims is the first AI system for discovering new, efficient and "provably correct" algorithms. AlphaTensor was designed specifically to find new techniques for matrix multiplication, a core operation in the way modern machine learning systems work.

To leverage AlphaTensor, DeepMind converted the problem of finding matrix multiplication algorithms into a single-player game where the "board" is a three-dimensional array of numbers called a tensor. According to DeepMind, AlphaTensor learned to excel at it, improving on an algorithm first discovered 50 years ago and discovering new algorithms with "state-of-the-art" complexity. One algorithm the system discovered, optimized for hardware like Nvidia's V100 GPU, was 10% to 20% faster than the algorithms commonly used on the same hardware.
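For context, the 50-year-old algorithm in question is Strassen's 1969 scheme, which multiplies two 2x2 matrices with seven scalar multiplications instead of the naive eight; AlphaTensor searches for savings of the same kind at larger sizes. A minimal sketch of Strassen's construction:

```python
import numpy as np

def strassen_2x2(A, B):
    # Strassen's 1969 construction: 7 multiplications instead of the naive 8.
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return np.array([[p5 + p4 - p2 + p6, p1 + p2],
                     [p3 + p4, p1 + p5 - p3 - p7]])
```

Because the entries can themselves be matrix blocks, applying the scheme recursively cuts multiplication below cubic cost, to roughly O(n^2.81); finding tensor decompositions with even fewer multiplications is exactly the game AlphaTensor plays.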


