
Perceptron: Face-tracking ‘earables’, analog AI chips, and accelerated particle accelerators


Research in machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers – particularly in, but not limited to, artificial intelligence – and explain why they matter.

An ‘earable’ that uses sonar to read facial expressions was among the projects that caught our attention over the past few weeks. So did ProcTHOR, a framework from the Allen Institute for AI (AI2) that procedurally generates environments that can be used to train real-world robots. Among the other highlights, Meta created an AI system that can predict a protein’s structure from a single amino acid sequence, and researchers at MIT developed hardware they claim offers faster computation for AI with less power.

EarIO, developed by a team at Cornell, looks like a bulky pair of headphones. A speaker sends acoustic signals toward the side of the wearer’s face, while a microphone picks up the faint echoes created by the nose, lips, eyes, and other facial features. These “echo profiles” let the device capture movements such as raised eyebrows and glances, which an AI algorithm translates into complete facial expressions.
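To make the echo-profile idea concrete, here is a minimal sketch; it is not Cornell’s actual pipeline, and the chirp, the synthetic recordings, and the expression labels are all invented for illustration:

```python
# Illustrative sketch only, not Cornell's EarIO pipeline. Assumes an
# "echo profile" is the cross-correlation between the emitted chirp and
# the microphone recording, and that expressions are discrete labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def echo_profile(emitted: np.ndarray, received: np.ndarray) -> np.ndarray:
    """Correlate the known chirp against the recording; peaks correspond
    to reflections off facial features at different distances."""
    corr = np.correlate(received, emitted, mode="valid")
    return np.abs(corr) / (np.linalg.norm(emitted) + 1e-9)

# Synthetic stand-in data: 200 recordings, 4 expression classes.
chirp = np.sin(np.linspace(0, 40 * np.pi, 256))
X = np.stack([echo_profile(chirp, rng.normal(size=1024)) for _ in range(200)])
y = rng.integers(0, 4, size=200)  # e.g., neutral / smile / raised brows / wink

# A small model maps echo profiles to expressions, mirroring the idea of
# training on a short stretch of per-user facial data before use.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)
print("predicted expression class:", clf.predict(X[:1])[0])
```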


Image credits: Cornell

There are some limitations. It gets only three hours of battery life and has to offload its processing to a smartphone, and the echo-translating algorithm must be trained on 32 minutes of facial data before it can begin recognizing expressions. But the researchers make the case that it’s a much less cumbersome experience than the camera rigs commonly used for facial capture in movies, TV, and video games. For the mystery game L.A. Noire, for example, Rockstar Games built a rig with 32 cameras trained on each actor’s face.

Perhaps one day, Cornell’s earable tech will be used to animate humanoid robots. But those robots will have to learn how to navigate a room first. Fortunately, AI2’s ProcTHOR takes a step (no pun intended) in that direction, generating thousands of custom scenes, including classrooms, libraries, and offices, in which simulated robots must complete tasks like picking up objects and moving around furniture.

The idea behind the scenes, which feature simulated lighting and draw from a large set of surface materials (e.g., wood and tile) and household objects, is to expose the simulated robots to as much variety as possible; the more diverse, the better. It’s a well-established idea in AI that performance in simulated environments can improve the performance of real-world systems; self-driving car companies like Alphabet’s Waymo simulate entire neighborhoods to fine-tune how their real-world cars behave.


Image credits: Allen Institute for Artificial Intelligence

As for ProcTHOR, AI2 claims in a paper that scaling up the number of training environments consistently improves performance. That bodes well for robots destined for homes, workplaces, and elsewhere.
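AI2 has released tooling around ProcTHOR, and the sketch below follows its publicly documented usage; the package names and arguments (prior, the procthor-10k dataset, ai2thor’s Controller) are written from that documentation and should be treated as assumptions if versions have moved on:

```python
# Sketch following AI2's published ProcTHOR usage; verify package names
# and arguments against the current docs before relying on them.
import prior
from ai2thor.controller import Controller

# Load the procedurally generated houses and pick one scene.
dataset = prior.load_dataset("procthor-10k")
house = dataset["train"][0]  # one of thousands of generated environments

# Drop a simulated agent into the house and take a navigation step.
controller = Controller(scene=house)
event = controller.step(action="MoveAhead")
print(event.metadata["agent"]["position"])
```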

Of course, training these types of systems requires a lot of computing power. But that may not be the case forever. Researchers at MIT say they have created an “analog” processor that can be used to build superfast networks of “neurons” and “synapses,” which can in turn be used to perform tasks like image recognition and language translation.

The researchers’ processor uses “protonic programmable resistors” arranged in an array to “learn” skills. Increasing and decreasing the electrical conductance of the resistors mimics the strengthening and weakening of the synapses between neurons in the brain, a part of the learning process.

Conductivity is controlled by an electrolyte that governs the movement of protons. As more protons are pushed into a channel in the resistor, the conductivity increases. When protons are removed, conductivity decreases.


Processor on a computer circuit board

An inorganic material, phosphosilicate glass, is what makes the MIT team’s processor extremely fast: it contains nanometer-sized pores whose surfaces provide perfect paths for proton diffusion. As an added benefit, the glass works at room temperature, and it isn’t damaged by the protons as they move along the pores.
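The appeal of this kind of analog hardware is that the multiply-accumulate operation at the heart of neural networks falls out of the physics: weights are stored as conductances, inputs are applied as voltages, and each column’s summed current is an output. Here is a toy numerical model of that idea; it is a generic crossbar illustration, not a model of MIT’s device:

```python
# Toy numerical model of an analog resistor crossbar. Each programmable
# resistor stores a weight as a conductance G; applying input voltages V
# across the rows yields column currents I = G^T V (Ohm's law per
# resistor, Kirchhoff's current law per column).
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(0.1, 1.0, size=(4, 3))  # conductances = synaptic weights
V = np.array([0.5, -0.2, 0.8, 0.1])     # input voltages = activations

I = G.T @ V  # the multiply-accumulate happens "in the physics"
print("column currents:", I)

# "Learning": pushing protons into a channel raises its conductance and
# removing them lowers it, analogous to a small weight update.
delta = 0.05 * rng.standard_normal(G.shape)
G = np.clip(G + delta, 0.1, 1.0)  # conductance stays within device limits
```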

“Once you have an analog processor, you will no longer be training the networks everyone else is working on,” lead author and MIT postdoc Murat Onen was quoted as saying in a press release. “You will be training networks with unprecedented complexity that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft.”

Speaking of acceleration, machine learning is now being put to use managing particle accelerators, at least in experimental form. At Lawrence Berkeley National Laboratory, two research teams have shown that ML-based simulation of the full machine and beam gives them predictions as much as 10 times more accurate than conventional statistical analysis.

Image credits: Thor Swift / Berkeley Lab

“If you can predict the beam properties with an accuracy that surpasses their fluctuations, you can then use the prediction to increase the performance of the accelerator,” said the lab’s Daniele Filippetto. Simulating all the physics and equipment involved is no small feat, but the early efforts by the various teams to do so have yielded surprisingly promising results.
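For a rough sense of what such a surrogate model looks like, here is a generic sketch; the inputs, the beam-size target, and the model choice are stand-ins, not Berkeley Lab’s actual setup:

```python
# Generic illustration of an ML surrogate for accelerator predictions,
# not Berkeley Lab's models; the features and target are fabricated.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
settings = rng.normal(size=(500, 6))  # e.g., magnet currents, RF phase
beam_size = settings @ rng.normal(size=6) + 0.1 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(
    settings, beam_size, random_state=0
)
model = GradientBoostingRegressor().fit(X_train, y_train)

# If the surrogate's error is smaller than the beam's natural
# fluctuations, its predictions can be used to tune the machine.
print("R^2 on held-out shots:", model.score(X_test, y_test))
```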

And at Oak Ridge National Laboratory, an AI-powered platform is letting researchers perform hyperspectral computed tomography using neutron scattering, finding… perhaps we should just let them explain.

In the medical world, machine learning-based image analysis is finding a new application in neuroscience, where researchers at University College London have trained a model to detect brain abnormalities that can cause epilepsy.

Brain MRIs used to train the UCL algorithm.

A common cause of drug-resistant epilepsy is focal cortical dysplasia (FCD), an area of the brain that has developed abnormally but, for whatever reason, doesn’t appear obviously abnormal on MRI. Detecting it early can be extremely helpful, so the UCL team trained an MRI-inspection model called Multicentre Epilepsy Lesion Detection (MELD) on thousands of examples of healthy brain regions and regions affected by FCD.

The model was able to detect two-thirds of the FCDs it was shown, which is actually quite good given how subtle the abnormalities are. In fact, it found 178 cases where doctors had been unable to locate an FCD but the model could. Of course, the final say lies with the specialists, but a computer suggesting that something may be wrong is sometimes all it takes to prompt a closer look and a confident diagnosis.
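For a sense of what a lesion detector of this kind involves at its simplest, here is a minimal stand-in; it is not the MELD codebase, and the per-region features and class balance are invented for illustration:

```python
# Minimal stand-in for the MELD idea, not the project's actual code:
# classify small cortical regions as healthy vs. FCD-like from a few
# per-region, MRI-derived features (all values here are synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(3)
# Features per region, e.g., cortical thickness, grey-white contrast.
healthy = rng.normal(0.0, 1.0, size=(1000, 3))
fcd = rng.normal(0.7, 1.0, size=(60, 3))  # subtle shift: lesions are hard to see
X = np.vstack([healthy, fcd])
y = np.array([0] * len(healthy) + [1] * len(fcd))

clf = LogisticRegression(class_weight="balanced").fit(X, y)
# Sensitivity (recall on lesions) is the clinically relevant figure,
# mirroring the roughly two-thirds detection rate reported.
print("lesion sensitivity:", recall_score(y, clf.predict(X)))
```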

“Our emphasis is on creating an AI algorithm that is interpretable and can help doctors make decisions. Showing doctors how the MELD algorithm makes its predictions is an essential part of that process,” said Mathilde Ripart of UCL.


