
AI mixes concrete, designs molecules, and thinks with a space laser


Welcome to Perceptron, TechCrunch’s weekly aggregator of AI news and research from around the world. Machine learning is a key technology in practically every industry today, and there’s too much going on for anyone to keep up with all of it. This column aims to collect some of the most interesting recent discoveries and articles in the field of artificial intelligence – and explain why they are important.

(Formerly known as Deep Science; check out previous versions here.)

This week's roundup kicks off with a pair of forward-looking studies from Facebook/Meta. The first is a collaboration with the University of Illinois at Urbana-Champaign that aims to reduce emissions from concrete production. Concrete accounts for about 8% of carbon emissions, so even a small improvement could help us meet our climate goals.

This is called a "slump test."

What the Meta/UIUC team did was train a model on over a thousand concrete formulations, which varied in their ratios of sand, slag, ground glass, and other materials (you can see a sample of the more photogenic concrete up top). Having found subtle trends in this dataset, the model was able to generate new formulations optimizing for both strength and low emissions. The winning formula turned out to produce 40% fewer emissions than the regional standard while meeting… well, some of the strength requirements. It's extremely promising, and follow-up studies in this area should move the ball down the field again soon.
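
To make the idea a little more tangible, here's a minimal sketch of that workflow: fit a surrogate model on historical formulations, then screen random candidate mixes for ones that keep strength up and emissions down. Every name, number, and the feature set below are illustrative assumptions, not the Meta/UIUC team's actual pipeline.

```python
# Sketch: surrogate-model screening of concrete mixes (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy stand-in for ~1,000 known formulations: fractions of cement, sand,
# slag, and ground glass (each row sums to 1).
X = rng.dirichlet(np.ones(4), size=1000)
# Toy targets: [compressive strength, CO2 emissions]. In reality these
# would come from lab measurements and lifecycle analysis.
strength = 40 * X[:, 0] + 15 * X[:, 2] + rng.normal(0, 1, 1000)
emissions = 900 * X[:, 0] + 100 * X[:, 1] + rng.normal(0, 10, 1000)
y = np.column_stack([strength, emissions])

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Generate random candidate mixes, keep those above a strength floor
# (arbitrary units here), then pick the lowest-emission survivor.
candidates = rng.dirichlet(np.ones(4), size=5000)
pred_strength, pred_emissions = surrogate.predict(candidates).T
feasible = pred_strength >= 20.0
best = candidates[feasible][np.argmin(pred_emissions[feasible])]
print("Suggested mix (cement, sand, slag, glass):", best.round(3))
```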

Meta's second piece of research has to do with how language models work. The company is working with neuroimaging experts and other researchers to compare how language models and actual human brain activity behave on similar tasks.

In particular, they're interested in the human ability to anticipate words well beyond the current one while speaking or listening, like knowing a sentence will end in a certain way, or that a "but" is coming. AI models are getting very good, but they still mostly work by adding words one at a time like Lego bricks, occasionally glancing backward to see whether it all makes sense. The researchers are just getting started, but they already have some interesting results.
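
To show what "adding words one at a time" means in practice, here's a miniature sketch of autoregressive generation. The tiny bigram table and greedy loop are stand-ins for a real language model, not anything Meta actually uses; the point is just that each token is chosen only from the ones before it.

```python
# Toy autoregressive generation: predict the next token from the last one.
toy_bigrams = {
    "the": {"cat": 0.6, "fence": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"on": 1.0},
    "on": {"the": 1.0},
    "fence": {},
}

def generate(prompt: list[str], max_tokens: int = 6) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        choices = toy_bigrams.get(tokens[-1], {})
        if not choices:  # nothing to predict: stop
            break
        # Greedy decoding: append the single most probable next token.
        tokens.append(max(choices, key=choices.get))
    return tokens

print(" ".join(generate(["the"])))  # the cat sat on the cat sat
```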

Back on the materials beat, researchers at Oak Ridge National Laboratory are getting in on the AI formulation fun. Using a dataset of quantum chemistry calculations, whatever those may be, the team created a neural network that could predict a material's properties, then inverted it so that they could input desired properties and have it suggest materials.

"Instead of taking a material and predicting its given properties, we wanted to choose the ideal properties for our purpose and work backward to design for those properties quickly and efficiently with a high degree of confidence. That's known as inverse design," said ORNL's Victor Fung. It seems to have worked; you can test it yourself by running the code on GitHub.
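
Here's a rough sketch of what inverse design means in practice: freeze a trained forward model that maps a material descriptor to properties, then optimize the input by gradient descent until the predicted properties hit a target. The linear "model" and all the numbers below are placeholders, not ORNL's actual network.

```python
# Sketch: inverse design by optimizing the input of a frozen forward model.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))          # pretend weights of a trained model

def forward(x: np.ndarray) -> np.ndarray:
    """Predict 3 properties from a 5-dim material descriptor."""
    return W @ x

target = np.array([1.0, -0.5, 2.0])  # desired properties
x = np.zeros(5)                      # initial guess at a descriptor

for step in range(500):
    residual = forward(x) - target
    grad = W.T @ residual            # gradient of 0.5*||Wx - t||^2 w.r.t. x
    x -= 0.05 * grad                 # gradient descent on the *input*

print("designed descriptor:", x.round(3))
print("achieved properties:", forward(x).round(3))
```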

View of the upper half of South America in the form of a canopy height map.

Image credits: ETHZ

Making physical predictions on an entirely different scale, this ETHZ project estimates the heights of trees around the globe using data from ESA's Copernicus Sentinel-2 satellites (optical imagery) and NASA's GEDI instrument (orbital laser ranging). Combining the two in a convolutional neural network produces an accurate global map of tree heights up to 55 meters tall.

"We simply don't know how tall trees are globally," explains NASA's Ralph Dubayah. "We need good global maps of where trees are, because whenever we cut down trees, we release carbon into the atmosphere, and we don't know how much carbon we are releasing."

You can go crazy browsing the data in map form here.
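
The fusion setup is worth a sketch: GEDI's laser gives sparse but direct height measurements, Sentinel-2 gives wall-to-wall optical bands, so you can train a regressor on the pixels where the two overlap and then predict height everywhere. The per-pixel boosted-trees model and all the toy data below only illustrate the supervision scheme; ETHZ's actual system is a deep convolutional network over image patches.

```python
# Sketch: sparse lidar labels supervising a wall-to-wall optical regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

n_pixels = 20_000
optical = rng.uniform(0, 1, size=(n_pixels, 4))    # toy Sentinel-2 bands

# GEDI footprints cover only a small fraction of pixels.
gedi_idx = rng.choice(n_pixels, size=500, replace=False)
gedi_height = (55 * optical[gedi_idx, 0] * optical[gedi_idx, 1]
               + rng.normal(0, 1, 500))            # toy "true" heights, m

model = GradientBoostingRegressor().fit(optical[gedi_idx], gedi_height)

# Wall-to-wall canopy height map predicted from optical data alone.
height_map = model.predict(optical)
print(f"predicted heights: {height_map.min():.1f}-{height_map.max():.1f} m")
```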

Also relevant to landscapes is this DARPA project, which is all about creating extremely large-scale simulated environments for virtual autonomous vehicles to traverse. The agency awarded the contract to Intel, though it might have saved a little money by contacting the makers of the game SnowRunner, which does basically what DARPA wants for $30.

Image of a simulated desert and a real desert side by side.

Image credits: Intel

RACER-Sim's goal is to develop off-road AVs that already know what it's like to rumble over a rocky desert and other extreme terrain. The four-year program will focus first on creating the environments and building models in the simulator, and later on transferring those skills to physical robotic systems.

In the AI pharmaceutical field, where there are currently some 500 companies, MIT is taking a sensible approach with a model that only suggests molecules that can actually be made. "Models often suggest new molecular structures that are difficult or impossible to produce in a lab. If a chemist can't actually make the molecule, its disease-fighting properties can't be tested."

Looks cool, but can you make it without unicorn horn powder?

The MIT model "ensures that molecules are composed of materials that can be purchased and that the chemical reactions that occur between those materials obey the laws of chemistry." It sounds a bit like what Molecule.one does, but integrated into the discovery process. It would certainly be nice to know that the magic potion your AI is suggesting doesn't require any fairy dust or other exotic matter.
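
As a toy illustration of that synthesizability constraint, the sketch below filters generated candidates against a purchasable-ingredients catalog and a set of known-valid reactions. The catalog, the reaction table, and the tuple representation are all made up for this example; real systems work over molecular graphs and reaction templates.

```python
# Sketch: keep only candidates buildable from purchasable blocks via
# reactions known to be chemically valid (all data here is illustrative).
purchasable = {"benzaldehyde", "aniline", "acetic anhydride"}
valid_reactions = {("benzaldehyde", "aniline"): "imine condensation"}

def synthesizable(building_blocks: tuple[str, str]) -> bool:
    return (set(building_blocks) <= purchasable
            and building_blocks in valid_reactions)

candidates = [("benzaldehyde", "aniline"), ("unicornium", "aniline")]
print([c for c in candidates if synthesizable(c)])
# -> [('benzaldehyde', 'aniline')]
```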

Other work, from MIT, the University of Washington, and others, involves teaching robots to interact with everyday objects, something we all hope becomes commonplace in the next couple of decades, since some of us don't have dishwashers. The problem is that it's very hard to tell exactly how people interact with objects, since we can't relay our own experience in high fidelity to train a model on. So there's a lot of manual annotation and labeling of data involved.

The new technique focuses on observing and inferring 3D geometry closely enough that it takes only a few examples of a person grasping an object for the system to learn to do it itself. Where it might normally take hundreds of examples or thousands of repetitions in a simulator, this one needed just 10 human demonstrations per object to manipulate that object effectively.

Image credits: MIT

It achieved an 85% success rate with this minimal training, better than the baseline model. It's currently limited to a handful of object categories, but the researchers hope it can be generalized.
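
For a sense of why a handful of demonstrations can be enough, here's a heavily simplified sketch: match a new object's geometry to the most similar of 10 stored demos and transfer that demo's grasp point, rescaled to the new object. The shape descriptor, the nearest-neighbor matching, and every number below are illustrative assumptions; the actual system reasons over full 3D geometry.

```python
# Sketch: few-shot grasp transfer by nearest-neighbor over crude geometry.
import numpy as np

rng = np.random.default_rng(3)

# Each demo: a crude shape descriptor (say, bounding-box dims) and the
# grasp point expressed in the object's local frame.
demo_shapes = rng.uniform(0.05, 0.3, size=(10, 3))    # 10 demonstrations
demo_grasps = demo_shapes * rng.uniform(0.2, 0.8, size=(10, 3))

def infer_grasp(new_shape: np.ndarray) -> np.ndarray:
    # Find the most geometrically similar demonstration...
    nearest = np.argmin(np.linalg.norm(demo_shapes - new_shape, axis=1))
    # ...and scale its grasp point to the new object's proportions.
    return demo_grasps[nearest] * (new_shape / demo_shapes[nearest])

new_object = np.array([0.12, 0.20, 0.08])
print("proposed grasp point:", infer_grasp(new_object).round(3))
```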

Finally this week, there's some promising work from DeepMind on a multimodal "visual language model" that mixes visual knowledge with linguistic knowledge, so that ideas like "three cats sitting on a fence" have a kind of crossover representation between language and imagery. That's how our own minds work, after all.

Flamingo, their new "general purpose" model, can perform visual identification but also engage in dialogue, not because it's two models in one but because it marries language understanding and visual understanding together. As we've seen from other research organizations, this kind of multimodal approach produces good results but remains highly experimental and computationally intense.
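
For the flavor of a shared visual-linguistic representation, here's a toy sketch: one encoder maps images into a vector space, another maps text into the same space, and similarity is just a dot product. The random projections below stand in for trained encoders and look nothing like Flamingo's actual architecture, which interleaves the two modalities inside one model.

```python
# Sketch: two encoders into one shared embedding space (untrained toys).
import numpy as np

rng = np.random.default_rng(4)
DIM = 64
img_proj = rng.normal(size=(DIM, 3 * 32 * 32))   # fake image encoder
txt_proj = rng.normal(size=(DIM, 100))           # fake text encoder

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def embed_image(pixels: np.ndarray) -> np.ndarray:  # flat 32x32 RGB
    return normalize(img_proj @ pixels)

def embed_text(bow: np.ndarray) -> np.ndarray:      # 100-dim bag of words
    return normalize(txt_proj @ bow)

image = rng.uniform(size=3 * 32 * 32)
caption = rng.integers(0, 2, size=100).astype(float)

# After contrastive training, a matching image/caption pair would score
# near 1.0; these untrained encoders just show where the score lives.
print("similarity:", float(embed_image(image) @ embed_text(caption)))
```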


