ScienceDaily

Physicists develop basic principles for mini-labs on chips

Colloidal particles have become increasingly important to research as vehicles for biochemical agents. In the future, it will be possible to study their behaviour much more efficiently than before by placing them on a magnetised chip. A research team from the University of Bayreuth reports on these new findings in the journal Nature Communications. The scientists have discovered that colloidal rods can be moved on a chip quickly, precisely, and in different directions, almost like chess pieces. A pre-programmed magnetic field even enables these controlled movements to occur simultaneously.

For the recently published study, the research team, led by Prof. Dr. Thomas Fischer, Professor of Experimental Physics at the University of Bayreuth, worked closely with partners at the University of Poznań and the University of Kassel. To begin with, individual spherical colloidal particles constituted the building blocks for rods of different lengths. These particles were assembled in such a way as to allow the rods to move in different directions on a magnetised chip like upright chess pieces — as if by magic, but in fact determined by the characteristics of the magnetic field.

In a further step, the scientists succeeded in eliciting individual movements in various directions simultaneously. The critical factor here was the “programming” of the magnetic field with the aid of a mathematical code which, in encoded form, specifies all the movements to be performed by the pieces. When these movements are carried out simultaneously, they take as little as one tenth of the time needed when they are carried out one after the other, like moves on a chessboard.
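The source of the speed-up is easy to see with a toy calculation (an illustrative sketch, not the authors' code; the rod names and move counts below are invented): if every move takes one cycle of the magnetic field, simultaneous execution takes as long as the longest move sequence, while sequential execution takes the sum of all of them.

```python
# Hypothetical move programs for three rods, as numbers of field cycles per move.
move_programs = {
    "rod_a": [1, 1, 1],   # three single-cycle moves
    "rod_b": [1, 1],
    "rod_c": [1],
}

# Sequential: rods move one after the other, like pieces on a chessboard.
sequential_cycles = sum(sum(moves) for moves in move_programs.values())

# Simultaneous: one pre-programmed field drives all rods at once, so the
# total time is set by the rod with the longest move sequence.
simultaneous_cycles = max(sum(moves) for moves in move_programs.values())

print(sequential_cycles)    # 6
print(simultaneous_cycles)  # 3
```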

“The simultaneity of differently directed movements makes research into colloidal particles and their dynamics much more efficient,” says Adrian Ernst, doctoral student in the Bayreuth research team and co-author of the publication. “Miniaturised laboratories on small chips measuring just a few centimetres are being used more and more in basic physics research to gain insights into the properties and dynamics of materials. Our new research results reinforce this trend. Because colloidal particles are in many cases very well suited as vehicles for active substances, our research results could be of particular benefit to biomedicine and biotechnology,” says Mahla Mirzaee-Kakhki, first author and Bayreuth doctoral student.

Story Source:

Materials provided by Universität Bayreuth. Note: Content may be edited for style and length.


ScienceDaily

AI learns to design

Trained AI agents can adopt human design strategies to solve problems, according to findings published in the ASME Journal of Mechanical Design.

Big design problems require creative and exploratory decision making, a skill in which humans excel. When engineers use artificial intelligence (AI), they have traditionally applied it to a problem within a defined set of rules rather than having it generally follow human strategies to create something new. The new research instead considers an AI framework that learns human design strategies by observing human data, and then generates new designs without explicit goal information, bias, or guidance.

The study was co-authored by Jonathan Cagan, professor of mechanical engineering and interim dean of Carnegie Mellon University’s College of Engineering, Ayush Raina, a Ph.D. candidate in mechanical engineering at Carnegie Mellon, and Chris McComb, an assistant professor of engineering design at the Pennsylvania State University.

“The AI is not just mimicking or regurgitating solutions that already exist,” said Cagan. “It’s learning how people solve a specific type of problem and creating new design solutions from scratch.” How good can AI be? “The answer is quite good.”

The study focuses on truss problems because they represent complex engineering design challenges. Commonly seen in bridges, a truss is an assembly of rods forming a complete structure. The AI agents were trained to observe the sequences of design modifications that humans had followed in creating a truss, using the same visual information that engineers use — pixels on a screen — but without further context. When it was the agents’ turn to design, they imagined design progressions similar to those used by humans and then generated design moves to realize them. The researchers emphasized visualization in the process because vision is an integral part of how humans perceive the world and go about solving problems.

The framework was made up of multiple deep neural networks that worked together in a prediction-based process. Using a neural network, the AI looked through a set of five sequential images and predicted the next design from the information gathered in those images.
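The shape of that prediction step can be sketched as follows (an illustrative stand-in, not the authors' networks: the 32 x 32 frame size and the single untrained linear layer are invented; the real framework uses several deep networks trained on human design sequences). The only point the sketch makes is the input/output contract: five sequential design images go in, a predicted next image comes out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 grayscale frames; the real system works from screen pixels.
H = W = 32
history = rng.random((5, H, W))          # five sequential design images

# Stand-in for the deep networks: one untrained linear map from the flattened
# five-frame history to the next frame. Only the shapes match the described setup.
weights = rng.standard_normal((5 * H * W, H * W)) * 0.01

def predict_next(frames: np.ndarray) -> np.ndarray:
    x = frames.reshape(-1)               # flatten to (5*H*W,)
    return (x @ weights).reshape(H, W)   # predicted next design image

next_frame = predict_next(history)
print(next_frame.shape)  # (32, 32)
```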

“We were trying to have the agents create designs similar to how humans do it, imitating the process they use: how they look at the design, how they take the next action, and then create a new design, step by step,” said Raina.

The researchers tested the AI agents on similar problems and found that on average, they performed better than humans. Yet, this success came without many of the advantages humans have available when they are solving problems. Unlike humans, the agents were not working with a specific goal (like making something lightweight) and did not receive feedback on how well they were doing. Instead, they only used the vision-based human strategy techniques they had been trained to use.

“It’s tempting to think that this AI will replace engineers, but that’s simply not true,” said McComb. “Instead, it can fundamentally change how engineers work. If we can offload boring, time-consuming tasks to an AI, as we did in this work, then we free engineers up to think big and solve problems creatively.”

Story Source:

Materials provided by College of Engineering, Carnegie Mellon University. Note: Content may be edited for style and length.


IEEE Spectrum

AI Agents Startle Researchers With Unexpected Hide-and-Seek Strategies

After 25 million games, the AI agents playing hide-and-seek with each other had mastered four basic game strategies. The researchers expected that part.

After a total of 380 million games, the AI players developed strategies that the researchers didn’t know were possible in the game environment—which the researchers had themselves created. That was the part that surprised the team at OpenAI, a research company based in San Francisco.

The AI players learned everything via a machine learning technique known as reinforcement learning. In this learning method, AI agents start out by taking random actions. Sometimes those random actions produce desired results, which earn them rewards. Via trial-and-error on a massive scale, they can learn sophisticated strategies.
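That trial-and-error loop can be sketched at miniature scale (tabular Q-learning on an invented five-cell toy problem; OpenAI's actual agents train deep networks over hundreds of millions of games). The agent acts purely at random, and the update rule reinforces actions that led toward reward until a sensible policy emerges from random play.

```python
import random

# Invented toy task: an agent on a 5-cell line is rewarded only for reaching
# cell 4. It explores with purely random actions; the Q-learning update
# reinforces actions that led to reward.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                        # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(2000):                     # random trial-and-error transitions
    s = random.randrange(GOAL)            # any non-goal starting cell
    a = random.choice(ACTIONS)            # random action
    s2 = min(max(s + a, 0), GOAL)
    reward = 1.0 if s2 == GOAL else 0.0   # reward only at the goal
    target = reward + 0.9 * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += 0.5 * (target - Q[(s, a)])

# Greedy policy learned from random play: step right in every non-goal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```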

In the context of games, this process can be abetted by having the AI play against another version of itself, ensuring that the opponents will be evenly matched. It also locks the AI into a process of one-upmanship, where any new strategy that emerges forces the opponent to search for a countermeasure. Over time, this “self-play” amounted to what the researchers call an “auto-curriculum.” 
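The one-upmanship dynamic can be caricatured in a toy game (a sketch only; rock-paper-scissors stands in for the hide-and-seek environment). Every strategy has a counter, so an agent that best-responds to its opponent's latest strategy keeps forcing new behaviour, which is the essence of the auto-curriculum.

```python
# Each pure strategy is beaten by exactly one counter-strategy.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

strategy = "rock"
curriculum = [strategy]
for _ in range(5):
    strategy = BEATS[strategy]   # opponent adopts the counter-strategy
    curriculum.append(strategy)

# Self-play never settles: each new strategy demands a fresh countermeasure.
print(curriculum)  # ['rock', 'paper', 'scissors', 'rock', 'paper', 'scissors']
```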

According to OpenAI researcher Igor Mordatch, this experiment shows that self-play “is enough for the agents to learn surprising behaviors on their own—it’s like children playing with each other.”