Future autonomous machines may build trust through emotion

Army research has extended the state of the art in autonomy by providing a more complete picture of how actions and nonverbal signals contribute to promoting cooperation. Researchers suggested guidelines for designing autonomous machines such as robots, self-driving cars, drones and personal assistants that will effectively collaborate with Soldiers.

Dr. Celso de Melo, computer scientist with the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory at CCDC ARL West in Playa Vista, California, in collaboration with Dr. Kazunori Terada from Gifu University in Japan, recently published a paper in Scientific Reports where they show that emotion expressions can shape cooperation.

Autonomous machines that act on people’s behalf are poised to become pervasive in society, de Melo said; however, for these machines to succeed and be adopted, it is essential that people are able to trust and cooperate with them.

“Human cooperation is paradoxical,” de Melo said. “An individual is better off being a free rider, while everyone else cooperates; however, if everyone thought like that, cooperation would never happen. Yet, humans often cooperate. This research aims to understand the mechanisms that promote cooperation with a particular focus on the influence of strategy and signaling.”

Strategy defines how individuals act in one-shot or repeated interactions. For instance, tit-for-tat is a simple strategy that specifies that an individual should act as their counterpart acted in the previous interaction.
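The tit-for-tat strategy described above can be sketched in a few lines. This is a minimal illustration of the classic strategy from game theory, not code from the study; the function name and move labels are illustrative.

```python
# Illustrative sketch of tit-for-tat in an iterated interaction
# (e.g., a repeated prisoner's dilemma). Not from the paper.

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(counterpart_history):
    """Cooperate on the first interaction, then mirror the
    counterpart's move from the previous interaction."""
    if not counterpart_history:
        return COOPERATE  # open with cooperation
    return counterpart_history[-1]  # copy the last observed move
```

For example, `tit_for_tat([])` opens with cooperation, while `tit_for_tat(["C", "D"])` defects because the counterpart defected last.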

Signaling refers to communication that may occur between individuals, which could be verbal (e.g., natural language conversation) or nonverbal (e.g., emotion expressions).

This research effort, which supports the Next Generation Combat Vehicle Army Modernization Priority and the Army Priority Research Area for Autonomy, aims to apply this insight in the development of intelligent autonomous systems that promote cooperation with Soldiers and successfully operate in hybrid teams to accomplish a mission.

“We show that emotion expressions can shape cooperation,” de Melo said. “For instance, smiling after mutual cooperation encourages more cooperation; however, smiling after exploiting others — which is the most profitable outcome for the self — hinders cooperation.”

The effect of emotion expressions is moderated by strategy, he said. People will only process and be influenced by emotion expressions if the counterpart’s actions are insufficient to reveal the counterpart’s intentions.

For example, when the counterpart acts very competitively, people simply ignore, and even mistrust, the counterpart’s emotion displays.

“Our research provides novel insight into the combined effects of strategy and emotion expressions on cooperation,” de Melo said. “It has important practical application for the design of autonomous systems, suggesting that a proper combination of action and emotion displays can maximize cooperation from Soldiers. Emotion expression in these systems could be implemented in a variety of ways, including via text, voice, and nonverbally through (virtual or robotic) bodies.”

According to de Melo, the team is optimistic that future Soldiers will benefit from this research, as it sheds light on the mechanisms of cooperation.

“This insight will be critical for the development of socially intelligent autonomous machines, capable of acting and communicating nonverbally with the Soldier,” he said. “As an Army researcher, I am excited to contribute to this research as I believe it has the potential to greatly enhance human-agent teaming in the Army of the future.”

The next steps for this research include deepening the understanding of how nonverbal signaling and strategy promote cooperation, and identifying creative ways to apply this insight to a variety of autonomous systems with different affordances for acting and communicating with the Soldier.



Scientists propose method for eliminating damaging heat bursts in fusion device

Picture an airplane that can only climb to one or two altitudes after taking off. That limitation would be similar to the plight facing scientists who seek to avoid instabilities that restrict the path to clean, safe and abundant fusion energy in doughnut-shaped tokamak facilities. Researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and General Atomics (GA) have now published a breakthrough explanation of this tokamak restriction and how it may be overcome.

Toroidal, or doughnut-shaped, tokamaks are prone to intense bursts of heat and particles, called edge localized modes (ELMs). These ELMs can damage the reactor walls and must be controlled to develop reliable fusion power. Fortunately, scientists have learned to tame these ELMs by applying spiraling rippled magnetic fields to the surface of the plasma that fuels fusion reactions. However, the taming of ELMs requires very specific conditions that limit the operational flexibility of tokamak reactors.

ELM suppression

Now, researchers at PPPL and GA have developed a model that, for the first time, accurately reproduces the conditions for ELM suppression in the DIII-D National Fusion Facility that GA operates for DOE. The model predicts the conditions under which ELM suppression should extend over a wider range of operating conditions in the tokamak than previously thought possible. The work presents important predictions for how to optimize the effectiveness of ELM suppression in ITER, the massive international fusion device under construction in the south of France to demonstrate the feasibility of fusion power.

Fusion, the power that drives the sun and stars, combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei that makes up 99 percent of the visible universe — to generate massive amounts of energy. Tokamaks are the devices most widely used by scientists seeking to replicate fusion as a renewable, carbon-free source of virtually limitless energy for generating electricity.

PPPL physicists Qiming Hu and Raffi Nazikian are the lead authors of a paper describing the model in Physical Review Letters. They note that under normal conditions the rippled magnetic field can only suppress ELMs for very precise values of the plasma current that produces the magnetic fields that confine the plasma. This creates a problem because tokamak reactors must operate over a wide range of plasma current to explore and optimize the conditions required to generate fusion power.

Modifying magnetic ripples

The authors show how, by modifying the structure of the helical magnetic ripples applied to the plasma, ELMs should be eliminated over a wider range of plasma current with improved generation of fusion power. Hu said he believes the findings could provide ITER with the wide operational flexibility it will need to demonstrate the practicality of fusion energy. “This model could have significant implications for suppressing ELMs in ITER,” he said.

Indeed, “What we have done is to accurately predict when we can achieve ELM suppression over wider ranges of the plasma current,” said Nazikian, who oversees PPPL research on tokamaks. “By trying to understand some strange results we saw on DIII-D, we figured out the key physics that controls the range of ELM suppression that can be achieved using these helically rippled magnetic fields. We then went back and figured out a method that could produce wider operational windows of ELM suppression more routinely in DIII-D and ITER.”

Enhanced tokamak operation

The findings open the door to enhanced tokamak operation. “This work describes a path to expand the operational space for controlling edge instability in tokamaks by modifying the structure of the ripples,” said Carlos Paz-Soldan, a GA scientist and a co-author of the paper. “We look forward to testing these predictions with our upgraded field coils that are planned for DIII-D in a few years’ time.”

Returning to the aircraft analogy, “If you could fly at only one or two different altitudes, travel would be very limited,” said PPPL physicist Brian Grierson, a co-author of the paper. “Fixing the restriction would enable the plane to fly over a wide range of altitudes in order to optimize its flight path and fulfill its mission.” In the same way, the present paper lays out an approach that is predicted to expand the capabilities of fusion reactors to operate free from ELMs that can damage the facilities and hinder the development of tokamaks for fusion energy.

Story Source:

Materials provided by DOE/Princeton Plasma Physics Laboratory. Original written by John Greenwald. Note: Content may be edited for style and length.



Team dramatically reduces image analysis times using deep learning, other approaches

A picture is worth a thousand words, but only when it’s clear what it depicts. And therein lies the rub in making images or videos of microscopic life. While modern microscopes can generate huge amounts of image data from living tissues or cells within a few seconds, extracting meaningful biological information from that data can take hours or even weeks of laborious analysis.

To loosen this major bottleneck, a team led by MBL Fellow Hari Shroff has devised deep-learning and other computational approaches that dramatically reduce image-analysis time by orders of magnitude — in some cases, matching the speed of data acquisition itself. They report their results this week in Nature Biotechnology.

“It’s like drinking from a firehose without being able to digest what you’re drinking,” says Shroff of the common problem of having too much imaging data and not enough post-processing power. The team’s improvements, which stem from an ongoing collaboration at the Marine Biological Laboratory (MBL), speed up image analysis in three major ways.

First, imaging data off the microscope is typically corrupted by blurring. To lessen the blur, an iterative “deconvolution” process is used. The computer goes back and forth between the blurred image and an estimate of the actual object, until it reaches convergence on a best estimate of the real thing.
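The back-and-forth iteration described above is the essence of classic deconvolution schemes such as Richardson-Lucy. The following is a simplified one-dimensional sketch of that iterative idea, not the team's accelerated algorithm; the function name and the small numerical constant guarding division are illustrative choices.

```python
import numpy as np

def iterative_deconvolve(blurred, psf, n_iter=50):
    """Illustrative 1D Richardson-Lucy-style deconvolution.
    Repeatedly re-blurs the current estimate, compares it with
    the measured data, and corrects the estimate, converging
    toward a best estimate of the true object."""
    estimate = np.full_like(blurred, 0.5)  # flat initial guess
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        # Blur the current estimate with the point-spread function.
        reblurred = np.convolve(estimate, psf, mode="same")
        # Ratio of measured data to the re-blurred estimate
        # (small constant avoids division by zero).
        ratio = blurred / (reblurred + 1e-12)
        # Multiplicative correction pulls the estimate toward the data.
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

Given a signal that is a single sharp spike blurred by a small point-spread function, the iterations progressively concentrate the estimate back at the spike's location.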

By tinkering with the classic algorithm for deconvolution, Shroff and co-authors accelerated deconvolution by more than 10-fold. Their improved algorithm is widely applicable “to almost any fluorescence microscope,” Shroff says. “It’s a strict win, we think. We’ve released the code and other groups are already using it.”

Next, they addressed the problem of 3D registration: aligning and fusing multiple images of an object taken from different angles. “It turns out that it takes much longer to register large datasets, like for light-sheet microscopy, than it does to deconvolve them,” Shroff says. They found several ways to accelerate 3D registration, including moving it to the computer’s graphics processing unit (GPU). This gave them a 10- to more than 100-fold improvement in processing speed over using the computer’s central processing unit (CPU).

“Our improvements in registration and deconvolution mean that for datasets that fit onto a graphics card, image analysis can in principle keep up with the speed of acquisition,” Shroff says. “For bigger datasets, we found a way to efficiently carve them up into chunks, pass each chunk to the GPU, do the registration and deconvolution, and then stitch those pieces back together. That’s very important if you want to image large pieces of tissue, for example, from a marine animal, or if you are clearing an organ to make it transparent to put on the microscope. Some forms of large microscopy are really enabled and sped up by these two advances.”
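The carve-up-and-stitch approach Shroff describes can be sketched as a simple chunking loop. This is an illustrative skeleton only: the chunking here is one-dimensional, the `process` callable stands in for the registration-plus-deconvolution step, and no real GPU transfer is shown.

```python
import numpy as np

def process_in_chunks(volume, chunk_len, process):
    """Carve a large dataset into chunks, process each chunk
    (in the real pipeline, each chunk would be copied to the GPU,
    registered, and deconvolved there), then stitch the results
    back together into one output array."""
    out = np.empty_like(volume)
    for start in range(0, len(volume), chunk_len):
        stop = min(start + chunk_len, len(volume))
        out[start:stop] = process(volume[start:stop])
    return out
```

A real implementation would also overlap adjacent chunks so that stitching leaves no seams at chunk boundaries, a detail omitted here for brevity.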

Lastly, the team used deep learning to accelerate “complex deconvolution” — intractable datasets in which the blur varies significantly in different parts of the image. They trained the computer to recognize the relationship between badly blurred data (the input) and a cleaned, deconvolved image (the output). Then they gave it blurred data it hadn’t seen before. “It worked really well; the trained neural network could produce deconvolved results really fast,” Shroff says. “That’s where we got thousands-fold improvements in deconvolution speed.”

While the deep learning algorithms worked surprisingly well, “it’s with the caveat that they are brittle,” Shroff says. “Meaning, once you’ve trained the neural network to recognize a type of image, say a cell with mitochondria, it will deconvolve those images very well. But if you give it an image that is a bit different, say the cell’s plasma membrane, it produces artifacts. It’s easy to fool the neural network.” An active area of research is creating neural networks that work in a more generalized way.

“Deep learning augments what is possible,” Shroff says. “It’s a good tool for analyzing datasets that would be difficult any other way.”

Story Source:

Materials provided by Marine Biological Laboratory. Original written by Diana Kenney. Note: Content may be edited for style and length.
