
Security software for autonomous vehicles

Before autonomous vehicles participate in road traffic, they must demonstrate conclusively that they do not pose a danger to others. New software developed at the Technical University of Munich (TUM) prevents accidents by predicting different variants of a traffic situation every millisecond.

A car approaches an intersection. Another vehicle darts out of the cross street, but it is not yet clear whether it will turn right or left. At the same time, a pedestrian steps into the lane directly in front of the car, and there is a cyclist on the other side of the street. Experienced road users will generally assess the movements of the other traffic participants correctly.

“These kinds of situations present an enormous challenge for autonomous vehicles controlled by computer programs,” explains Matthias Althoff, Professor of Cyber-Physical Systems at TUM. “But autonomous driving will gain acceptance among the general public only if you can ensure that the vehicles will not endanger other road users, no matter how confusing the traffic situation.”

Algorithms that peer into the future

The ultimate goal when developing software for autonomous vehicles is to ensure that they will not cause accidents. Althoff, who is a member of the Munich School of Robotics and Machine Intelligence at TUM, and his team have now developed a software module that continuously analyzes and predicts events while driving. Vehicle sensor data are recorded and evaluated every millisecond. The software can calculate all possible movements for every traffic participant, provided they adhere to the road traffic regulations, allowing the system to look three to six seconds into the future.

Based on these future scenarios, the system determines a variety of movement options for the vehicle. At the same time, the program calculates potential emergency maneuvers in which the vehicle can be moved out of harm’s way by accelerating or braking without endangering others. The autonomous vehicle may only follow routes that are free of foreseeable collisions and for which an emergency maneuver option has been identified.
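This selection rule can be pictured as a simple filter over candidate motions. The Python sketch below is a toy one-dimensional illustration of that logic, not the TUM software: every function name, number, and the constant-acceleration motion model are assumptions made for the example. A candidate is adopted only if it is collision-free over the horizon and an emergency braking maneuver is also verified safe.

```python
# Toy admissibility check (illustrative only, not the TUM module):
# one-dimensional road, ego vehicle starting at position 0, an obstacle
# occupying a fixed interval ahead.

def ego_position(v0: float, a: float, t: float) -> float:
    """Position under constant acceleration; the car stops once v = 0."""
    if a < 0:
        t = min(t, v0 / -a)  # clamp to the stopping time
    return v0 * t + 0.5 * a * t * t

def collision_free(v0: float, a: float, obstacle: tuple, horizon: float) -> bool:
    """Sample the trajectory in 10 ms steps against the occupancy interval."""
    lo, hi = obstacle
    steps = round(horizon / 0.01)
    return all(not lo <= ego_position(v0, a, k * 0.01) <= hi
               for k in range(steps + 1))

def choose_plan(v0, candidates, obstacle, horizon=3.0, a_brake=-8.0):
    for a in candidates:  # movement options, e.g. accelerate, coast, brake
        # Adopt a plan only if it is collision-free AND emergency braking
        # is also verified safe (simplified here: braking is checked from
        # the current state rather than along the whole plan).
        if (collision_free(v0, a, obstacle, horizon)
                and collision_free(v0, a_brake, obstacle, horizon)):
            return a
    return None  # no verified-safe option identified

# Driving at 10 m/s toward an obstacle occupying 25-30 m ahead:
print(choose_plan(v0=10.0, candidates=[1.0, 0.0, -2.0], obstacle=(25.0, 30.0)))
# -> -2.0 (the only collision-free candidate; its braking fallback is also safe)
```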

Streamlined models for swift calculations

This kind of detailed traffic-situation forecasting was previously considered too time-consuming and thus impractical. The Munich research team has now shown not only that real-time data analysis with simultaneous simulation of future traffic events is theoretically viable, but also that it delivers reliable results.

The quick calculations are made possible by simplified dynamic models. So-called reachability analysis is used to calculate the potential future positions a car or a pedestrian might assume. If all characteristics of the road users were taken into account, the calculations would become prohibitively time-consuming. That is why Althoff and his team work with simplified models. These allow a greater range of motion than their real counterparts, yet are mathematically easier to handle. The enlarged freedom of movement lets the models cover a larger set of possible positions, one that is guaranteed to include the positions expected for actual road users.
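To illustrate the overapproximation idea, here is a minimal Python sketch, again hypothetical rather than the team’s implementation: it bounds a pedestrian’s future positions along one axis using nothing but an assumed maximum walking speed. Because a real pedestrian also obeys that speed bound, every position they could actually reach lies inside the computed interval.

```python
# Toy interval reachability (illustrative; the TUM module uses far
# richer dynamic models). Only a walking-speed bound is assumed.

def reachable_interval(x0: float, v_max: float, t: float) -> tuple:
    """Overapproximated occupancy [lo, hi] of a pedestrian after t seconds."""
    return (x0 - v_max * t, x0 + v_max * t)

# Predict occupancy every millisecond over a three-second horizon,
# mirroring the millisecond update cycle described in the article.
dt = 0.001
horizon = 3.0
occupancy = [reachable_interval(x0=5.0, v_max=2.0, t=k * dt)
             for k in range(int(horizon / dt) + 1)]

print(occupancy[-1])  # (-1.0, 11.0): contains every position reachable in 3 s
```

The price of the simplification is conservatism: the interval also contains positions no real pedestrian would reach, which is exactly the trade-off the article describes.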

Real traffic data for a virtual test environment

For their evaluation, the computer scientists created a virtual model based on real data they had collected during test drives with an autonomous vehicle in Munich. This allowed them to craft a test environment that closely reflects everyday traffic scenarios. “Using the simulations, we were able to establish that the safety module does not lead to any loss of performance in terms of driving behavior, the predictive calculations are correct, accidents are prevented, and in emergency situations the vehicle is demonstrably brought to a safe stop,” Althoff sums up.

The computer scientist emphasizes that the new safety software could simplify the development of autonomous vehicles because it can be combined with all standard motion control programs.

Story Source:

Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.



New theory hints at more efficient way to develop quantum algorithms

In 2019, Google claimed it was the first to demonstrate a quantum computer performing a calculation beyond the abilities of today’s most powerful supercomputers.

But most of the time, creating a quantum algorithm that stands a chance of beating a classical computer is an accidental process, Purdue University scientists say. To bring more guidance to this process and make it less arbitrary, these scientists developed a new theory that may eventually lead to more systematic design of quantum algorithms.

The new theory, described in a paper published in the journal Advanced Quantum Technologies, is the first known attempt to determine which quantum states can be created and processed with an acceptable number of quantum gates to outperform a classical algorithm.

Physicists refer to this concept of having the right number of gates to control each state as “complexity.” Since the complexity of a quantum algorithm is closely related to the complexity of the quantum states involved, the theory could bring order to the search for quantum algorithms by characterizing which quantum states meet that complexity criterion.

An algorithm is a sequence of steps to perform a calculation. The algorithm is usually implemented as a circuit.

In classical computers, circuits have gates that switch bits to either a 0 or 1 state. A quantum computer instead relies on computational units called “qubits” that store 0 and 1 states simultaneously in superposition, allowing more information to be processed.
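To make the superposition idea concrete, here is a short NumPy sketch (purely illustrative, not tied to any particular hardware or to the Purdue work): a Hadamard gate puts one qubit into an equal superposition of 0 and 1, and an n-qubit register is described by 2^n amplitudes at once.

```python
import numpy as np

# One qubit: |0> as a 2-component state vector.
ket0 = np.array([1.0, 0.0])

# A Hadamard gate puts it into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
superposed = H @ ket0
print(superposed)  # ~[0.707 0.707]: equal amplitudes for |0> and |1>

# An n-qubit register carries 2**n amplitudes simultaneously,
# which is what lets a quantum circuit process more information.
n = 10
register = np.zeros(2**n)
register[0] = 1.0  # the state |00...0>
print(register.size)  # 1024
```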

What would make a quantum computer faster than a classical computer is simpler information processing, characterized by an enormous reduction in the number of gates a quantum circuit needs compared with its classical counterpart.

In classical computers, the number of gates in a circuit can increase exponentially with the size of the problem of interest. This exponential growth is so astonishingly fast that such problems become physically impossible to handle at even moderate sizes.

“For example, even a small protein molecule may contain hundreds of electrons. If each electron can only take two forms, then to simulate 300 electrons would require 2³⁰⁰ classical states, which is more than the number of all the atoms in the universe,” said Sabre Kais, a professor in Purdue’s Department of Chemistry and member of the Purdue Quantum Science and Engineering Institute.

For quantum computers, there is a way for the number of quantum gates to scale up polynomially, rather than exponentially as on a classical computer, with the size of the problem (such as the number of electrons in the last example). Polynomial scaling means drastically fewer steps (gates) are needed to process the same amount of information, making a quantum algorithm superior to a classical algorithm.
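The gap between the two scaling laws is easy to check with plain arithmetic. The snippet below compares the 2³⁰⁰ count from Kais’s example with 300³, a stand-in chosen here only to illustrate polynomial growth; no specific quantum algorithm is implied.

```python
# Exponential vs. polynomial counts at problem size n = 300.
n = 300

exponential = 2**n   # classical state count from Kais's example
polynomial = n**3    # an arbitrary polynomial stand-in for comparison

print(f"2^{n} = {exponential:.3e}")   # ~2.037e+90
print(f"{n}^3 = {polynomial:,}")      # 27,000,000
# 2^300 dwarfs even the ~10^80 atoms in the observable universe.
```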

Researchers so far haven’t had a good way to identify which quantum states could satisfy this condition of polynomial complexity.

“There is a very large search space for finding the states and sequence of gates that match up in complexity to create a useful quantum algorithm capable of performing calculations faster than a classical algorithm,” said Kais, whose research group is developing quantum algorithms and quantum machine learning methods.

Kais and Zixuan Hu, a Purdue postdoctoral associate, used the new theory to identify a large group of quantum states with polynomial complexity. They also showed that these states may share a coefficient feature that could be used to better identify them when designing a quantum algorithm.

“Given any quantum state, we are now able to design an efficient coefficient sampling procedure to determine if it belongs to the class or not,” Hu said.

This work is supported by the U.S. Department of Energy (Office of Basic Energy Sciences) under Award No. DE-SC0019215. The Purdue Quantum Science and Engineering Institute is part of Purdue’s Discovery Park.

Story Source:

Materials provided by Purdue University. Original written by Kayla Wiles. Note: Content may be edited for style and length.



Cracking a decades-old test, researchers bolster case for quantum mechanics

In a new study, researchers demonstrate creative tactics to get rid of loopholes that have long confounded tests of quantum mechanics. With their innovative method, the researchers were able to demonstrate quantum interactions between two particles spaced more than 180 meters (590 feet) apart while eliminating the possibility that shared events during the past 11 years affected their interaction.

A paper explaining these results will be presented at the Frontiers in Optics + Laser Science (FIO + LS) conference, held 15-19 September in Washington, D.C., U.S.A.

Quantum phenomena are being explored for applications in computing, encryption, sensing and more, but researchers do not yet fully understand the physics behind them. The new work could help advance quantum applications by improving techniques for probing quantum mechanics.

A test for quantum theories

Physicists have long grappled with different ideas about the forces that govern our world. While theories of quantum mechanics have gradually overtaken classical mechanics, many aspects of quantum mechanics remain mysterious. In the 1960s, physicist John Bell proposed a way to test quantum mechanics known as Bell’s inequality.

The idea is that two parties, nicknamed Alice and Bob, make measurements on particles that are located far apart but connected to each other via quantum entanglement.

If the world were indeed governed solely by quantum mechanics, these remote particles would exhibit a nonlocal correlation through quantum interactions, such that measuring the state of one particle affects the state of the other. However, some alternate theories suggest that the particles only appear to affect each other, but that in reality they are connected by hidden variables following classical, rather than quantum, physics.

Researchers have conducted many experiments to test Bell’s inequality. However, experiments can’t always be perfect, and there are known loopholes that could cause misleading results. While most experiments have strongly supported the conclusion that quantum interactions exist, these loopholes still leave a remote possibility that researchers could be inadvertently affecting hidden variables, thus leaving room for doubt.
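The article does not spell out the inequality itself, so the following is textbook background rather than part of the new study: in the commonly tested CHSH form, any local hidden-variable theory keeps a combination S of correlations within |S| ≤ 2, while quantum mechanics predicts values up to 2√2 ≈ 2.83. The Python sketch below evaluates the standard quantum prediction E(a, b) = -cos(a - b) for an entangled singlet pair at the usual optimal angles.

```python
import math

# CHSH combination: S = E(a, b) - E(a, b') + E(a', b) + E(a', b')
# Local hidden variables require |S| <= 2; quantum mechanics allows 2*sqrt(2).

def E(a: float, b: float) -> float:
    """Textbook quantum correlation for a singlet pair at analyzer angles a, b."""
    return -math.cos(a - b)

# Standard angle choices (radians) that maximize the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), "> 2")       # 2.828... : the classical bound is violated
print(2 * math.sqrt(2))    # the quantum maximum, matched exactly here
```

In experiments, S is estimated from coincidence counts of detected photon pairs; a measured value above 2 rules out local hidden variables, up to the loopholes discussed next.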

Closing loopholes

In the new study, Ming-Han Li of the University of Science and Technology of China and his colleagues demonstrate ways to close those loopholes and add to the evidence that quantum mechanics governs the interactions between the two particles.

“We realized a loophole-free Bell test with the measurement settings determined by remote cosmic photons. Thus we verified the completeness of quantum mechanics with high-confidence probability,” said Li, the lead author on the paper.

Their experimental setup includes three main components: a device that periodically sends out pairs of entangled photons and two stations that measure the photons. These stations are Alice and Bob, in the parlance of Bell’s inequality. The first measurement station is 93 meters (305 feet) from the photon pair source and the second station is 90 meters (295 feet) away in the opposite direction.

The entangled photons travel through single mode optical fiber to the measurement stations, where their polarization state is measured with a Pockels cell and the photons are detected by superconducting nanowire single-photon detectors.

In designing their experiment, the researchers sought to overcome three key problems: the idea that loss and noise make detection unreliable (the detection loophole), the idea that any communication that affects Alice’s and Bob’s measurement choices makes the measurement cheatable (the locality loophole), and the idea that a measurement-setting choice that is not “truly free and random” makes the result able to be controlled by a hidden cause in the common past (the freedom-of-choice loophole).

To address the first problem, Li and his colleagues demonstrated that their setup achieved a sufficiently low level of loss and noise by comparing measurements made at the start and end of the photons’ journey. To address the second, they built the experimental setup so that the measurement-setting choices and the distant measurements were space-like separated, meaning not even a light-speed signal could carry information from one to the other in time to influence the result. To address the third, they based their measurement-setting choices on the behavior of cosmic photons emitted 11 years earlier, which offers high confidence that nothing in the particles’ shared past, for at least the past 11 years, created a hidden variable affecting the outcome.
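The locality condition reduces to light-travel-time arithmetic: two events are space-like separated when not even light could connect them. The sketch below applies that check using the station distances reported above; the 350 ns event window is a made-up placeholder, since the article does not give the paper’s actual timing values.

```python
# Space-like separation check for the locality loophole (illustrative).
C = 299_792_458.0          # speed of light in m/s

distance_m = 93.0 + 90.0   # the stations sit on opposite sides of the source

light_travel_ns = distance_m / C * 1e9
print(f"light needs {light_travel_ns:.0f} ns to cross {distance_m:.0f} m")  # ~610 ns

# Placeholder window (not from the paper): if choosing a setting and
# completing the measurement take less than the light-travel time,
# no signal from one station's choice can reach the other in time.
event_window_ns = 350.0
print("space-like separated:", event_window_ns < light_travel_ns)  # True
```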

Combining theoretically calculated predictions with experimental results, the researchers were able to demonstrate quantum interactions between the entangled photon pairs with a high degree of confidence and fidelity. Their experiment thus provides robust evidence that quantum effects, rather than hidden variables, are behind the particles’ behavior.
