Safety software for autonomous vehicles

Before autonomous vehicles participate in road traffic, they must demonstrate conclusively that they do not pose a danger to others. New software developed at the Technical University of Munich (TUM) prevents accidents by predicting different variants of a traffic situation every millisecond.

A car approaches an intersection. Another vehicle darts out of the cross street, but it is not yet clear whether it will turn right or left. At the same time, a pedestrian steps into the lane directly in front of the car, and there is a cyclist on the other side of the street. People with road traffic experience will generally assess the movements of the other road users correctly.

“These kinds of situations present an enormous challenge for autonomous vehicles controlled by computer programs,” explains Matthias Althoff, Professor of Cyber-Physical Systems at TUM. “But autonomous driving will only gain acceptance among the general public if you can ensure that the vehicles will not endanger other road users — no matter how confusing the traffic situation.”

Algorithms that peer into the future

The ultimate goal when developing software for autonomous vehicles is to ensure that they will not cause accidents. Althoff, who is a member of the Munich School of Robotics and Machine Intelligence at TUM, and his team have now developed a software module that continuously analyzes and predicts events while driving. Vehicle sensor data are recorded and evaluated every millisecond. The software can calculate all possible movements for every traffic participant — provided they adhere to the road traffic regulations — allowing the system to look three to six seconds into the future.

Based on these future scenarios, the system determines a variety of movement options for the vehicle. At the same time, the program calculates potential emergency maneuvers in which the vehicle can be moved out of harm’s way by accelerating or braking without endangering others. The autonomous vehicle may only follow routes that are free of foreseeable collisions and for which an emergency maneuver option has been identified.
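
The selection rule just described can be pictured as a filter over candidate trajectories: keep only those that never intersect the predicted occupancy of other road users and that still leave room for a feasible emergency stop. The Python sketch below is a minimal, one-dimensional illustration of that idea; the interval occupancy sets, the braking formula, and the function names are simplifying assumptions, not the TUM implementation.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def overlaps(self, other: "Interval") -> bool:
        return self.lo <= other.hi and other.lo <= self.hi

def is_safe(ego_occupancy, others_occupancy, ego_speed, brake_decel):
    """Accept a candidate trajectory only if it stays clear of the predicted
    occupancy of all other road users AND an emergency stop from its final
    state also stays clear.

    ego_occupancy:    list of Interval, ego position bounds per time step
    others_occupancy: list of Interval, union of all other road users per step
    """
    # 1) Collision-free over the whole prediction horizon.
    for ego, occ in zip(ego_occupancy, others_occupancy):
        if ego.overlaps(occ):
            return False
    # 2) A full-brake fallback from the end of the trajectory must stay clear
    #    (checked against the last predicted occupancy only, as a toy model).
    stop_dist = ego_speed ** 2 / (2.0 * brake_decel)
    fallback = Interval(ego_occupancy[-1].lo, ego_occupancy[-1].hi + stop_dist)
    return not fallback.overlaps(others_occupancy[-1])
```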

Streamlined models for swift calculations

This kind of detailed traffic forecasting was previously considered too time-consuming and thus impractical. The Munich research team has now shown not only that real-time data analysis with simultaneous simulation of future traffic events is theoretically viable, but also that it delivers reliable results.

The quick calculations are made possible by simplified dynamic models. So-called reachability analysis is used to calculate the potential future positions a car or a pedestrian might assume. When all characteristics of the road users are taken into account, the calculations become prohibitively time-consuming. That is why Althoff and his team work with simplified models. These allow a greater range of motion than their real-world counterparts, yet are mathematically easier to handle. The enhanced freedom of movement means the models cover a larger set of possible positions, but that set is guaranteed to include every position an actual road user could reach.
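
The principle of such an over-approximation can be shown with a toy calculation: if a pedestrian is bounded only by a maximum walking speed, every position reachable within t seconds lies inside a disc of radius v_max * t, and an axis-aligned box around that disc is an even coarser but still sound over-approximation. The Python sketch below assumes a point-mass pedestrian with a speed bound; the real analysis uses far richer vehicle models and set representations.

```python
def reachable_box(x0, y0, v_max, t):
    """Over-approximate the positions a road user starting at (x0, y0)
    can reach within t seconds, assuming only a speed bound v_max (m/s).
    Returns axis-aligned bounds (x_min, x_max, y_min, y_max) that contain
    the true reachable disc of radius v_max * t."""
    r = v_max * t
    return (x0 - r, x0 + r, y0 - r, y0 + r)

# Example: a pedestrian at (2 m, 0 m) walking at most 2 m/s, looking 3 s ahead:
print(reachable_box(2.0, 0.0, 2.0, 3.0))  # (-4.0, 8.0, -6.0, 6.0)
```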

Real traffic data for a virtual test environment

For their evaluation, the computer scientists created a virtual model based on real data they had collected during test drives with an autonomous vehicle in Munich. This allowed them to craft a test environment that closely reflects everyday traffic scenarios. “Using the simulations, we were able to establish that the safety module does not lead to any loss of performance in terms of driving behavior, the predictive calculations are correct, accidents are prevented, and in emergency situations the vehicle is demonstrably brought to a safe stop,” Althoff sums up.

The computer scientist emphasizes that the new safety software could simplify the development of autonomous vehicles because it can be combined with all standard motion control programs.

Story Source:

Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.

New research leads to drones changing shape mid-flight

Soon, the U.S. Army will be able to deploy autonomous air vehicles that can change shape during flight, according to new research presented at the AIAA Aviation Forum and Exposition’s virtual event June 16.

Researchers with the U.S. Army’s Combat Capabilities Development Command’s Army Research Laboratory and Texas A&M University published findings of a two-year study on fluid-structure interaction. Their research led to a tool that can rapidly optimize the structural configuration of Future Vertical Lift vehicles while properly accounting for the interaction between air and the structure.

Within the next year, this tool will be used to develop and rapidly optimize Future Vertical Lift vehicles capable of changing shape during flight, thereby optimizing performance of the vehicle through different phases of flight.

“Consider an [Intelligence, Surveillance and Reconnaissance] mission where the vehicle needs to get quickly to station, or dash, and then attempt to stay on station for as long as possible, or loiter,” said Dr. Francis Phillips, an aerospace engineer at the laboratory. “During dash segments, short wings are desirable in order to go fast and be more maneuverable, but for loiter segments, long wings are desirable in order to enable low power, high endurance flight.”

This tool will enable the structural optimization of a vehicle capable of such morphing while accounting for the deformation of the wings due to the fluid-structure interaction, he said.

“One concern with morphing vehicles is striking a balance between sufficient bending stiffness and the softness needed to enable the morphing,” Phillips said. “If the wing bends too much, then the theoretical benefits of the morphing could be negated, and the bending could also lead to control issues and instabilities.”

Fluid-structure interaction analyses typically require coupling between a fluid and a structural solver.

This, in turn, means that the computational cost for these analyses can be very high — on the order of tens of thousands of core hours — for a single fluid and structural configuration.

To overcome these challenges, researchers developed a process that decouples the fluid and structural solvers, which can reduce the computational cost for a single run by as much as 80 percent, Phillips said.

Because the solvers are decoupled, additional structural configurations can also be analyzed without re-running the fluid solver, which generates further savings. Within an optimization framework, this adds up to reductions in computational cost of multiple orders of magnitude.
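
The reuse pattern described in the last two paragraphs (pay for the fluid solution once, then sweep many structural configurations against it) can be sketched with a deliberately crude stand-in: a frozen spanwise load and a textbook cantilever-beam deflection formula. The load value, stiffness candidates, and beam model below are illustrative assumptions, not the Army/Texas A&M tool.

```python
# Toy illustration of the decoupled pattern: run the costly "fluid" step once,
# then sweep many structural configurations against the frozen load.

def expensive_fluid_solve():
    """Stand-in for a CFD run: returns a distributed lift load per unit span (N/m)."""
    return 1200.0  # assumed constant load, for illustration only

def tip_deflection(load_per_span, half_span, EI):
    """Cantilever beam under uniform load w: delta = w * L**4 / (8 * EI)."""
    return load_per_span * half_span ** 4 / (8.0 * EI)

w = expensive_fluid_solve()          # done once
for EI in (2.0e5, 4.0e5, 8.0e5):     # candidate wing stiffnesses (N*m^2)
    for L in (2.0, 4.0):             # candidate half-spans (m): "dash" vs "loiter"
        d = tip_deflection(w, L, EI)
        print(f"EI={EI:.1e} N*m^2, half-span={L} m -> tip deflection {d:.3f} m")
```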

Ultimately, this means the Army could design multi-functional Future Vertical Lift vehicles much more quickly than through the use of current techniques, he said.

Research on morphing aerial vehicles has advanced over the past 20 years. What makes the Army’s studies different is their attention to the fluid-structure interaction during vehicle design and structural optimization, instead of designing a vehicle first and then seeing what its fluid-structure interaction behavior will be.

“This research will have a direct impact on the ability to generate vehicles for the future warfighter,” Phillips said. “By reducing the computational cost for fluid-structure interaction analysis, structural optimization of future vertical lift vehicles can be accomplished in a much shorter time-frame.”

According to Phillips, when the tool is implemented within an optimization framework and coupled with additive manufacturing, the future warfighter will be able to use it to manufacture optimized, custom air vehicles for mission-specific uses.

Phillips presented this work in a paper, Uncoupled Method for Massively Parallelizable 3-D Fluid-Structure Interaction Analysis and Design, co-authored by the laboratory’s Drs. Todd Henry and John Hrynuk, as well as Texas A&M University’s Trent White, William Scholten and Dr. Darren Hartl.

Could shrinking a key component help make autonomous cars affordable?

Engineers and business leaders have been working on autonomous cars for years, but there’s one big obstacle to making them cheap enough to become commonplace: They’ve needed a way to cut the cost of lidar, the technology that enables robotic navigation systems to spot and avoid pedestrians and other hazards along the roadway by bouncing light waves off these potential obstacles.

Today’s lidars use complex mechanical parts to send the flashlight-sized infrared lasers spinning around like the old-fashioned, bubblegum lights atop police cars — at a cost of $8,000 to $30,000.

But now a team led by electrical engineer Jelena Vuckovic is working on shrinking the mechanical and electronic components in a rooftop lidar down to a single silicon chip that she thinks could be mass produced for as little as a few hundred dollars.

The project grows out of years of research by Vuckovic’s lab to find a practical way to take advantage of a simple fact: Much like sunlight shines through glass, silicon is transparent to the infrared laser light used by lidar (short for light detection and ranging).

In a study published in Nature Photonics, the researchers describe how they structured the silicon in a way that used its infrared transparency to control, focus and harness the power of photons, the quirky particles that constitute light beams.

The team used a process called inverse design that Vuckovic’s lab has pioneered over the past decade. Inverse design relies on a powerful algorithm that drafts a blueprint for the actual photonic circuits that perform specific functions — in this case, shooting a laser beam out ahead of a car to locate objects in the road and routing the reflected light back to a detector. Based on the delay between when the light pulse is sent forward and when the beam reflects back to the detector, lidars measure the distance between car and objects.
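
The range measurement itself is plain time-of-flight arithmetic: the pulse travels to the object and back, so the distance is half the round-trip delay multiplied by the speed of light. A minimal example:

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to the target from the pulse's round-trip delay."""
    return C * round_trip_seconds / 2.0

# A 200-nanosecond round trip corresponds to roughly 30 m:
print(lidar_range(200e-9))  # ~29.98
```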

It took Vuckovic’s team two years to create the circuit layout for the lidar-on-a-chip prototype they built in the Stanford nanofabrication facility. Postdoctoral scholar Ki Youl Yang and PhD student Jinhie Skarda played key roles in that process, with crucial theoretical insights from City University of New York physicist Andrea Alù and CUNY postdoctoral scholar Michele Cotrufo.

Building this range-finding mechanism on a chip is just the first — though essential — step toward creating inexpensive lidars. The researchers are now working on the next milestone, ensuring that the laser beam can sweep in a circle without using expensive mechanical parts. Vuckovic estimates her lab is about three years away from building a prototype that would be ready for a road test.

“We are on a trajectory to build a lidar-on-a-chip that is cheap enough to help create a mass market for autonomous cars,” Vuckovic said.

Story Source:

Materials provided by Stanford School of Engineering. Original written by Tom Abate. Note: Content may be edited for style and length.

Apex.OS: An Open Source Operating System for Autonomous Cars

The facets of autonomous car development that automakers tend to get excited about are things like interpreting sensor data, decision making, and motion planning.

Unfortunately, if you want to make self-driving cars, there’s all kinds of other stuff that you need to get figured out first, and much of it is really difficult but also absolutely critical. Things like, how do you set up a reliable network inside of your vehicle? How do you manage memory and data recording and logging? How do you get your sensors and computers to all talk to each other at the same time? And how do you make sure it’s all stable and safe?

In robotics, the Robot Operating System (ROS) has offered an open-source solution for many of these challenges. ROS provides the groundwork for researchers and companies to build off of, so that they can focus on the specific problems that they’re interested in without having to spend time and money on setting up all that underlying software infrastructure first.

Apex.ai’s Apex.OS, which has its version 1.0 release today, extends this idea from robotics to autonomous cars. It promises to help autonomous carmakers shorten their development timelines, and if it has the same effect on autonomous cars as ROS has had on robotics, it could help accelerate the entire autonomous car industry.

For more about what this 1.0 software release offers, we spoke with Apex.ai CEO Jan Becker.

IEEE Spectrum: What exactly can Apex.OS do, and what doesn’t it do? 

Jan Becker: Apex.OS is a fork of ROS 2 that has been made robust and reliable so that it can be used for the development and deployment of highly safety-critical systems such as autonomous vehicles, robots, and aerospace applications. Apex.OS is API-compatible with ROS 2. In a nutshell, Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. The components enable customers to focus on building their specific applications without having to worry about message passing, reliable real-time execution, hardware integration, and more.

Apex.OS is not a full [self-driving software] stack. Apex.OS enables customers to build their full stack based on their needs. We have built an automotive-grade 3D point cloud/lidar object detection and tracking component, and we are in the process of building a lidar-based localizer, which is available as Apex.Autonomy. In addition, we are starting to work with other algorithmic component suppliers to integrate Apex.OS APIs into their software. These components make use of Apex.OS APIs but are available separately, which allows customers to assemble a customized full software stack from building blocks so that it exactly fits their needs. The algorithmic components reuse the open architecture currently being built in the open-source Autoware.Auto project.
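
Because Apex.OS is described as API-compatible with ROS 2, the day-to-day developer experience should resemble writing an ordinary ROS 2 node. The sketch below uses the standard ROS 2 Python client library (rclpy) rather than Apex.ai’s own code; the node name, topic, and message type are illustrative assumptions.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class ObstacleReporter(Node):
    """Minimal ROS 2-style node: publishes a status message at 10 Hz."""

    def __init__(self):
        super().__init__('obstacle_reporter')
        self.pub = self.create_publisher(String, 'obstacle_status', 10)
        self.timer = self.create_timer(0.1, self.tick)

    def tick(self):
        msg = String()
        msg.data = 'no obstacles detected'  # placeholder payload
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = ObstacleReporter()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```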

So if every autonomous vehicle company started using Apex.OS, those companies would still be able to develop different capabilities?

Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. Just as the iOS SDK lets iPhone app developers focus on their application, Apex.OS provides an SDK to developers of safety-critical mobility applications.

Every autonomous mobility system deployed into a public environment must be safe. We enable customers to focus on their application without having to worry about the safety of the underlying components. Organizations will differentiate themselves through performance, discrete features, and other product capabilities. By adopting Apex.OS, we enable them to focus on developing these differentiators. 

What’s the minimum viable vehicle that I could install Apex.OS on and have it drive autonomously? 

In terms of compute hardware, we showed Apex.OS running on a Renesas R-Car H3 and on a Quanta V3NP at CES 2020. The R-Car H3 contains just four ARM Cortex-A57 cores and four ARM Cortex-A53 cores and is the smallest ECU for which our customers have requested support. You can install Apex.OS on much smaller systems, but this is the smallest one we have tested extensively so far, and it is also the one powering our vehicle.

We are currently adding support for the Renesas R-Car V3H, which contains four ARM Cortex-A53 cores (and no ARM Cortex-A57 cores) and an additional image processing processor. 

You suggest that Apex.OS is also useful for other robots and drones, in addition to autonomous vehicles. Can you describe how Apex.OS would benefit applications in these spaces?

Apex.OS provides a software framework that enables reading, processing, and outputting data on embedded real-time systems used in safety-critical environments. That pertains to robotics and aerospace applications just as much as to automotive applications. We simply started with automotive applications because of the stronger market pull. 

Industrial robots today often run ROS for the perception system and a non-ROS embedded controller for highly accurate position control, because ROS cannot run the real-time controller with the necessary precision. Drones often run PX4 for the autopilot and ROS for the perception stack. Apex.OS combines the capabilities of ROS with the requirements of mobility systems, specifically regarding real-time execution, reliability, and the ability to run on embedded compute systems.

How will Apex contribute back to the open-source ROS 2 ecosystem that it’s leveraging within Apex.OS?

We have contributed back to the ROS 2 ecosystem from day one. Any and all bugs that we find in ROS 2 get fixed in ROS 2 and thereby contributed back to the open-source codebase. We also provide a significant amount of funding to Open Robotics to do this. In addition, we are on the ROS 2 Technical Steering Committee to provide input and guidance to make ROS 2 more useful for automotive applications. Overall we have a great deal of interest in improving ROS 2 not only because it increases our customer base, but also because we strive to be a good open-source citizen.

The features we keep in house pertain to making ROS 2 real-time, deterministic, tested, and certified on embedded hardware. Our goals are therefore somewhat orthogonal to those of an open-source project aiming to address as many applications as possible. As a result, we live in a healthy symbiosis with ROS 2.

[ Apex.ai ]

Test of Complex Autonomous Vehicle Designs

Autonomous vehicles (AVs) combine multiple sensors, computers, and communication technology to make driving safer and improve the driver’s experience. Learn about the design and test of the complex sensor and communication technologies being built into AVs from our white paper and posters.

Key points covered in our AV resources:

  • Comparison of dedicated short-range communications and C-V2X technologies
  • Definition of AV Levels 0 to 5
  • Snapshot of radar technology from 24 to 77 GHz

A Robot That Explains Its Actions Is a First Step Towards AI We Can (Maybe) Trust

For the most part, robots are a mystery to end users. And that’s part of the point: Robots are autonomous, so they’re supposed to do their own thing (presumably the thing that you want them to do) and not bother you about it. But as humans start to work more closely with robots, in collaborative tasks or social or assistive contexts, it’s going to be hard for us to trust them if their autonomy is such that we find it difficult to understand what they’re doing.

In a paper published in Science Robotics, researchers from UCLA have developed a robotic system that can generate different kinds of real-time, human-readable explanations about its actions, and then did some testing to figure out which of the explanations were the most effective at improving a human’s trust in the system. Does this mean we can totally understand and trust robots now? Not yet—but it’s a start.

Smart intersections could cut autonomous car congestion

In the not-so-distant future, city streets could be flooded with autonomous vehicles. Self-driving cars can move faster and travel closer together, allowing more of them to fit on the road — potentially leading to congestion and gridlock on city streets.

In a new study, Cornell researchers developed a first-of-its-kind model to control traffic at intersections, with the aim of increasing car capacity on urban streets, reducing congestion and minimizing accidents.

“For the future of mobility, so much attention has been paid to autonomous cars,” said Oliver Gao, professor of civil and environmental engineering and senior author of the study, which published in Transportation Research Part B.

“If you have all these autonomous cars on the road, you’ll see that our roads and our intersections could become the limiting factor,” Gao said. “In this paper we look at the interaction between autonomous cars and our infrastructure on the ground so we can unlock the real capacity of autonomous transportation.”

The researchers’ model allows groups of autonomous cars, known as platoons, to pass through one-way intersections without waiting, and the results of a microsimulation showed it increased the capacity of vehicles on city streets by up to 138% over a conventional traffic signal system, according to the study. The model assumes only autonomous cars are on the road; Gao’s team is addressing situations with a combination of autonomous and human-driven cars in future research.
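
The Cornell model itself is not reproduced here, but the basic idea of letting platoons reserve the intersection instead of waiting on a fixed signal cycle can be sketched as a first-come, first-served scheduler. The platoon representation, crossing time, and headway constants below are illustrative assumptions, not parameters from the study.

```python
# Toy first-come, first-served reservation of a one-way intersection by platoons.
# Each platoon is (arrival_time_s, num_vehicles); the constants are illustrative.
CROSS_TIME = 2.0   # seconds for one vehicle to clear the intersection
HEADWAY = 0.5      # extra seconds per additional vehicle in the platoon

def schedule(platoons):
    """Return non-overlapping (entry_time, exit_time) slots, one per platoon."""
    free_at = 0.0
    slots = []
    for arrival, size in sorted(platoons):
        entry = max(arrival, free_at)  # enter once both platoon and intersection are ready
        exit_ = entry + CROSS_TIME + HEADWAY * (size - 1)
        slots.append((entry, exit_))
        free_at = exit_
    return slots

print(schedule([(0.0, 4), (1.0, 3), (5.0, 2)]))
# [(0.0, 3.5), (3.5, 6.5), (6.5, 9.0)]
```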

Car manufacturers and researchers around the world are developing prototypes of self-driving cars, which are expected to be introduced by 2025. But until now, little research has focused on the infrastructure that will support these driverless cars.

Autonomous vehicles will be able to communicate with each other, offering opportunities for coordination and efficiency. The researchers’ model takes advantage of this capability, as well as smart infrastructure, in order to optimize traffic so cars can pass quickly and safely through intersections.

“Instead of having a fixed green or red light at the intersection, these cycles can be adjusted dynamically,” Gao said. “And this control can be adjusted to allow for platoons of cars to pass.”

Story Source:

Materials provided by Cornell University. Original written by Melanie Lefkowitz. Note: Content may be edited for style and length.

Enabling autonomous vehicles to see around corners

To improve the safety of autonomous systems, MIT engineers have developed a system that can sense tiny changes in shadows on the ground to determine if there’s a moving object coming around the corner.

Autonomous cars could one day use the system to quickly avoid a potential collision with another car or pedestrian emerging from around a building’s corner or from in between parked cars. In the future, robots that may navigate hospital hallways to make medication or supply deliveries could use the system to avoid hitting people.

In a paper being presented at next week’s International Conference on Intelligent Robots and Systems (IROS), the researchers describe successful experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways. When sensing and stopping for an approaching vehicle, the car-based system beats traditional LiDAR — which can only detect visible objects — by more than half a second.

That may not seem like much, but fractions of a second matter when it comes to fast-moving autonomous vehicles, the researchers say.

“For applications where robots are moving around environments with other moving objects or people, our method can give the robot an early warning that somebody is coming around the corner, so the vehicle can slow down, adapt its path, and prepare in advance to avoid a collision,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”

Currently, the system has only been tested in indoor settings. Robotic speeds are much lower indoors, and lighting conditions are more consistent, making it easier for the system to sense and analyze shadows.

Joining Rus on the paper are: first author Felix Naser SM ’19, a former CSAIL researcher; Alexander Amini, a CSAIL graduate student; Igor Gilitschenski, a CSAIL postdoc; recent graduate Christina Liao ’19; Guy Rosman of the Toyota Research Institute; and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.

Extending ShadowCam

For their work, the researchers built on their system, called “ShadowCam,” that uses computer-vision techniques to detect and classify changes to shadows on the ground. MIT professors William Freeman and Antonio Torralba, who are not co-authors on the IROS paper, collaborated on the earlier versions of the system, which were presented at conferences in 2017 and 2018.

For input, ShadowCam uses sequences of video frames from a camera targeting a specific area, such as the floor in front of a corner. It detects changes in light intensity over time, from image to image, that may indicate something moving away or coming closer. Some of those changes may be difficult to detect with, or invisible to, the naked eye, and depend on various properties of the object and environment. ShadowCam computes that information and classifies each image as containing a stationary object or a dynamic, moving one. If it classifies an image as dynamic, it reacts accordingly.

Adapting ShadowCam for autonomous vehicles required a few advances. The early version, for instance, relied on lining an area with augmented reality labels called “AprilTags,” which resemble simplified QR codes. Robots scan AprilTags to detect and compute their precise 3D position and orientation relative to the tag. ShadowCam used the tags as features of the environment to zero in on specific patches of pixels that may contain shadows. But modifying real-world environments with AprilTags is not practical.

The researchers developed a novel process that combines image registration and a new visual-odometry technique. Often used in computer vision, image registration essentially overlays multiple images to reveal variations in the images. Medical image registration, for instance, overlaps medical scans to compare and analyze anatomical differences.

Visual odometry, used for Mars Rovers, estimates the motion of a camera in real-time by analyzing pose and geometry in sequences of images. The researchers specifically employ “Direct Sparse Odometry” (DSO), which can compute feature points in environments similar to those captured by AprilTags. Essentially, DSO plots features of an environment on a 3D point cloud, and then a computer-vision pipeline selects only the features located in a region of interest, such as the floor near a corner. (Regions of interest were annotated manually beforehand.)

As ShadowCam takes input image sequences of a region of interest, it uses the DSO-image-registration method to overlay all the images from the same viewpoint of the robot. Even as the robot moves, it is able to zero in on the exact same patch of pixels where a shadow is located, helping it detect any subtle deviations between images.

Next is signal amplification, a technique introduced in the first paper. Pixels that may contain shadows get a boost in color that strengthens the weak shadow signal relative to the noise. This makes extremely weak signals from shadow changes far more detectable. If the boosted signal reaches a certain threshold — based partly on how much it deviates from other nearby shadows — ShadowCam classifies the image as “dynamic.” Depending on the strength of that signal, the system may tell the robot to slow down or stop.
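
A stripped-down version of that detect-amplify-threshold step might look like the following Python sketch. The gain, the threshold, and the use of a simple mean absolute difference are placeholder assumptions, and the registration pipeline is ignored entirely.

```python
import numpy as np

GAIN = 8.0        # illustrative amplification factor for the shadow signal
THRESHOLD = 3.0   # illustrative decision threshold on the amplified change

def classify_roi(prev_frame: np.ndarray, cur_frame: np.ndarray) -> str:
    """Classify a registered region of interest as 'dynamic' or 'static'
    based on the amplified frame-to-frame intensity change."""
    diff = cur_frame.astype(np.float32) - prev_frame.astype(np.float32)
    amplified = GAIN * diff                     # boost the weak shadow signal
    score = float(np.mean(np.abs(amplified)))   # aggregate change over the ROI
    return "dynamic" if score > THRESHOLD else "static"
```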

“By detecting that signal, you can then be careful. It may be a shadow of some person running from behind the corner or a parked car, so the autonomous car can slow down or stop completely,” Naser says.

Tag-free testing

In one test, the researchers evaluated the system’s performance in classifying moving or stationary objects using AprilTags and the new DSO-based method. An autonomous wheelchair steered toward various hallway corners while humans turned the corner into the wheelchair’s path. Both methods achieved the same 70-percent classification accuracy, indicating that the system no longer needs AprilTags.

In a separate test, the researchers implemented ShadowCam in an autonomous car in a parking garage, where the headlights were turned off, mimicking nighttime driving conditions. They compared car-detection times versus LiDAR. In an example scenario, ShadowCam detected the car turning around pillars about 0.72 seconds faster than LiDAR. Moreover, because the researchers had tuned ShadowCam specifically to the garage’s lighting conditions, the system achieved a classification accuracy of around 86 percent.

Next, the researchers are developing the system further to work in different indoor and outdoor lighting conditions. In the future, there could also be ways to speed up the system’s shadow detection and automate the process of annotating targeted areas for shadow sensing.

This work was funded by the Toyota Research Institute.

Platform for scalable testing of autonomous vehicle safety

In the race to manufacture autonomous vehicles (AVs), safety is crucial yet sometimes overlooked, as exemplified by recent headline-making accidents. Researchers at the University of Illinois at Urbana-Champaign are using artificial intelligence (AI) and machine learning to improve the safety of autonomous technology through both software and hardware advances.

“Using AI to improve autonomous vehicles is extremely hard because of the complexity of the vehicle’s electrical and mechanical components, as well as variability in external conditions, such as weather, road conditions, topography, traffic patterns, and lighting,” said Ravi Iyer.

“Progress is being made, but safety continues to be a significant concern.”

The group has developed a platform that enables companies to more quickly and cost-effectively address safety in the complex and ever-changing environment of autonomous technology. They are collaborating with many companies in the Bay Area, including Samsung, NVIDIA, and a number of start-ups.

“We are seeing a stakeholder-wide effort across industries and universities with hundreds of startups and research teams, and are tackling a few challenges in our group,” said Saurabh Jha, a doctoral candidate in computer science who is leading student efforts on the project. “Solving this challenge requires a multidisciplinary effort across science, technology, and manufacturing.”

One reason this work is so challenging is that AVs are complex systems that use AI and machine learning to integrate mechanical, electronic, and computing technologies to make real-time driving decisions. A typical AV is a mini-supercomputer on wheels, with more than 50 processors and accelerators running more than 100 million lines of code to support computer vision, planning, and other machine learning tasks.

As expected, there are concerns with the sensors and the autonomous driving stack (computing software and hardware) of these vehicles. When a car is traveling 70 mph down a highway, failures can be a significant safety risk to drivers.

“If a driver of a typical car senses a problem such as vehicle drift or pull, the driver can adjust his or her behavior and guide the car to a safe stopping point,” Jha explained. “However, the behavior of the autonomous vehicle may be unpredictable in such a scenario unless the autonomous vehicle is explicitly trained for such problems. In the real world, there are an infinite number of such cases.”

Traditionally, when a person has trouble with software on a computer or smartphone, the most common IT response is to turn the device off and back on again. However, this type of fix is not advisable for AVs, as every millisecond impacts the outcome and a slow response could lead to death. Safety concerns about such AI-based systems have increased among stakeholders in the last couple of years due to various accidents caused by AVs.

“Current regulations require companies like Uber and Waymo, which test their vehicles on public roads, to report annually to the California DMV on how safe their vehicles are,” said Subho Banerjee, a CSL and computer science graduate student. “We wanted to understand common safety concerns, how the cars behaved, and what the ideal safety metric is for understanding how well they are designed.”

The group analyzed all the safety reports submitted from 2014 to 2017, covering 144 AVs driving a cumulative 1,116,605 autonomous miles. They found that, for the same number of miles driven, human-driven cars were up to 4,000 times less likely than AVs to have an accident. In other words, the autonomous technology failed to handle a situation appropriately and disengaged at an alarming rate, often relying on the human driver to take over.
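
The underlying comparison is simple rate arithmetic: incidents divided by miles driven, computed separately for AVs and human drivers. The incident counts in the sketch below are placeholders chosen only to show the calculation, not figures from the DMV reports.

```python
def incidents_per_million_miles(incidents: int, miles: float) -> float:
    """Normalize an incident count by exposure (miles driven)."""
    return incidents / miles * 1_000_000

# Placeholder counts, purely to illustrate how the rates are compared:
av_rate = incidents_per_million_miles(incidents=2500, miles=1_116_605)
human_rate = incidents_per_million_miles(incidents=2, miles=3_000_000)
print(av_rate / human_rate)  # how many times higher the AV incident rate is
```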

The problem researchers and companies have when it comes to improving those numbers is that until an autonomous vehicle system has a specific issue, it’s difficult to train the software to overcome it.

Further, errors in the software and hardware stacks manifest as safety-critical issues only under certain driving scenarios. In other words, tests performed on AVs on highways or on empty or less crowded roadways may not be sufficient, because safety violations under software or hardware faults are rare.

When errors do occur, they take place after hundreds of thousands of miles have been driven. The work that goes into testing these AVs for hundreds of thousands of miles takes considerable time, money, and energy, making the process extremely inefficient. The team is using computer simulations and artificial intelligence to speed up this process.

“We inject errors in the software and hardware stack of the autonomous vehicles in computer simulations and then collect data on the autonomous vehicle responses to these problems,” said Jha. “Unlike humans, AI technology today cannot reason about errors that may occur in different driving scenarios. Therefore, vast amounts of data are needed to teach the software to take the right action in the face of software or hardware problems.”
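
The fault-injection idea can be pictured as wrapping a module’s simulated output and perturbing it according to a fault model. The sketch below assumes a generic list of perception detections and a simple dropout-plus-noise fault model; it is an illustration, not the Illinois team’s tooling.

```python
import random

def inject_faults(detections, drop_prob=0.1, noise_m=0.5, rng=random):
    """Apply a toy fault model to simulated perception output:
    - randomly drop detections (sensor/compute failure)
    - perturb surviving positions with bounded noise (corrupted data)
    detections: list of (x, y) obstacle positions in meters."""
    faulty = []
    for (x, y) in detections:
        if rng.random() < drop_prob:
            continue                      # detection silently lost
        faulty.append((x + rng.uniform(-noise_m, noise_m),
                       y + rng.uniform(-noise_m, noise_m)))
    return faulty

# Run the same simulated scenario with and without faults and compare outcomes.
clean = [(10.0, 0.5), (25.0, -1.2)]
print(inject_faults(clean, drop_prob=0.5))
```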

The research group is currently building techniques and tools to generate driving conditions and issues that maximally impact AV safety. Using their technique, they can find a large number of safety-critical scenarios where errors can lead to accidents without having to enumerate all possibilities on the road — a huge savings of time and money.

During testing of one openly available AV technology, Apollo from Baidu, the team found more than 500 examples of when the software failed to handle an issue and the failure led to an accident. Results like these are getting the group’s work noticed in the industry. They are currently working on a patent for their testing technology, and plan to deploy it soon. Ideally, the researchers hope companies use this new technology to simulate the identified issue and fix the problems before the cars are deployed.

“The safety of autonomous vehicles is critical to their success in the marketplace and in society,” said Steve Keckler, vice president of Architecture Research for NVIDIA. “We expect that the technologies being developed by the Illinois research team will make it easier for engineers to develop safer automotive systems at lower cost. NVIDIA is excited about our collaboration with Illinois and is pleased to support their work.”

A Path Towards Reasonable Autonomous Weapons Regulation

Editor’s Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument to be made that autonomous weapons could potentially make conflicts less harmful, especially to non-combatants. Despite an increasing amount of international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering. It’s collaborations like this that could be the best way to establish a reasonable path forward on such a contentious issue, and with the permission of the authors, we’re excited to be able to share this paper (originally posted on Georgia Tech’s Mobile Robot Lab website) with you in its entirety.