Safety software for autonomous vehicles

Before autonomous vehicles participate in road traffic, they must demonstrate conclusively that they do not pose a danger to others. New software developed at the Technical University of Munich (TUM) prevents accidents by predicting different variants of a traffic situation every millisecond.

A car approaches an intersection. Another vehicle darts out of the cross street, but it is not yet clear whether it will turn right or left. At the same time, a pedestrian steps into the lane directly in front of the car, and there is a cyclist on the other side of the street. Experienced road users can generally assess the movements of the other traffic participants correctly.

“These kinds of situations present an enormous challenge for autonomous vehicles controlled by computer programs,” explains Matthias Althoff, Professor of Cyber-Physical Systems at TUM. “But autonomous driving will only gain acceptance among the general public if you can ensure that the vehicles will not endanger other road users — no matter how confusing the traffic situation.”

Algorithms that peer into the future

The ultimate goal when developing software for autonomous vehicles is to ensure that they will not cause accidents. Althoff, who is a member of the Munich School of Robotics and Machine Intelligence at TUM, and his team have now developed a software module that continuously analyzes and predicts events while driving. Vehicle sensor data are recorded and evaluated every millisecond. The software can calculate all possible movements for every traffic participant — provided they adhere to the road traffic regulations — allowing the system to look three to six seconds into the future.

Based on these future scenarios, the system determines a variety of movement options for the vehicle. At the same time, the program calculates potential emergency maneuvers in which the vehicle can be moved out of harm’s way by accelerating or braking without endangering others. The autonomous vehicle may only follow routes that are free of foreseeable collisions and for which an emergency maneuver option has been identified.
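In outline, the safety check acts as a filter over the motion planner's candidate trajectories. The sketch below is an illustrative reconstruction of that logic in Python, not TUM's actual software; the one-dimensional interval occupancy sets, the helper names and the braking parameters are simplifying assumptions made for the example.

```python
# Illustrative sketch (not the TUM code): a candidate plan is accepted only if
# it avoids every predicted occupancy set of other road users AND a full-braking
# emergency maneuver from the end of the plan also stays collision-free.

def is_collision_free(ego_fronts, occupancy_sets, ego_length=4.0):
    """ego_fronts[k]: ego front position (m) at step k; occupancy_sets[k]:
    (low, high) interval another road user could occupy at step k."""
    for front, (low, high) in zip(ego_fronts, occupancy_sets):
        if front >= low and front - ego_length <= high:
            return False  # foreseeable collision
    return True

def has_emergency_maneuver(ego_fronts, occupancy_sets,
                           speed=5.0, decel=8.0, dt=0.5, ego_length=4.0):
    """Crude stand-in for the verified fallback: brake to a stop after the plan."""
    pos, k = ego_fronts[-1], len(ego_fronts) - 1
    while speed > 0.0 and k + 1 < len(occupancy_sets):
        speed = max(0.0, speed - decel * dt)
        pos += speed * dt
        k += 1
        low, high = occupancy_sets[k]
        if pos >= low and pos - ego_length <= high:
            return False
    return True

def accept_plan(ego_fronts, occupancy_sets):
    return (is_collision_free(ego_fronts, occupancy_sets)
            and has_emergency_maneuver(ego_fronts, occupancy_sets))

# Example: an 8-step plan at 5 m/s against a pedestrian whose predicted
# occupancy set widens around x = 12 m by 1 m per step on each side.
plan = [2.5 * k for k in range(1, 9)]
pedestrian = [(12.0 - k, 12.0 + k) for k in range(1, 9)]
print(accept_plan(plan, pedestrian))   # False: the plan crosses the pedestrian's set
```

In the real system a check of this kind is repeated every millisecond against freshly predicted occupancy sets.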

Streamlined models for swift calculations

This kind of detailed traffic forecasting was previously considered too time-consuming and thus impractical. The Munich research team has now shown that real-time data analysis with simultaneous simulation of future traffic events is not only theoretically viable but also delivers reliable results.

The quick calculations are made possible by simplified dynamic models. So-called reachability analysis is used to calculate the potential future positions a car or a pedestrian might occupy. When all characteristics of the road users are taken into account, the calculations become prohibitively time-consuming. Althoff and his team therefore work with simplified models that allow a greater range of motion than their real counterparts yet are mathematically easier to handle. Because of this extra freedom of movement, the simplified models cover a larger set of possible positions, one that contains, as a subset, every position an actual road user is expected to reach.
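As a rough, hypothetical illustration of the principle (not the TUM implementation), a one-dimensional reachability analysis with a deliberately simplified model can be written with interval arithmetic: if a pedestrian's walking speed is bounded, the set of positions they could occupy simply grows with the prediction horizon, and any plan that never intersects that set is clear of them under the model's assumptions. All numbers below are arbitrary.

```python
# Toy 1-D reachability analysis with an over-approximating point model: a road
# user at position x0 whose speed is bounded by v_max can, after t seconds, be
# anywhere in [x0 - v_max*t, x0 + v_max*t]. Real tools use richer set
# representations, but the over-approximation principle is the same.

def reachable_interval(x0, v_max, t):
    """Over-approximate the positions reachable within t seconds."""
    return (x0 - v_max * t, x0 + v_max * t)

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# Pedestrian 12 m ahead of the ego vehicle, walking speed at most 2 m/s,
# predicted over a 6-second horizon in 0.5 s steps (assumed numbers).
for step in range(1, 13):
    t = 0.5 * step
    ped = reachable_interval(12.0, 2.0, t)
    ego_front = 5.0 * t                      # ego driving at a constant 5 m/s
    ego = (ego_front - 4.0, ego_front)       # 4 m vehicle footprint
    status = "conflict possible" if intervals_overlap(ego, ped) else "clear"
    print(f"t = {t:3.1f} s   pedestrian in [{ped[0]:5.1f}, {ped[1]:5.1f}] m   {status}")
```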

Real traffic data for a virtual test environment

For their evaluation, the computer scientists created a virtual model based on real data they had collected during test drives with an autonomous vehicle in Munich. This allowed them to craft a test environment that closely reflects everyday traffic scenarios. “Using the simulations, we were able to establish that the safety module does not lead to any loss of performance in terms of driving behavior, the predictive calculations are correct, accidents are prevented, and in emergency situations the vehicle is demonstrably brought to a safe stop,” Althoff sums up.

The computer scientist emphasizes that the new safety software could simplify the development of autonomous vehicles because it can be combined with all standard motion control programs.

Story Source:

Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.


Safer, more comfortable soldier uniforms are in the works

Uniforms of U.S. Army soldiers must meet a long list of challenging requirements. They need to feel comfortable in all climates, be durable through multiple washings, resist fires and ward off insects, among other things. Existing fabrics don’t check all of these boxes, so scientists have come up with a novel way of creating a flame-retardant, insect-repellent fabric that uses nontoxic substances.

The researchers will present their results today at the American Chemical Society (ACS) Fall 2020 Virtual Meeting & Expo. 

“The Army presented to us this interesting and challenging requirement for multifunctionality,” says study leader Ramaswamy Nagarajan, Ph.D. “There are flame-resistant Army combat uniforms made of various materials that meet flame retardant requirements. But they are expensive, and there are problems with dyeing the fabrics. Also, some of the raw materials are not produced in the U.S. So, our goal was to find an existing material that we could modify to make it flame retardant and insect repellent, yet still have a fabric that a soldier would want to wear.”

Because Nagarajan’s research group focuses on sustainable green chemistry, the team sought nontoxic chemicals and processes for this study. They chose to modify a commercially available 50-50 nylon-cotton blend, a relatively inexpensive, durable and comfortable fabric produced in the U.S. The material is used in a wide range of civilian and military applications because the nylon is strong and resistant to abrasion, whereas the cotton is comfortable to wear. But this type of textile doesn’t inherently repel bugs and is associated with a high fire risk.

“We started with making the fabric fire retardant, focusing on the cotton part of the blend,” explains Sourabh Kulkarni, a Ph.D. student who works with Nagarajan at the University of Massachusetts Lowell Center for Advanced Materials. “Cotton has a lot of hydroxyl groups (oxygen and hydrogen bonded together) on its surface, which can be activated by readily available chemicals to link with phosphorus-containing compounds that impart flame retardancy.” For their phosphorus-containing compound, they chose phytic acid, an abundant, nontoxic substance found in seeds, nuts and grains.

Next, the researchers tackled the issue of making the material repel insects so that soldiers wouldn’t have to spray themselves repeatedly or carry an additional item in their packs. The team took permethrin, an everyday nontoxic insect repellent, and attached it to the fabric using plasma-assisted deposition in collaboration with a local company, LaunchBay. Through trial and error, the researchers eventually got both the phytic acid and permethrin to link to the fabric’s surface molecules.

Using methods to measure heat release capacity and total heat release, as well as a vertical flame test, they found that the modified material performed at least 20% better than the untreated material. They also used a standard insect repellency test with live mosquitoes and found that the efficacy was greater than 98%. Finally, the fabric remained “breathable” after treatment as determined by air permeability studies.

“We are very excited,” Nagarajan says, “because we’ve shown we can modify this fabric to be flame retardant and insect repellent — and still be fairly durable and comfortable. We’d like to use a substance other than phytic acid that would contain more phosphorus and therefore impart a greater level of flame retardancy, better durability and still be nontoxic to a soldier’s skin. Having shown that we can modify the fabric, we would also like to see if we can attach antimicrobials to prevent infections from bacteria, as well as dyes that remain durable.”

Story Source:

Materials provided by American Chemical Society. Note: Content may be edited for style and length.


Why are plants green?

When sunlight shining on a leaf changes rapidly, plants must protect themselves from the ensuing sudden surges of solar energy. To cope with these changes, photosynthetic organisms — from plants to bacteria — have developed numerous tactics. Scientists have been unable, however, to identify the underlying design principle.

An international team of scientists, led by physicist Nathaniel M. Gabor at the University of California, Riverside, has now constructed a model that reproduces a general feature of photosynthetic light harvesting, observed across many photosynthetic organisms.

Light harvesting is the collection of solar energy by protein-bound chlorophyll molecules. In photosynthesis — the process by which green plants and some other organisms use sunlight to synthesize foods from carbon dioxide and water — light energy harvesting begins with sunlight absorption.

The researchers’ model borrows ideas from the science of complex networks, a field of study that explores efficient operation in cellphone networks, brains, and the power grid. The model describes a simple network that is able to input light of two different colors, yet output a steady rate of solar power. This unusual choice of only two inputs has remarkable consequences.

“Our model shows that by absorbing only very specific colors of light, photosynthetic organisms may automatically protect themselves against sudden changes — or ‘noise’ — in solar energy, resulting in remarkably efficient power conversion,” said Gabor, an associate professor of physics and astronomy, who led the study appearing today in the journal Science. “Green plants appear green and purple bacteria appear purple because only specific regions of the spectrum from which they absorb are suited for protection against rapidly changing solar energy.”
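The effect can be illustrated with a deliberately crude toy calculation (not the published model): give a spectrum a randomly jittering peak and compare the output noise of two narrow absorption channels placed on the same slope of that peak against two channels placed symmetrically on opposite slopes, where the fluctuations largely cancel. All numbers below are arbitrary assumptions made for the illustration.

```python
# Toy model: two narrow absorbers sampling a spectrum whose peak position
# fluctuates. Channels on opposite slopes see anti-correlated fluctuations,
# so their summed output is much quieter than channels on the same slope.
import math
import random

LAMBDA0, SIGMA = 550.0, 60.0   # assumed peak position and width (nm)

def irradiance(lam, shift):
    """Idealized spectral irradiance with a randomly shifted peak."""
    return math.exp(-((lam - (LAMBDA0 + shift)) ** 2) / (2.0 * SIGMA ** 2))

def output_noise(lam_a, lam_b, trials=20000, shift_rms=10.0):
    """Standard deviation of the summed power absorbed by two narrow channels."""
    totals = []
    for _ in range(trials):
        shift = random.gauss(0.0, shift_rms)
        totals.append(irradiance(lam_a, shift) + irradiance(lam_b, shift))
    mean = sum(totals) / len(totals)
    return math.sqrt(sum((x - mean) ** 2 for x in totals) / len(totals))

print("channels on the same slope:  ", round(output_noise(LAMBDA0 + 40, LAMBDA0 + 80), 4))
print("channels on opposite slopes: ", round(output_noise(LAMBDA0 - 40, LAMBDA0 + 40), 4))
```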

Gabor first began thinking about photosynthesis research more than a decade ago, when he was a doctoral student at Cornell University. He wondered why plants rejected green light, the most intense solar light. Over the years, he worked with physicists and biologists worldwide to learn more about statistical methods and the quantum biology of photosynthesis.

Richard Cogdell, a botanist at the University of Glasgow in the United Kingdom and a coauthor on the research paper, encouraged Gabor to extend the model to include a wider range of photosynthetic organisms that grow in environments where the incident solar spectrum is very different.

“Excitingly, we were then able to show that the model worked in other photosynthetic organisms besides green plants, and that the model identified a general and fundamental property of photosynthetic light harvesting,” he said. “Our study shows how, by choosing where you absorb solar energy in relation to the incident solar spectrum, you can minimize the noise on the output — information that can be used to enhance the performance of solar cells.”

Coauthor Rienk van Grondelle, an influential experimental physicist at Vrije Universiteit Amsterdam in the Netherlands who works on the primary physical processes of photosynthesis, said the team found that the absorption spectra of certain photosynthetic systems select spectral excitation regions that cancel the noise and maximize the energy stored.

“This very simple design principle could also be applied in the design of human-made solar cells,” said van Grondelle, who has vast experience with photosynthetic light harvesting.

Gabor explained that plants and other photosynthetic organisms have a wide variety of tactics to prevent damage due to overexposure to the sun, ranging from molecular mechanisms of energy release to physical movement of the leaf to track the sun. Plants have even developed effective protection against UV light, much as sunscreen does.

“In the complex process of photosynthesis, it is clear that protecting the organism from overexposure is the driving factor in successful energy production, and this is the inspiration we used to develop our model,” he said. “Our model incorporates relatively simple physics, yet it is consistent with a vast set of observations in biology. This is remarkably rare. If our model holds up to continued experiments, we may find even more agreement between theory and observations, giving rich insight into the inner workings of nature.”

To construct the model, Gabor and his colleagues applied straightforward physics of networks to the complex details of biology, and were able to make clear, quantitative, and generic statements about highly diverse photosynthetic organisms.

“Our model is the first hypothesis-driven explanation for why plants are green, and we give a roadmap to test the model through more detailed experiments,” Gabor said.

Photosynthesis may be thought of as a kitchen sink, Gabor added, where a faucet lets water flow in and a drain allows it to flow out. If the flow into the sink is much bigger than the outward flow, the sink overflows and the water spills all over the floor.

“In photosynthesis, if the flow of solar power into the light harvesting network is significantly larger than the flow out, the photosynthetic network must adapt to reduce the sudden overflow of energy,” he said. “When the network fails to manage these fluctuations, the organism attempts to expel the extra energy. In doing so, the organism undergoes oxidative stress, which damages cells.”

The researchers were surprised by how general and simple their model is.

“Nature will always surprise you,” Gabor said. “Something that seems so complicated and complex might operate based on a few basic rules. We applied the model to organisms in different photosynthetic niches and continue to reproduce accurate absorption spectra. In biology, there are exceptions to every rule, so much so that finding a rule is usually very difficult. Surprisingly, we seem to have found one of the rules of photosynthetic life.”

Gabor noted that over the last several decades, photosynthesis research has focused mainly on the structure and function of the microscopic components of the photosynthetic process.

“Biologists know well that biological systems are not generally finely tuned given the fact that organisms have little control over their external conditions,” he said. “This contradiction has so far been unaddressed because no model exists that connects microscopic processes with macroscopic properties. Our work represents the first quantitative physical model that tackles this contradiction.”

Next, supported by several recent grants, the researchers will design a novel microscopy technique to test their ideas and advance the technology of photo-biology experiments using quantum optics tools.

“There’s a lot out there to understand about nature, and it only looks more beautiful as we unravel its mysteries,” Gabor said.


As many as six billion Earth-like planets in our galaxy, according to new estimates

To be considered Earth-like, a planet must be rocky, roughly Earth-sized and orbiting a Sun-like (G-type) star. It also has to orbit in the habitable zone of its star — the range of distances from a star in which a rocky planet could host liquid water, and potentially life, on its surface.

“My calculations place an upper limit of 0.18 Earth-like planets per G-type star,” says UBC researcher Michelle Kunimoto, co-author of the new study in The Astronomical Journal. “Estimating how common different kinds of planets are around different stars can provide important constraints on planet formation and evolution theories, and help optimize future missions dedicated to finding exoplanets.”

According to UBC astronomer Jaymie Matthews: “Our Milky Way has as many as 400 billion stars, with seven per cent of them being G-type. That means less than six billion stars may have Earth-like planets in our Galaxy.”
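The headline number follows from straightforward arithmetic on the figures quoted above:

```python
# Back-of-the-envelope check of the quoted estimate.
stars_in_milky_way = 400e9        # "as many as 400 billion stars"
g_type_fraction = 0.07            # seven per cent are G-type
earthlike_per_g_star = 0.18       # Kunimoto's upper limit

print(stars_in_milky_way * g_type_fraction * earthlike_per_g_star)
# about 5.0e9, i.e. roughly five billion, consistent with "less than six billion"
```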

Previous estimates of the frequency of Earth-like planets range from roughly 0.02 potentially habitable planets per Sun-like star, to more than one per Sun-like star.

Typically, planets like Earth are more likely to be missed by a planet search than other types, as they are so small and orbit so far from their stars. That means that a planet catalogue represents only a small subset of the planets that are actually in orbit around the stars searched. Kunimoto used a technique known as ‘forward modelling’ to overcome these challenges.

“I started by simulating the full population of exoplanets around the stars Kepler searched,” she explained. “I marked each planet as ‘detected’ or ‘missed’ depending on how likely it was my planet search algorithm would have found them. Then, I compared the detected planets to my actual catalogue of planets. If the simulation produced a close match, then the initial population was likely a good representation of the actual population of planets orbiting those stars.”
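A heavily simplified sketch of this forward-modelling loop looks something like the following; the detection model, planet parameters and population ranges here are invented placeholders, and Kunimoto's actual analysis is far more detailed.

```python
# Forward modelling in miniature: simulate a trial planet population, pass it
# through a (hypothetical) detection model, and compare the resulting simulated
# catalogue with the real one. The trial occurrence rates would then be adjusted
# until the two catalogues match.
import random

def detection_probability(radius_earths, period_days):
    """Placeholder model: small, long-period planets are harder to detect."""
    p = min(1.0, radius_earths / 4.0) * min(1.0, 50.0 / period_days)
    return max(0.0, min(1.0, p))

def simulate_survey(population):
    """Mark each simulated planet as 'detected' or 'missed'."""
    return [planet for planet in population
            if random.random() < detection_probability(*planet)]

trial_population = [(random.uniform(0.5, 4.0), random.uniform(1.0, 400.0))
                    for _ in range(10000)]          # (radius, orbital period)
simulated_catalogue = simulate_survey(trial_population)
print(f"{len(simulated_catalogue)} of {len(trial_population)} simulated planets detected")
```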

Kunimoto’s research also shed more light on one of the most outstanding questions in exoplanet science today: the ‘radius gap’ of planets. The radius gap demonstrates that it is uncommon for planets with orbital periods less than 100 days to have a size between 1.5 and two times that of Earth. She found that the radius gap exists over a much narrower range of orbital periods than previously thought. Her observational results can provide constraints on planet evolution models that explain the radius gap’s characteristics.

Previously, Kunimoto searched archival data from 200,000 stars of NASA’s Kepler mission. She discovered 17 new planets outside of the Solar System, or exoplanets, in addition to recovering thousands of already known planets.

Story Source:

Materials provided by University of British Columbia. Note: Content may be edited for style and length.


Countries must work together on CO2 removal to avoid dangerous climate change

The Paris Agreement lays out national quotas on CO2 emissions but not removal, and that must be urgently addressed, say the authors of a new study.

The Paris Agreement aims to keep global temperature rise this century well below 2°C above pre-industrial levels and to pursue efforts to limit it to 1.5°C. Reaching these targets will require mitigation — lowering the carbon dioxide (CO2) emitted through changes such as increased use of renewable energy sources, and removal of CO2 from the atmosphere through measures such as reforestation and carbon capture and storage.

However, while countries signed up to the Paris Agreement have individual quotas they need to meet in terms of mitigation and have individual plans for doing so, there are no agreed national quotas for CO2 removal.

Now, in a paper published today in Nature Climate Change, an international group of researchers have argued that to meet the Paris Agreement’s targets, CO2 removal quotas cannot be allocated in such a way that any one country can fulfil its obligations alone.

Cross-border cooperation

The team, from Imperial College London, the University of Girona, ETH Zürich and the University of Cambridge, say countries need to start working together now to make sure enough CO2 is removed in a fair and equitable way. This should involve deciding how quotas might be allocated fairly and devising a system where countries that cannot fulfil their obligations alone can trade with countries with greater capacity to remove CO2.

Co-author Dr Niall Mac Dowell, from the Centre for Environmental Policy and the Centre for Process Systems Engineering at Imperial, said: “Carbon dioxide removal is necessary to meet climate targets, since we have so far not done enough to mitigate our emissions. Both will be necessary going forward, but the longer we wait to start removing CO2 on a large scale, the more we will have to do.

“It is imperative that nations have these conversations now, to determine how quotas could be allocated fairly and how countries could meet those quotas via cross-border cooperation. It will work best if we all work together.”

Co-author Dr David Reiner, from Judge Business School at the University of Cambridge, added: “Countries such as the UK and France have begun to adopt binding ‘net-zero targets’ and whereas there has been extensive focus on greenhouse gas emissions and emissions reductions, meeting these targets will require greater attention to the negative emissions or carbon dioxide removal side of the equation.”

Allocating quotas

A critical element in any negotiations will be to determine the fairest way to allocate quotas to different nations. Different methods have been used for determining previous quotas, such as the ability of a country to pay and its historic culpability (how much CO2 it has emitted), with a blend of methods often used implicitly or explicitly in any final agreement.
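As a purely illustrative toy (hypothetical numbers, not the study's data, countries or results), allocating a shared removal quota under these principles might look like this:

```python
# Toy allocation of a shared CO2-removal quota under different fairness rules.
total_removal_gt = 10.0  # assumed total to be removed (Gt CO2), shared by three countries

countries = {
    #  name        share of GDP   share of historical emissions
    "Country A": {"gdp": 0.50, "historic": 0.20},
    "Country B": {"gdp": 0.30, "historic": 0.50},
    "Country C": {"gdp": 0.20, "historic": 0.30},
}

def allocate(weight_ability_to_pay):
    """Blend 'ability to pay' (GDP share) with 'historic culpability'."""
    w = weight_ability_to_pay
    return {name: round(total_removal_gt * (w * c["gdp"] + (1 - w) * c["historic"]), 2)
            for name, c in countries.items()}

for w in (1.0, 0.0, 0.5):   # pure ability-to-pay, pure culpability, 50/50 blend
    print(f"weight on ability to pay = {w}: {allocate(w)}")
```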

The team modelled several of these different methods and applied them to countries across Europe. While the quotas varied significantly, they found that only a handful of countries could meet any of the quotas using only their own resources.

Co-lead author Dr Ángel Galán-Martín, from ETH Zürich, said: “The exercise of allocating CO2 removal quotas may help to break the current impasse, by incentivising countries to align their future national pledges with the expectations emerging from the fairness principles.”

Carbon dioxide removal can be achieved in several ways. Reforestation uses trees as natural absorbers of atmospheric CO2 but takes time to reach its full potential as the trees grow. Carbon capture and storage (CCS) takes CO2 out of the atmosphere and stores it in underground geological formations.

CCS is usually coupled with a fossil fuel power station to take the CO2 out of the emissions before they reach the atmosphere. However, it can also be coupled to bioenergy — growing crops to burn for fuel. These systems have the double benefit of the crops removing CO2 from the atmosphere, and the CCS capturing any CO2 from the power station before it is released.

Beginning the process

However, different countries have varying abilities to deploy these CO2 removal strategies. For example, small but rich countries like Luxembourg might incur a heavy CO2 removal burden but not have the geological capacity to implement large-scale CCS or have the space to plant enough trees or bioenergy crops.

The authors therefore suggest, after quotas have been determined, that a system of trading quotas could be established. For example, the UK has abundant space for CCS thanks to favourable geological formations in the North Sea, so could sell some of its capacity to other countries.

This system would take a while to set up, so the authors urge nations to begin the process now. Co-lead author Dr Carlos Pozo from the University of Girona, said: “By 2050, the world needs to be carbon neutral — taking out of the atmosphere as much CO2 as it puts in. To this end, a CO2 removal industry needs to be rapidly scaled up, and that begins now, with countries looking at their responsibilities and their capacity to meet any quotas.

“There are technological solutions ready to be deployed. Now it is time for international agreements to get the ball rolling so we can start making serious progress towards our climate goals.”


Free Internet access should be a basic human right: Study

Free internet access must be considered a human right, as people unable to get online — particularly in developing countries — lack meaningful ways to influence the global players shaping their everyday lives, according to a new study.

As political engagement increasingly takes place online, basic freedoms that many take for granted including free expression, freedom of information and freedom of assembly are undermined if some citizens have access to the internet and others do not.

New research reveals that the internet could be a key way of protecting other basic human rights such as life, liberty, and freedom from torture — a means of enabling billions of people to lead ‘minimally decent lives’.

Dr. Merten Reglitz, Lecturer in Global Ethics at the University of Birmingham, has published his findings — the first study of its kind — in the Journal of Applied Philosophy.

“Internet access is no luxury, but instead a moral human right and everyone should have unmonitored and uncensored access to this global medium — provided free of charge for those unable to afford it,” commented Dr Reglitz.

“Without such access, many people lack a meaningful way to influence and hold accountable supranational rule-makers and institutions. These individuals simply don’t have a say in the making of the rules they must obey and which shape their life chances.”

He added that exercising free speech and obtaining information now depend heavily on having internet access. Much of today’s political debate takes place online and politically relevant information is shared on the internet, meaning the relative value these freedoms hold for people ‘offline’ has decreased.

Dr. Reglitz’s research attributes to the internet unprecedented possibilities for protecting basic human rights to life, liberty and bodily integrity.

Whilst acknowledging that being online does not guarantee these rights, he cites examples of internet engagement that helped hold governments and institutions to account. These examples include:

  • The ‘Arab Spring’ — new ways of global reporting on government atrocities.
  • Documenting unjustified police violence against African Americans in the US.
  • #MeToo campaign — helping to ‘out’ sexual harassment of women by powerful men.

Dr. Reglitz defines ‘moral human rights’ as based on universal interests essential for a ‘minimally decent life’. They must also be of such fundamental importance that if a nation is unwilling or unable to uphold these rights, the international community must step in.

The study points to a number of important political institutions which have committed to ensuring universal access for their populations, convinced that this goal is affordable:

  • The Indian state of Kerala has declared universal internet access a human right and aims to provide it for its 35 million people by 2019.
  • The European Union has launched the WiFi4EU initiative to provide ‘every European village and city with free wireless internet access around main centres of public life by 2020’.
  • Global internet access is part of the UN Sustainable Development Goals, with the UN demanding states help to deliver universal Internet access in developing nations.

Dr Reglitz outlines the size of the challenge posed in providing universal internet access, noting that the UN’s International Telecommunication Union estimated that, by the end of 2018, 51 percent of the world’s population of 7 billion people had access to the Internet.

Many people in poorer parts of the world are still without internet access, but their number is decreasing as technology becomes cheaper. However, internet expansion has slowed in recent years, suggesting universal access will not occur without intentional promotion.

“Universal internet access need not cost the earth — accessing politically important opportunities such as blogging, obtaining information, joining virtual groups, or sending and receiving emails does not require the latest information technology,” commented Dr Reglitz.

“Web-capable phones allow people to access these services and public internet provision, such as public libraries, can help get people online where individual domestic access is initially too expensive.”

He added that the human right to internet access is similar to the global right to health, which cannot demand the highest possible standard of medical treatment everywhere, as many states are too poor to provide such services and would thus face impossible demands.

Instead, poor states are called upon to provide basic medical services and work toward providing higher quality health care delivery. Similarly, such states should initially offer locations with public Internet access and develop IT infrastructure that increases access.

According to the World Wide Web Foundation, an NGO founded by World Wide Web inventor Tim Berners-Lee, ‘affordability’ remains one of the most significant, but solvable, obstacles to universal access.

For the Foundation, internet access is affordable if one gigabyte of data costs no more than two percent of average monthly income — currently some 2.3 billion people are without affordable Internet access.
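The Foundation's benchmark reduces to a one-line check; the prices and income used below are made-up examples.

```python
# "Affordable" here means 1 GB of data costs no more than 2% of average monthly income.
def is_affordable(price_per_gb, avg_monthly_income):
    return price_per_gb <= 0.02 * avg_monthly_income

print(is_affordable(3.50, 120.0))   # False: 1 GB at $3.50 is ~2.9% of a $120 income
print(is_affordable(1.00, 120.0))   # True:  1 GB at $1.00 is ~0.8% of a $120 income
```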


Light-based ‘tractor beam’ assembles materials at the nanoscale

Modern construction is a precision endeavor. Builders must use components manufactured to meet specific standards — such as beams of a desired composition or rivets of a specific size. The building industry relies on manufacturers to create these components reliably and reproducibly in order to construct secure bridges and sound skyscrapers.

Now imagine construction at a smaller scale — less than 1/100th the thickness of a piece of paper. This is the nanoscale. It is the scale at which scientists are working to develop potentially groundbreaking technologies in fields like quantum computing. It is also a scale where traditional fabrication methods simply will not work. Our standard tools, even miniaturized, are too bulky and too corrosive to reproducibly manufacture components at the nanoscale.

Researchers at the University of Washington have developed a method that could make reproducible manufacturing at the nanoscale possible. The team adapted a light-based technology employed widely in biology — known as optical traps or optical tweezers — to operate in a water-free liquid environment of carbon-rich organic solvents, thereby enabling new potential applications.

As the team reports in a paper published Oct. 30 in the journal Nature Communications, the optical tweezers act as a light-based “tractor beam” that can assemble nanoscale semiconductor materials precisely into larger structures. Unlike the tractor beams of science fiction, which grab spaceships, the team employs the optical tweezers to trap materials whose dimensions are on the order of a billionth of a meter.

“This is a new approach to nanoscale manufacturing,” said co-senior author Peter Pauzauskie, a UW associate professor of materials science and engineering, faculty member at the Molecular Engineering & Sciences Institute and the Institute for Nano-engineered Systems, and a senior scientist at the Pacific Northwest National Laboratory. “There are no chamber surfaces involved in the manufacturing process, which minimizes the formation of strain or other defects. All of the components are suspended in solution, and we can control the size and shape of the nanostructure as it is assembled piece by piece.”

“Using this technique in an organic solvent allows us to work with components that would otherwise degrade or corrode on contact with water or air,” said co-senior author Vincent Holmberg, a UW assistant professor of chemical engineering and faculty member in the Clean Energy Institute and the Molecular Engineering & Sciences Institute. “Organic solvents also help us to superheat the material we’re working with, allowing us to control material transformations and drive chemistry.”

To demonstrate the potential of this approach, the researchers used the optical tweezers to build a novel nanowire heterostructure, which is a nanowire consisting of distinct sections composed of different materials. The starting materials for the nanowire heterostructure were shorter “nanorods” of crystalline germanium, each just a few hundred nanometers long and tens of nanometers in diameter — or about 5,000 times thinner than a human hair. Each is capped with a metallic bismuth nanocrystal.

The researchers then used the light-based “tractor beam” to grab one of the germanium nanorods. Energy from the beam also superheats the nanorod, melting the bismuth cap. They then guide a second nanorod into the “tractor beam” and — thanks to the molten bismuth cap at the end — solder them end-to-end. The researchers could then repeat the process until they had assembled a patterned nanowire heterostructure with repeating semiconductor-metal junctions that was five-to-ten times longer than the individual building blocks.

“We’ve taken to calling this optically oriented assembly process ‘photonic nanosoldering’ — essentially soldering two components together at the nanoscale using light,” said Holmberg.

Nanowires that contain junctions between materials — such as the germanium-bismuth junctions synthesized by the UW team — may eventually be a route to creating topological qubits for applications in quantum computing.

The tractor beam is actually a highly focused laser that creates a type of optical trap, a Nobel Prize-winning method pioneered by Arthur Ashkin in the 1970s. To date, optical traps have been used almost exclusively in water- or vacuum-based environments. Pauzauskie’s and Holmberg’s teams adapted optical trapping to work in the more volatile environment of organic solvents.

“Generating a stable optical trap in any type of environment is a delicate balancing act of forces, and we were lucky to have two very talented graduate students working together on this project,” said Holmberg.

The photons that make up the laser beam generate a force on objects in the immediate vicinity of the optical trap. The researchers can adjust the laser’s properties so that the force generated can either trap or release an object, be it a single germanium nanorod or a longer nanowire.
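For a particle much smaller than the wavelength (the Rayleigh regime), the textbook optical-tweezers expression for this gradient force, quoted here as general background rather than taken from the paper, is

\[
\mathbf{F}_{\mathrm{grad}}(\mathbf{r}) \;=\; \frac{2\pi n_m a^{3}}{c}\,\frac{m^{2}-1}{m^{2}+2}\,\nabla I(\mathbf{r}),
\qquad m = \frac{n_p}{n_m},
\]

where a is the particle radius, n_p and n_m are the refractive indices of the particle and the surrounding solvent, and I is the beam intensity. The force points toward the intensity maximum when m > 1 and away from it when m < 1, which is why tuning the laser's properties lets the researchers either hold or release an object.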

“This is the kind of precision needed for reliable, reproducible nanofabrication methods, without chaotic interactions with other surfaces or materials that can introduce defects or strain into nanomaterials,” said Pauzauskie.

The researchers believe that their nanosoldering approach could enable additive manufacturing of nanoscale structures with different sets of materials for other applications.

“We hope that this demonstration results in researchers using optical trapping for the manipulation and assembly of a wider set of nanoscale materials, irrespective of whether or not those materials happen to be compatible with water,” said Holmberg.


Quantum vacuum: Less than zero energy

Energy is a quantity that must always be positive — at least that’s what our intuition tells us. If every single particle is removed from a certain volume until there is nothing left that could possibly carry energy, then a limit has been reached. Or has it? Is it still possible to extract energy even from empty space?

Quantum physics has shown time and again that it contradicts our intuition — and this is also true in this case. Under certain conditions, negative energies are allowed, at least in a certain range of space and time. An international research team at the TU Vienna, the Université libre de Bruxelles (Belgium) and the IIT Kanpur (India) has now investigated the extent to which negative energy is possible. It turns out that no matter which quantum theories are considered, no matter what symmetries are assumed to hold in the universe, there are always certain limits to “borrowing” energy. Locally, the energy can be less than zero, but like money borrowed from a bank, this energy must be “paid back” in the end.

Repulsive Gravity

“In the theory of general relativity, we usually assume that the energy is greater than zero, at all times and everywhere in the universe,” says Prof. Daniel Grumiller from the Institute for Theoretical Physics at the TU Wien (Vienna). This has a very important consequence for gravity: Energy is linked to mass via the formula E=mc². Negative energy would therefore also mean negative mass. Positive masses attract each other, but with a negative mass, gravity could suddenly become a repulsive force.

Quantum theory, however, allows negative energy. “According to quantum physics, it is possible to borrow energy from a vacuum at a certain location, like money from a bank,” says Daniel Grumiller. “For a long time, we did not know about the maximum amount of this kind of energy credit and about possible interest rates that have to be paid. Various assumptions about this ‘interest’ (known in the literature as ‘Quantum Interest’) have been published, but no comprehensive result has been agreed upon.”

The so-called “Quantum Null Energy Condition” (QNEC), which was proven in 2017, prescribes certain limits for the “borrowing” of energy by linking relativity theory and quantum physics: An energy smaller than zero is thus permitted, but only in a certain range and only for a certain time. How much energy can be borrowed from a vacuum before the energetic credit limit has been exhausted depends on a quantum physical quantity, the so-called entanglement entropy.
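For quantum field theory in flat space, the QNEC is commonly written as the inequality below (stated here as general background; the paper's generalizations go beyond this setting):

\[
\langle T_{kk}(p) \rangle \;\geq\; \frac{\hbar}{2\pi}\, S''_{\mathrm{out}}(p),
\]

where T_kk is the null-null component of the stress-energy tensor at a point p on a cut of a null surface, and S''_out is the second derivative of the entanglement entropy of the region on one side of the cut as the cut is deformed along the null direction at p. Because S'' can be negative, the local energy density may dip below zero, but only by an amount controlled by the entanglement entropy.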

“In a certain sense, entanglement entropy is a measure of how strongly the behavior of a system is governed by quantum physics,” says Daniel Grumiller. “If quantum entanglement plays a crucial role at some point in space, for example close to the edge of a black hole, then a negative energy flow can occur for a certain time, and negative energies become possible in that region.”

Grumiller has now been able to generalize these calculations together with Max Riegler and Pulastya Parekh. Max Riegler completed his dissertation in Daniel Grumiller’s research group at the TU Wien and is now working as a postdoc at Harvard. Pulastya Parekh, from the IIT in Kanpur (India), was a guest at the Erwin Schrödinger Institute and at the TU Wien.

“All previous considerations have always referred to quantum theories that follow the symmetries of Special Relativity. But we have now been able to show that this connection between negative energy and quantum entanglement is a much more general phenomenon,” says Grumiller. The energy conditions that clearly prohibit the extraction of infinite amounts of energy from a vacuum are valid for very different quantum theories, regardless of symmetries.

The law of energy conservation cannot be outwitted

Of course, this has nothing to do with mystical “over unity machines” that allegedly generate energy out of nothing, as they are repeatedly presented in esoteric circles. “The fact that nature allows an energy smaller than zero for a certain period of time at a certain place does not mean that the law of conservation of energy is violated,” stresses Daniel Grumiller. “In order to enable negative energy flows at a certain location, there must be compensating positive energy flows in the immediate vicinity.”

Even if the matter is somewhat more complicated than previously thought, energy cannot be obtained from nothing, even though it can become negative. The new research results now place tight bounds on negative energy, thereby connecting it with quintessential properties of quantum mechanics.


Closing in on ‘holy grail’ of room temperature quantum computing chips

To process information, photons must interact. However, these tiny packets of light want nothing to do with each other, each passing by without altering the other. Now, researchers at Stevens Institute of Technology have coaxed photons into interacting with one another with unprecedented efficiency — a key advance toward realizing long-awaited quantum optics technologies for computing, communication and remote sensing.

The team, led by Yuping Huang, an associate professor of physics and director of the Center for Quantum Science and Engineering, brings us closer to that goal with a nano-scale chip that facilitates photon interactions with much higher efficiency than any previous system. The new method, reported as a memorandum in the Sept. 18 issue of Optica, works at very low energy levels, suggesting that it could be optimized to work at the level of individual photons — the holy grail for room-temperature quantum computing and secure quantum communication.

“We’re pushing the boundaries of physics and optical engineering in order to bring quantum and all-optical signal processing closer to reality,” said Huang.

To achieve this advance, Huang’s team fired a laser beam into a racetrack-shaped microcavity carved into a sliver of crystal. As the laser light bounces around the racetrack, its confined photons interact with one another, producing a harmonic resonance that causes some of the circulating light to change wavelength.

That isn’t an entirely new trick, but Huang and colleagues, including graduate student Jiayang Chen and senior research scientist Yong Meng Sua, dramatically boosted its efficiency by using a chip made from lithium niobate on insulator, a material that has a unique way of interacting with light. Unlike silicon, lithium niobate is difficult to chemically etch with common reactive gases. So, the Stevens team used an ion-milling tool, essentially a nanoscale sandblaster, to etch a tiny racetrack about one-hundredth the width of a human hair.

Before defining the racetrack structure, the team needed to apply high-voltage electrical pulses to create carefully calibrated areas of alternating polarity, or periodic poling, that tailor the way photons move around the racetrack, increasing their probability of interacting with each other.
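For a second-harmonic-type conversion process of the kind described above, the role of the poling period can be summarized by the standard quasi-phase-matching condition (a general textbook relation, not parameters from this work):

\[
\Delta k \;=\; k_{2\omega} - 2\,k_{\omega} - \frac{2\pi}{\Lambda} \;=\; 0,
\]

where k_ω and k_{2ω} are the propagation constants of the pump light and the frequency-doubled light, and Λ is the poling period. Choosing Λ so that Δk vanishes lets the converted light build up coherently on every pass around the racetrack instead of cycling back and forth between the two waves.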

Chen explained that etching the racetrack on the chip and tailoring the way photons move around it require dozens of delicate nanofabrication steps, each demanding nanometer precision. “To the best of our knowledge, we’re among the first groups to master all of these nanofabrication steps to build this system — that’s the reason we could get this result first.”

Moving forward, Huang and his team aim to boost the crystal racetrack’s ability to confine and recirculate light, known as its Q-factor. The team has already identified ways to increase the Q-factor by a factor of at least 10, but each step up makes the system more sensitive to imperceptible temperature fluctuations — a few thousandths of a degree — and requires careful fine-tuning.

Still, the Stevens team say they’re closing in on a system capable of reliably generating interactions at the single-photon level, a breakthrough that would allow the creation of powerful quantum computing components such as photonic logic gates and entanglement sources, which, arranged along a circuit, could canvass multiple solutions to the same problem simultaneously, conceivably allowing calculations that would otherwise take years to be solved in seconds.

We could still be a while from that point, Chen said, but for quantum scientists the journey will be thrilling. “It’s the holy grail,” said Chen, the paper’s lead author. “And on the way to the holy grail, we’re realizing a lot of physics that nobody’s done before.”


FarmWise Raises $14.5 Million to Teach Giant Robots to Grow Our Food

We humans spend most of our time getting hungry or eating, which must be really inconvenient for the people who have to produce food for everyone. For a sustainable and tasty future, we’ll need to make the most of what we’ve got by growing more food with less effort, and that’s where the robots can help us out a little bit.

FarmWise, a California-based startup, is looking to enhance farming efficiency by automating everything from seeding to harvesting, starting with the worst task of all: weeding. And they’ve just raised US $14.5 million to do it.