Categories
ScienceDaily

Theoretically, two layers are better than one for solar-cell efficiency

Solar cells have come a long way, but inexpensive, thin film solar cells are still far behind more expensive, crystalline solar cells in efficiency. Now, a team of researchers suggests that using two thin films of different materials may be the way to go to create affordable, thin film cells with about 34% efficiency.

“Ten years ago I knew very little about solar cells, but it became clear to me they were very important,” said Akhlesh Lakhtakia, Evan Pugh University Professor and Charles Godfrey Binder Professor of Engineering Science and Mechanics, Penn State.

Investigating the field, he found that researchers approached solar cells from two sides: the optical side, looking at how the sun’s light is collected, and the electrical side, looking at how the collected light is converted into electricity. Optical researchers strive to optimize light capture, electrical researchers strive to optimize conversion to electricity, and each side tends to simplify the other’s part of the problem.

“I decided to create a model in which both electrical and optical aspects will be treated equally,” said Lakhtakia. “We needed to increase actual efficiency, because if the efficiency of a cell is less than 30% it isn’t going to make a difference.” The researchers report their results in a recent issue of Applied Physics Letters.

Lakhtakia is a theoretician. He does not make thin films in a laboratory, but creates mathematical models to test the possibilities of configurations and materials so that others can test the results. The problem, he said, was that the mathematical structure of optimizing the optical and the electrical are very different.

Solar cells appear to be simple devices, he explained. A clear top layer allows sunlight to fall on an energy conversion layer. The material chosen to convert the energy absorbs the light and produces streams of negatively charged electrons and positively charged holes moving in opposite directions. These oppositely charged carriers are collected by a top contact layer and a bottom contact layer that channel the electricity out of the cell for use. The amount of energy a cell can produce depends on how much sunlight is collected and on how well the conversion layer converts it; different materials react to and convert different wavelengths of light.

“I realized that to increase efficiency we had to absorb more light,” said Lakhtakia. “To do that we had to make the absorbent layer nonhomogeneous in a special way.”

That special way was to use two different absorbent materials in two different thin films. The researchers chose commercially available CIGS — copper indium gallium diselenide — and CZTSSe — copper zinc tin sulfur selenide — for the layers. By itself, CIGS’s efficiency is about 20% and CZTSSe’s is about 11%.

The two materials work together in a solar cell because they have roughly the same lattice structure, so one can be grown on top of the other, and they absorb different parts of the solar spectrum, which should increase efficiency, according to Lakhtakia.
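
To make that intuition concrete, here is a minimal toy sketch in Python of why stacking two absorbers with different bandgaps can harvest more of the spectrum than either alone. The spectrum shape, the bandgap values, and the one-photon-one-carrier bookkeeping are simplifications chosen for illustration; this is not the coupled optoelectronic model the researchers actually used.

```python
import numpy as np

h, c, q = 6.626e-34, 3.0e8, 1.602e-19    # Planck constant, speed of light, electron charge

wl = np.linspace(300e-9, 2000e-9, 2000)  # wavelength grid, m
d_wl = wl[1] - wl[0]
energy_eV = h * c / (wl * q)             # photon energy at each wavelength, eV

# Crude stand-in for the solar photon flux (arbitrary units); a real calculation
# would use the measured AM1.5G spectrum.
flux = np.exp(-((wl - 800e-9) / 500e-9) ** 2)

def harvested(flux, energy_eV, gap_eV):
    """Power captured if every photon above the gap yields one carrier at the gap energy."""
    absorbed = energy_eV >= gap_eV
    return flux[absorbed].sum() * gap_eV * d_wl

gap_top, gap_bottom = 1.5, 1.0           # illustrative bandgaps in eV (placeholders, not fitted values)

single = harvested(flux, energy_eV, gap_top)

# Tandem: the top layer takes the high-energy photons; the bottom layer absorbs
# the lower-energy photons that pass through it.
through = energy_eV < gap_top
tandem = single + harvested(flux[through], energy_eV[through], gap_bottom)

print(f"single absorber: {single:.3f}   two stacked absorbers: {tandem:.3f} (arbitrary units)")
```

In this toy accounting the stack always captures at least as much as the better single absorber, because the bottom layer recovers photons the top layer lets through; how much of that gain survives in a real device depends on how the layers are matched electrically, which is part of what a combined optical-electrical model has to capture.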

“It was amazing,” said Lakhtakia. “Together they produced a solar cell with 34% efficiency. This creates a new solar cell architecture — layer upon layer. Others who can actually make solar cells can find other formulations of layers and perhaps do better.”

According to the researchers, the next step is to create these cells experimentally and see which options lead to the best final designs.

Story Source:

Materials provided by Penn State. Original written by A’ndrea Elyse Messer. Note: Content may be edited for style and length.

Categories
ScienceDaily

Safer, more comfortable soldier uniforms are in the works

Uniforms of U.S. Army soldiers must meet a long list of challenging requirements. They need to feel comfortable in all climates, be durable through multiple washings, resist fires and ward off insects, among other things. Existing fabrics don’t check all of these boxes, so scientists have come up with a novel way of creating a flame-retardant, insect-repellent fabric that uses nontoxic substances.

The researchers will present their results today at the American Chemical Society (ACS) Fall 2020 Virtual Meeting & Expo. 

“The Army presented to us this interesting and challenging requirement for multifunctionality,” says study leader Ramaswamy Nagarajan, Ph.D. “There are flame-resistant Army combat uniforms made of various materials that meet flame retardant requirements. But they are expensive, and there are problems with dyeing the fabrics. Also, some of the raw materials are not produced in the U.S. So, our goal was to find an existing material that we could modify to make it flame retardant and insect repellent, yet still have a fabric that a soldier would want to wear.”

Because Nagarajan’s research group focuses on sustainable green chemistry, the team sought nontoxic chemicals and processes for this study. They chose to modify a commercially available 50-50 nylon-cotton blend, a relatively inexpensive, durable and comfortable fabric produced in the U.S. The material is used in a wide range of civilian and military applications because the nylon is strong and resistant to abrasion, whereas the cotton is comfortable to wear. But this type of textile doesn’t inherently repel bugs and is associated with a high fire risk.

“We started with making the fabric fire retardant, focusing on the cotton part of the blend,” explains Sourabh Kulkarni, a Ph.D. student who works with Nagarajan at the University of Massachusetts Lowell Center for Advanced Materials. “Cotton has a lot of hydroxyl groups (oxygen and hydrogen bonded together) on its surface, which can be activated by readily available chemicals to link with phosphorus-containing compounds that impart flame retardancy.” For their phosphorus-containing compound, they chose phytic acid, an abundant, nontoxic substance found in seeds, nuts and grains.

Next, the researchers tackled the issue of making the material repel insects so that soldiers wouldn’t have to spray themselves repeatedly or carry an additional item in their packs. The team took permethrin, an everyday nontoxic insect repellent, and attached it to the fabric using plasma-assisted deposition in collaboration with a local company, LaunchBay. Through trial and error, the researchers eventually got both the phytic acid and permethrin to link to the fabric’s surface molecules.

Using methods to measure heat release capacity and total heat release, as well as a vertical flame test, they found that the modified material performed at least 20% better than the untreated material. They also used a standard insect repellency test with live mosquitoes and found that the efficacy was greater than 98%. Finally, the fabric remained “breathable” after treatment as determined by air permeability studies.

“We are very excited,” Nagarajan says, “because we’ve shown we can modify this fabric to be flame retardant and insect repellent — and still be fairly durable and comfortable. We’d like to use a substance other than phytic acid that would contain more phosphorus and therefore impart a greater level of flame retardancy, better durability and still be nontoxic to a soldier’s skin. Having shown that we can modify the fabric, we would also like to see if we can attach antimicrobials to prevent infections from bacteria, as well as dyes that remain durable.”

Story Source:

Materials provided by American Chemical Society. Note: Content may be edited for style and length.

Categories
ScienceDaily

Planet found orbiting small, cool star

Using the supersharp radio “vision” of the National Science Foundation’s continent-wide Very Long Baseline Array (VLBA), astronomers have discovered a Saturn-sized planet closely orbiting a small, cool star 35 light-years from Earth. This is the first discovery of an extrasolar planet with a radio telescope using a technique that requires extremely precise measurements of a star’s position in the sky, and only the second planet discovery for that technique and for radio telescopes.

The technique has long been known, but has proven difficult to use. It involves tracking the star’s actual motion in space, then detecting a minuscule “wobble” in that motion caused by the gravitational effect of the planet. The star and the planet orbit a location that represents the center of mass for both combined. The planet is revealed indirectly if that location, called the barycenter, is far enough from the star’s center to cause a wobble detectable by a telescope.

This approach, called the astrometric technique, is expected to be particularly good for detecting Jupiter-like planets in orbits distant from their star. When a massive planet orbits a star, the wobble it produces grows with the separation between planet and star, and, at a given separation, with the mass of the planet.

Starting in June of 2018 and continuing for a year and a half, the astronomers tracked a star called TVLM 513-46546, a cool dwarf with less than a tenth the mass of our Sun. In addition, they used data from nine previous VLBA observations of the star between March 2010 and August 2011.

Extensive analysis of the data from those time periods revealed a telltale wobble in the star’s motion indicating the presence of a planet comparable in mass to Saturn, orbiting the star once every 221 days. This planet is closer to the star than Mercury is to the Sun.
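
For a sense of scale, the article’s numbers support a rough back-of-the-envelope estimate of the orbit size and of the angular wobble the planet induces in its star. In the Python sketch below, the 221-day period and 35 light-year distance come from the article, while the Saturn-mass stand-in for the planet and the exact stellar mass are assumptions for illustration, not the published fit.

```python
import math

G      = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
M_sun  = 1.989e30              # kg
m_p    = 5.68e26               # kg, Saturn's mass used as a stand-in for the planet
M_star = 0.08 * M_sun          # assumed value for "less than a tenth the mass of our Sun"
P      = 221 * 86400.0         # orbital period from the article, s
d      = 35 * 9.461e15         # distance to the star, m (35 light-years)

# Kepler's third law gives the planet's orbital radius (circular-orbit approximation).
a_planet = (G * (M_star + m_p) * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# The star traces a much smaller orbit around the shared barycenter,
# shrunk by the planet-to-total mass ratio.
a_star = a_planet * m_p / (M_star + m_p)

# Angular size of the stellar orbit as seen from Earth, in microarcseconds.
wobble_uas = (a_star / d) * math.degrees(1) * 3600 * 1e6

print(f"planet orbit ~{a_planet / 1.496e11:.2f} AU; stellar wobble ~{wobble_uas:.0f} microarcseconds")
```

With these assumed masses the orbit comes out to roughly 0.3 astronomical units, comfortably inside Mercury’s orbit, and the wobble to on the order of a hundred microarcseconds, which illustrates why the measurement demands extreme positional precision.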

Small, cool stars like TVLM 513-46546 are the most numerous stellar type in our Milky Way Galaxy, and many of them have been found to have smaller planets, comparable to Earth and Mars.

“Giant planets, like Jupiter and Saturn, are expected to be rare around small stars like this one, and the astrometric technique is best at finding Jupiter-like planets in wide orbits, so we were surprised to find a lower mass, Saturn-like planet in a relatively compact orbit. We expected to find a more massive planet, similar to Jupiter, in a wider orbit,” said Salvador Curiel, of the National Autonomous University of Mexico. “Detecting the orbital motions of this sub-Jupiter mass planetary companion in such a compact orbit was a great challenge,” he added.

More than 4,200 planets have been discovered orbiting stars other than the Sun, but the planet around TVLM 513-46546 is only the second to be found using the astrometric technique. Another, very successful method, called the radial velocity technique, also relies on the gravitational effect of the planet upon the star. That technique detects the slight acceleration of the star, either toward or away from Earth, caused by the star’s motion around the barycenter.

“Our method complements the radial velocity method which is more sensitive to planets orbiting in close orbits, while ours is more sensitive to massive planets in orbits further away from the star,” said Gisela Ortiz-Leon of the Max Planck Institute for Radio Astronomy in Germany. “Indeed, these other techniques have found only a few planets with characteristics such as planet mass, orbital size, and host star mass, similar to the planet we found. We believe that the VLBA, and the astrometry technique in general, could reveal many more similar planets.”

A third technique, called the transit method, also very successful, detects the slight dimming of the star’s light when a planet passes in front of it, as seen from Earth.

The astrometric method has been successful for detecting nearby binary star systems, and was recognized as early as the 19th Century as a potential means of discovering extrasolar planets. Over the years, a number of such discoveries were announced, then failed to survive further scrutiny. The difficulty has been that the stellar wobble produced by a planet is so small when seen from Earth that it requires extraordinary precision in the positional measurements.

“The VLBA, with antennas separated by as much as 5,000 miles, provided us with the great resolving power and extremely high precision needed for this discovery,” said Amy Mioduszewski, of the National Radio Astronomy Observatory. “In addition, improvements that have been made to the VLBA’s sensitivity gave us the data quality that made it possible to do this work now,” she added.

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

Categories
ScienceDaily

How galaxies die: New insights into the quenching of star formation

Astronomers studying galaxy evolution have long struggled to understand what causes star formation to shut down in massive galaxies. Although many theories have been proposed to explain this process, known as “quenching,” there is still no consensus on a satisfactory model.

Now, an international team led by Sandra Faber, professor emerita of astronomy and astrophysics at UC Santa Cruz, has proposed a new model that successfully explains a wide range of observations about galaxy structure, supermassive black holes, and the quenching of star formation. The researchers presented their findings in a paper published July 1 in the Astrophysical Journal.

The model supports one of the leading ideas about quenching which attributes it to black hole “feedback,” the energy released into a galaxy and its surroundings from a central supermassive black hole as matter falls into the black hole and feeds its growth. This energetic feedback heats, ejects, or otherwise disrupts the galaxy’s gas supply, preventing the infall of gas from the galaxy’s halo to feed star formation.

“The idea is that in star-forming galaxies, the central black hole is like a parasite that ultimately grows and kills the host,” Faber explained. “That’s been said before, but we haven’t had clear rules to say when a black hole is big enough to shut down star formation in its host galaxy, and now we have quantitative rules that actually work to explain our observations.”

The basic idea involves the relationship between the mass of the stars in a galaxy (stellar mass), how spread out those stars are (the galaxy’s radius), and the mass of the central black hole. For star-forming galaxies with a given stellar mass, the density of stars in the center of the galaxy correlates with the radius of the galaxy so that galaxies with bigger radii have lower central stellar densities. Assuming that the mass of the central black hole scales with the central stellar density, star-forming galaxies with larger radii (at a given stellar mass) will have lower black-hole masses.

What that means, Faber explained, is that larger galaxies (those with larger radii for a given stellar mass) have to evolve further and build up a higher stellar mass before their central black holes can grow large enough to quench star formation. Thus, small-radius galaxies quench at lower masses than large-radius galaxies.
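
Written schematically, and only as a restatement of the reasoning above (the summary gives no exponents or fitted coefficients, so none are assumed here):

```latex
% Qualitative chain only: "increases/decreases with" stands in for the paper's
% quantitative relations, which are not specified in this summary.
\[
M_{\mathrm{BH}}\ \text{increases with}\ \Sigma_{\mathrm{central}},
\qquad
\Sigma_{\mathrm{central}}\ \text{decreases with}\ R\ \text{at fixed}\ M_{*}
\;\Longrightarrow\;
M_{\mathrm{BH}}\ \text{at fixed}\ M_{*}\ \text{decreases with}\ R
\;\Longrightarrow\;
M_{*}^{\mathrm{quench}}\ \text{increases with}\ R .
\]
```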

“That is the new insight, that if galaxies with large radii have smaller black holes at a given stellar mass, and if black hole feedback is important for quenching, then large-radius galaxies have to evolve further,” she said. “If you put together all these assumptions, amazingly, you can reproduce a large number of observed trends in the structural properties of galaxies.”

This explains, for example, why more massive quenched galaxies have higher central stellar densities, larger radii, and larger central black holes.

Based on this model, the researchers concluded that quenching begins when the total energy emitted from the black hole is approximately four times the gravitational binding energy of the gas in the galactic halo. The binding energy is the energy needed to overcome the gravitational pull of the dark matter halo enveloping the galaxy and strip the gas away. Quenching is complete when the total energy emitted from the black hole is twenty times the binding energy of the gas in the galactic halo.
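
In symbols, the thresholds read roughly as below; expressing the total feedback energy as a fraction of the black hole’s accreted rest-mass energy is a standard convention and an assumption here, since the summary does not say how that energy is computed.

```latex
% Thresholds from the model, in symbols. The efficiency factor \epsilon is not
% specified in this press summary and is included only to show the conventional form.
\[
E_{\mathrm{BH}} \sim \epsilon\, M_{\mathrm{BH}} c^{2}, \qquad
E_{\mathrm{BH}} \gtrsim 4\, E_{\mathrm{bind}}^{\mathrm{halo\,gas}}\ \text{(quenching begins)}, \qquad
E_{\mathrm{BH}} \gtrsim 20\, E_{\mathrm{bind}}^{\mathrm{halo\,gas}}\ \text{(quenching complete)} .
\]
```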

Faber emphasized that the model does not yet explain in detail the physical mechanisms involved in the quenching of star formation. “The key physical processes that this simple theory evokes are not yet understood,” she said. “The virtue of this, though, is that having simple rules for each step in the process challenges theorists to come up with physical mechanisms that explain each step.”

Astronomers are accustomed to thinking in terms of diagrams that plot the relations between different properties of galaxies and show how they change over time. These diagrams reveal the dramatic differences in structure between star-forming and quenched galaxies and the sharp boundaries between them. Because star formation emits a lot of light at the blue end of the color spectrum, astronomers refer to “blue” star-forming galaxies, “red” quiescent galaxies, and the “green valley” as the transition between them. Which stage a galaxy is in is revealed by its star formation rate.

One of the study’s conclusions is that the growth rate of black holes must change as galaxies evolve from one stage to the next. The observational evidence suggests that most of the black hole growth occurs in the green valley when galaxies are beginning to quench.

“The black hole seems to be unleashed just as star formation slows down,” Faber said. “This was a revelation, because it explains why black hole masses in star-forming galaxies follow one scaling law, while black holes in quenched galaxies follow another scaling law. That makes sense if black hole mass grows rapidly while in the green valley.”

Faber and her collaborators have been discussing these issues for many years. Since 2010, Faber has co-led a major Hubble Space Telescope galaxy survey program (CANDELS, the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey), which produced the data used in this study. In analyzing the CANDELS data, she has worked closely with a team led by Joel Primack, UCSC professor emeritus of physics, which developed the Bolshoi cosmological simulation of the evolution of the dark matter halos in which galaxies form. These halos provide the scaffolding on which the theory builds the early star-forming phase of galaxy evolution before quenching.

The central ideas in the paper emerged from analyses of CANDELS data and first struck Faber about four years ago. “It suddenly leaped out at me, and I realized if we put all these things together — if galaxies had a simple trajectory in radius versus mass, and if black hole energy needs to overcome halo binding energy — it can explain all these slanted boundaries in the structural diagrams of galaxies,” she said.

At the time, Faber was making frequent trips to China, where she has been involved in research collaborations and other activities. She was a visiting professor at Shanghai Normal University, where she met first author Zhu Chen. Chen came to UC Santa Cruz in 2017 as a visiting researcher and began working with Faber to develop these ideas about galaxy quenching.

“She is mathematically very good, better than me, and she did all of the calculations for this paper,” Faber said.

Faber also credited her longtime collaborator David Koo, UCSC professor emeritus of astronomy and astrophysics, for first focusing attention on the central densities of galaxies as a key to the growth of central black holes.

Among the puzzles explained by this new model is a striking difference between our Milky Way galaxy and its very similar neighbor Andromeda. “The Milky Way and Andromeda have almost the same stellar mass, but Andromeda’s black hole is almost 50 times bigger than the Milky Way’s,” Faber said. “The idea that black holes grow a lot in the green valley goes a long way toward explaining this mystery. The Milky Way is just entering the green valley and its black hole is still small, whereas Andromeda is just exiting so its black hole has grown much bigger, and it is also more quenched than the Milky Way.”

In addition to Faber, Chen, Koo, and Primack, the coauthors of the paper include researchers at some two dozen institutions in seven countries. This work was funded by grants from NASA and the National Science Foundation.

Categories
ScienceDaily

Levitating droplets allow scientists to perform ‘touchless’ chemical reactions

Levitation has long been a staple of magic tricks and movies. But in the lab, it’s no trick. Scientists can levitate droplets of liquid, though mixing them and observing the reactions has been challenging. The pay-off, however, could be big as it would allow researchers to conduct contact-free experiments without containers or handling that might affect the outcome. Now, a team reporting in ACS’ Analytical Chemistry has developed a method to do just that.

Scientists have made devices to levitate small objects, but most methods require the object to have certain physical properties, such as electric charge or magnetism. In contrast, acoustic levitation, which uses sound waves to suspend an object in a gas, doesn’t rely on such properties. Yet existing devices for acoustic levitation and mixing of single particles or droplets are complex, and it is difficult to obtain measurements from them as a chemical reaction is happening. Stephen Brotton and Ralf Kaiser wanted to develop a versatile technique for the contactless control of two chemically distinct droplets, with a set of probes to follow the reaction as the droplets merge.

The team made an acoustic levitator and suspended two droplets in it, one above the other. Then, they made the upper droplet oscillate by varying the amplitude of the sound wave. The oscillating upper droplet merged with the lower droplet, and the resulting chemical reaction was monitored with infrared, Raman and ultraviolet-visible spectroscopies. The researchers tested the technique by combining different droplets. In one experiment, for example, they merged an ionic liquid with nitric acid, causing a tiny explosion. The new levitation method could help scientists study many different types of chemical reactions in areas such as material sciences, medicinal chemistry and planetary science, the researchers say.

Story Source:

Materials provided by American Chemical Society. Note: Content may be edited for style and length.

Categories
ScienceDaily

The many lifetimes of plastics

Many of us have seen informational posters at parks or aquariums specifying how long plastic bags, bottles, and other products last in the environment. They’re a good reminder not to litter, but where does the information on the lifetime expectancy of plastic goods come from, and how reliable is it?

It turns out, getting a true read on how long it takes for plastic to break down in the environment is tricky business, says Collin Ward, a marine chemist at Woods Hole Oceanographic Institution and a member of its Microplastics Catalyst Program, a long-term research program on plastics in the ocean.

“Plastics are everywhere, but one of the most pressing questions is how long plastics last in the environment,” he says. “The environmental and human health risks associated with something that lasts one year in the environment, versus the same thing that lasts 500 years, are completely different.”

Knowing the fate of plastics may be tricky, but it’s critical. Consumers need the information to make good, sustainable decisions; scientists need it to understand the fate of plastics in the environment and assess associated health risks; and legislators need it to make well-informed decisions around plastic bans.

The long-standing mystery around the life expectancy of plastic goods has prompted a new study from Woods Hole Oceanographic Institution looking at how the lifetime estimates of straws, cups, bags, and other products are being communicated to the public via infographics. Ward, the lead author of a new paper published in the journal Proceedings of the National Academy of Sciences, along with WHOI marine chemist Chris Reddy, analyzed nearly 60 individual infographics and documents from a variety of sources, including governmental agencies, non-profits, textbooks, and social media sites. To their surprise, there was little consistency among the materials in the lifetime estimates reported for many everyday products, like plastic bags.

“The estimates being reported to the general public and legislators vary widely,” says Ward. “In some cases, they vary from one year to hundreds of years to forever.”

On the other end of the spectrum, certain lifetime estimates seemed far too similar among the infographics. Of particular interest, Ward notes, were the estimates for how long fishing line lasts in the ocean. He says that all 37 infographics that included a lifetime for fishing line reported 600 years.

“Every single one said 600 years, it was incredible,” he says. “I’m being a little tongue-in-cheek here, but we’re all more likely to win the lottery than 37 independent science studies arriving at the same answer of 600 years for fishing line to degrade in the environment.”

In reality, these estimates didn’t stem from actual scientific studies. Ward said he did a lot of digging to find peer-reviewed literature that was either funded, or conducted, by the agencies putting the information out there and couldn’t find a single instance where the estimates originated from a scientific study. He and Reddy believe that while the information was likely well intentioned, the lack of traceable and documented science behind it was a red flag.

“The reality is that what the public and legislators know about the environmental persistence of plastic goods is often not based on solid science, despite the need for reliable information to form the foundation for a great many decisions, large and small,” the scientists state in the paper.

In one of their own peer-reviewed studies on the life expectancy of plastics published last year, Ward and his team found that polystyrene, one of the world’s most ubiquitous plastics, may degrade in decades when exposed to sunlight, rather than thousands of years as previously thought. The discovery was made, in part, by working with researchers at WHOI’s National Ocean Sciences Accelerator Mass Spectrometry (NOSAMS) facility to track the degradation of the plastic into gas and water phases, and with the aid of a specialized weathering chamber in Ward’s lab. The chamber tested how environmental factors such as sunlight and temperature influenced the chemical breakdown of the polystyrene, the first type of plastic found in the coastal ocean by WHOI scientists nearly fifty years ago.

Reddy feels that one of the biggest misconceptions surrounding the fate of plastics in the environment is that they simply break down into smaller bits that hang around forever.

“This is the narrative we see all the time in the press and social media, and it’s not a complete picture,” says Reddy. “But through our own research and collaborating with others, we’ve determined that in addition to plastics breaking down into smaller fragments, they also degrade partially into different chemicals, and they break down completely into CO2.” These newly identified breakdown products no longer resemble plastic and would be otherwise missed when scientists survey the oceans for missing plastics.

Chelsea Rochman, a biologist at the University of Toronto who wasn’t involved in the study, says that understanding the various forms of plastic degradation will be key to solving one of the enduring mysteries of plastic pollution: more than 99 percent of the plastic that should be detected in the ocean is missing.

“Researchers are beginning to talk about the global plastic cycle,” says Rochman. “A key part of this will be understanding the persistence of plastics in nature. We know they break down into smaller and smaller pieces, but truly understanding mechanisms and transformation products are key parts of the puzzle.”

Overall, analyzing the infographics turned out to be an eye-opening exercise for the scientists, and underscored the importance of backing public information with sound science.

“The question of environmental persistence of plastics is not going to be easy to answer,” says Ward. “But by bringing transparency to this environmental issue, we will help improve the quality of information available to all stakeholders — consumers, scientists, and legislators — to make informed, sustainable decisions.”

This research was funded by The Seaver Institute and internal funding from the WHOI Microplastics Catalyst Program.

Categories
ScienceDaily

New biosensor visualizes stress in living plant cells in real time

Plant biologists have long sought a deeper understanding of foundational processes involving kinases, enzymes that catalyze key biological activities in proteins. Analyzing the processes underlying kinases in plants takes on greater urgency in today’s environment increasingly altered by climate warming.

Certain “SnRK2” kinases (sucrose-non-fermenting-1-related protein kinase-2s) are essential since they are known to be activated in response to drought conditions, triggering the protective closure of small pores on leaf surfaces known as stomata. These pores allow carbon dioxide to enter leaves, but plants also lose more than 90 percent of their water by evaporation through them. Pore opening and closing functions help optimize growth and drought tolerance in response to changes in the environment.

Now, plant biologists at the University of California San Diego have developed a new nanosensor that allows researchers to monitor SnRK2 protein kinase activity in live plant cells. The SnRK2 activity sensor, or “SNACS,” is described in the journal eLife.

Prior efforts to dissect protein kinase activities involved a tedious process of grinding up plant tissues and measuring kinase activities through cell extracts. More than 100 leaves were required per experiment for analyses of the stomatal pore forming “guard cells.” SNACS now allows researchers to analyze changes in real time as they happen.

“Previously, it was not possible to investigate time-resolved SnRK2 activity in living plant cells,” said Biological Sciences Distinguished Professor Julian Schroeder, a member of the Section of Cell and Developmental Biology and senior author of the new paper. “The SNACS sensor reports direct real-time visualization of SnRK2 kinase activity in single live plant cells or tissues.”

The new biosensor is already paying dividends. The researchers describe using SNACS to provide new evidence about longstanding questions about SnRK2 and foundational interactions with carbon dioxide. The researchers show that abscisic acid, a drought stress hormone in plants, activates the kinases, but that elevated carbon dioxide does not, resolving a recently debated question.

“Our findings could benefit researchers investigating environmental stress responses in plants and analyzing how different signaling pathways interact with one another in plant cells,” said Yohei Takahashi, a UC San Diego project scientist and co-corresponding author of the study. “The ability to investigate time-resolved SnRK2 kinase regulation in live plants is of particular importance for understanding environmental stress responses of plant cells.”

The new nanosensor was developed using an approach pioneered by the late UC San Diego Professor Roger Tsien, work that contributed in part to his Nobel Prize.

The research team included Li Zhang, Yohei Takahashi, Po-Kai Hsu, Hannes Kollist, Ebe Merilo, Patrick Krysan and Julian Schroeder.

Story Source:

Materials provided by University of California – San Diego. Original written by Mario Aguilera. Note: Content may be edited for style and length.

Categories
ScienceDaily

Planetary exploration rover avoids sand traps with ‘rear rotator pedaling’

The rolling hills of Mars or the moon are a long way from the nearest tow truck. That’s why the next generation of exploration rovers will need to be good at climbing hills covered with loose material and avoiding entrapment on soft granular surfaces.

Built with wheeled appendages that can be lifted and wheels able to “wiggle,” a new robot known as the “Mini Rover” has been used to develop and test complex locomotion techniques robust enough to help it climb hills covered with such granular material and avoid the risk of getting ignominiously stuck on some remote planet or moon.

Using a complex move the researchers dubbed “rear rotator pedaling,” the robot can climb a slope by using its unique design to combine paddling, walking, and wheel spinning motions. The rover’s behaviors were modeled using a branch of physics known as terradynamics.

“When loose materials flow, that can create problems for robots moving across it,” said Dan Goldman, the Dunn Family Professor in the School of Physics at the Georgia Institute of Technology. “This rover has enough degrees of freedom that it can get out of jams pretty effectively. By avalanching materials from the front wheels, it creates a localized fluid hill for the back wheels that is not as steep as the real slope. The rover is always self-generating and self-organizing a good hill for itself.”

The research will be reported on May 13 as the cover article in the journal Science Robotics. The work was supported by the NASA National Robotics Initiative and the Army Research Office.

A robot built by NASA’s Johnson Space Center pioneered the ability to spin its wheels, sweep the surface with those wheels and lift each of its wheeled appendages where necessary, creating a broad range of potential motions. Using in-house 3D printers, the Georgia Tech researchers collaborated with the Johnson Space Center to re-create those capabilities in a scaled-down vehicle with four wheeled appendages driven by 12 different motors.

“The rover was developed with a modular mechatronic architecture, commercially available components, and a minimal number of parts,” said Siddharth Shrivastava, an undergraduate student in Georgia Tech’s George W. Woodruff School of Mechanical Engineering. “This enabled our team to use our robot as a robust laboratory tool and focus our efforts on exploring creative and interesting experiments without worrying about damaging the rover, service downtime, or hitting performance limitations.”

The rover’s broad range of movements gave the research team an opportunity to test many variations that were studied using granular drag force measurements and modified Resistive Force Theory. Shrivastava and School of Physics Ph.D. candidate Andras Karsai began with the gaits explored by the NASA RP15 robot, and were able to experiment with locomotion schemes that could not have been tested on a full-size rover.

The researchers also tested their experimental gaits on slopes designed to simulate planetary and lunar hills using a fluidized bed system known as SCATTER (Systematic Creation of Arbitrary Terrain and Testing of Exploratory Robots) that could be tilted to evaluate the role of controlling the granular substrate. Karsai and Shrivastava collaborated with Yasemin Ozkan-Aydin, a postdoctoral research fellow in Goldman’s lab, to study the rover motion in the SCATTER test facility.

“By creating a small robot with capabilities similar to the RP15 rover, we could test the principles of locomoting with various gaits in a controlled laboratory environment,” Karsai said. “In our tests, we primarily varied the gait, the locomotion medium, and the slope the robot had to climb. We quickly iterated over many gait strategies and terrain conditions to examine the phenomena that emerged.”

In the paper, the authors describe a gait that allowed the rover to climb a steep slope with the front wheels stirring up the granular material — poppy seeds for the lab testing — and pushing them back toward the rear wheels. The rear wheels wiggled from side-to-side, lifting and spinning to create a motion that resembles paddling in water. The material pushed to the back wheels effectively changed the slope the rear wheels had to climb, allowing the rover to make steady progress up a hill that might have stopped a simple wheeled robot.

The experiments provided a variation on earlier robophysics work in Goldman’s group that involved moving with legs or flippers, which had emphasized disturbing the granular surfaces as little as possible to avoid getting the robot stuck.

“In our previous studies of pure legged robots, modeled on animals, we had kind of figured out that the secret was to not make a mess,” said Goldman. “If you end up making too much of a mess with most robots, you end up just paddling and digging into the granular material. If you want fast locomotion, we found that you should try to keep the material as solid as possible by tweaking the parameters of motion.”

But simple motions had proved problematic for Mars rovers, which got stuck in granular materials. Goldman says the gait discovered by Shrivastava, Karsai and Ozkan-Aydin might be able to help future rovers avoid that fate.

“This combination of lifting and wheeling and paddling, if used properly, provides the ability to maintain some forward progress even if it is slow,” Goldman said. “Through our laboratory experiments, we have shown principles that could lead to improved robustness in planetary exploration — and even in challenging surfaces on our own planet.”

The researchers hope next to scale up the unusual gaits to larger robots, and to explore the idea of studying robots and their localized environments together. “We’d like to think about the locomotor and its environment as a single entity,” Goldman said. “There are certainly some interesting granular and soft matter physics issues to explore.”

Though the Mini Rover was designed to study lunar and planetary exploration, the lessons learned could also be applicable to terrestrial locomotion — an area of interest to the Army Research Laboratory, one of the project’s sponsors.

“Basic research is revealing counter-intuitive principles and novel approaches for locomotion and granular intrusion in complex and yielding terrain,” said Dr. Samuel Stanton, program manager, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “This may lead to novel, terrain-aware platforms capable of intelligently transitioning between wheeled and legged modes of movement to maintain high operational tempo.”

Beyond those already mentioned, the researchers worked with Robert Ambrose and William Bluethmann at NASA, and traveled to NASA JSC to study the full-size NASA rover.

Categories
ScienceDaily

Long spaceflights affect astronaut brain volume

Extended periods in space have long been known to cause vision problems in astronauts. Now a new study in the journal Radiology suggests that the impact of long-duration space travel is more far-reaching, potentially causing brain volume changes and pituitary gland deformation.

More than half of the crew members on the International Space Station (ISS) have reported changes to their vision following long-duration exposure to the microgravity of space. Postflight evaluation has revealed swelling of the optic nerve, retinal hemorrhage and other ocular structural changes.

Scientists have hypothesized that chronic exposure to elevated intracranial pressure, or pressure inside the head, during spaceflight is a contributing factor to these changes. On Earth, the gravitational field creates a hydrostatic gradient, a pressure of fluid that progressively increases from your head down to your feet while standing or sitting. This pressure gradient is not present in space.
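
For scale, the head-to-foot pressure difference that disappears in microgravity can be estimated from the usual hydrostatic relation; the body height and blood density used below are generic textbook values, not figures from the study.

```latex
% Rough scale of the head-to-foot hydrostatic pressure difference that vanishes in
% microgravity; the inputs are generic textbook values, not data from the Radiology paper.
\[
\Delta P = \rho g h \approx 1060\ \mathrm{kg\,m^{-3}} \times 9.81\ \mathrm{m\,s^{-2}} \times 1.7\ \mathrm{m}
\approx 1.8\times 10^{4}\ \mathrm{Pa} \approx 130\ \mathrm{mmHg}.
\]
```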

“When you’re in microgravity, fluid such as your venous blood no longer pools toward your lower extremities but redistributes headward,” said study lead author Larry A. Kramer, M.D., from the University of Texas Health Science Center at Houston. Dr. Kramer further explained, “That movement of fluid toward your head may be one of the mechanisms causing changes we are observing in the eye and intracranial compartment.”

To find out more, Dr. Kramer and colleagues performed brain MRI on 11 astronauts, including 10 men and one woman, before they traveled to the ISS. The researchers followed up with MRI studies a day after the astronauts returned, and then at several intervals throughout the ensuing year.

MRI results showed that the long-duration microgravity exposure caused expansions in the astronauts’ combined brain and cerebrospinal fluid (CSF) volumes. CSF is the fluid that flows in and around the hollow spaces of the brain and spinal cord. The combined volumes remained elevated at one-year postflight, suggesting permanent alteration.

“What we identified that no one has really identified before is that there is a significant increase of volume in the brain’s white matter from preflight to postflight,” Dr. Kramer said. “White matter expansion in fact is responsible for the largest increase in combined brain and cerebrospinal fluid volumes postflight.”

MRI also showed alterations to the pituitary gland, a pea-sized structure at the base of the skull often referred to as the “master gland” because it governs the function of many other glands in the body. Most of the astronauts had MRI evidence of pituitary gland deformation suggesting elevated intracranial pressure during spaceflight.

“We found that the pituitary gland loses height and is smaller postflight than it was preflight,” Dr. Kramer said. “In addition, the dome of the pituitary gland is predominantly convex in astronauts without prior exposure to microgravity but showed evidence of flattening or concavity postflight. This type of deformation is consistent with exposure to elevated intracranial pressures.”

The researchers also observed a postflight increase in volume, on average, in the astronauts’ lateral ventricles, spaces in the brain that contain CSF. However, the overall resulting volume would not be considered outside the range of healthy adults. The changes were similar to those that occur in people who have spent long periods of bed rest with their heads tilted slightly downward in research studies simulating headward fluid shift in microgravity.

Additionally, there was increased velocity of CSF flow through the cerebral aqueduct, a narrow channel that connects the ventricles in the brain. A similar phenomenon has been seen in normal pressure hydrocephalus, a condition in which the ventricles in the brain are abnormally enlarged. Symptoms of this condition include difficulty walking, bladder control problems and dementia. To date, these symptoms have not been reported in astronauts after space travel.

The researchers are studying ways to counter the effects of microgravity. One option under consideration is the creation of artificial gravity using a large centrifuge that can spin people in either a sitting or prone position. Also under investigation is the use of negative pressure on the lower extremities as a way to counteract the headward fluid shift due to microgravity.

Dr. Kramer said the research could also have applications for non-astronauts.

“If we can better understand the mechanisms that cause ventricles to enlarge in astronauts and develop suitable countermeasures, then maybe some of these discoveries could benefit patients with normal pressure hydrocephalus and other related conditions,” he said.

Categories
Hackster.io

Handwash Boogie

How long is 20 seconds, anyway? And how do you keep your distractible mammal brain excited about it? Here are a few projects, from the simple to the beautifully full-featured, to help you be a better you.

// https://www.hackster.io/eric-schneider/musical-soap-dispenser-e612d1
// https://www.hackster.io/deeplocal/scrubber-your-handwashing-soundtrack-39c5fe
// https://www.hackster.io/331510/wash-a-lot-bot-a-diy-hand-washing-timer-2df500
// https://www.hackster.io/glowascii/touchless-handwashing-timer-nightlight-7917f9
// https://makecode.com/_7iWH8L0a6Yw4