Categories
ScienceDaily

Metal wires of carbon complete toolbox for carbon-based computers

Transistors based on carbon rather than silicon could potentially boost computers’ speed and cut their power consumption more than a thousandfold — think of a mobile phone that holds its charge for months — but the set of tools needed to build working carbon circuits has remained incomplete until now.

A team of chemists and physicists at the University of California, Berkeley, has finally created the last tool in the toolbox, a metallic wire made entirely of carbon, setting the stage for a ramp-up in research to build carbon-based transistors and, ultimately, computers.

“Staying within the same material, within the realm of carbon-based materials, is what brings this technology together now,” said Felix Fischer, UC Berkeley professor of chemistry, noting that the ability to make all circuit elements from the same material makes fabrication easier. “That has been one of the key things that has been missing in the big picture of an all-carbon-based integrated circuit architecture.”

Metal wires — like the metallic channels used to connect transistors in a computer chip — carry electricity from device to device and interconnect the semiconducting elements within transistors, the building blocks of computers.

The UC Berkeley group has been working for several years on how to make semiconductors and insulators from graphene nanoribbons, which are narrow, one-dimensional strips of atom-thick graphene, a structure composed entirely of carbon atoms arranged in an interconnected hexagonal pattern resembling chicken wire.

The new carbon-based metal is also a graphene nanoribbon, but designed with an eye toward conducting electrons between semiconducting nanoribbons in all-carbon transistors. The metallic nanoribbons were built by assembling them from smaller identical building blocks: a bottom-up approach, said Fischer’s colleague, Michael Crommie, a UC Berkeley professor of physics. Each building block contributes an electron that can flow freely along the nanoribbon.

While other carbon-based materials — like extended 2D sheets of graphene and carbon nanotubes — can be metallic, they have their problems. Reshaping a 2D sheet of graphene into nanometer-scale strips, for example, spontaneously turns the strips into semiconductors, or even insulators. Carbon nanotubes, which are excellent conductors, cannot be prepared with the same precision and reproducibility in large quantities as nanoribbons.

“Nanoribbons allow us to chemically access a wide range of structures using bottom-up fabrication, something not yet possible with nanotubes,” Crommie said. “This has allowed us to basically stitch electrons together to create a metallic nanoribbon, something not done before. This is one of the grand challenges in the area of graphene nanoribbon technology and why we are so excited about it.”

Metallic graphene nanoribbons — which feature a wide, partially-filled electronic band characteristic of metals — should be comparable in conductance to 2D graphene itself.

“We think that the metallic wires are really a breakthrough; it is the first time that we can intentionally create an ultra-narrow metallic conductor — a good, intrinsic conductor — out of carbon-based materials, without the need for external doping,” Fischer added.

Crommie, Fischer and their colleagues at UC Berkeley and Lawrence Berkeley National Laboratory (Berkeley Lab) will publish their findings in the Sept. 25 issue of the journal Science.

Tweaking the topology

Silicon-based integrated circuits have powered computers for decades with ever increasing speed and performance, per Moore’s Law, but they are reaching their speed limit — that is, how fast they can switch between zeros and ones. It’s also becoming harder to reduce power consumption; computers already use a substantial fraction of the world’s energy production. Carbon-based computers could potentially switch many times faster than silicon computers and use only a fraction of the power, Fischer said.

Graphene, which is pure carbon, is a leading contender for these next-generation, carbon-based computers. Narrow strips of graphene are primarily semiconductors, however, and the challenge has been to make them also work as insulators and metals — opposite extremes, totally nonconducting and fully conducting, respectively — so as to construct transistors and processors entirely from carbon.

Several years ago, Fischer and Crommie teamed up with theoretical materials scientist Steven Louie, a UC Berkeley professor of physics, to discover new ways of connecting small lengths of nanoribbon to reliably create the full gamut of conducting properties.

Two years ago, the team demonstrated that by connecting short segments of nanoribbon in the right way, electrons in each segment could be arranged to create a new topological state — a special quantum wave function — leading to tunable semiconducting properties.

In the new work, they use a similar technique to stitch together short segments of nanoribbons to create a conducting metal wire tens of nanometers long and barely a nanometer wide.

The nanoribbons were created chemically and imaged on very flat surfaces using a scanning tunneling microscope. Simple heat was used to induce the molecules to chemically react and join together in just the right way. Fischer compares the assembly of daisy-chained building blocks to a set of Legos, but Legos designed to fit at the atomic scale.

“They are all precisely engineered so that there is only one way they can fit together. It’s as if you take a bag of Legos, and you shake it, and out comes a fully assembled car,” he said. “That is the magic of controlling the self-assembly with chemistry.”

Once assembled, the new nanoribbon’s electronic state was a metal — just as Louie predicted — with each segment contributing a single conducting electron.

The final breakthrough can be attributed to a minute change in the nanoribbon structure.

“Using chemistry, we created a tiny change, a change in just one chemical bond per about every 100 atoms, but which increased the metallicity of the nanoribbon by a factor of 20, and that is important, from a practical point of view, to make this a good metal,” Crommie said.

The two researchers are working with electrical engineers at UC Berkeley to assemble their toolbox of semiconducting, insulating and metallic graphene nanoribbons into working transistors.

“I believe this technology will revolutionize how we build integrated circuits in the future,” Fischer said. “It should take us a big step up from the best performance that can be expected from silicon right now. We now have a path to access faster switching speeds at much lower power consumption. That is what is driving the push toward a carbon-based electronics semiconductor industry in the future.”

Co-lead authors of the paper are Daniel Rizzo and Jingwei Jiang from UC Berkeley’s Department of Physics and Gregory Veber from the Department of Chemistry. Other co-authors are Steven Louie, Ryan McCurdy, Ting Cao, Christopher Bronner and Ting Chen of UC Berkeley. Jiang, Cao, Louie, Fischer and Crommie are affiliated with Berkeley Lab, while Fischer and Crommie are members of the Kavli Energy NanoSciences Institute.

The research was supported by the Office of Naval Research, the Department of Energy, the Center for Energy Efficient Electronics Science and the National Science Foundation.


Categories
ProgrammableWeb

7 Top Fantasy Sports APIs

Fantasy sports leagues are more popular than ever these days, with an estimated 60 million people participating in league play. Typically, participants create virtual teams based on real players of various sports and use those real players’ statistics to compute final scores and compete against other virtual teams. Nearly every real team sport imaginable now has a fantasy component played by fans, with many of the leagues requiring dues from players and offering payouts for winners.

Several “official” fantasy sports leagues are commissioned by real leagues, including the NFL and Premier League soccer. Other leagues are created by individual organizations, or partner organizations, for a plethora of sports, including basketball, baseball, soccer, college sports, UFC, golf, tennis, auto racing and eSports. Developers looking to create applications to accompany this popular pastime can start by finding the best APIs to suit their needs.

What is a Fantasy Sports API?

A Fantasy Sports API is an Application Programming Interface that enables developers to create applications that tap into Fantasy Sports data.

The best place to find these APIs is in the Fantasy Sports category in the ProgrammableWeb directory. In this article we highlight some favorites from our readers.
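Most of the APIs below follow the same basic pattern: send an authenticated HTTP request and parse the JSON (or XML) response. The short Python sketch below illustrates that pattern only; the base URL, resource path and “api_key” parameter are placeholders rather than any particular provider’s real endpoint, so check each provider’s documentation for the actual details.

    # Generic sketch of calling a fantasy sports REST API (placeholder endpoint).
    import requests

    API_KEY = "YOUR_API_KEY"  # issued by the provider when you register
    BASE_URL = "https://api.example-fantasy-provider.com/v1"  # hypothetical base URL

    def get_player_stats(player_id: str) -> dict:
        """Fetch statistics for one player and return the parsed JSON payload."""
        response = requests.get(
            f"{BASE_URL}/players/{player_id}/stats",
            params={"api_key": API_KEY},
            timeout=10,
        )
        response.raise_for_status()  # raise an error for non-2xx responses
        return response.json()

    if __name__ == "__main__":
        print(get_player_stats("12345"))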

1. Sportradar Sports Data API

Sportradar provides real-time, accurate sports statistics and sports content. Sportradar’s data coverage includes all major U.S. sports, plus hundreds of leagues throughout the world. Data can be retrieved from Sportradar via a REST API. This data includes schedules, standings, statistics, play-by-play, live images, and more.

2. Yahoo Fantasy Sports API

Yahoo Fantasy Sports allows users to compete against each other using statistics from real-world competitions. The Yahoo Fantasy Sports API provides rich data on leagues, teams and player information. Use it to analyze draft results, review free agents, optimize current rosters, or create other applications. The Yahoo Fantasy Sports API utilizes the Yahoo Query Language (YQL) as a mechanism to access Yahoo Fantasy Sports data, returning data in XML and JSON formats.
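As a rough illustration of how a request to the Yahoo Fantasy Sports API might look, the Python sketch below fetches league standings using an OAuth access token and asks for JSON output. The league key and resource path are illustrative assumptions; Yahoo’s developer documentation defines the exact resources, and obtaining the OAuth token is a separate step not shown here.

    # Hedged sketch: fetching league standings from the Yahoo Fantasy Sports API.
    import requests

    ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # obtained via Yahoo's OAuth flow (not shown)
    LEAGUE_KEY = "nfl.l.123456"               # hypothetical league key

    url = f"https://fantasysports.yahooapis.com/fantasy/v2/league/{LEAGUE_KEY}/standings"
    response = requests.get(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"format": "json"},  # request JSON instead of the default XML
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())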

3. Cric API

CricAPI provides data about the game of Cricket. Use the API to get live cricket match data, a list of matches, latest scores, and player batting and bowling stats. The CricAPI Fantasy API can be used before the match to help you choose players (batsmen / bowlers) for your fantasy game; once this is done, you can hit the API at regular intervals and calculate the results of your fantasy cricket league.

4. ProFootballAPI.com API

The ProFootballAPI NFL API provides users with access to a database of current and past NFL football statistics and game information. The database is updated every minute, even while games are being played. Data is available going back to 2009. The NFL API can provide answers to simple queries or return large data sets for more in-depth use.

5. Goalserve MLB API

Goalserve provides live sports data feeds for multiple sports. The Goalserve Sports Data Feeds MLB API delivers fixtures, live scores, results, in-game player statistics, profiles, injuries, odds, historical data since 2010, prematch and more.

6. GameScorekeeper API

GameScorekeeper provides feeds of data about eSports including League of Legends, Counter-Strike: Global Offensive, Heroes of the Storm, and DOTA 2. The GameScorekeeper REST API provides JSON data related to eSports such as upcoming matches, competitions, teams, and results. The GameScorekeeper Live API provides real-time data from eSports matches through websockets.
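For a live feed like this, a client keeps a websocket connection open and handles updates as they arrive. The Python sketch below shows that general pattern using the third-party websockets package; the feed URL and message format are placeholders rather than GameScorekeeper’s actual schema.

    # Hedged sketch of consuming a live eSports feed over a websocket connection.
    import asyncio
    import json

    import websockets  # pip install websockets

    LIVE_FEED_URL = "wss://live.example-scorekeeper.com/matches/98765"  # placeholder URL

    async def follow_match() -> None:
        async with websockets.connect(LIVE_FEED_URL) as socket:
            async for message in socket:      # each message is one live update
                update = json.loads(message)  # assume JSON payloads
                print(update)

    if __name__ == "__main__":
        asyncio.run(follow_match())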

7. Sportmonks Soccer API

SportMonks is a provider of data feeds for a variety of different professional sports. The SportMonks Soccer API provides data feeds for live scores, full season fixtures, video highlights, and in-play odds among other features. Users can access historical data stretching back to 2005.


Check out the Fantasy Sports category for more APIs, plus SDKs, Source Code Samples, and other resources.

Author: joyc

Categories
ScienceDaily

Modern theory from ancient impacts

Around 4 billion years ago, the solar system was far less hospitable than we find it now. Many of the large bodies we know and love were present, but probably looked considerably different, especially the Earth. We know from a range of sources, including ancient meteorites and planetary geology, that around this time there were vastly more collisions between, and impacts from, asteroids originating in the Mars-Jupiter asteroid belt.

Knowledge of these events is especially important to us, as the time period in question is not only when the surface of our planet was taking on a more recognizable form, but also when life was just getting started. More accurate details of Earth’s rocky history could help researchers answer long-standing questions about the mechanisms responsible for life, as well as provide information for other areas of life science.

“Meteorites provide us with the earliest history of ourselves,” said Professor Yuji Sano from the Atmosphere and Ocean Research Institute at the University of Tokyo. “This is what fascinated me about them. By studying properties, such as radioactive decay products, of meteorites that fell to Earth, we can deduce when they came and where they came from. For this study we examined meteorites that came from Vesta, the second-largest asteroid after the dwarf planet Ceres.”

Sano and his team found evidence that Vesta was hit by multiple impacting bodies around 4.4 billion to 4.15 billion years ago. This is earlier than 3.9 billion years ago, which is when the late heavy bombardment (LHB) is thought to have occurred. Current evidence for the LHB comes from lunar rocks collected during the Apollo moon missions of the 1970s, as well as other sources. But these new studies are improving upon previous models and will pave the way for an up-to-date database of early solar impact records.

“That Vesta-origin meteorites clearly show us impacts earlier than the LHB raises the question, ‘Did the late heavy bombardment truly occur?'” said Sano. “It seems to us that early solar system impacts peaked sooner than the LHB and reduced smoothly with time. It may not have been the cataclysmic period of chaos that current models describe.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.


Categories
ScienceDaily

Study shows difficulty in finding evidence of life on Mars

In a little more than a decade, samples of rover-scooped Martian soil will rocket to Earth.

While scientists are eager to study the red planet’s soils for signs of life, researchers must ponder a considerable new challenge: Acidic fluids — which once flowed on the Martian surface — may have destroyed biological evidence hidden within Mars’ iron-rich clays, according to researchers at Cornell University and at Spain’s Centro de Astrobiología.

The researchers conducted simulations involving clay and amino acids to draw conclusions regarding the likely degradation of biological material on Mars. Their paper, “Constraining the Preservation of Organic Compounds in Mars Analog Nontronites After Exposure to Acid and Alkaline Fluids,” published Sept. 15 in Nature Scientific Reports.

Alberto G. Fairén, a visiting scientist in the Department of Astronomy in the College of Arts and Sciences at Cornell, is a corresponding author.

NASA’s Perseverance rover, launched July 30, will land at Mars’ Jezero Crater next February; the European Space Agency’s Rosalind Franklin rover will launch in late 2022. The Perseverance mission will collect Martian soil samples and send them to Earth by the 2030s. The Rosalind Franklin rover will drill into the Martian surface, collect soil samples and analyze them in situ.

In the search for life on Mars, the red planet’s clay surface soils are a preferred collection target since the clay protects the molecular organic material inside. However, the past presence of acid on the surface may have compromised the clay’s ability to protect evidence of previous life.

“We know that acidic fluids have flowed on the surface of Mars in the past, altering the clays and their capacity to protect organics,” Fairén said.

He said the internal structure of clay is organized into layers, where the evidence of biological life — such as lipids, nucleic acids, peptides and other biopolymers — can become trapped and well preserved.

In the laboratory, the researchers simulated Martian surface conditions, aiming to preserve an amino acid called glycine in clay that had previously been exposed to acidic fluids. “We used glycine because it could rapidly degrade under the planet’s environmental conditions,” he said. “It’s a perfect informer to tell us what was going on inside our experiments.”

After a long exposure to Mars-like ultraviolet radiation, the experiments showed photodegradation of the glycine molecules embedded in the clay. Exposure to acidic fluids erases the interlayer space, turning it into a gel-like silica.

“When clays are exposed to acidic fluids, the layers collapse and the organic matter can’t be preserved. They are destroyed,” Fairén said. “Our results in this paper explain why searching for organic compounds on Mars is so sorely difficult.”

The paper’s lead author was Carolina Gil-Lozano of the Centro de Astrobiología, Madrid, and the Universidad de Vigo, Spain. The European Research Council funded this research.

Story Source:

Materials provided by Cornell University. Original written by Blaine Friedlander. Note: Content may be edited for style and length.


Categories
ScienceDaily

Unraveling the secrets of Tennessee whiskey

More than a century has passed since the last scientific analysis of the famed “Lincoln County [Tennessee] process” was published, but the secrets of the famous Tennessee whiskey flavor are starting to unravel at the University of Tennessee Institute of Agriculture. The latest research promises advancements in the field of flavor science as well as marketing.

Conducted by John P. Munafo, Jr., assistant professor of flavor science and natural products, and his graduate student, Trenton Kerley, the study “Changes in Tennessee Whiskey Odorants by the Lincoln County Process” was recently published in the Journal of Agricultural and Food Chemistry (JAFC).

The study incorporated a combination of advanced flavor chemistry techniques to probe the changes in flavor chemistry occurring during charcoal filtration. This type of filtration is a common step in the production of distilled beverages, including vodka and rum, but it’s a required step for a product to be labeled “Tennessee whiskey.” The step is called the Lincoln County Process (LCP), after the locale of the original Jack Daniel’s distillery. It is also referred to as “charcoal mellowing.”

The LCP step is performed by passing the fresh whiskey distillate through a bed of charcoal, usually derived from burnt sugar maple, prior to barrel-aging the product. Although no scientific studies have proved such a claim, it is believed that the LCP imparts a “smoother” flavor to Tennessee whiskey. In addition, by law, to carry the “Tennessee whiskey” designation on the label, the liquor must be produced in the state of Tennessee from at least 51% corn and aged in Tennessee for at least 2 years in unused charred oak barrels.

The actual LCP differs from distiller to distiller, and, as the details are generally held as a trade secret, the process has been historically shrouded in mystery. There are no regulations as to how the process is performed, only that the step is required. In other words, all a manufacturer needs to do is pass the distillate over charcoal (an undefined amount — possibly even just one piece). Thus, depending on how it’s conducted, the LCP step may not impact the whiskey flavor at all. On the other hand, even small adjustments to the LCP can modify the flavor profile of the whiskey positively or negatively, potentially causing any number of surprises.

Munafo and Kerley describe how distillers adjust parameters empirically throughout the whiskey production process, then rely on professional tasters to sample products, blending subtly unique batches to achieve their target flavor. Munafo says, “By gaining a fundamental understanding of the changes in flavor chemistry occurring during whiskey production, our team could advise distillers about exactly what changes are needed to make their process produce their desired flavor goals. We want to give distillers levers to pull, so they are not randomly or blindly attempting to get the precise flavor they want.”

Samples used in the study were provided by the Sugarlands Distilling Company (SDC), in Gatlinburg, Tennessee, producers of the Roaming Man Whiskey. SDC invited the UTIA researchers to visit their distillery and collect in-process samples. Munafo says SDC prioritizes transparency around their craft and takes pride in sharing the research, discovery and distillation process of how their whiskey is made and what makes Tennessee whiskey unique.

Olfactory evaluations — the good ole smell test — revealed that the LCP treatment generally decreased malty, rancid, fatty and roasty aromas in the whiskey distillates. As for the odorants (i.e., molecules responsible for odor), 49 were identified in the distillate samples using an analytical technique called gas chromatography-olfactometry (GC-O). Nine of these odorants have never been reported in the scientific whiskey literature.

One of the newly found whiskey odorants, called DMPF, was originally discovered in cocoa. It is described as having a unique anise or citrus-like smell. Another of the newly discovered whiskey odorants (called MND) is described as having a pleasant dried hay-like aroma. Both odorants have remarkably low odor thresholds in the parts-per-trillion range, meaning that the smells can be detected at very low levels by people but are difficult to detect with scientific instrumentation.

The only previous investigation into how charcoal treatment affects whiskey was published in 1908 by William Dudley in the Journal of the American Chemical Society. In the new study, thirty-one whiskey odorants were measured via a technique called stable isotope dilution assay (SIDA), all showing a decrease in concentration as a result of LCP treatment, albeit to different degrees. That is to say, while the LCP appears to be selective in removing certain odorants, the process didn’t increase or add any odorants to the distillate. This new knowledge can be used to optimize Tennessee whiskey production: for instance, the process can be tuned to remove undesirable aromas while maintaining higher levels of desirable ones, thus “tailoring” the flavor profile of the finished whiskey.

“We want to provide the analytical tools needed to help enable distillers to have more control of their processes and make more consistent and flavorful whiskey,” says Munafo. “We want to help them to take out some of the guesswork involved in whiskey production.”

Additional studies are now underway at the UT Department of Food Science to characterize both the flavor chemistry of different types of whiskey and their production processes. The ultimate aim of the whiskey flavor chemistry program is to aid whiskey manufacturers in producing a consistent product with the exact flavor profile that they desire. Even with the aid of science, Munafo says, “Whiskey making will ‘still’ remain an impressive art form.” Pun intended.

The researchers acknowledge support from the USDA National Institute of Food and Agriculture (NIFA) Hatch Project #1015002 and funding through the Food Science Department and start-up funding from the University of Tennessee Institute of Agriculture.


Categories
ScienceDaily

A multinational study overturns a 130-year-old assumption about seawater chemistry

There’s more to seawater than salt. Ocean chemistry is a complex mixture of particles, ions and nutrients. And for over a century, scientists believed that certain ion ratios held relatively constant over space and time.

But now, following a decade of research, a multinational study has refuted this assumption. Debora Iglesias-Rodriguez, professor and vice chair of UC Santa Barbara’s Department of Ecology, Evolution, and Marine Biology, and her colleagues discovered that the seawater ratios of three key elements vary across the ocean, which means scientists will have to re-examine many of their hypotheses and models. The results appear in the Proceedings of the National Academy of Sciences.

Calcium, magnesium and strontium (Ca, Mg and Sr) are important elements in ocean chemistry, involved in a number of biologic and geologic processes. For instance, a host of different animals and microbes use calcium to build their skeletons and shells. These elements enter the ocean via rivers and tectonic features, such as hydrothermal vents. They’re taken up by organisms like coral and plankton, as well as by ocean sediment.

The first approximation of modern seawater composition took place over 130 years ago. The scientists who conducted the study concluded that, despite minor variations from place to place, the ratios between the major ions in the waters of the open ocean are nearly constant.

Researchers have generally accepted this idea from then on, and it made a lot of sense. Based on the slow turnover of these elements in the ocean — on the order of millions of years — scientists long thought the ratios of these ions would remain relatively stable over extended periods of time.

“The main message of this paper is that we have to revisit these ratios,” said Iglesias-Rodriguez. “We cannot just continue to make the assumptions we have made in the past essentially based on the residency time of these elements.”

Back in 2010, Iglesias-Rodriguez was participating in a research expedition over the Porcupine Abyssal Plain, a region of North Atlantic seafloor west of Europe. She had invited a former student of hers, this paper’s lead author Mario Lebrato, who was pursuing his doctorate at the time.

Their study analyzed the chemical composition of water at various depths. Lebrato found that the Ca, Mg and Sr ratios from their samples deviated significantly from what they had expected. The finding was intriguing, but the data was from only one location.

Over the next nine years, Lebrato put together a global survey of these element ratios. Scientists including Iglesias-Rodriguez collected over 1,100 water samples on 79 cruises ranging from the ocean’s surface to 6,000 meters down. The data came from 14 ecosystems across 10 countries. And to maintain consistency, all the samples were processed by a single person in one lab.

The project’s results overturned the field’s 130-year-old assumption about seawater chemistry, revealing that the ratio of these ions varies considerably across the ocean.

Scientists have long used these ratios to reconstruct past ocean conditions, like temperature. “The main implication is that the paleo-reconstructions we have been conducting have to be revisited,” Iglesias-Rodriguez explained, “because environmental conditions have a substantial impact on these ratios, which have been overlooked.”

Oceanographers can no longer assume that data they have on past ocean chemistry represent the whole ocean. It has become clear they can extrapolate only regional conditions from this information.

This revelation also has implications for modern marine science. Seawater ratios of Mg to Ca affect the composition of animal shells. For example, a higher magnesium content tends to make shells more vulnerable to dissolution, which is an ongoing issue as increasing carbon dioxide levels gradually make the ocean more acidic. “Biologically speaking, it is important to figure out these ratios with some degree of certainty,” said Iglesias-Rodriguez.

Iglesias-Rodriguez’s latest project focuses on the application of rock dissolution as a method to fight ocean acidification. She’s looking at lowering the acidity of seawater using pulverized stones like olivine and carbonate rock. This intervention will likely change the balance of ions in the water, which is something worth considering. As climate change continues unabated, this intervention could help keep acidity in check in small areas, like coral reefs.


Categories
ScienceDaily

Demonstrating the dynamics of electron-light interactions from first principles

With the highest possible spatial resolution of less than a millionth of a millimetre, electron microscopes make it possible to study the properties of materials at the atomic level and thus probe the realm of quantum mechanics. Quantum-physical fundamentals can be studied particularly well through the interactions between electrons and photons: excited with laser light, for example, electrons change their energy, mass or velocity. Professor Nahid Talebi from the Institute for Experimental and Applied Physics at Kiel University has invented a new toolbox that extends the theoretical description of electron-light interactions to the highest possible level of accuracy. She has combined the Maxwell and Schrödinger equations in a time-dependent loop to fully simulate the interactions from first principles. Talebi’s simulation makes it possible for the first time to describe ultrafast processes precisely in theory and to map them in real time without using the adiabatic approximation. She recently presented her results in the journal Physical Review Letters. In the long term, they could help to improve microscopy methods, a goal Talebi is pursuing in her ERC Starting Grant project “NanoBeam,” funded by the European Research Council.

Ultrafast electron microscopy combines electron microscopy and laser technology. With ultrafast electron pulses, the dynamics of a sample can be studied with femtosecond temporal resolution, which also allows conclusions about the properties of the sample. Thanks to further developments in spectroscopy technology, it is now possible to study not only the atomic and electronic structure of samples but also their photonic excitations, such as plasmon polaritons.

For the first time the simulation depicts the process of the interactions as a film in real-time

However, the simulation of such electron-light interactions is time-consuming and can only be carried out with high-performance computers. “Therefore, adiabatic approximations and one-dimensional electron models are often used, meaning that electron recoil and amplitude modulations have been neglected,” explains Nahid Talebi, Professor of Nanooptics at the Institute of Experimental and Applied Physics (IEAP) and an expert in simulations. For the first time, her new simulation shows the process of the electron-light interactions as a film in real time, describing the complex interactions with the highest possible accuracy.

In her toolbox, she has combined the Maxwell and Schrödinger equations in a time-dependent loop to fully simulate the interactions from first principles, thereby laying the foundation for the new field of electron-light interactions beyond adiabatic approximations. With this combination, Talebi was able to simulate what happens when an electron approaches a nanostructure of gold that was previously excited by a laser. Her simulation shows how the energy, momentum, and, in general, the shape of the electron’s wave packet change at each moment of the interaction. In this way, the full dynamics of the interaction caused by both single-photon and two-photon processes are captured. Single-photon processes are important, for example, to model electron energy-loss and -gain channels, whereas two-photon processes are responsible for modeling laser-induced elastic channels such as the diffraction phenomenon.
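Schematically, the kind of self-consistent loop described here can be written as a generic Maxwell-Schrödinger coupling (a textbook-style sketch, not the exact formulation used in the paper): the electron wave packet evolves under the time-dependent Schrödinger equation in the electromagnetic potentials, and its probability current acts back on the fields as a source term in Maxwell’s equations,

    i\hbar\,\partial_t \psi(\mathbf{r},t) = \Big[\tfrac{1}{2m}\big(-i\hbar\nabla - q\mathbf{A}(\mathbf{r},t)\big)^2 + q\,\phi(\mathbf{r},t)\Big]\,\psi(\mathbf{r},t),

    \mathbf{J}(\mathbf{r},t) = \tfrac{q}{m}\,\mathrm{Re}\!\left[\psi^*\big(-i\hbar\nabla - q\mathbf{A}\big)\psi\right],

    \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\,\partial_t\mathbf{E}, \qquad \nabla\times\mathbf{E} = -\,\partial_t\mathbf{B}.

At every time step the fields obtained from Maxwell’s equations update the potentials A and φ in the electron’s Hamiltonian, and the updated wave function returns a current density J to the field solver, closing the loop without invoking an adiabatic approximation.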

Particularly in her simulation, Talebi observed a pronounced diffraction pattern that originates from strong interactions between electrons and photons based on the Kapitza-Dirac effect. This diffraction pattern can have promising applications in time-resolved holography, to unravel charge-carrier dynamics of solid-state and molecular systems.

Further improving spectroscopy methods with ERC project “NanoBeam”

“Our toolbox can be used to benchmark the many approximations in theoretical developments, including eikonal approximations, neglecting the recoil, and neglecting two-photon processes,” Talebi says. “Although we have already made a great step towards electron-light interactions beyond adiabatic approximations, there is still room for further developments.” Together with her team, she plans to include a three-dimensional Maxwell-Dirac simulation domain to model relativistic and spin interactions. She also wants to better understand the role of exchange and correlations during electron-electron interactions.

Another aim of Talebi’s is to use the insights from her theoretical modelling to propose novel methodologies for coherent control and shaping of sample excitations using electron beams. With her project “NanoBeam” she intends to develop a novel spectral interferometry technique with the ability to retrieve and control the spectral phase in a scanning electron microscope, to overcome the challenges of achieving both nanometer spatial and attosecond temporal resolution. The project is funded by an ERC Starting Grant from the European Research Council with about 1.5 million euros.

This study was funded by the European Union through the project “NanoBeam,” an ERC Starting Grant of the European Research Council (ERC).

Story Source:

Materials provided by Kiel University. Note: Content may be edited for style and length.


Categories
ScienceDaily

Study rules out dark matter destruction as origin of extra radiation in galaxy center

The detection more than a decade ago by the Fermi Gamma Ray Space Telescope of an excess of high-energy radiation in the center of the Milky Way convinced some physicists that they were seeing evidence of the annihilation of dark matter particles, but a team led by researchers at the University of California, Irvine has ruled out that interpretation.

In a paper published recently in the journal Physical Review D, the UCI scientists and colleagues at Virginia Polytechnic Institute and State University and other institutions report that — through an analysis of the Fermi data and an exhaustive series of modeling exercises — they were able to determine that the observed gamma rays could not have been produced by what are called weakly interacting massive particles, most popularly theorized as the stuff of dark matter.

By eliminating these particles, the destruction of which could generate energies of up to 300 giga-electron volts, the paper’s authors say, they have put the strongest constraints yet on dark matter properties.

“For 40 years or so, the leading candidate for dark matter among particle physicists was a thermal, weakly interacting and weak-scale particle, and this result for the first time rules out that candidate up to very high-mass particles,” said co-author Kevork Abazajian, UCI professor of physics & astronomy.

“In many models, this particle ranges from 10 to 1,000 times the mass of a proton, with more massive particles being less attractive theoretically as a dark matter particle,” added co-author Manoj Kaplinghat, also a UCI professor of physics & astronomy. “In this paper, we’re eliminating dark matter candidates over the favored range, which is a huge improvement in the constraints we put on the possibilities that these are representative of dark matter.”

Abazajian said that dark matter signals could be crowded out by other astrophysical phenomena in the Galactic Center — such as star formation, cosmic ray deflection off molecular gas and, most notably, neutron stars and millisecond pulsars — which can themselves act as sources of the excess gamma rays detected by the Fermi space telescope.

“We looked at all of the different modeling that goes on in the Galactic Center, including molecular gas, stellar emissions and high-energy electrons that scatter low-energy photons,” said co-author Oscar Macias, a postdoctoral scholar in physics and astronomy at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo whose visit to UCI in 2017 initiated this project. “We took over three years to pull all of these new, better models together and examine the emissions, finding that there is little room left for dark matter.”

Macias, who is also a postdoctoral researcher with the GRAPPA Centre at the University of Amsterdam, added that this result would not have been possible without data and software provided by the Fermi Large Area Telescope collaboration.

The group tested all classes of models used in the Galactic Center region for excess emission analyses, and its conclusions remained unchanged. “One would have to craft a diffuse emission model that leaves a big ‘hole’ in them to relax our constraints, and science doesn’t work that way,” Macias said.

Kaplinghat noted that physicists have predicted that radiation from dark matter annihilation would be represented in a neat spherical or elliptical shape emanating from the Galactic Center, but the gamma ray excess detected by the Fermi space telescope after its June 2008 deployment shows up as a triaxial, bar-like structure.

“If you peer at the Galactic Center, you see that the stars are distributed in a boxy way,” he said. “There’s a disk of stars, and right in the center, there’s a bulge that’s about 10 degrees on the sky, and it’s actually a very specific shape — sort of an asymmetric box — and this shape leaves very little room for additional dark matter.”

Does this research rule out the existence of dark matter in the galaxy? “No,” Kaplinghat said. “Our study constrains the kind of particle that dark matter could be. The multiple lines of evidence for dark matter in the galaxy are robust and unaffected by our work.”

Far from considering the team’s findings to be discouraging, Abazajian said they should encourage physicists to focus on concepts other than the most popular ones.

“There are a lot of alternative dark matter candidates out there,” he said. “The search is going to be more like a fishing expedition where you don’t already know where the fish are.”

Also contributing to this research project — which was supported by the National Science Foundation, the U.S. Department of Energy Office of Science and Japan’s World Premier International Research Center Initiative — were Ryan Keeley, who earned a Ph.D. in physics & astronomy at UCI in 2018 and is now at the Korea Astronomy and Space Science Institute, and Shunsaku Horiuchi, a former UCI postdoctoral scholar in physics & astronomy who is now an assistant professor of physics at Virginia Tech.


Categories
ScienceDaily

New study warns: We have underestimated the pace at which the Arctic is melting

Temperatures in the Arctic Ocean between Canada, Russia and Europe are warming faster than researchers’ climate models have been able to predict.

Over the past 40 years, temperatures have risen by one degree every decade, and even more so over the Barents Sea and around Norway’s Svalbard archipelago, where they have increased by 1.5 degrees per decade throughout the period.

This is the conclusion of a new study published in Nature Climate Change.

“Our analyses of Arctic Ocean conditions demonstrate that we have been clearly underestimating the rate of temperature increases in the atmosphere nearest to the sea level, which has ultimately caused sea ice to disappear faster than we had anticipated,” explains Jens Hesselbjerg Christensen, a professor at the University of Copenhagen’s Niels Bohr Institute (NBI) and one of the study’s researchers.

Together with his NBI colleagues and researchers from the Universities of Bergen and Oslo, the Danish Meteorological Institute and the Australian National University, he compared current temperature changes in the Arctic with climate fluctuations that we know from, for example, Greenland during the ice age between 120,000 and 11,000 years ago.

“The abrupt rise in temperature now being experienced in the Arctic has only been observed during the last ice age. During that time, analyses of ice cores revealed that temperatures over the Greenland Ice Sheet increased several times, by 10 to 12 degrees, over a 40 to 100-year period,” explains Jens Hesselbjerg Christensen.

He emphasizes that the significance of the steep rise in temperature is yet to be fully appreciated, and that an increased focus on the Arctic, and on reducing global warming more generally, is a must.

Climate models ought to take abrupt changes into account

Until now, climate models predicted that Arctic temperatures would increase slowly and in a stable manner. However, the researchers’ analysis demonstrates that these changes are moving along at a much faster pace than expected.

“We have looked at the climate models analysed and assessed by the UN Climate Panel. Only those models based on the worst-case scenario, with the highest carbon dioxide emissions, come close to what our temperature measurements show over the past 40 years, from 1979 to today,” says Jens Hesselbjerg Christensen.

In the future, there ought to be more of a focus on being able to simulate the impact of abrupt climate change on the Arctic. Doing so will allow us to create better models that can accurately predict temperature increases:

“Changes are occurring so rapidly during the summer months that sea ice is likely to disappear faster than most climate models have ever predicted. We must continue to closely monitor temperature changes and incorporate the right climate processes into these models,” says Jens Hesselbjerg Christensen. He concludes:

“Thus, successfully implementing the necessary reductions in greenhouse gas emissions to meet the Paris Agreement is essential in order to ensure a sea-ice packed Arctic year-round.”

Story Source:

Materials provided by University of Copenhagen. Note: Content may be edited for style and length.


Categories
ScienceDaily

New insights for sun-gathering technologies

Every hour, the sun saturates the earth with more energy than humans use in a year. Harnessing some of this energy to meet global demand has become a grand challenge, with the world poised to double its energy consumption in just thirty years.

In a new study, researchers at the Biodesign Center for Applied Structural Discovery (CASD) and ASU’s School of Molecular Sciences take a page from Nature’s lesson book. Inspired by the way plants and other photosynthetic organisms collect and use the sun’s radiant energy, they hope to develop technologies that harvest sunlight and store it as carbon-free or carbon-neutral fuels.

“This article describes a general yet useful strategy for better understanding the role of catalysts in emerging technologies for converting sunlight to fuels,” says corresponding author Gary Moore.

The research appears in the current issue of the American Chemical Society (ACS) journal Applied Energy Materials.

Despite the advances in solar panel technologies, their limitations are apparent. Researchers would like to store accumulated energy from the sun in a concentrated form, to be used when and where it is needed. Catalysts — materials that act to speed up the rate at which chemical reactions occur — are a critical ingredient for harvesting sunlight and stockpiling it as fuels, through a process known as photoelectrosynthesis.

As the authors demonstrate, however, the effectiveness of catalysts is critically dependent on how they are used in new green technologies. The goal is to maximize energy efficiency and where possible, make use of earth-abundant elements.

According to Brian Wadsworth, researcher in the CASD center and lead author of the new study, a less-is-more approach to catalysts may improve the performance of photoelectrosynthetic devices:

“There is a traditional notion that relatively high loadings of catalyst are beneficial to maximizing the reaction rates and related performance of catalytic materials,” Wadsworth says. “However, this design strategy should not always be implemented in assemblies involving the capture and conversion of solar energy as relatively thick catalyst layers can hamper performance by screening sunlight from reaching an underlying light-absorbing material and/or disfavoring the accumulation of catalytically-active states.”

The new research provides a framework for better understanding catalytic performance in solar fuel devices and points the way to further discoveries.

Story Source:

Materials provided by Arizona State University. Original written by Richard Harth. Note: Content may be edited for style and length.
