Categories
Hackster.io

Elephant Edge Webinar 1: The Software

The ElephantEdge challenge is calling on the community to build ML models using the Edge Impulse Studio and tracking dashboards using Avnet’s IoTConnect. These will be deployed onto 10 production-grade collars manufactured by our engineering partner, Institute IRNAS, and put into the field by Smart Parks.

In this first of two ElephantEdge webinars, you’ll learn about the problems park rangers are facing and how to get started with IoTConnect and the Edge Impulse Studio.

Contest Link: https://www.hackster.io/contests/ElephantEdge

Categories
ScienceDaily

Excitons form superfluid in certain 2D combos

Mixing and matching computational models of 2D materials led scientists at Rice University to the realization that excitons — quasiparticles that exist when electrons and holes briefly bind — can be manipulated in new and useful ways.

The researchers identified a small set of 2D compounds with similar atomic lattice dimensions that, when placed together, would allow excitons to form spontaneously. Excitons generally form when energy from light or electricity boosts electrons into a higher state, leaving behind positively charged holes.

But in a few of the combinations predicted by Rice materials theorist Boris Yakobson and his team, excitons were found to stabilize at the materials’ ground state. The team determined that these lowest-energy excitons could condense into a superfluid-like phase. The discovery shows promise for electronic, spintronic and quantum computing applications.

“The very word ‘exciton’ means that electrons and holes ‘jump up’ into a higher energy,” Yakobson said. “All cold systems sit in their lowest-possible energy states, so no excitons are present. But we found a realization of what seems a paradox as conceived by Nevill Mott 60 years ago: a material system where excitons can form and exist in the ground state.”

The open-access study by Yakobson, graduate student Sunny Gupta and research scientist Alex Kutana, all of Rice’s Brown School of Engineering, appears in Nature Communications.

After evaluating many thousands of possibilities, the team precisely modeled 23 bilayer heterostructures, their layers loosely held in alignment by weak van der Waals forces, and calculated how their band gaps aligned when placed next to each other. (Band gaps define the distance an electron has to leap to give a material its semiconducting properties. Perfect conductors — metals or semimetals like graphene — have no band gap.)

Ultimately, they produced phase diagrams for each combination, maps that allowed them to view which had the best potential for experimental study.

“The best combinations are distinguished by a lattice parameter match and, most importantly, by the special positions of the electronic bands that form a broken gap, also called type III,” Yakobson said.
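For readers unfamiliar with the terminology, the alignment type can be read off from where each layer’s band edges sit on a common energy scale. The short sketch below classifies a bilayer as type I (straddling), type II (staggered) or type III (broken gap) from its conduction-band minima and valence-band maxima; the energy values in the example are hypothetical placeholders, not numbers from the Rice study.

```python
def alignment_type(vbm_a, cbm_a, vbm_b, cbm_b):
    """Classify a bilayer band alignment from band-edge energies in eV.

    vbm_*/cbm_*: valence-band maximum / conduction-band minimum of each layer,
    measured on a common energy scale (e.g., relative to the vacuum level).
    """
    # Relabel so that layer A is the one with the lower valence-band maximum.
    if vbm_a > vbm_b:
        vbm_a, cbm_a, vbm_b, cbm_b = vbm_b, cbm_b, vbm_a, cbm_a

    if vbm_b > cbm_a:
        # Layer B's valence band tops out above layer A's conduction band:
        # electrons can drop from one layer into the other with no energy cost,
        # which is why excitons can appear spontaneously in the ground state.
        return "type III (broken gap)"
    if cbm_b <= cbm_a:
        # Layer B's gap sits entirely inside layer A's gap.
        return "type I (straddling gap)"
    return "type II (staggered gap)"


# Hypothetical band edges (eV), purely for illustration:
print(alignment_type(vbm_a=-6.0, cbm_a=-4.5, vbm_b=-4.3, cbm_b=-3.0))  # type III (broken gap)
```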

Conveniently, the most robust combinations may be adjusted by applying stress through tension, curvature or an external electric field, the researchers wrote. That could allow the phase state of the excitons to be tuned to take on the “perfect fluid” properties of a Bose-Einstein condensate or a superconducting BCS condensate.

“In a quantum condensate, bosonic particles at low temperatures occupy a collective quantum ground state,” Gupta said. “That supports macroscopic quantum phenomena as remarkable as superfluidity and superconductivity.”

“Condensate states are intriguing because they possess bizarre quantum properties and exist on an everyday scale, accessible without a microscope, and only low temperature is required,” Kutana added. “Because they are at the lowest possible energy state and because of their quantum nature, condensates cannot lose energy and behave as a perfect frictionless fluid.

“Researchers have been looking to realize them in various solid and gas systems,” he said. “Such systems are very rare, so having two-dimensional materials among them would greatly expand our window into the quantum world and create opportunities for use in new, amazing devices.”

The best combinations were assemblies of heterostructure bilayers of antimony-tellurium-selenium with bismuth-tellurium-chlorine; hafnium-nitrogen-iodine with zirconium-nitrogen-chlorine; and lithium-aluminum-tellurium with bismuth-tellurium-iodine.

“Except for having similar lattice parameters within each pair, the chemistry compositions appear rather nonintuitive,” Yakobson said. “We saw no way to anticipate the desired behavior without the painstaking quantitative analysis.

“One can never deny a chance to find serendipity — as Robert Curl said, chemistry is all about getting lucky — but sifting through hundreds of thousands of material combinations is unrealistic in any lab. Theoretically, however, it can be done.”

Go to Source
Author:

Categories
ProgrammableWeb

OpenAI Announces API Access to New AI Models

OpenAI has announced an API for accessing new AI models it develops. OpenAI is known for open-sourcing most of its AI tools, but it has taken a different approach with this particular project. The company has decided to commercialize the product to ensure it can be controlled from a privacy and safety standpoint, and to help fund its broader mission to put artificial intelligence in the hands of as many developers as possible.

The new API takes a different approach from typical AI systems, most of which are designed for specific use cases. OpenAI considers its offering a “general-purpose” AI system exposed through a “text in, text out” interface: users can apply it to essentially any English-language task, and after any text prompt the API will return a text completion.

The completion is generated by the API attempting to match the pattern in the text it is given. Developers can prime that pattern with examples, or let the model pick it up from the prompt alone. The API uses models with weights from the GPT-3 family. Delivering this AI through an API will also allow OpenAI to keep pace with the constantly evolving technology behind AI and machine learning.
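As a rough illustration of the “text in, text out” pattern, the snippet below posts a prompt to the beta completions endpoint and prints the returned completion. The endpoint path, engine name and parameters reflect the API as documented during the beta and should be treated as assumptions to verify against OpenAI’s current documentation.

```python
import os
import requests  # third-party HTTP client: pip install requests

# Endpoint and parameters as documented during the beta; confirm against the
# current OpenAI API reference before relying on them.
API_KEY = os.environ["OPENAI_API_KEY"]
URL = "https://api.openai.com/v1/engines/davinci/completions"

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "Translate this English sentence to French: 'Hello, world.'",
        "max_tokens": 32,    # cap on the length of the returned completion
        "temperature": 0.7,  # higher values give more varied completions
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])  # the text completion for the prompt
```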

The API is currently in beta. Those interested in participating should apply here. Beta participants already include Algolia, Quizlet, and Reddit, as well as researchers at institutions like the Middlebury Institute. Check out the blog post announcement for more API details.

Go to Source
Author: ecarter

Categories
ScienceDaily

US wind plants show relatively low levels of performance decline as they age

Wind plants in the United States — especially the newest models — remain relatively efficient over time, with only a 13% drop in the plants’ performance over 17 years, researchers at the Lawrence Berkeley National Laboratory report in the May 13 issue of the journal Joule. Their study also suggests that a production tax credit provides an effective incentive to maintain the plants during the 10-year window in which they are eligible to receive it. When this tax credit window closes, wind plant performance drops.

“Since wind plant operators are now receiving less revenue after the tax credit expires, the effective payback period to recoup the costs of any maintenance expenditure is longer,” says study author Dev Millstein, a research scientist at Lawrence Berkeley National Laboratory. “Due to this longer payback period, we hypothesize that plants may choose to spend less on maintenance overall, and their performance may therefore drop.”

Wind power is on the rise, supplying 7.3% of electricity generation in the United States in 2019 and continuing to grow around the world due to its low cost and ability to help states and countries reach their carbon emission reduction goals. But while the technology is highly promising, it isn’t infallible — like any engineered system, wind plants decline in performance as they age, although the rate of decline varies based on the location of the plant. In order to understand the potential growth of this technology and its ability to impact electricity systems, accurate estimates of future wind plant performance are essential.

Building on previous research that focused on Europe, Millstein and colleagues assessed the US onshore wind fleet, evaluating the performance of 917 US wind plants (including newer plants introduced in 2008 or later as well as older plants) over a 10-year period. Since measurements of long-term wind speed are typically not available for a given location, the researchers determined wind speed using global reanalysis data, accounting for shifts in available wind from one year to the next. They obtained data on the energy generated from each plant from the US Energy Information Administration, which tracks electricity generation from each plant on a monthly basis, and they performed a statistical analysis to determine the average rate of age-related performance decline across the entire fleet.

Millstein and colleagues found significant differences in performance decline between the older and younger wind plant models, with older vintages declining by 0.53% each year for the first 10 years while their younger counterparts declined by only 0.17% per year during the same decade.

But a notable change occurred as soon as the plants turned 10 years old — a trend in decline that has not been observed in Europe. As soon as the plants lost their eligibility for a production tax credit of 2.3 cents per kilowatt-hour, their performance began dropping at a yearly rate of 3.6%.
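As a back-of-the-envelope illustration of how those yearly rates compound, the toy calculation below multiplies out a plant’s relative output over 17 years using the per-year figures quoted above. It is illustrative only: the study’s fleet-wide 13% figure reflects a mix of plant vintages and further statistical adjustments, so this simple compounding will not reproduce it exactly.

```python
def relative_output(years, early_rate, late_rate, credit_years=10):
    """Relative energy output after `years`, compounding yearly performance decline.

    early_rate / late_rate are fractional declines per year before and after the
    production-tax-credit window (credit_years) closes.
    """
    output = 1.0
    for year in range(1, years + 1):
        rate = early_rate if year <= credit_years else late_rate
        output *= 1.0 - rate
    return output

# Younger plants: 0.17%/yr for the first decade, then 3.6%/yr once the credit expires.
print(f"younger vintage: {relative_output(17, 0.0017, 0.036):.2f} of year-one output")
# Older plants: 0.53%/yr for the first decade, then 3.6%/yr.
print(f"older vintage:   {relative_output(17, 0.0053, 0.036):.2f} of year-one output")
```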

Still, the researchers are optimistic about the ability of US wind plants to weather the years.

“We found that performance decline with age for US plants was on the lower end of the spectrum found from wind fleets in other countries, specifically compared to European research studies,” says Millstein. “This is generally good news for the US wind fleet. This study will help people account for a small amount of performance loss with age while not exaggerating the magnitude of such losses.”

As the wind energy sector continues to swell, the researchers note that their findings can be used to inform investors, operators, and energy modelers, enabling accurate long-term wind plant energy production estimates and guiding the development of an evolving electrical grid.

“The hope is that, overall, the improved estimates of wind generation and costs will lead to more effective decision making from industry, academia, and policy makers,” says Millstein.

Story Source:

Materials provided by Cell Press. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Paired with super telescopes, model Earths guide hunt for life

Cornell University astronomers have created five models representing key points from our planet’s evolution, like chemical snapshots through Earth’s own geologic epochs.

The models will be spectral templates for astronomers to use in the approaching new era of powerful telescopes, and in the hunt for Earth-like planets in distant solar systems.

“This new generation of space- and ground-based telescopes, coupled with our models, will allow us to identify planets like our Earth out to about 50 to 100 light-years away,” said Lisa Kaltenegger, associate professor of astronomy and director of the Carl Sagan Institute.

For the research and model development, Kaltenegger, doctoral student Jack Madden and Zifan Lin authored “High-Resolution Transmission Spectra of Earth through Geological Time,” published in Astrophysical Journal Letters.

“Using our own Earth as the key, we modeled five distinct Earth epochs to provide a template for how we can characterize a potential exo-Earth — from a young, prebiotic Earth to our modern world,” she said. “The models also allow us to explore at what point in Earth’s evolution a distant observer could identify life on the universe’s ‘pale blue dots’ and other worlds like them.”

Kaltenegger and her team created atmospheric models that match the Earth of 3.9 billion years ago, a prebiotic Earth, when carbon dioxide densely cloaked the young planet. A second throwback model chemically depicts a planet free of oxygen, an anoxic Earth, going back 3.5 billion years. Three other models reveal the rise of oxygen in the atmosphere from a 0.2% concentration to modern-day levels of 21%.

“Our Earth and the air we breathe have changed drastically since Earth formed 4.5 billion years ago,” Kaltenegger said, “and for the first time, this paper addresses how astronomers trying to find worlds like ours could spot young to modern Earth-like planets in transit, using our own Earth’s history as a template.”

In Earth’s history, the timeline of the rise of oxygen and its abundance is not clear, Kaltenegger said. But if astronomers can find exoplanets with nearly 1% of Earth’s current oxygen levels, they will begin to find signs of emerging biology, ozone and methane — and can match those signatures to the corresponding Earth-epoch templates.

“Our transmission spectra show atmospheric features, which would show a remote observer that Earth had a biosphere as early as about 2 billion years ago,” Kaltenegger said.

Using forthcoming telescopes like NASA’s James Webb Space Telescope, scheduled to launch in March 2021, or the Extremely Large Telescope in Antofagasta, Chile, scheduled for first light in 2025, astronomers could watch as an exoplanet transits in front of its host star, revealing the planet’s atmosphere.

“Once the exoplanet transits and blocks out part of its host star, we can decipher its atmospheric spectral signatures,” Kaltenegger said. “Using Earth’s geologic history as a key, we can more easily spot the chemical signs of life on the distant exoplanets.”
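To give a sense of scale for such transit measurements, the sketch below uses generic Earth and Sun values (not numbers from the Cornell paper) to estimate how much starlight an Earth-sized transit blocks and how small the extra dip contributed by a thin atmosphere is.

```python
# Back-of-the-envelope transit signal sizes using generic Earth/Sun numbers
# (illustrative only; not values from the Cornell study).
R_SUN_KM = 696_000      # solar radius
R_EARTH_KM = 6_371      # Earth radius
SCALE_HEIGHT_KM = 8.5   # approximate scale height of Earth's lower atmosphere

transit_depth = (R_EARTH_KM / R_SUN_KM) ** 2                         # fraction of starlight blocked
atmosphere_ring = 2 * R_EARTH_KM * SCALE_HEIGHT_KM / R_SUN_KM ** 2   # extra depth per scale height

print(f"transit depth      ~ {transit_depth * 1e6:.0f} ppm")                       # roughly 84 ppm
print(f"atmospheric signal ~ {atmosphere_ring * 1e6:.2f} ppm per scale height")    # fraction of a ppm
```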

The research was funded by the Brinson Foundation and the Carl Sagan Institute.

Story Source:

Materials provided by Cornell University. Original written by Blaine Friedlander. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
3D Printing Industry

NVIDIA proposes way of teaching robots depth perception, and how to turn 2D images into 3D models

A method of machine learning has proven capable of turning 2D images into 3D models. Created by researchers at multi-million-dollar GPU manufacturer NVIDIA, the framework shows that it is possible to infer shape, texture, and light from a single image, in a similar way to the workings of the human eye. “Close your left eye […]

Go to Source
Author: Beau Jackson

Categories
ScienceDaily

Experiment measures velocity in 3D

Many of today’s scientific processes are simulated using computer-driven mathematical models. But for a model to accurately predict how air flow behaves at high speeds, for example, scientists need supplemental real-world data. Providing that validation data, using up-to-date methods, was a key motivation for a recent experimental study conducted by researchers at the University of Illinois at Urbana-Champaign.

“We created a physical experiment that could measure the flow field that others try to simulate with computational models to predict turbulence. It validates their models and gives them additional data to compare their results against, particularly in terms of velocity,” said Kevin Kim, a doctoral student in the Department of Aerospace Engineering.

Kim said the wind tunnel that was built and the design of the experiments were based on simple geometry and fundamental physics that allowed them to manipulate two streams of air flow, one from an air tank and the other from ambient room air. There is a physical barrier between the two streams before they reach the test section of the wind tunnel, where they begin to mix. Images are taken of seed particles in the flow.

“There are two nozzles that come after the air tank. We changed the geometry of one of the nozzles to change the overall Mach number, then studied the different mixing layers where the two flows meet,” Kim said. “Depending on the different speeds of the two streams coming in, you start to see different characteristics of the mixing.”

The primary free stream started at subsonic Mach 0.5 and was increased to Mach 2.5 in increments of 0.5. The secondary free stream remained subsonic, below Mach 1, in every case.
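For a feel for the speeds involved, the sketch below converts that test matrix of Mach numbers to approximate velocities using the standard isentropic relation for static temperature. The assumed stagnation temperature of 295 K is a placeholder, not a condition reported in the study.

```python
import math

GAMMA = 1.4    # ratio of specific heats for air
R_AIR = 287.0  # specific gas constant for air, J/(kg*K)
T0 = 295.0     # assumed stagnation temperature, K (room air; not from the study)

def freestream_velocity(mach, t0=T0):
    """Velocity (m/s) of a stream at a given Mach number.

    Uses the isentropic relation T = T0 / (1 + (gamma - 1)/2 * M^2) for the
    static temperature, then U = M * sqrt(gamma * R * T).
    """
    t_static = t0 / (1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2)
    speed_of_sound = math.sqrt(GAMMA * R_AIR * t_static)
    return mach * speed_of_sound

# Primary-stream test points reported in the article: Mach 0.5 to 2.5 in 0.5 steps.
for m in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"Mach {m:.1f}: ~{freestream_velocity(m):.0f} m/s")
```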

Kim said that in most previous experiments of this flow field, velocity has generally only been measured in two directions: in the direction of the freestream and perpendicular to it. What made this experiment unique is that velocity measurements were also taken in the span-wise direction for all of the different Mach numbers.

“Low speed, incompressible cases, are largely characterized by two-dimensional mixing, so you can get a lot of important information from just looking at the X and Y components,” Kim said. “Because we increased the Mach number, the compressibility goes up in the shear layer. Consequently, we see wider-scale mixing in the span-wise direction that we didn’t see when it was incompressible. A key target of the work was to make sure we got that third component of velocity in order to understand how it relates to the overall turbulence with changing compressibility. And also to capture the incoming flow conditions, the boundary layers.”

According to Kim, only two other mixing layer experiments have been performed that obtained all three components of velocity. “Our results match up with theirs, which validates our own experiments, but we took it further by measuring the flow for a wide range of Mach numbers.”

He said one direct real-world application for this work is for improving scramjet combustion, in which supersonic air comes in through the combustor and mixes with fuel.

“Scientifically, the main application is the fact that we have these results for a very fundamental flow field that simulators now can use to validate their models. In addition, all of our data are available to the public through a University of Illinois Wiki page,” Kim said. “I hope that a lot of people use this information in their modeling and that it can ultimately help improve the accuracy and advance the methods in high-speed flow simulations.”

Story Source:

Materials provided by University of Illinois College of Engineering. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Hydrologic simulation models that inform policy decisions are difficult to interpret

Hydrologic models that simulate and predict water flow are used to estimate how natural systems respond to different scenarios such as changes in climate, land use, and soil management. The output from these models can inform policy and regulatory decisions regarding water and land management practices.

Numerical models have become increasingly easy to employ thanks to advances in computer technology and software with graphical user interfaces (GUIs). While these technologies make the models more accessible, problems can arise if they are used by inexperienced modelers, says Juan Sebastian Acero Triana, a doctoral student in the Department of Agricultural and Biological Engineering at the University of Illinois.

Acero Triana is lead author on a study that evaluates the accuracy of a commonly used numerical model in hydrology.

Findings from the research show that even when the model appears to be properly calibrated, its results can be difficult to interpret correctly. The study, published in the Journal of Hydrology, provides recommendations for how to fine-tune the process and obtain more precise results.

Model accuracy is important to ensure that policy decisions are based on realistic scenarios, says Maria Chu, a co-author of the study. Chu is an assistant professor of agricultural and biological engineering in the College of Agricultural, Consumer and Environmental Sciences and The Grainger College of Engineering at U of I.

“For example, you may want to estimate the impacts of future climate on the water availability over the next 100 years. If the model is not representing reality, you are going to draw the wrong conclusions. And wrong conclusions will lead to wrong policies, which can greatly affect communities that rely on the water supply,” Chu says.

The study focuses on the Soil and Water Assessment Tool (SWAT), which simulates water circulation by incorporating data on land use, soil, topography, and climate. It is a popular model used to evaluate the impacts of climate and land management practices on water resources and contaminant movement.

The researchers conducted a case study at the Fort Cobb Reservoir Experimental Watershed (FCREW) in Oklahoma to assess the model’s accuracy. FCREW serves as a test site for the United States Department of Agriculture-Agricultural Research Service (USDA-ARS) and the United States Geological Survey (USGS); thus, detailed data are already available on stream flow, reservoir, groundwater, and topography.

The study coupled the SWAT model with another model called MODFLOW, or the Modular Finite-difference Flow Model, which includes more detailed information on groundwater levels and fluxes.

“Our purpose was to determine if the SWAT model by itself can appropriately represent the hydrologic system,” Acero Triana says. “We discovered that is not the case. It cannot really represent the entire hydrologic system.”

In fact, the SWAT model yielded 12 iterations of water movement that all appeared to be acceptable. However, when combined with MODFLOW it became clear that only some of these results properly accounted for groundwater flow. The researchers compared the 12 results from SWAT with 103 different groundwater iterations from MODFLOW in order to find a realistic representation of the water fluxes in the watershed.

Yielding several different results that all appear equally likely to be correct is called “equifinality.” Careful calibration of the model can reduce equifinality, Acero Triana explains. Calibration must also be able to account for inherent limitations in the way the model is designed and how parameters are defined. In technical terms, it must account for model and constraint inadequacy.
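To make equifinality concrete, the sketch below scores three hypothetical parameter sets against the same observed streamflow using the Nash-Sutcliffe efficiency, a standard goodness-of-fit metric in hydrology. The data and parameter labels are invented for illustration and are unrelated to the FCREW calibration; the point is that very different parameter sets can fit streamflow almost equally well.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Made-up monthly streamflow observations (m^3/s) and three hypothetical model runs
# produced by different parameter sets. All values are illustrative only.
observed = np.array([12.0, 18.0, 30.0, 24.0, 15.0, 9.0])
runs = {
    "params A": np.array([11.5, 18.5, 29.0, 25.0, 14.5, 9.5]),
    "params B": np.array([12.5, 17.0, 31.0, 23.0, 16.0, 8.5]),
    "params C": np.array([13.0, 19.0, 28.5, 24.5, 14.0, 10.0]),
}

# All three parameter sets score nearly identically on streamflow alone (equifinality),
# even though they could imply very different groundwater fluxes.
for name, sim in runs.items():
    print(f"{name}: NSE = {nash_sutcliffe(observed, sim):.3f}")
```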

However, inexperienced modelers may not fully understand the intricacies of calibration. And because of the inherent constraints of both SWAT and MODFLOW, using metrics from just one model may not provide accurate results.

The researchers recommend using a combination model called SWATmf, which integrates the SWAT and the MODFLOW processes.

“This paper presents a case study that provides general guidelines for how to use hydrological models,” Acero Triana says. “We show that to really represent a hydrologic system you need two domain models. You need to represent both the surface and the sub-surface processes that are taking place.”

The differences in results may be small, but over time the effect could be significant, he concludes.

Go to Source
Author:

Categories
ScienceDaily

3D virtual reality models help yield better surgical outcomes

A UCLA-led study has found that using three-dimensional virtual reality models to prepare for kidney tumor surgeries resulted in substantial improvements, including shorter operating times, less blood loss during surgery and a shorter stay in the hospital afterward.

Previous studies involving 3D models have largely asked qualitative questions, such as whether the models gave the surgeons more confidence heading into the operations. This is the first randomized study to quantitatively assess whether the technology improves patient outcomes.

The 3D model provides surgeons with a better visualization of a person’s anatomy, allowing them to see the depth and contour of the structure, as opposed to viewing a two-dimensional picture.

The study was published in JAMA Network Open.

“Surgeons have long since theorized that using 3D models would result in a better understanding of the patient anatomy, which would improve patient outcomes,” said Dr. Joseph Shirk, the study’s lead author and a clinical instructor in urology at the David Geffen School of Medicine at UCLA and at the UCLA Jonsson Comprehensive Cancer Center. “But actually seeing evidence of this magnitude, generated by very experienced surgeons from leading medical centers, is an entirely different matter. This tells us that using 3D digital models for cancer surgeries is no longer something we should be considering for the future — it’s something we should be doing now.”

In the study, 92 people with kidney tumors at six large teaching hospitals were randomly placed into two groups. Forty-eight were in the control group and 44 were in the intervention group.

For those in the control group, the surgeon prepared for surgery by reviewing the patient’s CT or MRI scan only. For those in the intervention group, the surgeon prepared for surgery by reviewing both the CT or MRI scan and the 3D virtual reality model. The 3D models were reviewed by the surgeons from their mobile phones and through a virtual reality headset.

“Visualizing the patient’s anatomy in a multicolor 3D format, and particularly in virtual reality, gives the surgeon a much better understanding of key structures and their relationships to each other,” Shirk said. “This study was for kidney cancer, but the benefits of using 3D models for surgical planning will translate to many other types of cancer operations, such as prostate, lung, liver and pancreas.”

Story Source:

Materials provided by University of California – Los Angeles Health Sciences. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
3D Printing Industry

Serving aces with optimized 3D printed tennis handle from Ogle Models

Ogle Models and Prototypes, a Hertfordshire-based model making company, has used 3D printing to create a fully customizable, weight-balanced tennis handle.  The handle was developed for Unstrung Customs, a tennis racket painting, stringing and customising specialist, who were looking for a modern and innovative method for adapting the grip size of a racket, without using […]

Go to Source
Author: Anas Essop