Foam offers a way to manipulate light

There is more to foam than meets the eye. Literally. A study by Princeton scientists has shown that a type of foam long studied by scientists is able to block particular wavelengths of light, a coveted property for next-generation information technology that uses light instead of electricity.

The researchers, integrating expertise from materials science, chemistry and physics, conducted exhaustive computational simulations of a structure known as a Weaire-Phelan foam. They found that this foam would allow some frequencies of light to pass through while completely reflecting others. This selective blocking, known as a photonic band gap, is similar to the behavior of a semiconductor, the bedrock material behind all modern electronics because of its ability to control the flow of electrons at extremely small scales.
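The idea of a photonic band gap can be sketched in one dimension, where the arithmetic is simple. The snippet below is not the Princeton team's 3D foam calculation (which requires full band-structure solvers and supercomputing time); it is a minimal transfer-matrix sketch of a quarter-wave dielectric stack, the simplest structure that reflects one band of wavelengths almost perfectly while passing others. The refractive indices and layer count are illustrative assumptions.

```python
# 1D analogue of a photonic band gap: a quarter-wave Bragg stack reflects
# wavelengths near its design wavelength almost perfectly while passing others.
import cmath

def reflectance(wavelength, n_hi=2.5, n_lo=1.5, pairs=8, lam0=1.0):
    """Normal-incidence reflectance of a quarter-wave stack designed for lam0."""
    # Each layer is a quarter-wave thick at the design wavelength lam0.
    layers = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))] * pairs
    # Characteristic matrix of the whole stack: product of per-layer matrices.
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        delta = 2 * cmath.pi * n * d / wavelength  # phase thickness of the layer
        m = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0]*m[0][0] + M[0][1]*m[1][0], M[0][0]*m[0][1] + M[0][1]*m[1][1]],
             [M[1][0]*m[0][0] + M[1][1]*m[1][0], M[1][0]*m[0][1] + M[1][1]*m[1][1]]]
    n_in, n_out = 1.0, 1.0  # air on both sides
    B = M[0][0] + M[0][1] * n_out
    C = M[1][0] + M[1][1] * n_out
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

print(reflectance(1.0))  # near 1.0: the stack is a mirror inside the gap
```

Sweeping the wavelength shows reflectance close to 1 inside the stop band and much lower outside it, the 1D cousin of the frequency-selective mirror behavior Torquato describes.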

“This has the property we want: an omnidirectional mirror for a certain range of frequencies,” said Salvatore Torquato, professor of chemistry and the Princeton Institute for the Science and Technology of Materials. Torquato, the Lewis Bernard Professor of Natural Sciences, published the results Nov. 6 in the Proceedings of the National Academy of Sciences, with coauthors Michael Klatt, a postdoctoral researcher, and physicist Paul Steinhardt, who is Princeton’s Albert Einstein Professor in Science.

While numerous examples of photonic band gaps have been shown previously in various types of crystals, the researchers believe that their new finding is the first example in a foam, similar to the froth of soap bubbles or the head on a draft beer. Unlike the disordered foam of beer, however, the Weaire-Phelan foam is a precisely structured arrangement with deep roots in mathematics and physics.

The origins of the Weaire-Phelan foam date to 1887, when the physicist Lord Kelvin proposed a structure for the “ether,” the mysterious substance then thought to form a background medium filling all of space. Although the concept of the ether was already falling out of favor at the time, Kelvin’s proposed foam went on to intrigue mathematicians for a century because it appeared to be the most efficient way to fill space with interlocking geometric shapes that have the least possible surface area.

In 1993, the physicists Denis Weaire and Robert Phelan found an alternative arrangement that requires slightly less surface area. Since then, interest in the Weaire-Phelan structure has come mainly from the mathematics, physics and art communities. The structure was used as the outer wall of the “Water Cube” aquatics center built for the 2008 Beijing Olympics. The new finding now makes the structure of interest to materials scientists and technologists as well.

“You start out with a classical, beautiful problem in geometry, in mathematics, and now suddenly you have this material that opens up a photonic band gap,” Torquato said.

Torquato, Klatt and Steinhardt became interested in the Weaire-Phelan foam as a tangent of another project in which they were investigating “hyperuniform” disordered materials as an innovative way to control light. Although not their original focus, the three realized that this precisely structured foam had intriguing properties.

“Little by little, it became apparent that there was something interesting here,” Torquato said. “And eventually we said, ‘OK, let’s put the main project to the side for a while to pursue this.’”

“Always look out for what’s at the wayside of research,” Klatt added.

Weaire, who was not involved in this new finding, said that the Princeton discovery is part of a broadening interest in the material since he and Phelan discovered it. He said the possible new use in optics likely stems from the material being very isotropic, or not having strongly directional properties.

“The fact that it displays a photonic band gap is very interesting because it turns out to have so many special properties,” said Andrew Kraynik, an expert in foams who earned his Ph.D. in chemical engineering from Princeton in 1977 and has studied the Weaire-Phelan foam extensively but was not involved in the Princeton study. Another Princeton connection, said Kraynik, is that a key tool in discovering and analyzing the Weaire-Phelan foam is a software tool called Surface Evolver, which optimizes shapes according to their surface properties and was written by Ken Brakke, who earned his Ph.D. in math at Princeton in 1975.

To show that the Weaire-Phelan foam exhibited the light-controlling properties they were seeking, Klatt developed a meticulous set of calculations that he executed on the supercomputing facilities of the Princeton Institute for Computational Science and Engineering.

“The programs he had to run are really computationally intensive,” Torquato said.

The work opens numerous possibilities for further invention, said the researchers, who dubbed the new area of work as “phoamtonics” (a mashup of “foam” and “photonics”). Because foams occur naturally and are relatively easy to make, one possible goal would be to coax raw materials to self-organize into the precise arrangement of the Weaire-Phelan foam, Torquato said.

With further development, the foam could transport and manipulate light used in telecommunications. Currently much of the data traversing the internet is carried by glass fibers. However, at its destination, the light is converted back to electricity. Photonic band gap materials could guide the light much more precisely than conventional fiber optic cables and might serve as optical transistors that perform computations using light.

“Who knows?” said Torquato. “Once you have this as a result, then it provides experimental challenges for the future.”




With the rise of artificial intelligence, it is more important than ever for application developers to be able to determine whether a user is a human or a machine. Enter CAPTCHA, an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart.” CAPTCHAs, which come in a variety of shapes and sizes, are designed to decrease spam and malicious activity. The most common CAPTCHA is a series of random alphanumeric characters displayed on a web page that a human must copy into a web form.

Developers looking to add a CAPTCHA function, or a CAPTCHA-solving function, to their applications need an Application Programming Interface (API) to accomplish these tasks. The best place to find one is the CAPTCHA category on ProgrammableWeb, where dozens of APIs are available, including several services that recognize and bypass CAPTCHAs.

In this article we highlight the most popular APIs for CAPTCHA, as chosen by the number of page visits on ProgrammableWeb.


1. CAPTCHAs.IO API

CAPTCHAs.IO is an automated CAPTCHA recognition service that supports more than 30,000 image CAPTCHAs, audio CAPTCHAs, and reCAPTCHA v2 and v3, including invisible reCAPTCHA. The CAPTCHAs.IO API provides RESTful access to all of the service’s CAPTCHA-solving methods. Developers can choose to get API responses in either JSON or plain text.


2. Death By CAPTCHA API

Death By CAPTCHA offers a CAPTCHA bypass service. Users pass CAPTCHAs through the API, where they are solved by OCR software or manually by humans. The solved CAPTCHA is then passed back to the application, where it can be used. The API has an average solve response time of 15 seconds and an average accuracy rate of 90%.

3. Anti Captcha API

Anti Captcha is a human-powered CAPTCHA solving service. The Anti Captcha API integrates CAPTCHA solving into applications via HTTP POST requests and an API key. Resources allow developers to upload a CAPTCHA and receive its ID, then request and receive the solved response.

4. AZcaptcha

AZcaptcha is an automatic image and CAPTCHA recognition service. The AZcaptcha API’s main purpose is solving CAPTCHAs quickly and accurately using AI, but the service is not limited to CAPTCHA solving. Developers can convert to text any image that an AI can recognize.

5. ProxyCrawl API

ProxyCrawl combines artificial intelligence with a team of engineers to bypass crawling restrictions and CAPTCHAs and provide easy access to scraping and crawling websites across the internet. The ProxyCrawl API allows developers to scrape any website using real web browsers. This means that even if a page is built using only JavaScript, ProxyCrawl can crawl it and provide the HTML necessary to scrape it. The API handles proxy management, avoids CAPTCHAs and blocks, and manages automated browsers.

6. Solve Recaptcha API

The Solve Recaptcha API automatically solves Google’s reCAPTCHA v2 CAPTCHAs via the data-site key. The API is fee-based, with pricing depending on the number of threads per month.

7. Google reCAPTCHA API

The Google reCAPTCHA v3 API is a CAPTCHA implementation that distinguishes humans from computers without interactive user tests. reCAPTCHA v3 works via a machine learning-based risk analysis engine and determines a validity score for each user. This API is accessed indirectly from the JavaScript SDK.
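Since v3 returns a score rather than posing a challenge, the server-side flow is: POST the client token to Google's documented siteverify endpoint, then compare the returned score against a threshold of your choosing. A minimal sketch follows; the 0.5 threshold and the helper names are illustrative choices, not part of Google's API.

```python
# Server-side check of a reCAPTCHA v3 token: POST the token to Google's
# siteverify endpoint and act on the returned risk score (0.0 = likely bot,
# 1.0 = likely human). The threshold is an application choice.
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def evaluate(result: dict, expected_action: str, threshold: float = 0.5) -> bool:
    """Decide whether a siteverify JSON response looks like a human."""
    return (result.get("success") is True
            and result.get("action") == expected_action
            and result.get("score", 0.0) >= threshold)

def verify_token(secret: str, token: str, expected_action: str) -> bool:
    """Send the client-supplied token to Google and evaluate the response."""
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data) as resp:
        return evaluate(json.load(resp), expected_action)

# Example siteverify payload (shape taken from Google's documentation):
sample = {"success": True, "score": 0.9, "action": "login",
          "challenge_ts": "2019-11-14T12:00:00Z", "hostname": "example.com"}
print(evaluate(sample, "login"))  # True: high score for the expected action
```

Checking the `action` field as well as the score guards against a token harvested from one form being replayed on another.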


8. Captcha Solutions API

Captcha Solutions is a CAPTCHA decoding web service offering solutions based on a flat rate per CAPTCHA solved. This RESTful Captcha Solutions API is designed to solve a large variety of CAPTCHA challenges for a broad spectrum of applications.

9. 2Captcha API

2Captcha provides human-powered image and CAPTCHA solving services. The 2Captcha API returns the results of human-powered image recognition to authorize online users. With the API, developers apply an algorithm that involves sending an image to the server, obtaining the ID of the picture, polling in a cycle to check whether the CAPTCHA has been solved, and confirming whether the answer is correct.
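That submit-then-poll cycle can be sketched as follows. The plain-text `OK|...` protocol and the `in.php`/`res.php` endpoints follow 2Captcha's widely documented legacy API, but should be verified against the current docs before use; the helper names are our own.

```python
# Sketch of the submit-then-poll cycle: in.php returns "OK|<id>", and res.php
# returns "CAPCHA_NOT_READY" until a worker finishes, then "OK|<answer>".
import base64
import time
import urllib.parse
import urllib.request

SUBMIT_URL = "https://2captcha.com/in.php"
POLL_URL = "https://2captcha.com/res.php"

def parse_reply(text: str):
    """Return the payload of an "OK|..." reply, or None if not ready yet."""
    if text == "CAPCHA_NOT_READY":
        return None
    status, _, payload = text.partition("|")
    if status != "OK":
        raise RuntimeError(f"API error: {text}")
    return payload

def solve(api_key: str, image_bytes: bytes, timeout: float = 60.0) -> str:
    """Submit a CAPTCHA image, then poll until a worker returns the answer."""
    data = urllib.parse.urlencode({
        "key": api_key, "method": "base64",
        "body": base64.b64encode(image_bytes).decode()}).encode()
    with urllib.request.urlopen(SUBMIT_URL, data=data) as resp:
        captcha_id = parse_reply(resp.read().decode())
    deadline = time.time() + timeout
    while time.time() < deadline:
        time.sleep(5)  # human workers typically need a few seconds
        query = urllib.parse.urlencode(
            {"key": api_key, "action": "get", "id": captcha_id})
        with urllib.request.urlopen(f"{POLL_URL}?{query}") as resp:
            answer = parse_reply(resp.read().decode())
        if answer is not None:
            return answer
    raise TimeoutError("CAPTCHA not solved in time")
```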

10. API

The API provides reCAPTCHA and anti-CAPTCHA services. With the API, developers can use an image that contains distorted but human-readable text. To solve the CAPTCHA, the user has to type the text from the image. The API supports JSON formats. API keys are required to authenticate.

The above APIs, along with about 34 more APIs, plus more than 50 SDKs and 25 Source Code Samples, are available in the CAPTCHA category on ProgrammableWeb.

Author: joyc


Better understanding of soft artificial muscles

Artificial muscles will power the soft robots and wearable devices of the future. But more needs to be understood about the underlying mechanics of these powerful structures in order to design and build new devices.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have uncovered some of the fundamental physical properties of artificial muscle fibers.

“Thin soft filaments that can easily stretch, bend, twist or shear are capable of extreme deformations that lead to knot-like, braid-like or loop-like structures that can store or release energy easily,” said L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics. “This has been exploited by a number of experimental groups recently to create prototypical artificial muscle fibers. But how the topology, geometry and mechanics of these slender fibers come together during this process was not completely clear. Our study explains the theoretical principles underlying these shape transformations, and sheds light on the underlying design principles.”

“Soft fibers are the basic unit of a muscle and could be used in everything from robotics to smart textiles that can respond to stimuli such as heat or humidity,” said Nicholas Charles, a PhD student in Applied Mathematics and first author of the paper. “The possibilities are endless, if we can understand the system. Our work explains the complex morphology of soft, strongly stretched and twisted fibers and provides guidelines for the best designs.”

The research is published in Physical Review Letters.

Soft fibers, or filaments, can be stretched, sheared, bent or twisted. How these different actions interact to form knots, braids, and helices is important to the design of soft actuators. Imagine stretching and twisting a rubber band as tight as you can. As the twist gets tighter and tighter, part of the band will pop out of the plane and start twisting around itself into a coil or knot. These coils and loops, in the right form, can be harnessed to actuate the knotted fiber.

The researchers found that different levels of stretch and twist result in different types of complex non-planar shapes. They characterized which shapes lead to kinked loops, which to tight coils, and which to a mixture of the two. They found that pre-stretch is important for forming coils, as these shapes are the most stable under stretching, and modeled how such coils can be used to produce mechanical work.

“This research gives us a simple way to predict how soft filaments will respond to twisting and stretching,” said Charles.

“Going forward, our work might also be relevant in other situations involving tangled filaments, as in hair curls, polymer dynamics and the dynamics of magnetic field lines in the sun and other stars,” said Mahadevan.

This research was co-authored by Mattia Gazzola, Assistant Professor of Mechanical Sciences and Engineering at the University of Illinois, and a former member of the group. It was supported in part by the National Science Foundation.


IEEE Spectrum

Expert Discusses Key Challenges for the Next Generation of Wearables

For decades, pedometers have counted our steps and offered insights into mobility. More recently, smartwatches have begun to track a suite of vital signs in real-time. There is, however, a third category of information—in addition to mobility and vitals—that could be monitored by wearable devices: biochemistry.

To be clear, there are real-time biochemistry monitoring devices available, like the FreeStyle Libre, which can monitor blood glucose in people with diabetes for 14 days at a time, relatively unobtrusively. But Joseph Wang, a professor of nanoengineering at the University of California, San Diego, thinks there’s still room for improvement. During a talk about biochemistry wearables at ApplySci’s 12th Wearable Tech + Digital Health + Neurotech event at Harvard on 14 November, Wang outlined some of the key challenges to making such wearables as ubiquitous and unobtrusive as the Apple Watch.

Wang identified three engineering problems that must be tackled: flexibility, power, and treatment delivery. He also discussed potential solutions that his research team has identified for each of these problems.

IEEE Spectrum

Bionic Pacemaker Controlled By Neural Network Reverses Heart Failure in Rats

For more than 60 years, the pacemaker—a device implanted in the chest that delivers electrical pulses to the heart—has served as the ticker’s ticker, producing a steady beat for hearts that can’t do it on their own.

The device has prolonged countless lives, but even the most sophisticated pacemakers ignore a significant biological fact: healthy hearts don’t beat steadily like a metronome. They speed up as we inhale and slow down as we exhale.

Focusing on this natural variation, called respiratory sinus arrhythmia, may be the key to improving the pacemaker. “Devices have to listen to feedback from the body,” says Julian Paton, a professor at the University of Bristol, in the UK, who is leading some of the research in this area. “We need smarter devices.”

In a paper published this week in the Journal of Physiology, Paton and his colleagues describe a smarter pacemaker that puts natural variation back into a failing heart, helping it to work more efficiently. 

The device reads the electrical signals generated by each breath, and paces the heart accordingly. In rats with heart failure, the device increased the amount of blood their hearts could pump by 20%, compared with monotonic pacemaking, according to the study.

“People are beginning to think about ways in which pacemakers could become more intelligent, but there’s nothing on the market that has demonstrated such a profound increase in heart rate,” says Paton.

Current pacemakers adjust heart rate by responding to changes in the body in relatively rudimentary ways, such as with accelerometers or by detecting increases in body temperature. Some newer devices can pace the heart based on respiration. But those devices track average respiration over a period of time, says Paton. “That’s not what we’re doing. We’re modulating the heart based on every breath,” he says.

The device features a neural-network-based analog chip developed by Paton’s coauthor Alain Nogaret at the University of Bath. In the rat experiments, it recorded electrical activity from the rats’ diaphragm muscles, which contract during inhalation. The chip interprets the signals conveyed to it by a lead in real time using Hodgkin-Huxley equations—mathematical models of how action potentials in neurons are initiated and propagated. The device then delivers electrical stimulation to the left atrium of the heart, prompting it to beat in sync with breathing.
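For readers curious what “Hodgkin-Huxley equations” means concretely, here is a minimal digital integration of the classic squid-axon model with textbook parameters and forward Euler stepping. It illustrates only the mathematics, not the Bath group's analog circuit.

```python
# Classic Hodgkin-Huxley neuron: membrane voltage driven by sodium, potassium
# and leak currents, with voltage-dependent gating variables m, h, n.
# A steady injected current produces repetitive action potentials.
import math

def simulate(i_inj=10.0, t_max=50.0, dt=0.01):
    """Return the membrane-voltage trace (mV) under constant injected current."""
    # Capacitance, maximal conductances (per cm^2) and reversal potentials.
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting state
    trace = []
    for _ in range(int(t_max / dt)):
        # Voltage-dependent opening/closing rates of the ion-channel gates.
        am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
        bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
        ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
        bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
        an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
        bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
        # Ionic currents: sodium, potassium, leak.
        I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (i_inj - I_ion) / C
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        trace.append(V)
    return trace

trace = simulate()
print(max(trace) > 0.0)  # True: the model neuron fires action potentials
```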

The advantage of using an analog device, compared with digital, is that it can respond quickly to changes in input from the body, says Paton. The device is scalable and can be miniaturized to the size of a postage stamp.

If the research progresses to humans, Paton says his team will not need to record signals from the diaphragm muscle. Instead, they will be able to integrate the device into conventional pacemakers, and gauge breathing by measuring electrical changes in chest resistance. 

Paton’s work is among several approaches researchers are taking to modernize the pacemaker. Other groups aim to power pacemakers more efficiently, including powering them with the heart itself, and making them out of graphene so they can run on light. Some groups are developing optical pacemakers using a genetic engineering technique called optogenetics, rather than hardware, to trigger cardiac cell contraction.


Novel mathematical framework provides a deeper understanding of how drugs interact

Combining two or more drugs can be an effective treatment for diverse diseases, such as cancer. Yet, at the same time, the wrong drug combination can cause major side effects. Currently there is no systematic understanding of how different drugs influence each other. Elucidating how two given drugs interact, and whether their combination has a beneficial effect, would therefore be a major step toward developing drugs that treat diseases more effectively in the future.

On a molecular level, drugs cause complex perturbations of various cellular processes in our body. These processes are orchestrated by an intricate network of molecular interactions, the so-called interactome. Over the last decade, numerous studies have revealed a close relationship between the structure of the interactome and the functional organization of the molecular machinery within the cell. This opened exciting opportunities for using network-based approaches to investigate the foundations of both healthy and disease states. Following this trend, Principal Investigator Jörg Menche and his group at CeMM developed a novel mathematical framework for mapping out precisely how different perturbations of the interactome influence each other.

The new study, performed by Caldera et al., represents the first general approach to quantifying precisely how drugs interact with each other, based on a mathematical model that considers their high-dimensional effects. Their research reveals that the positions of a given drug's targets on the interactome are not random but rather localized within so-called drug modules. They found that the location of a drug module is linked to the specific cell morphological changes induced by the respective treatments, making morphology screens a valuable resource for the investigation of drug interactions. Further, they identified various factors that contribute to the emergence of such interactions. Most notably, the distance between two drug modules on the interactome plays a key role: certain types of interactions are more likely depending on the exact proximity of the two respective drug modules. If the modules are too far away from each other, an interaction is unlikely to take place.

“We developed a completely new methodology to classify drug interactions. Previous methods could characterize interactions only as synergistic or antagonistic. Our methodology is able to distinguish 12 distinct interaction types and also reveals the direction of an interaction,” says Michael Caldera, first author of the study and PhD student in Jörg Menche’s group.

The study of the Menche group has broadened the understanding of how drugs perturb the human interactome, and what causes drugs to interact. Moreover, the introduced methodology offers the first comprehensive and complete description of any potential outcome that may arise from combining two perturbations. Finally, this methodology could also be applied to address other key challenges, such as dissecting the combined impact of genetic variations or predicting the effect of a drug on a particular disease phenotype. Their research forms a solid base for understanding and developing more effective drug therapies in the future.

Story Source:

Materials provided by CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences. Note: Content may be edited for style and length.



Environmental cost of cryptocurrency mines

Bitcoin, Ethereum, Litecoin and Monero — the names of digital-based ‘cryptocurrencies’ are being heard more and more frequently. But despite having no physical representation, could these new methods of exchange actually be negatively impacting our planet? It’s a question being asked by researchers at The University of New Mexico, who are investigating the environmental impacts of mining cryptocurrencies.

“What is most striking about this research is that it shows that the health and environmental costs of cryptocurrency mining are substantial; larger perhaps than most people realized,” said Benjamin Jones, UNM Researcher and asst. professor of economics.

Cryptocurrency is an internet-based form of exchange that exists solely in the digital world. Its allure comes from using a decentralized peer-to-peer network of exchange, produced and recorded by the entire cryptocurrency community. Independent “miners” compete to solve complex computing problems whose solutions provide secure cryptographic validation of an exchange. Miners are rewarded in units of the currency. Digital public ledgers are kept for “blocks” of these transactions, which are combined to create what is called the blockchain. According to proponents, cryptocurrencies do not need a third party, a traditional bank, or centralized government control to provide secure validation of transactions. In addition, cryptocurrencies are typically designed to limit production after a point, meaning the total amount in circulation eventually hits a cap. These caps and ledgers are maintained through the systems of users.
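The ledger mechanics described above can be sketched in a few lines: each block records the hash of its predecessor, so altering any past block invalidates every later link. This is a toy illustration only; real blockchains add consensus rules, signatures, and much more.

```python
# Minimal hash-chained ledger: each block stores the SHA-256 digest of the
# previous block, so tampering with history breaks every downstream link.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Add a new block that points at the hash of the current chain tip."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def chain_is_valid(chain: list) -> bool:
    """Check each block points at the true hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_block(ledger, ["alice pays bob 1 coin"])
append_block(ledger, ["bob pays carol 0.5 coin"])
print(chain_is_valid(ledger))                            # True
ledger[0]["transactions"] = ["alice pays bob 100 coins"] # tamper with history
print(chain_is_valid(ledger))                            # False
```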

But the mechanisms that make these currencies so appealing are also using exorbitant amounts of energy.

In a new paper titled ‘Cryptodamages: Monetary value estimates of the air pollution and human health impacts of cryptocurrency mining’ published in the journal, Energy Research & Social Science, University of New Mexico researchers Andrew Goodkind (asst. professor, Economics), Benjamin Jones (asst. professor, Economics) and Robert Berrens (professor, Economics) estimate the environmental impact of these cryptocurrency mining techniques. Using existing data that assessed energy use on cryptocurrency, and a battery of economic valuation techniques, the three were able to put a monetary figure on the mining practices.

“Our expertise is in estimating the monetary damages, due to health and environmental impacts, of different economics activities and sectors,” Berrens explained. “For example, it is common for economists to study the impacts from energy use connected to production and consumption patterns in agriculture, or with automobile production and use. In a world confronting climate change, economists can help us understand the impacts connected to different activities and technologies.”

The independent production, or ‘mining’, practices of cryptocurrencies are done using energy-consuming specialized computer hardware and can take place in any geographic location. Large-scale operations, called mining camps, are now congregating around the fastest internet connections and cheapest energy sources — regardless of whether the energy is green or not.

“With each cryptocurrency, the rising electricity requirements to produce a single coin can lead to an almost inevitable cliff of negative net social benefit,” the paper states.

The UNM researchers argue that although mining practices create financial value, the electricity consumption is generating “cryptodamages” — a term coined to describe the human health and climate impacts of the digital exchange.

“We looked at climate change from greenhouse gas emissions of electricity production and also the impacts local air pollutants have when they are carried downwind and across local communities,” Goodkind said.

The researchers estimate that in 2018, every $1 of Bitcoin value created was responsible for $0.49 in health and climate damages in the United States.

Their data shows that at one point during 2018, the cost in damages that it took to create Bitcoin matched the value of the exchange itself. Those damages arise from increased pollutants generated from the burning of fossil fuels used to produce energy, such as carbon dioxide, fine particulate matter, nitrogen oxides and sulfur dioxide. Exposure to some of these pollutants has been linked to increased risk of premature death.

“By using large amounts of electricity generated from burning fossil fuels,” Jones said, “cryptocurrency mining is associated with worse air quality and increased CO2 emissions, which impacts communities and families all across the country, including here in New Mexico.”

In addition to the human health impacts from increased pollutants, the trio looked at the climate change implications and how the current system of mining encourages high energy use.

“An important issue is the production process employed in the blockchain for securing new blocks of encrypted transactions,” Berrens explained. “Along with supply rules for new units of a currency, some production processes, like the predominant Proof-of-Work (PoW) scheme used in Bitcoin, require ever-increasing computing power and energy use in the winner-take-all competition to solve complex algorithms and secure new blocks in the chain.”
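The “winner-take-all competition” Berrens describes can be illustrated with a toy Proof-of-Work loop: miners burn energy hashing candidate nonces until one produces a hash below a target, and each extra digit of difficulty multiplies the expected work. This uses SHA-256 as Bitcoin does, but at a trivially low difficulty.

```python
# Toy Proof-of-Work: search for a nonce whose SHA-256 hash of (data + nonce)
# starts with a required number of hex zeros. Raising the difficulty by one
# digit multiplies the expected number of hashes (and energy) by 16.
import hashlib

def mine(block_data: str, difficulty: int):
    """Find a nonce giving a hash with `difficulty` leading hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1  # every failed attempt is electricity spent

nonce, digest = mine("block 1: alice pays bob", difficulty=3)
print(digest.startswith("000"))  # True, after ~16**3 = 4096 hashes on average
```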

Although relatively limited in overall use currently, there are cryptocurrencies with alternative production schemes which require significantly less energy use. The researchers hope by publicizing the health and climate impacts of such schemes, they will encourage alternative methods of mining.

“The ability to locate cryptomining almost anywhere (i.e. following the cheapest, under-regulated electricity source) …creates significant challenges to implementing regulation,” the paper says.

Goodkind says the specialized machines used for mining also have to be kept cool so they won’t overheat while computing such complex algorithms. That additional energy use was not part of this study, which means even more energy is being consumed than is accounted for when looking solely at the power needed to run the machines.

Moving forward, the challenging public policy question is: “How can you make the people who are creating the damage pay for the cost, so that it is considered in the decision of how to mine cryptocurrencies?” Goodkind concluded.



Deep neural networks speed up weather and climate models

When you check the weather forecast in the morning, the results you see are more than likely determined by the Weather Research and Forecasting (WRF) model, a comprehensive model that simulates the evolution of many aspects of the physical world around us.

“It describes everything you see outside of your window,” said Jiali Wang, an environmental scientist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, “from the clouds, to the sun’s radiation, to snow to vegetation — even the way skyscrapers disrupt the wind.”

The myriad characteristics and causes of weather and climate are coupled together, communicating with one another. Scientists have yet to fully describe these complex relationships with simple, unified equations. Instead, they approximate the equations using a method called parameterization in which they model the relationships at a scale greater than that of the actual phenomena.

Although parameterizations simplify the physics in a way that allows the models to produce relatively accurate results in a reasonable time, they are still computationally expensive. Environmental scientists and computational scientists from Argonne are collaborating to use deep neural networks, a type of machine learning, to replace the parameterizations of certain physical schemes in the WRF model, significantly reducing simulation time.

“With less-expensive models, we can achieve higher-resolution simulations to predict how short-term and long-term changes in weather patterns affect the local scale,” said Wang, “even down to neighborhoods or specific critical infrastructure.”

In a recent study, the scientists focused on the planetary boundary layer (PBL), or lowest part of the atmosphere. The PBL is the atmospheric layer that human activity affects the most, and it extends only a few hundred meters above Earth’s surface. The dynamics in this layer, such as wind velocity, temperature and humidity profiles, are critical in determining many of the physical processes in the rest of the atmosphere and on Earth.

The PBL is a crucial component in the WRF model, but it is also one of the model’s least computationally expensive components. This makes it an excellent testbed for studying how more complicated components might be enhanced by deep neural networks in the same way.

“We used 20 years of computer-generated data from the WRF model to train the neural networks and two years of data to evaluate whether they could provide an accurate alternative to the physics-based parameterizations,” said Prasanna Balaprakash, a computer scientist and DOE Early Career Award recipient in Argonne’s Mathematics and Computer Science division and the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

Balaprakash developed the neural network and trained it to learn an abstract relationship between the inputs and outputs by feeding it more than 10,000 data points (8 per day) from two locations, one in Kansas and one in Alaska. The result was an algorithm that the scientists are confident could replace the PBL parameterization in the WRF model.
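The surrogate-modeling idea can be shown in miniature: train a small network on input-output pairs from a known “scheme” and check that it learns to emulate it. The synthetic data and two-layer network below are illustrative stand-ins, not the Argonne team's actual WRF variables or architecture.

```python
# Miniature surrogate model: a tiny two-layer network trained by full-batch
# gradient descent learns to emulate a smooth nonlinear input-output mapping,
# standing in for a physics parameterization learned from simulation output.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "parameterization": one output as a nonlinear map of three inputs.
X = rng.uniform(-1.0, 1.0, size=(2000, 3))
y = np.sin(2.0 * X[:, :1]) + 0.1 * X[:, 1:2]

W1 = rng.normal(0.0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # emulator output
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation by hand: gradients of mean-squared error.
    g_pred = 2.0 * err / len(X)
    g_W2 = h.T @ g_pred;  g_b2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    g_W1 = X.T @ g_h;     g_b1 = g_h.sum(axis=0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")  # error falls as it learns
```

Once trained, evaluating the network is far cheaper than re-running the original scheme, which is the entire appeal of the approach at WRF scale.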

The scientists demonstrated that a deep neural network that considers some of the underlying structure of the relationship between the input and output variables can successfully simulate wind velocities, temperature and water vapor over time. The results also show that a trained neural network from one location can predict behavior across nearby locations with correlations higher than 90 percent compared with the test data.

“Collaboration between the climate scientists and the computer scientists was crucial for the results we achieved,” said Rao Kotamarthi, chief scientist and department head of atmospheric science and climate research in Argonne’s Environmental Science division. “Incorporating our domain knowledge makes the algorithm much more predictive.”

The algorithms — called domain-aware neural networks — that consider known relationships not only predict environmental data more accurately, but they also require significantly less training data than algorithms that do not incorporate domain expertise.

Any machine learning project requires a large amount of high-quality data, and there was no shortage of data for this study. Supercomputing resources at the ALCF and the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory, contributed to the production of more than 300 years (700 terabytes) of data describing past, present and future weather and climate in North America.

“This database is unique to climate science at Argonne,” said Wang, “and we are using it to conduct further studies in deep learning and determine how it can apply to climate models.”

The scientists’ ultimate goal is to replace all of the expensive parameterizations in the WRF model with deep learning neural networks to enable faster and higher-resolution simulation.

Currently, the team is working to emulate the long-wave and short-wave radiation parameterizations, two portions of the WRF model that together account for almost 40% of the physics calculation time in the simulations.

Governor Wolf Announces Investments to Bolster Pennsylvania’s Workforce Skills – PA Department of Community & Economic Development

Harrisburg, PA – Today, Governor Tom Wolf announced that more than $7.8 million in training assistance funding was provided to 745 Pennsylvania companies through the commonwealth’s Workforce and Economic Development Network of Pennsylvania (WEDnetPA). This year marks 20 years of partnerships and funding assistance for qualified businesses in the state; over that time, 20,683 companies have provided training for 1,238,915 existing staff members.

“Continuing education and workforce training allow everyone a path to future success and are a way for employers and employees to show allegiance and good faith in each other,” said Gov. Wolf. “It’s pivotal in strengthening our state’s workforce, our communities, and ultimately, our economy.”

Last fiscal year, the commonwealth invested $7,896,801 in training and provided 30,460 employees with access to education in the areas of Essential Skills and Advanced Technology. Essential Skills training can include guidance in the areas of communication and teamwork, health and safety, business and computer operations, manufacturing fundamentals, and quality assurance. Advanced Technology training can include guidance in the areas of advanced manufacturing technology, advanced software implementation, computer programming, and software engineering.

“Job training helps people achieve their career goals and improve their quality of life, and workforce development is one of the strongest drivers of our economy,” said Department of Community and Economic Development (DCED) Secretary Dennis Davin. “WEDnetPA isn’t just a tool to help workers further their careers; it’s a tool that has helped us attract and retain businesses and grow Pennsylvania’s economy for the past two decades, and will continue to do so for decades to come.”

“While other states also offer resources to help companies close the skills gap, WEDnetPA is unique in its approach,” said Statewide Director Thomas Venditti. “By networking with 25 colleges and universities, we leverage their existing professional workforce development staff to create a highly cost-effective way to provide help with incumbent worker training.”

Workforce training grants align with Governor Wolf’s PAsmart workforce development initiative. The governor launched the innovative PAsmart initiative last year and secured a $10 million increase to $40 million for the program this year. PAsmart provides $20 million for science and technology education, $10 million for apprenticeships and job training, and new this year, an additional $10 million for career and technical education.

This year’s annual report is available online.

For more information about the Wolf Administration’s commitment to workforce development or DCED, visit the DCED website, and be sure to stay up-to-date with all of our agency news on Facebook, Twitter, and LinkedIn.

J.J. Abbott, Governor’s Office, 717.783.1116
Casey Smith, DCED, 717.783.1132

# # #

