Categories: ScienceDaily

New printing process advances 3D capabilities

More durable prosthetics and medical devices for patients and stronger parts for airplanes and automobiles are just some of the products that could be created through a new 3D printing technology invented by a UMass Lowell researcher.

3D printers use substances such as plastics, metals and wax to make products and parts for larger items, and the practice has disrupted the prototyping and manufacturing fields. Products created through the 3D printing of plastics include everything from toys to drones. While the global market for 3D plastics printers is estimated at $4 billion and growing, challenges remain in ensuring that printed objects are produced quickly, retain their strength and accurately match the desired shape, according to UMass Lowell’s David Kazmer, a plastics engineering professor who led the research project.

Called injection printing, the technology Kazmer pioneered is featured in a paper posted online last week in the academic journal Additive Manufacturing.

The invention combines elements of 3D printing and injection molding, a technique through which objects are created by filling mold cavities with molten materials. The marriage of the two processes increases the production rate of 3D printing, while enhancing the strength and properties of the resulting products. The innovation typically produces objects about three times faster than conventional 3D printing, which means jobs that once took about nine hours now only take three, according to Kazmer, who lives in Georgetown.

“The invention greatly improves the quality of the parts produced, making them fully dense with few cracks or voids, so they are much stronger. For technical applications, this is game-changing. The new process is also cost-effective because it can be used in existing 3D printers, with only new software to program the machine needed,” Kazmer said.

The process took about 18 months to develop. Austin Colon of Plymouth, a UMass Lowell Ph.D. candidate in plastics engineering, helped validate the technology alongside Kazmer, who teaches courses in product design, prototyping and process control, among other topics. Kazmer has filed for a patent on the new technology.

Story Source:

Materials provided by University of Massachusetts Lowell. Note: Content may be edited for style and length.


Categories: 3D Printing Industry

University of Dayton researchers create lower cost method of 3D printing nanoscale structures 

Researchers from the University of Dayton have published a study describing an enhanced, more cost-effective method of 3D printing nanoscale structures. Their Opto-Thermo-Mechanical (OTM) nano-printing technique can print features smaller than 100 nanometers (nm), roughly a thousand times smaller than the width of a human hair. What’s more, because […]

Author: Paul Hanaphy

Categories: ProgrammableWeb

Why COVID-19 Makes App Security More Important than Ever

The United States is still hoping to fully reopen, but COVID-19 is more prevalent than ever, with the nation and many states reporting record daily infection rates. Even though employment has recovered somewhat, the country is still facing a more than 10% unemployment rate, and while many restaurants, stores and other physical businesses have reopened, people are not yet returning to them in the same numbers as before the pandemic.  

The pandemic will undoubtedly transform many aspects of our country, and some of these changes are already apparent. Organizations that had depended on a physical location to interact with and draw customers have had to change their business models to emphasize contactless or near contactless transactions, where goods are delivered or picked up curbside. Contactless transactions typically involve online communications.

According to the Pew Research Center, 74% of households own a computer and 84% have a smartphone. But when it comes to usage, mobile dominates. More than half of worldwide Internet traffic last year came from mobile devices, and U.S. consumers spent about 40% more time using their smartphones than they did their desktops and laptops. 

The long-term trend of growing mobile usage combined with the pressure for contactless transactions due to the pandemic has made creating and enhancing mobile apps not just a nice marketing tool for businesses, but a necessary task for survival. To compete with other businesses and draw formerly casual mobile users to their apps, development teams are under more pressure than ever to deliver new and updated apps even more quickly than before.

Features trump security … until they don’t

This does not bode well for mobile app security, especially since the situation was not very good prior to COVID-19. According to the Verizon Mobile Security Index 2020, 43% of app developers said they knew they were cutting corners on security to “get the job done,” and that survey was conducted well before the pandemic arrived.

Unless they are very technologically savvy, consumers have no real way to assess the security of the mobile apps they use, so they make decisions about which apps to deploy based on features, functionality and ease of use. Naturally, that’s where developers focus their attention. What’s more, implementing security is expensive and time-consuming, potentially breaking the development budget and delaying delivery schedules. Even if development teams are committed to implementing security, iOS and Android security specialists are hard to find and in high demand.

But while focusing on features at the expense of security may be a good strategy for short-term adoption, the potential long-term consequences can be devastating for consumers and developers alike. Cybercriminals are just as aware as developers are about the growing importance of mobile apps, and they are developing increasingly sophisticated attacks targeting them.

A good example is the EventBot malware that appeared in April. This Android-based trojan looks and feels like Adobe Flash or Microsoft Word, but its real purpose is to steal unprotected data in banking, bitcoin and other financial apps. The trojan is sophisticated enough to intercept two-factor authentication codes sent via SMS so it can use them to take over accounts. 

It’s a perfect example of the importance of good security. If app developers encrypt all data stored on the device, that data is not vulnerable to theft by trojans like EventBot. Likewise, it illustrates why it’s critical to obfuscate and shield apps from reverse engineering. Not only can malicious actors create trojans from popular brands’ apps, they can also make buggy, badly performing fake apps that give the genuine app a bad reputation.
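To make the first of those defenses concrete, here is a minimal sketch of encrypting data at rest, written in Python for brevity. It is illustrative only: a production Android or iOS app would generate and hold the key in the platform’s hardware-backed keystore (Android Keystore, iOS Keychain) rather than in application code, and would use the platform’s own crypto APIs.

```python
# Minimal sketch: encrypt records before they touch local storage, so a
# trojan that reads the file system sees only ciphertext. Illustrative
# only; on a real device the key belongs in the platform keystore.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetched from a secure enclave
cipher = Fernet(key)

# Encrypt before writing to disk...
record = b'{"account": "12345", "balance": "250.00"}'
token = cipher.encrypt(record)

# ...and decrypt only in memory, at the moment the app needs the data.
assert cipher.decrypt(token) == record
```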

Additionally, because the pandemic is causing such a large increase in app usage and adoption, security flaws that had previously gone unnoticed may start causing problems for users. 

Zoom, for example, saw millions of new users sign up essentially overnight after lockdown orders forced people to work from home. This rush of new users exposed security flaws that hackers used to “zoom bomb” meetings. Zoom took quick action to resolve the issues, but it had to endure significant damage to its reputation.

Solutions to the security development challenge 

If your team plans to implement security into mobile apps on its own, first make sure you have the skills required to do so. Android and iOS differ significantly, and a security expert in one OS isn’t necessarily qualified to implement security for the other. 

Assuming you have developers qualified to implement security, the next step is to plan what, specifically, your team will focus on to harden your apps’ security. It’s not a simple question — after all, a hacker only has to find a single vulnerability to exploit, and there’s an enormous number of possible weaknesses. But a good place to start is to ensure each app is protected against the Open Web Application Security Project (OWASP) Mobile Top Ten vulnerabilities, a list of the most common exploits cybercriminals use.

Other development teams may decide to integrate security software development kits (SDKs) into their apps, which is a more efficient option than manual security implementation and can be done without having to hire security specialists. That said, it’s critical to thoroughly vet SDKs before integration. Not only are rogue SDKs a serious problem in the mobile app industry, but SDKs, themselves, may contain vulnerabilities.

Organizations can also leverage AI to automate security for mobile apps. It’s fast, can secure an app without any coding, and, compared to manual coding, is inexpensive as well. But, just as you must vet SDKs, conduct thorough due diligence to ensure that the AI platform provides comprehensive security and does not, itself, introduce vulnerabilities.

Mobile apps have never been more important to businesses, and cybercriminals are responding with more advanced, targeted attacks. Developers cannot afford to deliver full-featured apps that lack proper security — in the long run, the potential damage to customers and an enterprise itself is far too great a risk. So, as you race to provide an engaging, intuitive app for customers, pay as much attention to their safety as their experience. It’s no longer necessary to implement security manually, so there’s no excuse for putting customers at risk with a vulnerable app.

Author: tomtovar

Categories: ScienceDaily

Artificial intelligence predicts which planetary systems will survive

Why don’t planets collide more often? How do planetary systems — like our solar system or multi-planet systems around other stars — organize themselves? Of all of the possible ways planets could orbit, how many configurations will remain stable over the billions of years of a star’s life cycle?

Rejecting the large range of unstable possibilities — all the configurations that would lead to collisions — would leave behind a sharper view of planetary systems around other stars, but it’s not as easy as it sounds.

“Separating the stable from the unstable configurations turns out to be a fascinating and brutally hard problem,” said Daniel Tamayo, a NASA Hubble Fellowship Program Sagan Fellow in astrophysical sciences at Princeton. To make sure a planetary system is stable, astronomers need to calculate the motions of multiple interacting planets over billions of years and check each possible configuration for stability — a computationally prohibitive undertaking.

Astronomers since Isaac Newton have wrestled with the problem of orbital stability, but while the struggle contributed to many mathematical revolutions, including calculus and chaos theory, no one has found a way to predict stable configurations theoretically. Modern astronomers still have to “brute-force” the calculations, albeit with supercomputers instead of abaci or slide rules.

Tamayo realized that he could accelerate the process by combining simplified models of planets’ dynamical interactions with machine learning methods. This allows the elimination of huge swaths of unstable orbital configurations quickly — calculations that would have taken tens of thousands of hours can now be done in minutes. He is the lead author on a paper detailing the approach in the Proceedings of the National Academy of Sciences. Co-authors include graduate student Miles Cranmer and David Spergel, Princeton’s Charles A. Young Professor of Astronomy on the Class of 1897 Foundation, Emeritus.

For most multi-planet systems, there are many orbital configurations that are possible given current observational data, not all of which will be stable. Many configurations that are theoretically possible would “quickly” — that is, in not too many millions of years — destabilize into a tangle of crossing orbits. The goal was to rule out those so-called “fast instabilities.”

“We can’t categorically say ‘This system will be OK, but that one will blow up soon,'” Tamayo said. “The goal instead is, for a given system, to rule out all the unstable possibilities that would have already collided and couldn’t exist at the present day.”

Instead of simulating a given configuration for a billion orbits — the traditional brute-force approach, which would take about 10 hours — Tamayo’s model simulates it for 10,000 orbits, which takes only a fraction of a second. From this short snippet, the researchers calculate 10 summary metrics that capture the system’s resonant dynamics. Finally, they train a machine learning algorithm to predict from these 10 features whether the configuration would remain stable if it were integrated out to one billion orbits.
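As a rough illustration of that pipeline (not the authors’ code: the features and labels below are random placeholders, and a generic gradient-boosted classifier stands in for the model the team actually trained), the whole idea fits in a few lines of Python:

```python
# Sketch of the approach described above: short integrations are reduced to
# 10 summary metrics, and a classifier learns to map those metrics to a
# long-term stability label. All data here is synthetic, for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_systems, n_features = 1000, 10

# Stand-in for the 10 resonant-dynamics metrics computed from each
# 10,000-orbit integration.
X = rng.normal(size=(n_systems, n_features))
# Stand-in labels: 1 = configuration survived a full billion-orbit run.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# For a new configuration: integrate 10,000 orbits, compute its 10
# metrics, then read off a stability prediction in a fraction of a second.
x_new = rng.normal(size=(1, n_features))
print(f"P(stable to 1e9 orbits) = {model.predict_proba(x_new)[0, 1]:.2f}")
```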

“We called the model SPOCK — Stability of Planetary Orbital Configurations Klassifier — partly because the model determines whether systems will ‘live long and prosper,'” Tamayo said.

SPOCK determines the long-term stability of planetary configurations about 100,000 times faster than the previous approach, breaking the computational bottleneck. Tamayo cautioned that while he and his colleagues haven’t “solved” the general problem of planetary stability, SPOCK does reliably identify fast instabilities in compact systems, which they argue are the most important for stability-constrained characterization.

“This new method will provide a clearer window into the orbital architectures of planetary systems beyond our own,” Tamayo said.

But how many planetary systems are there? Isn’t our solar system the only one?

In the past 25 years, astronomers have found more than 4,000 planets orbiting other stars, of which almost half are in multi-planet systems. But since small exoplanets are extremely challenging to detect, we still have an incomplete picture of their orbital configurations.

“More than 700 stars are now known to have two or more planets orbiting around them,” said Professor Michael Strauss, chair of Princeton’s Department of Astrophysical Sciences. “Dan and his colleagues have found a fundamentally new way to explore the dynamics of these multi-planet systems, speeding up the computer time needed to make models by factors of 100,000. With this, we can hope to understand in detail the full range of solar system architectures that nature allows.”

SPOCK is especially helpful for making sense of some of the faint, far-distant planetary systems recently spotted by the Kepler telescope, said Jessie Christiansen, an astrophysicist with the NASA Exoplanet Archive who was not involved in this research. “It’s hard to constrain their properties with our current instruments,” she said. “Are they rocky planets, ice giants, or gas giants? Or something new? This new tool will allow us to rule out potential planet compositions and configurations that would be dynamically unstable — and it lets us do it more precisely and on a substantially larger scale than was previously available.”


Categories: ScienceDaily

New materials for extra thin computer chips

Ever smaller and ever more compact — this is the direction in which computer chips are developing, driven by industry. This is why so-called 2D materials are considered to be the great hope: they are as thin as a material can possibly be, in extreme cases they consist of only one single layer of atoms. This makes it possible to produce novel electronic components with tiny dimensions, high speed and optimal efficiency.

However, there is one problem: electronic components always consist of more than one material. 2D materials can only be used effectively if they can be combined with suitable material systems — such as special insulating crystals. If this is not considered, the advantage that 2D materials are supposed to offer is nullified. A team from the Faculty of Electrical Engineering at the TU Wien (Vienna) is now presenting these findings in the journal Nature Communications.

Reaching the End of the Line on the Atomic Scale

“The semiconductor industry today uses silicon and silicon oxide,” says Prof. Tibor Grasser from the Institute of Microelectronics at the TU Wien. “These are materials with very good electronic properties. For a long time, ever thinner layers of these materials were used to miniaturize electronic components. This worked well for a long time — but at some point we reach a natural limit.”

When the silicon layer is only a few nanometers thick, so that it only consists of a few atomic layers, then the electronic properties of the material deteriorate very significantly. “The surface of a material behaves differently from the bulk of the material — and if the entire object is practically only made up of surfaces and no longer has a bulk at all, it can have completely different material properties.”

Therefore, one has to switch to other materials in order to create ultra-thin electronic components. And this is where the so-called 2D materials come into play: they combine excellent electronic properties with minimal thickness.

Thin Layers Need Thin Insulators

“As it turns out, however, these 2D materials are only the first half of the story,” says Tibor Grasser. “The materials have to be placed on the appropriate substrate, and an insulator layer is also needed on top of it — and this insulator also has to be extremely thin and of extremely good quality, otherwise you have gained nothing from the 2D materials. It’s like driving a Ferrari on muddy ground and wondering why you don’t set a speed record.”

A team at the TU Wien around Tibor Grasser and Yury Illarionov has therefore analysed how to solve this problem. “Silicon dioxide, which is normally used in industry as an insulator, is not suitable in this case,” says Tibor Grasser. “It has a very disordered surface and many free, unsaturated bonds that interfere with the electronic properties in the 2D material.”

It is better to look for a well-ordered structure: The team has already achieved excellent results with special crystals containing fluorine atoms. A transistor prototype with a calcium fluoride insulator has already provided convincing data, and other materials are still being analysed.

“New 2D materials are currently being discovered. That’s nice, but with our results we want to show that this alone is not enough,” says Tibor Grasser. “These new electrically conductive 2D materials must also be combined with new types of insulators. Only then can we really succeed in producing a new generation of efficient and powerful electronic components in miniature format.”

Story Source:

Materials provided by Vienna University of Technology. Original written by Florian Aigner. Note: Content may be edited for style and length.


Categories: ScienceDaily

Discovery makes microscopic imaging possible in dark conditions

Curtin University researchers have discovered a new way to more accurately analyse microscopic samples by essentially making them ‘glow in the dark’, through the use of chemically luminescent molecules.

Lead researcher Dr Yan Vogel from the School of Molecular and Life Sciences said current methods of microscopic imaging rely on fluorescence, which means a light needs to be shining on the sample while it is being analysed. While this method is effective, it also has some drawbacks.

“Most biological cells and chemicals generally do not like exposure to light because it can destroy things — similar to how certain plastics lose their colours after prolonged sun exposure, or how our skin can get sunburnt,” Dr Vogel said.

“The light that shines on the samples is often too damaging for the living specimens and can be too invasive, interfering with the biochemical process and potentially limiting the study and scientists’ understanding of the living organisms.

“Noting this, we set out to find a different way to analyse samples, to see if the process could successfully be completed without using any external lights shining on the sample.”

The research team successfully found a way to use chemical stimuli to essentially make user-selected areas of the samples ‘glow in the dark,’ allowing them to be analysed without adding any potentially damaging external light.

Research co-author Dr Simone Ciampi, an ARC Future Fellow at Curtin University, said that until now, exciting a dye with chemical stimuli instead of high-energy light had not been technically viable.

“Before discovering our new method, two-dimensional control of chemical energy conversion into light energy was an unmet challenge, mainly due to technical limitations,” Dr Ciampi said.

“There are few tools available that allow scientists to trigger transient chemical changes at a specific microscopic site. Of the tools that are available, such as photoacids and photolabile protecting groups, direct light input or physical probes are needed to activate them, which are intrusive to the specimen.

“Our new method, however, only uses external light shining on the back of an electrode to generate localised and microscopic oxidative hot-spots on the opposite side of the electrode.

“Basically, the light shines on an opaque substrate, while the other side of the sample in contact with the specimen does not have any exposure to the external light at all. The brief light exposure activates the chemicals and makes the sample ‘glow in the dark’.

“This ultimately addresses two of the major drawbacks of the fluorescence method — namely the interference of the light potentially over-exciting the samples, and the risk of damaging light-sensitive specimens.”

Story Source:

Materials provided by Curtin University. Note: Content may be edited for style and length.


Categories: ScienceDaily

Coordinating complex behaviors between hundreds of robots

In one of the more memorable scenes from the 2002 blockbuster film Minority Report, Tom Cruise is forced to hide from a swarm of spider-like robots scouring a towering apartment complex. While most viewers are likely transfixed by the small, agile bloodhound replacements, a computer engineer might marvel instead at their elegant control system.

In a building several stories tall with numerous rooms, hundreds of obstacles and thousands of places to inspect, the several dozen robots move as one cohesive unit. They spread out in a search pattern to thoroughly check the entire building while simultaneously splitting tasks so as not to waste time doubling back on their own paths or re-checking places other robots have already visited.

Such cohesion would be difficult for human controllers to achieve, let alone for an artificial controller to compute in real-time.

“If a control problem has three or four robots that live in a world with only a handful of rooms, and if the collaborative task is specified by simple logic rules, there are state-of-the-art tools that can compute an optimal solution that satisfies the task in a reasonable amount of time,” said Michael M. Zavlanos, the Mary Milus Yoh and Harold L. Yoh, Jr. Associate Professor of Mechanical Engineering and Materials Science at Duke University.

“And if you don’t care about the best solution possible, you can solve for a few more rooms and more complex tasks in a matter of minutes, but still only a dozen robots tops,” Zavlanos said. “Any more than that, and current algorithms are unable to overcome the sheer volume of possibilities in finding a solution.”

In a new paper published online on April 29 in the International Journal of Robotics Research, Zavlanos and his recent PhD graduate, Yiannis Kantaros, who is now a postdoctoral researcher at the University of Pennsylvania, propose a new approach to this challenge called STyLuS*, for large-Scale optimal Temporal Logic Synthesis, which can solve problems massively larger than what current algorithms can handle, with hundreds of robots, tens of thousands of rooms and highly complex tasks, in a small fraction of the time.

To understand the basis of the new approach, one must first understand linear temporal logic, which is not nearly as scary as it sounds. Suppose you wanted to program a handful of robots to collect mail from a neighborhood and deliver it to the post office every day. Linear temporal logic is a way of writing down the commands needed to complete this task.

For example, these commands might include visiting each house in sequential order, returning to the post office and then waiting for someone to retrieve the collected mail before setting out again. While this might be easy to say in English, it’s more difficult to express mathematically. Linear temporal logic can do so by using its own symbols which, although they might look like Klingon to the common observer, are extremely useful for expressing complex control problems.

“The term linear is used because points in time have a unique successor based on discrete linear model of time, and temporal refers to the use of operators such as until, next, eventually and always,” said Kantaros. “Using this mathematical formalism, we can build complex commands such as ‘visit all the houses except house two,’ ‘visit houses three and four in sequential order,’ and ‘wait until you’ve been to house one before going to house five.’ “
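In standard LTL notation, writing \(\Diamond\) for “eventually,” \(\Box\) for “always,” \(\mathcal{U}\) for “until,” and letting \(h_i\) denote the proposition “a robot is at house \(i\),” the three quoted commands could be formalized as follows (one conventional rendering; the paper’s exact syntax may differ):

\[
\Bigl(\bigwedge_{i \neq 2} \Diamond h_i\Bigr) \land \Box\,\lnot h_2,
\qquad
\Diamond\bigl(h_3 \land \Diamond h_4\bigr),
\qquad
\lnot h_5 \;\mathcal{U}\; h_1 .
\]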

To find robot controllers that satisfy such complex tasks, the location of each robot is mapped into a discrete data point called a “node.” Then, from each node, there exist multiple other nodes that are a potential next step for the robot.

A traditional controller searches through each one of these nodes and the potential paths between them before figuring out the best way to navigate through. But as the number of robots and locations to visit increases, and as the logic rules to follow become more sophisticated, the solution space becomes incredibly large in a very short amount of time.

A simple problem with five robots living in a world with ten houses could contain millions of nodes, capturing all possible robot locations and behaviors towards achieving the task. This requires a lot of memory to store and processing power to search through.
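The combinatorics behind that count are easy to sketch. If the world is discretized into \(W\) locations and there are \(N\) robots, the joint robot positions alone number \(W^N\), and the product with the automaton \(B\) derived from the LTL task multiplies the total further. Using purely illustrative numbers:

\[
|\text{nodes}| \approx W^{N} \cdot |B|, \qquad W = 20,\; N = 5 \;\Rightarrow\; W^N = 3.2 \text{ million}.
\]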

To skirt around this issue, the researchers propose a new method that, rather than constructing these incredibly large graphs in their entirety, instead creates smaller approximations with a tree structure. At every step of the process, the algorithm randomly selects one node from the large graph, adds it to the tree, and rewires the existing paths between the nodes in the tree to find more direct paths from start to finish.

“This means that as the algorithm progresses, this tree that we incrementally grow gets closer and closer to the actual graph, which we never actually construct,” said Kantaros. “Since our incremental graph is much smaller, it is easy to store in memory. Moreover, since this graph is a tree, graph search, which otherwise has exponential complexity, becomes very easy because now we only need to trace the sequence of parent nodes back to the root to find the desired path.”
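The flavor of that incremental construction can be sketched in a few lines of Python. This is a generic grow-and-rewire loop in the spirit of sampling-based planners such as RRT*, not the authors’ STyLuS* algorithm; in particular, the task-biased sampling and cycle detection described below are omitted, and sample_node, neighbors and cost are hypothetical stand-ins for the product-graph machinery:

```python
# Generic sketch: grow a tree over a huge graph that is never built in
# full, by sampling one node at a time, wiring it to the cheapest tree
# node that reaches it, and rewiring neighbors through it when that
# shortens their paths. Illustrative only.
def grow_tree(root, sample_node, neighbors, cost, n_iters=10_000):
    parent = {root: None}   # the tree, stored as a parent map
    g = {root: 0.0}         # cost-to-come from the root
    for _ in range(n_iters):
        v = sample_node()            # one node of the implicit big graph
        if v in parent:
            continue
        in_tree = [u for u in neighbors(v) if u in parent]
        if not in_tree:
            continue
        best = min(in_tree, key=lambda u: g[u] + cost(u, v))
        parent[v] = best
        g[v] = g[best] + cost(best, v)
        for u in in_tree:            # rewire: adopt v as a shortcut
            if g[v] + cost(v, u) < g[u]:
                parent[u], g[u] = v, g[v] + cost(v, u)
    return parent, g

def path_to_root(parent, v):
    """Graph search collapses to tracing parent pointers to the root."""
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]
```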

It had long been accepted that growing trees could not be used to search the space of possible solutions for these types of robot control problems. But in the paper, Zavlanos and Kantaros show that they can make it work by implementing two clever tricks. First, the algorithm chooses the next node to add based on information about the task at hand, which allows the tree to quickly approximate a good solution to the problem. Second, even though the algorithm grows trees, it can still detect cycles in the original graph space that capture solutions to such temporal logic tasks.

The researchers show that this method will always find an answer if there is one, and it will always eventually find the best one possible. They also show that this method can arrive at that answer exponentially fast. Working with a problem of 10 robots searching through a 50-by-50 grid space — 250 houses to pick up mail — current state-of-the-art algorithms take 30 minutes to find an optimal solution.

STyLuS* does it in about 20 seconds.

“We have even solved problems with 200 robots that live on a 100-by-100 grid world, which is far too large for today’s algorithms to handle,” said Zavlanos. “While there currently aren’t any systems that use 200 robots to do something like deliver packages, there might be in the future. And they would need a control framework like STyLuS* to be able to deliver them while satisfying complex logic-based rules.”


Categories: ScienceDaily

Implants: Can special coatings reduce complications after implant surgery?

New coatings on implants could help make them more compatible. Researchers at the Martin Luther University Halle-Wittenberg (MLU) have developed a new method of applying anti-inflammatory substances to implants in order to inhibit undesirable inflammatory reactions in the body. Their study was recently published in the International Journal of Molecular Sciences.

Implants, such as pacemakers or insulin pumps, are a regular part of modern medicine. However, it is not uncommon for complications to arise after implantation. The immune system identifies the implant as a foreign body and attempts to remove it. “This is actually a completely natural and useful reaction by the immune system,” says Professor Thomas Groth, a biophysicist at MLU. It helps to heal wounds and kills harmful pathogens. If this reaction does not subside on its own after a few weeks, it can lead to chronic inflammation and more serious complications. “The immune system attracts various cells that try to isolate or remove the foreign entity. These include macrophages, a type of phagocyte, and other types of white blood cells and connective tissue cells,” explains Groth. Implants can become encapsulated by connective tissue, which can be very painful for those affected. In addition, the implant is no longer able to function properly. Drugs that suppress the immune response in a systemic manner are often used to treat chronic inflammation, but may have undesired side effects.

Thomas Groth’s team was looking for a simple way to modify the immune system’s response to an implant in advance. “This is kind of tricky, because we obviously do not want to completely turn off the immune system as its processes are vital for healing wounds and killing pathogens. So, in fact we only wanted to modulate it,” says the researcher. To do this, his team developed a new coating for implants that contains anti-inflammatory substances. For their new study, the team used two substances that are already known to have an anti-inflammatory effect: heparin and hyaluronic acid.

In the laboratory, the scientists treated a surface with the two substances by applying a layer that was only a few nanometres thick. “The layer is so thin that it does not affect how the implant functions. However, it must contain enough active substance to control the reaction of the immune system until the inflammatory reaction has subsided,” adds Groth. In cell experiments, the researchers observed how the two substances were absorbed by the macrophages, thereby reducing inflammation in the cell cultures, while untreated cells showed clear signs of a pronounced inflammatory reaction. The effect arises because the active substances inside the macrophages interfere with a specific signalling pathway that is crucial for the immune response and cell death. “Both heparin and hyaluronic acid prevent the release of certain pro-inflammatory messenger substances. Heparin is even more effective because it can be absorbed by macrophage cells,” Groth concludes.

So far, the researchers have only tested the method on model surfaces and in cell cultures. Further studies on real implants and in model organisms are to follow.

Story Source:

Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.


Categories: ScienceDaily

Far-UVC light safely kills airborne coronaviruses, study finds

More than 99.9% of seasonal coronaviruses present in airborne droplets were killed when exposed to a particular wavelength of ultraviolet light that is safe to use around humans, a new study at Columbia University Irving Medical Center has found.

“Based on our results, continuous airborne disinfection with far-UVC light at the current regulatory limit could greatly reduce the level of airborne virus in indoor environments occupied by people,” says the study’s lead author David Brenner, PhD, Higgins Professor of Radiation Biophysics at Columbia University Vagelos College of Physicians and Surgeons and director of the Center for Radiological Research at Columbia University Irving Medical Center.

The research was published today in Scientific Reports.

Background

Conventional germicidal UVC light (254 nm wavelength) can be used to disinfect unoccupied spaces such as empty hospital rooms or empty subway cars, but direct exposure to these conventional UV lamps is not possible in occupied public spaces, as this could be a health hazard.

To continuously and safely disinfect occupied indoor areas, researchers at Columbia University Irving Medical Center have been investigating far-UVC light (222 nm wavelength). Far-UVC light cannot penetrate the tear layer of the eye or the outer dead-cell layer of skin so it cannot reach or damage living cells in the body.

The researchers had previously shown that far-UVC light can safely kill airborne influenza viruses.

The new paper extends their research to seasonal coronaviruses, which are structurally similar to the SARS-CoV-2 virus that causes COVID-19.

Study details

In the study, the researchers used a misting device to aerosolize two common coronaviruses. The aerosols containing coronavirus were then flowed through the air in front of a far-UVC lamp. After exposure to far-UVC light, the researchers tested to see how many of the viruses were still alive.

The researchers found that more than 99.9% of the exposed virus had been killed by a very low exposure to far-UVC light.

Based on their results, the researchers estimate that continuous exposure to far-UVC light at the current regulatory limit would kill 90% of airborne viruses in about 8 minutes, 95% in about 11 minutes, 99% in about 16 minutes, and 99.9% in about 25 minutes.
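Those four figures are mutually consistent with simple first-order (exponential) inactivation, the usual working assumption for UV disinfection. Calibrating the rate constant from the 90% figure reproduces the others:

\[
S(t) = e^{-kt}, \qquad k = \frac{\ln 10}{8\ \text{min}} \approx 0.29\ \text{min}^{-1},
\]
\[
t_{95\%} = \frac{\ln 20}{k} \approx 10.4\ \text{min}, \quad
t_{99\%} = \frac{2 \ln 10}{k} = 16\ \text{min}, \quad
t_{99.9\%} = \frac{3 \ln 10}{k} = 24\ \text{min},
\]

close to the roughly 11, 16 and 25 minutes quoted above.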

Using far-UVC light in occupied indoor spaces

The sensitivity of the coronaviruses to far-UVC light suggests that it may be feasible and safe to use overhead far-UVC lamps in occupied indoor public places to markedly reduce the risk of person-to-person transmission of coronaviruses, as well as other viruses such as influenza.

Ongoing studies in SARS-CoV-2

In a separate ongoing study, the researchers are testing the efficacy of far-UVC light against airborne SARS-CoV-2. Preliminary data suggest that far-UVC light is just as effective at killing SARS-CoV-2.

“Far-UVC light doesn’t really discriminate between coronavirus types, so we expected that it would kill SARS-CoV-2 in just the same way,” Brenner says. “Since SARS-CoV-2 is largely spread via droplets and aerosols that are coughed and sneezed into the air, it’s important to have a tool that can safely inactivate the virus while it’s in the air, particularly while people are around.”

Brenner continues, “Because it’s safe to use in occupied spaces like hospitals, buses, planes, trains, train stations, schools, restaurants, offices, theaters, gyms, and anywhere that people gather indoors, far-UVC light could be used in combination with other measures, like wearing face masks and washing hands, to limit the transmission of SARS-CoV-2 and other viruses.”

More information

The paper is titled, “Far-UVC light (222-nm) efficiently and safely inactivates airborne coronaviruses.”

The other authors (all CUIMC) are Manuela Buonanno, David Welch, and Igor Shuryak.

The study was funded by the Shostack Foundation and the NIH (grant R42-AI125006-03).

The authors declare that the Trustees of Columbia University in the City of New York have a pending patent on the technology: “Apparatus, method and system for selectively affecting and/or killing a virus.”

The authors declare no additional financial or other conflicts of interest.


Categories: DCED

Gov. Wolf: Funding Awarded to Support Affordable Housing Projects in 17 Counties – PA Department of Community & Economic Development

Harrisburg, PA — Today, Governor Tom Wolf announced more than $10 million in funding through the federal HOME Investment Partnerships Program (HOME) to support affordable housing projects across the commonwealth.

“Being able to provide affordable, safe, and livable spaces for lower-income Pennsylvanians across the commonwealth remains a high priority for my administration. Especially as Pennsylvanians continue to feel the financial impact of the COVID-19 public health crisis, ensuring that there are good housing options for those who need it is critical,” Gov. Wolf said. “HOME funding helps individuals acquire and preserve reliable and safe housing and ensures that opportunity is available to any eligible Pennsylvania homeowner or renter.”

The HOME program provides federal funding to assist municipalities and local governments in expanding and preserving a supply of affordable housing for low and very low-income Pennsylvanians. HOME funds can be used in a variety of ways to address critical housing needs, including market-oriented approaches that offer opportunities such as homeownership or rental activities to revitalize communities with new investment. HOME program funds are provided to the Department of Community and Economic Development (DCED) from the U.S. Department of Housing and Urban Development (HUD) through the annual entitlement appropriation process.

The funding will be distributed to projects in the following 17 counties:

Cameron County

Cameron County was approved for $500,000 for the rehabilitation of 10 existing owner-occupied homes. The county plans to rehabilitate homes owned by HUD income-eligible elderly and disabled residents.

Centre County

State College Borough was approved for $280,000 to acquire, renovate, and sell a single property to one low-income household, administered by the borough.

Clearfield County

Clearfield County was approved for $257,580 to rehabilitate three existing owner-occupied homes.

Columbia County

Columbia County was approved for $1,926,679 to rehabilitate and convert a church in Bloomsburg into nine units of affordable rental housing for individuals or families at or below 50 percent of the median family income.

Franklin County

Franklin County was approved for $515,506 in funding to acquire, demolish, construct, and sell two three-bedroom homes to first-time homebuyers in the Borough of Waynesboro. The units will be marketed and sold to first-time low-income homebuyers.

Indiana County

Indiana County was approved for $300,000 to rehabilitate five existing owner-occupied homes.

Lackawanna County

Lackawanna County was approved for $750,000 to rehabilitate an occupied six-unit low- to moderate-income apartment building. The funding will support exterior rehabilitation, including noise reduction, siding, and gutters; the conversion of a one-bedroom unit into a two-bedroom unit; and site work involving parking lot resurfacing, parking lot painting, and landscaping.

Lawrence County

Lawrence County was approved for $750,000 to rehabilitate 18 existing owner-occupied homes.

Shenango Township was approved for $500,000 in funding for the rehabilitation of 12 existing owner-occupied homes, to be administered by Lawrence County Community Service (LCCS).

Lebanon County

The City of Lebanon was approved for $250,000 to rehabilitate six owner-occupied homes. The funding will support community efforts to improve the city, which has a high incidence of renter-occupied properties and of single-family units that have been converted into multi-family buildings.

Lehigh County

The City of Allentown was approved for $500,000 to construct four new properties for sale, to be administered by the City of Allentown’s Community and Economic Development Department, with the Housing Association and Development Corporation (HADC) as the developer. The property sites are located in a neighborhood with a poverty rate of 40 percent.

Lycoming County

South Williamsport Borough was approved for $500,000 for the rehabilitation of nine existing owner-occupied homes to be administered by the SEDA-Council of Governments.

Montour County

Montour County was approved for $500,000 for the rehabilitation of nine existing owner-occupied homes to be administered by the SEDA-Council of Governments.

Northumberland County

The City of Sunbury was approved for $500,000 to rehabilitate nine owner-occupied homes to be administered by the SEDA-Council of Governments.

Milton Borough was approved for $500,000 for the rehabilitation of nine existing owner-occupied homes to be administered by the SEDA-Council of Governments.

Schuylkill County

St. Clair Borough was approved for $500,000 to rehabilitate 14 owner-occupied homes to be administered by the borough secretary and Mullin & Lonergan Associates, Inc. The program will be available to all low-income borough residents but will target low-income elderly residents.

Union County

Union County was approved for $500,000 to rehabilitate 15 owner-occupied homes to be administered by the Union County Housing Authority.

York County

The City of York was approved for $500,000 to construct six new townhomes for low-income, first-time homebuyers. York Habitat for Humanity is pairing this construction project with its Critical Home Repair and Aging in Place programs to provide services to neighbors.

For more information about the Department of Community and Economic Development, visit the DCED website, and be sure to stay up-to-date with all of our agency news on Facebook, Twitter, and LinkedIn.

MEDIA CONTACT:

Lyndsay Kensinger, Governor’s Office, [email protected]

Casey Smith, DCED, [email protected]

# # #


Author: Marketing998