Categories
ScienceDaily

Simple fluorescent surfactants produced for medicine, manufacturing

Laboratories use surfactants to separate things, and fluorescent dyes to see things. Rice University chemists have combined the two to simplify life for scientists everywhere.

The Wiess School of Natural Sciences lab of chemist Angel Martí introduced a lineup of eight fluorescent surfactants in Pure and Applied Chemistry. They’re examples of what he believes will be a modular set of fluorescent surfactants for labs and industry.

Martí and Rice graduate student and lead author Ashleigh Smith McWilliams developed the compounds primarily to capture images of single nanotubes or cells as simply as possible.

“We can stain cells or carbon nanotubes with these surfactants,” Martí said. “They stick to cells or nanotubes and now you can use fluorescent microscopy to visualize them.”

Soaps and detergents are common surfactants. They are two-part molecules with water-attracting heads and water-avoiding tails. Put enough of them in water and they will form micelles, with the heads facing outward and the tails inward. (Similar structures form the protective, porous barriers around cells.)

McWilliams produced the surfactants by reacting fluorescent dyes with alcohol-based, nonpolar tails, which made the heads glow when triggered by visible light. When the compounds wrap around carbon nanotubes in a solution, they not only keep the nanotubes from aggregating but make them far easier to see under a microscope.

“Surfactants have been used for many different applications for years, but we’ve made them special by converting them to image things you can generally not see,” Martí said.

“Fluorescent surfactants have been studied before, but the novel part of ours is their versatility and relative simplicity,” McWilliams said. “We use common dyes and plan to produce these surfactants with an array of colors and fluorescent properties for specific applications.”

Those could be far-reaching, Martí said.

“These can go well beyond imaging applications,” he said. “For instance, clothing manufacturers use surfactants and dyes. In theory, they could combine those; instead of using two different chemicals, they could use one.

“I can also envision using these for water purification, where surfactant dyes can be tuned to trap pollutants and destroy them using visible light,” Martí said. “For biomedical applications, they can be tuned to target specific cells and kill only those you radiate with light. That would allow for a localized way to treat, say, skin cancer.”

Martí said his lab was able to confirm fluorescent surfactants are the real deal. “We were able to characterize the critical micelle concentration, the concentration at which micelles start forming,” he said. “So we are 100% sure these molecules are surfactants.”
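That critical micelle concentration (CMC) is typically extracted from how a property such as surface tension changes with concentration: tension falls steeply as surfactant is added, then levels off once micelles start forming. A minimal sketch of that analysis in Python (the data and the regime split are hypothetical, not the lab's measurements):

```python
import numpy as np

def estimate_cmc(conc, tension, split):
    """Estimate the critical micelle concentration (CMC) from surface-tension
    data. Below the CMC, tension falls steeply with log(concentration); above
    it, the curve flattens. The CMC is taken as the intersection of straight
    lines fitted to the two regimes. `split` is the index separating the
    regimes, chosen by inspecting the data."""
    x = np.log10(conc)
    # Fit a line to each regime: tension = m * log10(c) + b
    m1, b1 = np.polyfit(x[:split], tension[:split], 1)
    m2, b2 = np.polyfit(x[split:], tension[split:], 1)
    # The two lines intersect at log10(CMC)
    log_cmc = (b2 - b1) / (m1 - m2)
    return 10 ** log_cmc

# Hypothetical surface-tension readings (mN/m) vs concentration (mM):
# a steep decline below ~8 mM, then a plateau.
conc = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])
tension = np.array([68, 62, 56, 50, 44, 43.5, 43, 42.5])
print(f"estimated CMC ≈ {estimate_cmc(conc, tension, split=5):.1f} mM")
```

Intersecting the two fitted lines is one common estimator; real analyses also use breakpoints in conductivity or fluorescence data.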

Story Source:

Materials provided by Rice University. Note: Content may be edited for style and length.


Categories
svchamber.org

Shenango Valley Career Opportunities Aug. 5

Check out the great job opportunities from Shenango Honda; O’Neill Coffee Company; Nick Strimbu Inc.; WKBN 27 Youngstown, OH; Farmaceutical RX; Laurel Technical Institute, Sharon; Roemer Industries, Inc.; Westminster College and more! Plus great career development opportunities with the Gannon University Small Business Development Center, and great workforce development support from the PA Department of Labor & Industry, Pathstone and PA CareerLink Mercer County.

Author: Sherris Moreira

Categories
Hackster.io

An Arduino Helps This Sound Projector Track Individuals

We tend to think of sound as omnidirectional, and that’s because it normally is. If you drop a glass on the floor, the sound waves from it shattering are emitted in all directions. But that doesn’t mean it has to be — sound is, after all, just waves vibrating through the air. Your stereo, for example, sounds much different if you stand behind the speakers than if you’re standing in front of them. That can even be narrowed down and focused much further to create a “sound projector,” and researchers from the University of Sussex in England have developed the first sound projector that can track individuals.

The projector can track a moving individual and deliver an acoustic message. (📷: University of Sussex)

The concept of sound projection has been around for a long time, and essentially relies on emitting a narrow beam of sound waves at a target. Anyone outside of that beam is unable to hear the sound. That can be further reduced to a small pocket of volume by aiming two beams at a single point. Neither beam alone carries the complete sound “picture”; instead, the waves modify each other where they intersect, creating a small space where the audio can be heard clearly. It’s also possible to direct those sound waves in a way similar to optical techniques that have long been used with light.
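The article doesn't detail the projector's internals, but aiming and focusing sound from an array of emitters is classically done with delay-and-sum beamforming: each element is delayed so that all wavefronts arrive at the focal point in phase. A hedged sketch (the array geometry and focal point are invented for illustration):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def focusing_delays(element_x, focus):
    """Delay-and-sum focusing for a linear acoustic array.
    Each element is delayed so its wavefront reaches the focal point at the
    same instant, concentrating acoustic energy there.
    `element_x`: x-positions of the elements (m), array lying along y = 0.
    `focus`: (x, y) focal point in metres."""
    fx, fy = focus
    # Distance from each element to the focal point
    dist = np.hypot(element_x - fx, fy)
    # Farther elements fire earlier; delays are relative to the farthest one
    return (dist.max() - dist) / SPEED_OF_SOUND

# 16-element array with 1 cm pitch, focused 5 cm to the side and 1 m away
elements = np.arange(16) * 0.01
delays = focusing_delays(elements, focus=(0.05, 1.0))
print(delays * 1e6)  # per-element delays in microseconds
```

Steering the focus to follow a tracked face is then just recomputing the delays as the focal point moves.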

Current version of the acoustic projector. (📷: University of Sussex)

What makes this research interesting is how the sound projection is being directed. An inexpensive commercial webcam is used for facial tracking in order to target a specific individual. An Arduino-controlled acoustic telescope is then aimed at that person. That telescope not only points the sound waves at the person’s head, but also focuses them just like you would focus the lens on a camera. That makes it possible to create a small pocket around a person’s head with audio that only they can hear, which opens up all kinds of interesting possibilities for entertainment and even personal alerts.

Author: Cameron Coward

Categories
ScienceDaily

Using quantum dots and a smartphone to find killer bacteria

A combination of off-the-shelf quantum dot nanotechnology and a smartphone camera soon could allow doctors to identify antibiotic-resistant bacteria in just 40 minutes, potentially saving patient lives.

Staphylococcus aureus (golden staph) is a common bacterium that causes serious and sometimes fatal conditions such as pneumonia and heart valve infections. Of particular concern is a strain that does not respond to methicillin, the antibiotic of first resort, known as methicillin-resistant S. aureus, or MRSA.

Recent reports estimate that 700,000 deaths globally could be attributed to antimicrobial resistance, such as methicillin resistance. Rapid identification of MRSA is essential for effective treatment, but current methods make it a challenging process, even within well-equipped hospitals.

Soon, however, that may change, using nothing except existing technology.

Researchers from Macquarie University and the University of New South Wales, both in Australia, have demonstrated a proof-of-concept device that uses bacterial DNA to positively identify the presence of Staphylococcus aureus in a patient sample — and to determine whether it will respond to frontline antibiotics.

In a paper published in the international peer-reviewed journal Sensors and Actuators B: Chemical, the Macquarie University team of Dr Vinoth Kumar Rajendran, Professor Peter Bergquist and Associate Professor Anwar Sunna, together with Dr Padmavathy Bakthavathsalam (UNSW), reveals a new way to confirm the presence of the bacterium, using a mobile phone and some ultra-tiny semiconductor particles known as quantum dots.

“Our team is using Synthetic Biology and NanoBiotechnology to address biomedical challenges. Rapid and simple ways of identifying the cause of infections and starting appropriate treatments are critical for treating patients effectively,” says Associate Professor Anwar Sunna, head of the Sunna Lab at Macquarie University.

“This is true in routine clinical situations, but also in the emerging field of personalised medicine.”

The researchers’ approach identifies the specific strain of golden staph by using a method called convective polymerase chain reaction (cPCR). This is a derivative of a widely employed technique in which a small segment of DNA is copied thousands of times, creating multiple samples suitable for testing.
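The scale of that copying is easy to estimate: each PCR cycle multiplies the template count by up to two, so amplification is exponential in the number of cycles. A back-of-the-envelope sketch (the efficiency parameter is illustrative, not a figure from the paper):

```python
def amplicon_count(initial_copies, cycles, efficiency=0.9):
    """DNA copies after PCR amplification. Each cycle multiplies the count
    by (1 + efficiency), where efficiency is the fraction of templates
    successfully duplicated per cycle (1.0 = perfect doubling)."""
    return initial_copies * (1 + efficiency) ** cycles

# With perfect doubling, 30 cycles turn a single template into ~10^9 copies
print(f"{amplicon_count(1, 30, efficiency=1.0):.2e}")  # → 1.07e+09
```

Even modest per-cycle efficiencies still yield millions of copies within an hour, which is what makes downstream detection on a paper strip feasible.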

Vinoth Kumar and colleagues then subject the DNA copies to a process known as lateral flow immunoassay — a paper-based diagnostic tool used to confirm the presence or absence of a target biomarker. The researchers use probes fitted with quantum dots to detect two unique genes, which confirms the presence of methicillin resistance in golden staph.

A chemical added at the PCR stage to the DNA tested makes the sample fluoresce when the genes are detected by the quantum dots — a reaction that can be captured easily using the camera on a mobile phone.

The result is a simple and rapid method of detecting the presence of the bacterium, while simultaneously ruling first-line treatment in or out.

Although currently at the proof-of-concept stage, the researchers say their system, which is powered by a simple battery, is suitable for rapid detection in different settings.

“We can see this being used easily not only in hospitals, but also in GP clinics and at patient bedsides,” says lead author, Macquarie’s Vinoth Kumar Rajendran.

Story Source:

Materials provided by Macquarie University. Note: Content may be edited for style and length.


Categories
ScienceDaily

Lessons of conventional imaging let scientists see around corners

Along with flying and invisibility, high on the list of every child’s aspirational superpowers is the ability to see through or around walls or other visual obstacles. That capability is now a big step closer to reality as scientists from the University of Wisconsin-Madison and the Universidad de Zaragoza in Spain, drawing on the lessons of classical optics, have shown that it is possible to image complex hidden scenes using a projected “virtual camera” to see around barriers.

The technology is described in a report today (Aug. 5, 2019) in the journal Nature. Once perfected, it could be used in a wide range of applications, from defense and disaster relief to manufacturing and medical imaging. The work has been funded largely by the military through the U.S. Defense Department’s Advanced Research Projects Agency (DARPA) and by NASA, which envisions the technology as a potential way to peer inside hidden caves on the moon and Mars.

Technologies to achieve what scientists call “non-line-of-sight imaging” have been in development for years, but technical challenges have limited them to fuzzy pictures of simple scenes. Challenges that could be overcome by the new approach include imaging far more complex hidden scenes, seeing around multiple corners and taking video.

“This non-line-of-sight imaging has been around for a while,” says Andreas Velten, a professor of biostatistics and medical informatics in the UW School of Medicine and Public Health and the senior author of the new Nature study. “There have been a lot of different approaches to it.”

The basic idea of non-line-of-sight imaging, Velten says, revolves around the use of indirect, reflected light, a light echo of sorts, to capture images of a hidden scene. Photons from thousands of pulses of laser light are reflected off a wall or another surface to an obscured scene, and the reflected, diffused light bounces back to sensors connected to a camera. The recaptured light particles, or photons, are then used to digitally reconstruct the hidden scene in three dimensions.

“We send light pulses to a surface and see the light coming back, and from that we can see what’s in the hidden scene,” Velten explains.
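The new work reformulates reconstruction as a wave problem, but the light-echo geometry Velten describes can be illustrated with the simpler backprojection approach used by earlier non-line-of-sight systems: each measured echo time confines the hidden reflector to an ellipse around the wall point, and the reflector sits where ellipses from different wall points agree. A toy 2D sketch, with a confocal geometry and all coordinates invented for illustration:

```python
import numpy as np

C = 1.0  # speed of light in convenient units (grid units per time unit)

# Hidden scene: a single point reflector behind the corner at (x, z)
hidden = np.array([0.6, 0.4])

# Confocal setup: laser and sensor address the same wall points along z = 0
wall_x = np.linspace(0.0, 1.0, 21)

# Measured round-trip times: wall point -> hidden point -> same wall point
times = 2 * np.hypot(wall_x - hidden[0], hidden[1]) / C

# Backprojection: every grid cell consistent with a measured echo time gets a
# vote; the true reflector lies on all the ellipses (circles here), so it peaks.
xs = np.linspace(0, 1, 101)
zs = np.linspace(0.05, 1, 96)
votes = np.zeros((len(zs), len(xs)))
for wx, t in zip(wall_x, times):
    dist = np.hypot(xs[None, :] - wx, zs[:, None])  # each cell -> wall point
    votes += np.isclose(2 * dist / C, t, atol=0.01)

zi, xi = np.unravel_index(votes.argmax(), votes.shape)
print(f"recovered reflector at x={xs[xi]:.2f}, z={zs[zi]:.2f}")
```

Backprojection like this degrades quickly with noise and scene complexity, which is exactly the limitation the wave-based "virtual camera" formulation in the paper is meant to overcome.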

Recent work by other research groups has focused on improving the quality of scene regeneration under controlled conditions using small scenes with single objects. The work presented in the new Nature report goes beyond simple scenes and addresses the primary limitations to existing non-line-of-sight imaging technology, including varying material qualities of the walls and surfaces of the hidden objects, large variations in brightness of different hidden objects, complex inter-reflection of light between objects in a hidden scene, and the massive amounts of noisy data used to reconstruct larger scenes.

Together, those challenges have stymied practical applications of emerging non-line-of-sight imaging systems.

Velten and his colleagues, including Diego Gutierrez of the Universidad de Zaragoza, turned the problem around, looking at it through a more conventional prism by applying the same math used to interpret images taken with conventional line-of-sight imaging systems. Rather than relying on a single reconstruction algorithm, the new method describes a whole class of imaging algorithms that share unique advantages.

Conventional systems, notes Gutierrez, interpret diffracted light as waves, which can be shaped into images by applying well known mathematical transformations to the light waves propagating through the imaging system.

In the case of non-line-of-sight imaging, the challenge of imaging a hidden scene, says Velten, is resolved by reformulating the non-line-of-sight imaging problem as a wave diffraction problem and then using well-known mathematical transforms from other imaging systems to interpret the waves and reconstruct an image of a hidden scene. By doing this, the new method turns any diffuse wall into a virtual camera.

“What we did was express the problem using waves,” says Velten, who also holds faculty appointments in UW-Madison’s Department of Electrical and Computer Engineering and the Department of Biostatistics and Medical Informatics, and is affiliated with the Morgridge Institute for Research and the UW-Madison Laboratory for Optical and Computational Instrumentation. “The systems have the same underlying math, but we found that our reconstruction is surprisingly robust, even using really bad data. You can do it with fewer photons.”

Using the new approach, Velten’s team showed that hidden scenes can be imaged despite the challenges of scene complexity, differences in reflector materials, scattered ambient light and varying depths of field for the objects that make up a scene.

The ability to essentially project a camera from one surface to another suggests that the technology can be developed to a point where it is possible to see around multiple corners: “This should allow us to image around an arbitrary number of corners,” says Velten. “To do so, light has to undergo multiple reflections and the problem is how do you separate the light coming from different surfaces? This ‘virtual camera’ can do that. That’s the reason for the complex scene: there are multiple bounces going on and the complexity of the scene we image is greater than what’s been done before.”

According to Velten, the technique can be applied to create virtual projected versions of any imaging system, even video cameras that capture the propagation of light through the hidden scene. Velten’s team, in fact, used the technique to create a video of light transport in the hidden scene, enabling visualization of light bouncing up to four or five times, which, according to the Wisconsin scientist, can be the basis for cameras to see around more than one corner.

The technology could be further and more dramatically improved if arrays of sensors can be devised to capture the light reflected from a hidden scene. The experiments described in the new Nature paper depended on just a single detector.

In medicine, the technology holds promise for things like robotic surgery. Now, the surgeon’s field of view is restricted when doing sensitive procedures on the eye, for example, and the technique developed by Velten’s team could provide a more complete picture of what’s going on around a procedure.

In addition to helping resolve many of the technical challenges of non-line-of-sight imaging, the technology, Velten notes, can be made to be both inexpensive and compact, meaning real-world applications are just a matter of time.


Categories
ScienceDaily

Google maps for tissues

Modern light microscopic techniques provide extremely detailed insights into organs, but the terabytes of data they produce are usually nearly impossible to process. New software, developed by a team led by MDC scientist Dr. Stephan Preibisch and now presented in Nature Methods, is helping researchers make sense of these reams of data.

It works almost like a magic wand. With the help of a few chemical tricks and ruses, scientists have for a few years now been able to render large structures like mouse brains and human organoids transparent. CLARITY is perhaps the most well-known of the many different sample clearing techniques, with which almost any object of study can be made nearly as transparent as water. This enables researchers to investigate cellular structures in ways they could previously only dream of.

And that’s not all. In 2015 another conjuring trick — called expansion microscopy — was presented in the journal Science. A research team at the Massachusetts Institute of Technology (MIT) in Cambridge discovered that it was possible to expand ultrathin slices of mouse brains to nearly five times their original volume, thereby allowing samples to be examined in even greater detail.

The software brings order to the data chaos

“With the aid of modern light-sheet microscopes, which are now found in many labs, large samples processed by these methods can be rapidly imaged,” says Dr. Stephan Preibisch, head of the research group on Microscopy, Image Analysis & Modeling of Developing Organisms at MDC’s Berlin Institute for Medical Systems Biology (BIMSB). “The problem, however, is that the procedure generates such large quantities of data — several terabytes — that researchers often struggle to sift through and organize the data.”

To create order in the chaos, Preibisch and his team have now developed a software program that, after a complex reconstruction of the data, works somewhat like Google Maps in 3D mode. “One can not only get an overview of the big picture, but can also zoom in to specifically examine individual structures at the desired resolution,” explains Preibisch, who has christened the software “BigStitcher.” The computer program, which any interested scientist can use, has now been presented in the scientific journal Nature Methods.

A team of twelve researchers from Berlin, Munich, the United Kingdom, and the United States was involved in the development. The paper’s two lead authors are David Hoerl, of Ludwig-Maximilians-Universitaet Muenchen and the Berlin Institute for Medical Systems Biology (BIMSB) of the MDC, and MDC researcher Dr. Fabio Rojas Rusak. The researchers show in their paper that algorithms can be used to reconstruct and scale the data acquired by light-sheet microscopy in a way that renders a supercomputer unnecessary. “Our software runs on any standard computer,” says Preibisch. “This allows the data to be easily shared across research teams.”
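Stitching begins with pairwise registration of overlapping tiles, for which phase correlation is a standard workhorse in image-stitching tools. A minimal 2D sketch with synthetic data (not BigStitcher's actual code):

```python
import numpy as np

def phase_correlation_shift(tile_a, tile_b):
    """Estimate the integer (dy, dx) translation between two overlapping
    image tiles via phase correlation: the normalized cross-power spectrum
    of the tiles' Fourier transforms peaks at the relative shift."""
    fa = np.fft.fft2(tile_a)
    fb = np.fft.fft2(tile_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12  # keep only phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # Peaks past the halfway point correspond to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Synthetic test: shift a random "sample" by (7, -3) and recover the offset
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (7, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, img))  # → (7, -3)
```

A full stitching pipeline then globally optimizes all pairwise shifts into one consistent mosaic, which is where handling terabyte-scale acquisitions gets hard.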

Data quality is also determined

The development of BigStitcher began about ten years ago. “At that time, I was still a PhD student and was thinking a lot about how to best handle very large amounts of data,” recalls Preibisch. “The frameworks we created back then have helped us to successfully tackle a very current problem.” But, of course, he adds, many new algorithms were also incorporated into the software.

BigStitcher can visualize on screen the previously imaged samples in any level of detail desired, but it can also do much more. “The software automatically assesses the quality of the acquired data,” says Preibisch. This is usually better in some parts of the object being studied than in others. “Sometimes, for example, clearing doesn’t work so well in a particular area, meaning that fewer details are captured there,” explains the MDC researcher.

“The brighter a particular region of, say, a mouse brain or a human organ is displayed on screen, the higher the validity and reliability of the acquired data,” says Preibisch, describing this additional feature of his software. And because even the best clearing techniques never achieve 100 percent transparency of the sample, the software lets users rotate and turn the image captured by the microscope in any direction on screen. It is thus possible to view the sample from any angle. “This is another new feature of our software,” says Preibisch.

Anyone can download the software for free

The zoom function allows biologists to find answers to many questions, such as: Where in the brain is cell division currently taking place? Where is RNA expressed? Or where do particular neuronal projections end? “In order to find all this out, it is first necessary to get an overview of the entire object of study, but then to be able to zoom in to view the smallest of details in high resolution,” explains Preibisch. Therefore, many labs today have a need for software like BigStitcher. The program is distributed within the Fiji framework, where any interested scientist can download and use the plug-in free of charge.


Categories
ScienceDaily

How electrons in transition metals get redistributed

The distribution of electrons in transition metals, which represent a large part of the periodic table of chemical elements, is responsible for many of their interesting properties used in applications. The magnetic properties of some of the members of this group of materials are, for example, exploited for data storage, whereas others exhibit excellent electrical conductivity. Transition metals also have a decisive role for novel materials with more exotic behaviour that results from strong interactions between the electrons. Such materials are promising candidates for a wide range of future applications.

In their experiment, whose results they report in a paper published today in Nature Physics, Mikhail Volkov and colleagues in the Ultrafast Laser Physics group of Prof. Ursula Keller exposed thin foils of the transition metals titanium and zirconium to short laser pulses. They observed the redistribution of the electrons by recording the resulting changes in optical properties of the metals in the extreme ultraviolet (XUV) domain. In order to be able to follow the induced changes with sufficient temporal resolution, XUV pulses with a duration of only a few hundred attoseconds (10^-18 s) were employed in the measurement. By comparing the experimental results with theoretical models, developed by the group of Prof. Angel Rubio at the Max Planck Institute for the Structure and Dynamics of Matter in Hamburg, the researchers established that the change, unfolding in less than a femtosecond (10^-15 s), is due to a modification of the electron localization in the vicinity of the metal atoms. The theory also predicts that in transition metals with more strongly filled outer electron shells, an opposite motion — that is, a delocalization of the electrons — is to be expected.

Ultrafast control of material properties

The electron distribution defines the microscopic electric fields inside a material, which not only hold a solid together but also to a large extent determine its macroscopic properties. By changing the distribution of electrons, one can thus steer the characteristics of a material as well. The experiment of Volkov et al. demonstrates that this is possible on time scales that are considerably shorter than the oscillation cycle of visible light (around two femtoseconds). Even more important is the finding that these time scales are much shorter than the so-called thermalization time, which is the time within which the electrons would wash out the effects of an external control of the electron distribution through collisions among themselves and with the crystal lattice.

Initial surprise

Initially, it came as a surprise that the laser pulse would lead to an increased electron localization in titanium and zirconium. A general trend in nature is that if bound electrons are provided with more energy, they become less localized. The theoretical analysis, which supports the experimental observations, showed that the increased localization of the electron density is a net effect resulting from the stronger filling of the characteristic partially filled d-orbitals of the transition-metal atoms. For transition metals whose d-orbitals are already more than half filled (that is, elements further towards the right in the periodic table), the net effect is the opposite and corresponds to a delocalization of the electronic density.

Towards faster electronic components

While the result now reported is of fundamental nature, the experiments demonstrate the possibility of a very fast modification of material properties. Such modulations are used in electronics and opto-electronics for the processing of electronic signals or the transmission of data. While present components process signal streams with frequencies in the gigahertz (10^9 Hz) range, the results of Volkov and co-workers indicate the possibility of signal processing at petahertz frequencies (10^15 Hz). These rather fundamental findings might therefore inform the development of the next generations of ever-faster components, and through this indirectly find their way into our daily life.

Story Source:

Materials provided by ETH Zurich Department of Physics. Original written by Lukas Gallmann. Note: Content may be edited for style and length.


Categories
3D Printing Industry

SYS Systems and Torus Group partner to produce 3D printed assembly for packaging industry

SYS Systems, the UK-based reseller and platinum partner of Stratasys, has announced that it will be bringing an unusual exhibit to the 2019 TCT Show in Birmingham, UK. Teamed up with Torus Group, a bespoke metrology specialist also headquartered in the UK, SYS Systems will be showing a live demonstration of a novel 3D printing application for […]

Author: Anas Essop

Categories
3D Printing Industry

WATCH: Professional test ride of ROBOZE’s 3D printed skateboard

Italian 3D printer manufacturer ROBOZE has created a fully 3D printed skateboard using Carbon PA, PEEK, Flex (TPE) and PP. Made to demonstrate the applications of ROBOZE’s materials which “Print Strong Like Metal,” the board has been developed in partnership with Impact Surf Shop. Fabiano Lauciello, a professional skater, was selected to test it. “I was pleasantly surprised […]

Author: Tia Vialva

Categories
3D Printing Industry

Keselowski Advanced Manufacturing enters 3D printer material partnership with Elementum 3D

North Carolina-based hybrid manufacturing company Keselowski Advanced Manufacturing (KAM) has announced a partnership with additive manufacturing material developer Elementum 3D, headquartered in Colorado. Under the collaboration, Elementum 3D will supply KAM with previously unprintable advanced materials for additive manufacturing. In turn, KAM will use the advanced materials on its industrial additive manufacturing systems, such as […]

Author: Anas Essop