Yubico’s Android SDK Promises to Bolster Mobile App Security

Yubico recently made its Android SDK available to developers, who can now build support for YubiKey directly into their Android apps. The Android SDK joins the company’s iOS SDK, which means developers can create a consistent log-in experience across their mobile apps no matter the platform. 

Yubico specializes in mobile security. The company makes the YubiKey, a hardware USB key that’s plugged into devices such as mobile phones and laptops for authentication purposes. It’s popular with enterprise IT departments that want to maintain two-factor or cryptography-based security protocols. 

The company says the arrival of its Android SDK means Android mobile app developers can easily add support for YubiKey via YubiOTP, OATH, and PIV authentication over USB and NFC connections. The SDK should suit all developers, regardless of their preferred authentication method. 

Yubico suggests that the YubiKey enjoys several advantages over competing authentication tools, such as SMS or Google Authenticator. For example, an external, single-purpose device minimizes the risk from malware or phishing attacks. It’s faster than copying and pasting one-time codes and provides a higher level of security for high-risk actions, such as large account transfers. YubiKey is an ideal security tool for the financial and health industries.

The SDK itself is a multi-module library with components such as the YubiKey module, and the OATH, PIV, OTP, and MGMT modules. Yubico says the library also includes a demo application implemented in Kotlin, which should serve as a complete example of all the features in the library.

The best way to get started is to pick up a YubiKey and load the YubiKit Demo App to learn the integration basics. You can find more information at Yubico’s GitHub repo and developer guide.

Go to Source
Author: EricZeman


Nanodevices show how cells change with time, by tracking from the inside

For the first time, scientists have introduced minuscule tracking devices directly into the interior of mammalian cells, giving an unprecedented peek into the processes that govern the beginning of development.

This work on one-cell embryos is set to shift our understanding of the mechanisms that underpin cellular behaviour in general, and may ultimately provide insights into what goes wrong in ageing and disease.

The research, led by Professor Tony Perry from the Department of Biology and Biochemistry at the University of Bath, involved injecting a silicon-based nanodevice together with sperm into the egg cell of a mouse. The result was a healthy, fertilised egg containing a tracking device.

The tiny devices are a little like spiders, complete with eight highly flexible ‘legs’. The legs measure the ‘pulling and pushing’ forces exerted in the cell interior to a very high level of precision, thereby revealing the cellular forces at play and showing how intracellular matter rearranged itself over time.

The nanodevices are incredibly thin: at 22 nanometres, they are similar in thickness to some of the cell’s structural components and approximately 100,000 times thinner than a pound coin. This means they have the flexibility to register the movement of the cell’s cytoplasm as the one-cell embryo embarks on its voyage towards becoming a two-cell embryo.
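As a back-of-the-envelope check of the thickness comparison, the quoted ratio follows from the numbers above; note the pound coin thickness (about 2.8 mm) is an assumption, not a figure from the article:

```python
# Sanity check of the "~100,000 times thinner than a pound coin" claim.
# The coin thickness (2.8 mm) is an assumed typical value, not from the article.
device_thickness_m = 22e-9   # 22 nanometres, as quoted
coin_thickness_m = 2.8e-3    # ~2.8 mm, assumed pound coin thickness

ratio = coin_thickness_m / device_thickness_m
print(f"A pound coin is roughly {ratio:,.0f} times thicker")
```

With these inputs the ratio comes out near 127,000, consistent with the article's rounded "approximately 100,000 times thinner."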

“This is the first glimpse of the physics of any cell on this scale from within,” said Professor Perry. “It’s the first time anyone has seen from the inside how cell material moves around and organises itself.”


The activity within a cell determines how that cell functions, explains Professor Perry. “The behaviour of intracellular matter is probably as influential to cell behaviour as gene expression,” he said. Until now, however, this complex dance of cellular material has remained largely unstudied. As a result, scientists have been able to identify the elements that make up a cell, but not how the cell interior behaves as a whole.

“From studies in biology and embryology, we know about certain molecules and cellular phenomena, and we have woven this information into a reductionist narrative of how things work, but now this narrative is changing,” said Professor Perry. The narrative was written largely by biologists, who brought with them the questions and tools of biology. What was missing was physics. Physics asks about the forces driving a cell’s behaviour, and provides a top-down approach to finding the answer.

“We can now look at the cell as a whole, not just the nuts and bolts that make it.”

Mouse embryos were chosen for the study because of their relatively large size (they measure 100 microns, or 100-millionths of a metre, in diameter, compared to a regular cell which is only 10 microns [10-millionths of a metre] in diameter). This meant that inside each embryo, there was space for a tracking device.
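The space advantage follows directly from the diameters quoted above, since volume scales with the cube of diameter:

```python
# Volume comparison from the diameters in the article: a 100-micron mouse
# embryo versus a typical 10-micron cell. Volume scales as diameter cubed.
embryo_d_um = 100
cell_d_um = 10

volume_ratio = (embryo_d_um / cell_d_um) ** 3
print(f"The embryo has {volume_ratio:,.0f}x the volume of a typical cell")
```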

The researchers made their measurements by examining video recordings taken through a microscope as the embryo developed. “Sometimes the devices were pitched and twisted by forces that were even greater than those inside muscle cells,” said Professor Perry. “At other times, the devices moved very little, showing the cell interior had become calm. There was nothing random about these processes — from the moment you have a one-cell embryo, everything is done in a predictable way. The physics is programmed.”

The results add to an emerging picture of biology that suggests material inside a living cell is not static, but instead changes its properties in a pre-ordained way as the cell performs its function or responds to the environment. The work may one day have implications for our understanding of how cells age or stop working as they should, which is what happens in disease.

The study is published this week in Nature Materials and involved a trans-disciplinary partnership between biologists, materials scientists and physicists based in the UK, Spain and the USA.

Go to Source


Codugh Pays Developers for Every API Call

Codugh wants developers to directly earn money for the APIs they create. The company is building a marketplace where developers publish APIs. As those APIs are used, regardless of integrated application, the API developer gets paid. It’s a pay per call model, and Codugh has partnered with Bitcoin SV (BSV) to make it happen.

BSV facilitates high transaction throughput and microtransactions, which makes it a suitable platform for rolling out Codugh’s marketplace. Developers can set their own rates, but it will likely be a few cents, or less, per call. Developers who have success on the platform will rely on thousands, if not millions, of microtransactions to build their compensation. In addition to getting paid in Bitcoin, developers can leverage other platform features including performance badges, ratings, and user feedback.
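The pay-per-call economics are simple to sketch; the per-call rate and call volumes below are illustrative examples consistent with the article's "a few cents, or less, per call," not Codugh's actual figures:

```python
# Illustrative pay-per-call earnings under the model described above.
# Rate and call volumes are hypothetical, not Codugh figures.
rate_per_call_usd = 0.005  # half a cent per call, set by the developer

for monthly_calls in (1_000, 100_000, 1_000_000):
    earnings = monthly_calls * rate_per_call_usd
    print(f"{monthly_calls:>9,} calls/month -> ${earnings:,.2f}")
```

Even at sub-cent rates, a popular API reaching millions of calls per month yields meaningful income, which is why microtransaction support matters for the model.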

Getting started with Codugh takes three simple steps. First, developers create and deploy the API. Second, they upload the API endpoints to the Codugh marketplace. Third, each time the API is called, the developer is paid in real time.

Codugh has not yet launched. For early access, sign up at the Codugh site, where the company also addresses intellectual property rights, authentication, and other frequently asked questions. 

Go to Source
Author: ecarter


GitHub Discusses Growth of Actions API

GitHub Actions puts powerful CI/CD and automation directly into the developer workflow, and it became generally available just six months ago. Since then, we’ve continued to improve with artifact and dependency caching for faster workflows, self-hosted runners for greater flexibility, and the Actions API for extensibility. The community response has been amazing and we’re excited to share some new and upcoming enhancements.

Community-powered CI/CD and workflows

At its core, GitHub Actions allows you to automate any development workflow. Just like how our projects build on each other’s work through open source packages, GitHub Actions does the same thing for workflow automation. You can quickly reuse existing workflows or build on the thousands of free actions in the GitHub Marketplace. There’s an action for almost everything, including Kubernetes deployments, linting, SMS alerts, and automatically assigning and labeling issues—and the list keeps growing every day.

  • GitHub Marketplace now has over 3,200 Actions, representing a 500% increase in less than six months. We’re especially grateful to everyone who took part in the recent GitHub Actions Hackathon, with over 700 submissions. Check out a few of our favorites.
  • There’s a growing and extensive ecosystem, thanks to our partners. It’s great to see how they’re expanding the ways developers can deploy their code, with new Actions from DigitalOcean, Tencent Cloud, HashiCorp, and Docker.

We’re thrilled to see how GitHub Actions is helping developers, teams, enterprises, and the open source community automate their workflows—so developers can spend more time writing code.

Scaling GitHub Actions for teams and enterprises

The community momentum behind GitHub Actions has been tremendous, and we’ve seen quick adoption with teams and enterprise customers. We want to highlight new, enterprise-focused features for GitHub Actions.

Share self-hosted runners across an organization

Organizations can now share and manage self-hosted runners across all of their repositories using new policies and labels. This enables large teams and enterprises to centralize the management of their core infrastructure.

  • Organization self-hosted runners make it easier for multiple repositories to reuse a set of runners. This ensures runner environments are correctly configured for the organization, resources are used efficiently, and errors resulting in extra work are avoided.
  • Custom runner labels are used to route workflows to particular runners. For example, compute-intensive workflows may use custom labels to ensure they’re always run on virtual machines with eight cores.
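The two features above can be sketched in a short workflow fragment; the job name and the labels beyond the standard self-hosted label are illustrative:

```yaml
# Hypothetical workflow snippet: route a compute-intensive job to
# organization-level self-hosted runners via custom labels.
on: push

jobs:
  train-model:
    # Only runners carrying ALL of these labels will pick up this job.
    runs-on: [self-hosted, linux, x64, 8-core]
    steps:
      - uses: actions/checkout@v2
      - run: make train
```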

Improvements to the daily experience

Small changes can have a big impact on the daily experience for developers. Here are a few improvements that we recently shipped:

  • Run defaults allow users to set shell and working directory defaults in their workflows, streamlining workflow files and decreasing the likelihood of errors.
  • Explicit include matrix gives more flexibility to customize specific legs of a parallel set of jobs, commonly used for platform-specific customization.
  • Job outputs allow a workflow to easily pass data to downstream jobs, adding flexibility to how developers automate their work.
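The three improvements above can be shown together in one hypothetical workflow (job names, steps, and values are illustrative):

```yaml
# Hypothetical workflow combining run defaults, an explicit include
# matrix entry, and job outputs passed to a downstream job.
on: push

defaults:
  run:
    shell: bash                  # run default: every step uses bash
    working-directory: scripts   # run default: shared working directory

jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        include:
          - os: windows-latest   # explicit include: customize one leg
            archive-cmd: 7z a out.zip
    runs-on: ${{ matrix.os }}
    outputs:
      version: ${{ steps.ver.outputs.version }}  # job output
    steps:
      - id: ver
        run: echo "::set-output name=version::1.2.3"

  release:
    needs: build
    runs-on: ubuntu-latest
    steps:
      # Downstream job reads the upstream job's output directly.
      - run: echo "Releasing ${{ needs.build.outputs.version }}"
```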

Go to Source
Author: ProgrammableWeb PR


In a first, NASA measures wind speed on a brown dwarf

For the first time, scientists have directly measured wind speed on a brown dwarf, an object larger than Jupiter (the largest planet in our solar system) but not quite massive enough to become a star. To achieve the finding, they used a new method that could also be applied to learn about the atmospheres of gas-dominated planets outside our solar system.

Described in a paper in the journal Science, the work combines observations by a group of radio telescopes with data from NASA’s recently retired infrared observatory, the Spitzer Space Telescope, managed by the agency’s Jet Propulsion Laboratory in Southern California.

Officially named 2MASS J10475385+2124234, the target of the new study was a brown dwarf located 32 light-years from Earth — a stone’s throw away, cosmically speaking. The researchers detected winds moving around the brown dwarf at 1,425 mph (2,293 kph). For comparison, Neptune’s atmosphere features the fastest winds in the solar system, which whip through at more than 1,200 mph (about 2,000 kph).

Measuring wind speed on Earth means clocking the motion of our gaseous atmosphere relative to the planet’s solid surface. But brown dwarfs are composed almost entirely of gas, so “wind” refers to something slightly different. The upper layers of a brown dwarf are where portions of the gas can move independently. At a certain depth, the pressure becomes so intense that the gas behaves like a single, solid ball that is considered the object’s interior. As the interior rotates, it pulls the upper layers — the atmosphere — along so that the two are almost in sync.

In their study, the researchers measured the slight difference in speed of the brown dwarf’s atmosphere relative to its interior. With an atmospheric temperature of over 1,100 degrees Fahrenheit (600 degrees Celsius), this particular brown dwarf radiates a substantial amount of infrared light. Coupled with its close proximity to Earth, this characteristic made it possible for Spitzer to detect features in the brown dwarf’s atmosphere as they rotate in and out of view. The team used those features to clock the atmospheric rotation speed.

To determine the speed of the interior, they focused on the brown dwarf’s magnetic field. A relatively recent discovery found that the interiors of brown dwarfs generate strong magnetic fields. As the brown dwarf rotates, the magnetic field accelerates charged particles that in turn produce radio waves, which the researchers detected with the radio telescopes in the Karl G. Jansky Very Large Array in New Mexico.
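The comparison boils down to converting the two rotation periods into equatorial speeds and differencing them. A minimal sketch of that arithmetic follows; the radius and periods are illustrative stand-ins chosen to land near the reported wind speed, not the paper's measured values:

```python
import math

# Illustrative reconstruction of the wind-speed method: compare the
# atmosphere's rotation (infrared features, Spitzer) with the interior's
# rotation (radio emission, VLA). All numbers are stand-ins, not the
# study's actual measurements.
radius_m = 6.99e7        # assumed: roughly one Jupiter radius
p_interior_s = 6302.0    # assumed interior (radio) rotation period
p_atmosphere_s = 6245.0  # assumed atmospheric (infrared) rotation period

v_interior = 2 * math.pi * radius_m / p_interior_s
v_atmosphere = 2 * math.pi * radius_m / p_atmosphere_s

wind_ms = v_atmosphere - v_interior   # positive: atmosphere leads interior
wind_mph = wind_ms * 2.23694
print(f"wind ~ {wind_ms:.0f} m/s ~ {wind_mph:.0f} mph")
```

A shorter atmospheric period than interior period means the atmosphere super-rotates, as Jupiter's does; the tiny fractional difference in periods is what yields the ~1,425 mph figure.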

Planetary Atmospheres

The new study is the first to demonstrate this comparative method for measuring wind speed on a brown dwarf. To gauge its accuracy, the group tested the technique using infrared and radio observations of Jupiter, which is also composed mostly of gas and has a physical structure similar to a small brown dwarf. The team compared the rotation rates of Jupiter’s atmosphere and interior using data that was similar to what they were able to collect for the much more distant brown dwarf. They then confirmed their calculation for Jupiter’s wind speed using more detailed data collected by probes that have studied Jupiter up close, thus demonstrating that their approach for the brown dwarf worked.

Scientists have previously used Spitzer to infer the presence of winds on exoplanets and brown dwarfs based on variations in the brightness of their atmospheres in infrared light. And data from the High Accuracy Radial velocity Planet Searcher (HARPS) — an instrument on the European Southern Observatory’s La Silla telescope in Chile — has been used to make a direct measurement of wind speeds on a distant planet.

But the new paper represents the first time scientists have directly compared the atmospheric speed with the speed of a brown dwarf’s interior. The method employed could be applied to other brown dwarfs or to large planets if the conditions are right, according to the authors.

“We think this technique could be really valuable to providing insight into the dynamics of exoplanet atmospheres,” said lead author Katelyn Allers, an associate professor of physics and astronomy at Bucknell University in Lewisburg, Pennsylvania. “What’s really exciting is being able to learn about how the chemistry, the atmospheric dynamics and the environment around an object are interconnected, and the prospect of getting a really comprehensive view into these worlds.”

The Spitzer Space Telescope was decommissioned on Jan. 30, 2020, after more than 16 years in space. JPL managed Spitzer mission operations for NASA’s Science Mission Directorate in Washington. Spitzer science data continue to be analyzed by the science community via the Spitzer data archive located at the Infrared Science Archive housed at IPAC at Caltech. Science operations were conducted at the Spitzer Science Center at IPAC at Caltech in Pasadena. Spacecraft operations were based at Lockheed Martin Space in Littleton, Colorado. Caltech manages JPL for NASA.

For more information about Spitzer, visit:

Go to Source


How big is the neutron?

The size of neutrons cannot be measured directly: it can only be determined from experiments involving other particles. While such calculations have so far been made in a very indirect way using old measurements with heavy atoms, a team at the Institute of Theoretical Physics at Ruhr-Universität Bochum (RUB) has taken a different approach. By combining their very accurate calculations with recent measurements on light nuclei, the researchers have arrived at a more direct methodology.

Their results, which differ significantly from previous ones, are described by the researchers, headed by Professor Evgeny Epelbaum, in the journal Physical Review Letters of 25 February 2020.

Neutrons and protons, jointly referred to as nucleons, form atomic nuclei and are therefore among the most common particles in our universe. The nucleons themselves consist of strongly interacting quarks and gluons and have a complex internal structure, the precise understanding of which is the subject of active research. One of the fundamental properties of nucleons is their size as determined by charge distribution. “Inside, there are positive and negative charge regions which, when taken together, result in zero total charge for the neutron,” explains Evgeny Epelbaum. “The neutron’s radius can be thought of as the spatial extension of the charge distribution. It thus determines the size of the neutrons.”

A very indirect method

To date, determinations of this quantity were based on scattering experiments with extremely low-energy neutrons off the electron shells of heavy atoms such as bismuth. “Researchers would direct such a neutron beam at a target of heavy isotopes carrying many electrons and determine how many neutrons passed through,” says Bochum-based physicist Dr. Arseniy Filin. This allowed researchers to extract the size of the neutron. “This is a very indirect method,” points out the physicist.

In their current project, the group has for the first time determined the neutron charge radius from the lightest atomic nuclei. In a theoretical study, they have succeeded in calculating the deuteron radius with high accuracy. The deuteron is one of the simplest atomic nuclei and consists of one proton and one neutron. Since the two nucleons in the deuteron are relatively far apart, the deuteron turns out to be much larger than its two constituents. “Our accurate prediction of the deuteron radius, combined with high-precision spectroscopic measurements of the deuteron-proton radius difference, yielded a value for the neutron radius that is about 1.7 standard deviations off the previous determinations,” concludes Dr. Vadim Baru from the Helmholtz Institute for Radiation and Nuclear Physics at the University of Bonn. Accordingly, the previously assumed value for the size of the neutron needs to be corrected.
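The "1.7 standard deviations" statement is a standard significance comparison: the gap between two measurements divided by their combined uncertainty. A generic sketch of that calculation, with invented numbers (the study's actual radii and uncertainties are not quoted in the article):

```python
import math

# Generic tension-in-sigma calculation, of the kind behind the statement
# that the new neutron radius is ~1.7 standard deviations from earlier
# values. The values and uncertainties below are invented for illustration.
r_new, err_new = -0.106, 0.005   # hypothetical squared charge radius, fm^2
r_old, err_old = -0.116, 0.003   # hypothetical earlier determination, fm^2

combined_err = math.hypot(err_new, err_old)   # add uncertainties in quadrature
n_sigma = abs(r_new - r_old) / combined_err
print(f"discrepancy ~ {n_sigma:.1f} sigma")
```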

Story Source:

Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.

Go to Source


Making 3-D printing smarter with machine learning

3-D printing is often touted as the future of manufacturing. It allows us to directly build objects from computer-generated designs, meaning industry can manufacture customized products in-house, without outsourcing parts. But 3-D printing has a high degree of error, such as shape distortion. Each printer is different, and the printed material can shrink and expand in unexpected ways. Manufacturers often need to try many iterations of a print before they get it right.

What happens to the unusable print jobs? They must be discarded, presenting a significant environmental and financial cost to industry.

A team of researchers from USC Viterbi School of Engineering is tackling this problem, with a new set of machine learning algorithms and a software tool called PrintFixer, to improve 3-D printing accuracy by 50 percent or more, making the process vastly more economical and sustainable.

The work, recently published in IEEE Transactions on Automation Science and Engineering, describes a process called “convolution modeling of 3-D printing.” It’s among a series of 15 journal articles from the research team covering machine learning for 3-D printing.

The team, led by Qiang Huang, associate professor of industrial and systems engineering, chemical engineering and materials science, along with Ph.D. students Yuanxiang Wang, Nathan Decker, Mingdong Lyu, Weizhi Lin and Christopher Henson has so far received $1.4M funding support, including a recent $350,000 NSF grant. Their objective is to develop an AI model that accurately predicts shape deviations for all types of 3-D printing and make 3-D printing smarter.

“What we have demonstrated so far is that in printed examples the accuracy can improve around 50 percent or more,” Huang said. “In cases where we are producing a 3-D object similar to the training cases, overall accuracy improvement can be as high as 90 percent.”

“It can actually take industry eight iterative builds to get one part correct, for various reasons,” Huang said, “and this is for metal, so it’s very expensive.”

Every 3-D printed object results in some slight deviation from the design, whether this is due to printed material expanding or contracting when printed, or due to the way the printer behaves.

PrintFixer uses data gleaned from past 3-D printing jobs to train its AI to predict where the shape distortion will happen, in order to fix print errors before they occur.

Huang said that the research team had aimed to create a model that produced accurate results using the minimum amount of 3-D printing source data.

“From just five to eight selected objects, we can learn a lot of useful information,” Huang said. “We can leverage small amounts of data to make predictions for a wide range of objects.”

The team has trained the model to work with the same accuracy across a variety of applications and materials — from metals for aerospace manufacturing, to thermal plastics for commercial use. The researchers are also working with a dental clinic in Australia on the 3-D printing of dental models.

“So just like when a human learns to play baseball, you’ll learn softball or some other related sport much quicker,” said Decker, who leads the software development effort in Huang’s group. “In that same way, our AI can learn much faster when it has seen it a few times.”

“So you can look at it,” said Decker, “and see where there are going to be areas that are greater than your tolerances, and whether you want to print it.”

He said that users could opt to print with a different, higher-quality printer and use the software to predict whether that would provide a better result.

“But if you don’t want to change the printer, we also have incorporated functionality into the software package allowing the user to compensate for the errors and change the object’s shape — to take the parts that are too small and increase their size, while decreasing the parts that are too big,” Decker said. “And then, when they print, they should print with the correct size the first time.”
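The compensation idea Decker describes (enlarge what prints too small, shrink what prints too big) can be sketched with a simple linear model fitted to past prints. This is a toy illustration of the general approach, not PrintFixer's actual convolution model, and all numbers are invented:

```python
import numpy as np

# Toy error compensation: fit how printed dimensions deviate from designed
# ones, then pre-distort the design so the finished print lands on target.
# PrintFixer's real model is far richer; these measurements are invented.
designed = np.array([10.0, 20.0, 30.0, 40.0])   # mm, nominal sizes
printed  = np.array([10.4, 20.7, 31.1, 41.4])   # mm, measured results

# Fit printed ~ a * designed + b
a, b = np.polyfit(designed, printed, 1)

def compensate(target_mm):
    """Dimension to put in the design file so the print measures target_mm."""
    return (target_mm - b) / a

for t in designed:
    print(f"target {t:.1f} mm -> design at {compensate(t):.2f} mm")
```

Because this printer tends to print oversized (a > 1), the compensated design dimensions come out slightly smaller than the targets, so the part "should print with the correct size the first time."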

The team’s objective is for the software tool to be available to everyone, from large scale commercial manufacturers to 3-D printing hobbyists. Users from around the world will also be able to contribute to improving the software AI through sharing of print output data in a database.

“Say I’m working with a MakerBot 3-D printer using PLA (a bioplastic used in 3-D Printing), I can put that in the database, and somebody using the same model and material could take my data and learn from it,” Decker said.

“Once we get a lot of people around the world using this, all of a sudden, you have a really incredible opportunity to leverage a lot of data, and that could be a really powerful thing,” he said.

Go to Source


A close look at thin ice

On frigid days, water vapor in the air can transform directly into solid ice, depositing a thin layer on surfaces such as a windowpane or car windshield. Though commonplace, this process is one that has kept physicists and chemists busy figuring out the details for decades.

In a new Nature paper, an international team of scientists describe the first-ever visualization of the atomic structure of two-dimensional ice as it formed. Insights from the findings, which were driven by computer simulations that inspired experimental work, may one day inform the design of materials that make ice removal a simpler and less costly process.

“One of the things that I find very exciting is that this challenges the traditional view of how ice grows,” says Joseph S. Francisco, an atmospheric chemist at the University of Pennsylvania and an author on the paper.

“Knowing the structure is very important,” adds coauthor Chongqin Zhu, a postdoctoral fellow in Francisco’s group who led much of the computational work for the study. “Low-dimensional water is ubiquitous in nature and plays a critical role in an incredibly broad spectrum of sciences, including materials science, chemistry, biology, and atmospheric science.

“It also has practical significance. For example, removing ice is critical when it comes to things like wind turbines, which cannot function when they are covered in ice. If we understand the interaction between water and surfaces, then we might be able to develop new materials to make this ice removal easier.”

In recent years, Francisco’s lab has devoted considerable attention to studying the behavior of water, and specifically ice, at the interface of solid surfaces. What they’ve learned about ice’s growth mechanisms and structures in this context helps them understand how ice behaves in more complex scenarios, like when interacting with other chemicals and water vapor in the atmosphere.

“We’re interested in the chemistry of ice at the transition with the gas phase, as that’s relevant to the reactions that are happening in our atmosphere,” Francisco explains.

To understand basic principles of ice growth, researchers have entered this area of study by investigating two-dimensional structures: layers of ice that are only several water molecules thick.

In previous studies of two-dimensional ice, using computational methods and simulations, Francisco, Zhu, and colleagues showed that ice grows differently depending on whether a surface repels or attracts water, and the structure of that surface.

In the current work, they sought real-world verification of their simulations, reaching out to a team at Peking University to see if they could obtain images of two-dimensional ice.

The Peking team employed super-powerful atomic force microscopy, which uses a mechanical probe to “feel” the material being studied, translating the feedback into nanoscale-resolution images. Atomic force microscopy is capable of capturing structural information with a minimum of disruption to the material itself, allowing the scientists to identify even unstable intermediate structures that arose during the process of ice formation.

Virtually all naturally occurring ice on Earth is known as hexagonal ice for its six-sided structure. This is why snowflakes all have six-fold symmetry. One plane of hexagonal ice has a similar structure to that of two-dimensional ice and can terminate in two types of edges — “zigzag” or “armchair.” Usually this plane of natural ice terminates with zigzag edges.

However, when ice is grown in two dimensions, researchers find that the pattern of growth is different. The current work, for the first time, shows that the armchair edges can be stabilized and that their growth follows a novel reaction pathway.

“This is a totally different mechanism from what was known,” Zhu says.

Although the zigzag growth patterns were previously believed to only have six-membered rings of water molecules, both Zhu’s calculations and the atomic force microscopy revealed an intermediate stage where five-membered rings were present.

This result, the researchers say, may help explain the experimental observations reported in their 2017 PNAS paper, which found that ice could grow in two different ways on a surface, depending on the properties of that surface.

In addition to lending insight into the future design of materials conducive to ice removal, the techniques used in the work are also applicable to probing the growth of a large family of two-dimensional materials beyond two-dimensional ices, thus opening a new avenue for visualizing the structure and dynamics of low-dimensional matter.

For chemist Jeffrey Saven, a professor in Penn Arts & Sciences who was not directly involved in the current work, the collaboration between the theorists in Francisco’s group and their colleagues in China called to mind a parable he learned from a mentor during his training.

“An experimentalist is talking with theorists about data collected in the lab. The mediocre theorist says, ‘I can’t really explain your data.’ The good theorist says, ‘I have a theory that fits your data.’ The great theorist says, ‘That’s interesting, but here is the experiment you should be doing and why.'”

To build on this successful partnership, Zhu, Francisco, and their colleagues are embarking on theoretical and experimental work to begin to fill in the gaps related to how two-dimensional ice builds into three dimensions.

“The two-dimensional work is fundamental to laying the background,” says Francisco. “And having the calculations verified by experiments is so good, because that allows us to go back to the calculations and take the next bold step toward three dimensions.”

“Looking for features of three-dimensional ice will be the next step,” Zhu says, “and should be very important in looking for applications of this work.”

Go to Source


Pitfalls with nanocontainers for drug delivery

Nanocapsules and other containers can transport drugs through a patient’s body directly to the origin of the disease and release them there in a controlled manner. Such sophisticated systems are occasionally used in cancer therapy. Because they work very specifically, they have fewer side effects than drugs that are distributed throughout the entire organism.

This method is known in science as drug delivery. Chemistry professor Ann-Christin Pöppler from Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany, is convinced that this method still has great development potential. She analyzes the molecular capsules that enclose drugs like a container and transport them to the site of action: “My group wants to understand in as much detail as possible how the container molecules and the active substances arrange themselves and what properties result from this,” she says.

Polymeric micelles as research objects

The junior professor is mainly investigating polymeric micelles. These consist of many molecular chains, which assemble into spherical structures. Such micelles are already on the market as drug containers. They are used in cancer therapies as well as in cosmetic products such as make-up remover lotions. When they come into contact with fat-soluble substances, the chains arrange themselves on the substance’s surface and ultimately surround it like a coat of hair. This forms a container with a “water-loving” outer shell and a “fat-loving” core.

“Little is known about the molecular origin of the properties of these structures,” says Pöppler. In the scientific journal Angewandte Chemie, the researcher and co-authors from JMU recently described an effect that is important for the design of future drug delivery systems: If increasing amounts of active ingredients are packed into the polymeric micelles, their dissolution suffers — the release of the active ingredients then becomes increasingly difficult.

Active ingredients glue the micelles together

The Würzburg research team found the reason for the decreasing solubility through a set of different experiments: As the container is loaded more and more, the active substances no longer settle exclusively in the core but also on the container surface. There they can almost glue the individual micelle hairs together. These molecular interactions reduce the solubility of the entire structure.

Next, the team hopes to find out whether the dissolution of the container can be improved by structural changes to the micelles. One of the goals of drug delivery is to ensure that a container absorbs as much active substance as possible and dissolves as well as possible in the body.

Polymer chemistry and pharmacy involved

Ann-Christin Pöppler cooperated with two other JMU groups in this work. The polymeric micelles were produced by Robert Luxenhofer, Professor of Polymer Functional Materials. The dissolution tests were carried out in the team of Professor Lorenz Meinel, who heads the Chair of Pharmaceutical Technology and Biophysics.

The polymeric micelles used were compounds from the substance classes poly(2-oxazoline)s and poly(2-oxazine)s. Curcumin was used as a model active substance because this ingredient of the spice plant turmeric is very easy to visualise spectroscopically. The structures of the containers loaded with different amounts of curcumin were determined by solid-state NMR spectroscopy and other analytical methods.

Story Source:

Materials provided by University of Würzburg. Note: Content may be edited for style and length.

Go to Source


New hunt for dark matter

Dark matter is known only by its gravitational effect on massive astronomical bodies; it has yet to be directly observed or even identified. One theory suggests that dark matter could be a particle called the axion, and that axions could be detected with laser-based experiments that already exist: gravitational-wave observatories.

The hunt is on for dark matter. There are many theories as to what manner of thing it might turn out to be, but many physicists believe dark matter is a weakly interacting massive particle, or WIMP. This means it interacts only rarely with ordinary matter, which would explain why it has never been seen directly. But it must also have at least some mass, since its presence can be inferred from its gravitational attraction.

There have been enormous efforts to detect WIMP dark matter, including with the Large Hadron Collider in Switzerland, but WIMPs haven’t been observed yet. An alternative candidate particle gaining attention is the axion.

“We assume the axion is very light and barely interacts with our familiar kinds of matter. Therefore, it is considered a good candidate for dark matter,” said Assistant Professor Yuta Michimura from the Department of Physics at the University of Tokyo. “We don’t know the mass of the axion, but we usually think it is less than that of the electron. Our universe is filled with dark matter and it’s estimated there are 500 grams of dark matter within the Earth, about the mass of a squirrel.”
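
The “500 grams within the Earth” figure can be roughly reproduced with a back-of-envelope calculation. The sketch below assumes the commonly cited local dark matter density of roughly 0.3–0.4 GeV/cm³ near the solar system; that density and the resulting number are illustrative assumptions, not values given in the article.

```python
import math

# Assumed local dark matter density near the solar system (GeV/c^2 per cm^3).
rho_gev_per_cm3 = 0.4
GEV_TO_GRAMS = 1.783e-24      # 1 GeV/c^2 expressed in grams
earth_radius_cm = 6.371e8     # mean Earth radius in centimeters

# Volume of a sphere the size of the Earth.
volume_cm3 = (4.0 / 3.0) * math.pi * earth_radius_cm ** 3

# Dark matter mass contained in that volume.
mass_grams = rho_gev_per_cm3 * GEV_TO_GRAMS * volume_cm3
print(f"Dark matter inside Earth's volume: ~{mass_grams:.0f} g")
```

With these assumed inputs the result comes out to a few hundred grams, the same order of magnitude as the squirrel-sized figure quoted above.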

Axions seem like a good candidate for dark matter, but since they may only interact very weakly with ordinary matter, they are extraordinarily difficult to detect. So physicists devise increasingly intricate ways to compensate for this lack of interaction in the hope of revealing the telltale signature of dark matter, which makes up over a quarter of the mass-energy of the universe.

“Our models suggest axion dark matter modulates light polarization, which is the orientation of the oscillation of electromagnetic waves,” explained Koji Nagano, a graduate student at the Institute for Cosmic Ray Research at the University of Tokyo. “This polarization modulation can be enhanced if the light is reflected back and forth many times in an optical cavity composed of two parallel mirrors facing each other. The best-known examples of these kinds of cavities are the long tunnel arms of gravitational-wave observatories.”
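
The enhancement Nagano describes can be illustrated with a toy calculation (this is not the authors’ actual analysis, and every number below is made up for illustration): if each pass between the mirrors adds a tiny axion-induced rotation to the polarization, and the light’s storage time is short compared with the axion’s oscillation period, the per-pass rotations add nearly coherently, so many reflections multiply the signal.

```python
import math

delta = 1e-12                    # assumed polarization rotation per one-way pass (rad)
omega_a = 2 * math.pi * 100.0    # assumed axion oscillation frequency (rad/s)
t_pass = 13e-6                   # one-way light travel time for a ~4 km arm (s)
n_bounces = 100                  # number of passes while the light is stored

# Each pass picks up a rotation delta * cos(omega_a * t); sum over all passes.
total = sum(delta * math.cos(omega_a * k * t_pass) for k in range(n_bounces))

print(f"single-pass rotation: {delta:.2e} rad")
print(f"{n_bounces}-pass rotation: {total:.2e} rad "
      f"(~{total / delta:.0f}x enhancement)")
```

The sum stays close to `n_bounces * delta` as long as the total storage time is much shorter than the axion oscillation period; once the storage time approaches that period, the cosine factor changes sign and the contributions begin to cancel, which is why the cavity length and the axion mass together set the detectable frequency range.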

Dark matter research does not get as much attention or funding as other more applicable areas of scientific research, so great efforts are made to find ways to make the hunt cost-effective. This is relevant as other theoretical ways to observe axions involve extremely strong magnetic fields which incur great expense. Here, researchers suggest that existing gravitational-wave observatories such as the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the USA, Virgo in Italy or KAGRA in Japan could be cheaply modified to hunt for axions without detriment to their existing functions.

“With our new scheme, we could search for axions by adding some polarization optics in front of photodiode sensors in gravitational-wave detectors,” said Michimura. “The next step I would like to see is the implementation of these optics in a gravitational-wave detector like KAGRA.”

This idea has promise because the upgrades would not reduce the sensitivity the gravitational-wave facilities rely on for their primary function, detecting distant gravitational waves. Previous experiments and observations have searched for the axion, but so far no positive signal has been found. The researchers’ proposed method would be far more precise.

“There is overwhelming astrophysical and cosmological evidence that dark matter exists, but the question ‘What is dark matter?’ is one of the biggest outstanding problems in modern physics,” said Nagano. “If we can detect axions and say for sure they are dark matter, it would be a truly exciting event indeed. It’s what physicists like us dream of.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.

Go to Source