Categories
ScienceDaily

Planet found orbiting small, cool star

Using the supersharp radio “vision” of the National Science Foundation’s continent-wide Very Long Baseline Array (VLBA), astronomers have discovered a Saturn-sized planet closely orbiting a small, cool star 35 light-years from Earth. This is the first discovery of an extrasolar planet with a radio telescope using a technique that requires extremely precise measurements of a star’s position in the sky, and only the second planet discovery for that technique and for radio telescopes.

The technique has long been known, but has proven difficult to use. It involves tracking the star’s actual motion in space, then detecting a minuscule “wobble” in that motion caused by the gravitational effect of the planet. The star and the planet orbit a location that represents the center of mass for both combined. The planet is revealed indirectly if that location, called the barycenter, is far enough from the star’s center to cause a wobble detectable by a telescope.

This technique, called the astrometric technique, is expected to be particularly good for detecting Jupiter-like planets in orbits distant from the star. That is because the wobble a planet induces in its star grows with the separation between the planet and the star, and, at a given separation, grows with the mass of the planet.
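As a rough, back-of-the-envelope illustration (not from the article), the angular size of that wobble is the planet-to-star mass ratio multiplied by the planet's orbital separation and divided by the distance to the system. A minimal sketch in Python, assuming a Saturn-mass planet and a star of roughly 0.07 solar masses (the article says only "less than a tenth the mass of our Sun"), together with the 221-day period and 35 light-year distance quoted here:

```python
# Back-of-the-envelope astrometric signature (illustration only; the stellar
# mass below is an assumption, not a value given in the article).
M_star = 0.07            # stellar mass in solar masses (assumed)
M_planet = 2.86e-4       # Saturn's mass in solar masses
P_years = 221 / 365.25   # orbital period from the article, in years
d_parsec = 35 / 3.262    # 35 light-years converted to parsecs

# Kepler's third law in solar units: a^3 = M_total * P^2, with a in AU.
a_au = ((M_star + M_planet) * P_years**2) ** (1 / 3)

# Astrometric wobble semi-amplitude: mass ratio times a/d.
# With a in AU and d in parsecs, a/d comes out directly in arcseconds.
theta_arcsec = (M_planet / M_star) * (a_au / d_parsec)

print(f"orbital radius  ~ {a_au:.2f} AU")
print(f"stellar wobble  ~ {theta_arcsec * 1e6:.0f} microarcseconds")
```

Under these assumptions the orbital radius is roughly 0.3 AU and the wobble is on the order of 100 microarcseconds, which is why sub-milliarcsecond positional precision is needed.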

Starting in June of 2018 and continuing for a year and a half, the astronomers tracked a star called TVLM 513-46546, a cool dwarf with less than a tenth the mass of our Sun. In addition, they used data from nine previous VLBA observations of the star between March 2010 and August 2011.

Extensive analysis of the data from those time periods revealed a telltale wobble in the star’s motion indicating the presence of a planet comparable in mass to Saturn, orbiting the star once every 221 days. This planet is closer to the star than Mercury is to the Sun.

Small, cool stars like TVLM 513-46546 are the most numerous stellar type in our Milky Way Galaxy, and many of them have been found to have smaller planets, comparable to Earth and Mars.

“Giant planets, like Jupiter and Saturn, are expected to be rare around small stars like this one, and the astrometric technique is best at finding Jupiter-like planets in wide orbits, so we were surprised to find a lower mass, Saturn-like planet in a relatively compact orbit. We expected to find a more massive planet, similar to Jupiter, in a wider orbit,” said Salvador Curiel, of the National Autonomous University of Mexico. “Detecting the orbital motions of this sub-Jupiter mass planetary companion in such a compact orbit was a great challenge,” he added.

More than 4,200 planets have been discovered orbiting stars other than the Sun, but the planet around TVLM 513-46546 is only the second to be found using the astrometric technique. Another, very successful method, called the radial velocity technique, also relies on the gravitational effect of the planet upon the star. That technique detects the slight acceleration of the star, either toward or away from Earth, caused by the star’s motion around the barycenter.

“Our method complements the radial velocity method which is more sensitive to planets orbiting in close orbits, while ours is more sensitive to massive planets in orbits further away from the star,” said Gisela Ortiz-Leon of the Max Planck Institute for Radio Astronomy in Germany. “Indeed, these other techniques have found only a few planets with characteristics such as planet mass, orbital size, and host star mass, similar to the planet we found. We believe that the VLBA, and the astrometry technique in general, could reveal many more similar planets.”
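To make that complementarity concrete (an illustration, not part of the article): for a circular, edge-on orbit the radial velocity signal shrinks roughly as one over the square root of the orbital separation, while the astrometric wobble grows linearly with it. A hedged sketch, reusing the same assumed masses as above:

```python
import math

# Scaling of the two signals with orbital separation (illustration only;
# the stellar and planetary masses are assumptions, as in the earlier sketch).
M_star = 0.07        # solar masses (assumed)
M_planet = 2.86e-4   # Saturn's mass in solar masses
d_parsec = 10.7      # roughly 35 light-years

def rv_semi_amplitude_m_s(a_au):
    # Circular, edge-on orbit with M_planet << M_star:
    # K = 29.78 km/s * M_planet / sqrt(M_star * a), masses in solar units, a in AU
    # (29.78 km/s is Earth's orbital speed around the Sun).
    return 29780.0 * M_planet / math.sqrt(M_star * a_au)

def astrometric_signal_uas(a_au):
    # Wobble semi-amplitude in microarcseconds: (M_planet / M_star) * (a / d).
    return (M_planet / M_star) * (a_au / d_parsec) * 1e6

for a in (0.3, 1.0, 3.0):  # orbital separations in AU
    print(f"a = {a:3.1f} AU   RV ~ {rv_semi_amplitude_m_s(a):5.1f} m/s   "
          f"astrometry ~ {astrometric_signal_uas(a):6.0f} uas")
```

The radial velocity amplitude falls as the orbit widens while the astrometric wobble grows, which is the trade-off described in the quote above.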

A third technique, called the transit method, also very successful, detects the slight dimming of the star’s light when a planet passes in front of it, as seen from Earth.

The astrometric method has been successful for detecting nearby binary star systems, and was recognized as early as the 19th Century as a potential means of discovering extrasolar planets. Over the years, a number of such discoveries were announced, then failed to survive further scrutiny. The difficulty has been that the stellar wobble produced by a planet is so small when seen from Earth that it requires extraordinary precision in the positional measurements.

“The VLBA, with antennas separated by as much as 5,000 miles, provided us with the great resolving power and extremely high precision needed for this discovery,” said Amy Mioduszewski, of the National Radio Astronomy Observatory. “In addition, improvements that have been made to the VLBA’s sensitivity gave us the data quality that made it possible to do this work now,” she added.

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

Go to Source
Author:

Categories
ProgrammableWeb

Zappar Provides SDK Access to AR Libraries

Zappar, an augmented reality platform provider, has announced SDK access to its computer vision libraries. Zappar had already helped democratize access to AR technology through web browser access, and that access is now extended by allowing developers to integrate AR into their own systems and applications. SDKs are currently available for Unity, JavaScript, Three.js, A-Frame, and C/C++.

“The release of Universal AR is a landmark moment for us,” Connell Gauld, Zappar Co-Founder and CTO, commented. “Since we first released our WebAR offering last year we have been excited about the browser as a medium for delivering AR to the widest number of users as possible, and now for the first time, content developers can utilize our computer vision in whichever platforms and creative tools they are most familiar with.”

Zappar’s SDK offering is called Universal AR. It includes all the major types of tracking: face, image, and instant world tracking. Developers can choose the desired tracking type through the SDK, with no need for third-party libraries.

The company has already seen major interest in SDK access. It has also published sample repositories and bootstrap projects to help developers get started easily. To learn more, and get started, check out the Universal AR site.

Go to Source
Author: ecarter

Categories
ScienceDaily

Researchers help give robotic arms a steady hand for surgeries

Steady hands and uninterrupted, sharp vision are critical when performing surgery on delicate structures like the brain or hair-thin blood vessels. While surgical cameras have improved what surgeons see during operative procedures, the “steady hand” remains to be enhanced — new surgical technologies, including sophisticated surgeon-guided robotic hands, cannot prevent accidental injuries when operating close to fragile tissue.

In a new study published in the January issue of the journal Scientific Reports, researchers at Texas A&M University show that by delivering small, yet perceptible buzzes of electrical currents to fingertips, users can be given an accurate perception of distance to contact. This insight enabled users to control their robotic fingers precisely enough to gently land on fragile surfaces.

The researchers said that this technique might be an effective way to help surgeons reduce inadvertent injuries during robot-assisted operative procedures.

“One of the challenges with robotic fingers is ensuring that they can be controlled precisely enough to softly land on biological tissue,” said Hangue Park, assistant professor in the Department of Electrical and Computer Engineering. “With our design, surgeons will be able to get an intuitive sense of how far their robotic fingers are from contact, information they can then use to touch fragile structures with just the right amount of force.”

Robot-assisted surgical systems, also known as telerobotic surgical systems, are physical extensions of a surgeon. By controlling robotic fingers with movements of their own fingers, surgeons can perform intricate procedures remotely, thus expanding the number of patients to whom they can provide medical attention. Also, the tiny size of the robotic fingers means that surgeries are possible with much smaller incisions, since surgeons need not make large cuts to accommodate their hands in the patient’s body during operations.

To move their robotic fingers precisely, surgeons rely on live streaming of visual information from cameras fitted on telerobotic arms. Thus, they look into monitors to match their finger movements with those of the telerobotic fingers. In this way, they know where their robotic fingers are in space and how close these fingers are to each other.

However, Park noted that just visual information is not enough to guide fine finger movements, which is critical when the fingers are in the close vicinity of the brain or other delicate tissue.

“Surgeons can only know how far apart their actual fingers are from each other indirectly, that is, by looking at where their robotic fingers are relative to each other on a monitor,” Park said. “This roundabout view diminishes their sense of how far apart their actual fingers are from each other, which then affects how they control their robotic fingers.”

To address this problem, Park and his team came up with an alternate way to deliver distance information that is independent of visual feedback. By passing different frequencies of electrical currents onto fingertips via gloves fitted with stimulation probes, the researchers were able to train users to associate the frequency of current pulses with distance; higher pulse frequencies indicated a shorter distance to a test object. They then compared whether users receiving current stimulation along with visual information about closing distance on their monitors estimated proximity better than those who received visual information alone.

Park and his team also tailored their technology according to the user’s sensitivity to electrical current frequencies. In other words, if a user was sensitive to a wider range of current frequencies, the distance information was delivered with smaller steps of increasing currents to maximize the accuracy of proximity estimation.
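The study's actual stimulation scheme is not reproduced here, but the mapping the article describes (pulse frequency rising as the robotic finger approaches contact, with the usable frequency range tailored to each user) can be sketched roughly as follows. The function name, the 20 mm working range, and the linear mapping are illustrative assumptions, not the authors' implementation:

```python
# Illustrative proximity-to-frequency mapping (assumed linear; the study's
# actual encoding and parameter values may differ).
def stimulation_frequency_hz(distance_mm: float,
                             max_distance_mm: float = 20.0,
                             f_min_hz: float = 5.0,
                             f_max_hz: float = 50.0) -> float:
    """Map the robotic finger's distance-to-contact onto a pulse frequency.

    Shorter distances give higher frequencies. f_min_hz and f_max_hz would be
    calibrated per user so the range covers only frequencies the user can
    reliably distinguish, as described in the paragraph above.
    """
    d = min(max(distance_mm, 0.0), max_distance_mm)
    closeness = 1.0 - d / max_distance_mm   # 0 when far away, 1 at contact
    return f_min_hz + closeness * (f_max_hz - f_min_hz)

# Example: the frequency ramps up over the last 20 mm before contact.
for d in (20.0, 10.0, 5.0, 1.0, 0.0):
    print(f"{d:5.1f} mm -> {stimulation_frequency_hz(d):5.1f} Hz")
```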

The researchers found that users receiving electrical pulses were more aware of the proximity to underlying surfaces and could lower their force of contact by around 70%, performing much better than the other group. Overall, they observed that proximity information delivered through mild electric pulses was about three times more effective than the visual information alone.

Park said their novel approach has the potential to significantly increase maneuverability during surgery while minimizing risks of unintended tissue damage. He also said their technique would add little to the existing mental load of surgeons during operative procedures.

“Our goal was to come up with a solution that would improve the accuracy in proximity estimation without increasing the burden of active thinking needed for this task,” he said. “When our technique is ready for use in surgical settings, physicians will be able to intuitively know how far their robotic fingers are from underlying structures, which means that they can keep their active focus on optimizing the surgical outcome of their patients.”

Story Source:

Materials provided by Texas A&M University. Original written by Vandana Suresh. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Long spaceflights affect astronaut brain volume

Extended periods in space have long been known to cause vision problems in astronauts. Now a new study in the journal Radiology suggests that the impact of long-duration space travel is more far-reaching, potentially causing brain volume changes and pituitary gland deformation.

More than half of the crew members on the International Space Station (ISS) have reported changes to their vision following long-duration exposure to the microgravity of space. Postflight evaluation has revealed swelling of the optic nerve, retinal hemorrhage and other ocular structural changes.

Scientists have hypothesized that chronic exposure to elevated intracranial pressure, or pressure inside the head, during spaceflight is a contributing factor to these changes. On Earth, the gravitational field creates a hydrostatic gradient, a pressure of fluid that progressively increases from your head down to your feet while standing or sitting. This pressure gradient is not present in space.
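For a sense of scale (a rough worked example, not a figure from the study), that head-to-foot pressure difference is approximately the fluid density times gravitational acceleration times the height of the fluid column; the density and height below are typical assumed values:

```python
# Approximate head-to-foot hydrostatic pressure difference on Earth
# (illustrative values, not measurements from the study).
rho_blood = 1060.0   # kg/m^3, approximate density of blood
g = 9.81             # m/s^2, gravitational acceleration at Earth's surface
height = 1.7         # m, assumed head-to-foot distance while standing

delta_p = rho_blood * g * height   # pressure difference in pascals
print(f"~{delta_p / 1000:.0f} kPa (~{delta_p / 133.3:.0f} mmHg)")
```

That works out to roughly 18 kPa, or about 130 mmHg; in microgravity this gradient disappears, which drives the headward fluid shift described next.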

“When you’re in microgravity, fluid such as your venous blood no longer pools toward your lower extremities but redistributes headward,” said study lead author Larry A. Kramer, M.D., from the University of Texas Health Science Center at Houston. Dr. Kramer further explained, “That movement of fluid toward your head may be one of the mechanisms causing changes we are observing in the eye and intracranial compartment.”

To find out more, Dr. Kramer and colleagues performed brain MRI on 11 astronauts (10 men and one woman) before they traveled to the ISS. The researchers followed up with MRI studies a day after the astronauts returned, and then at several intervals throughout the ensuing year.

MRI results showed that the long-duration microgravity exposure caused expansions in the astronauts’ combined brain and cerebrospinal fluid (CSF) volumes. CSF is the fluid that flows in and around the hollow spaces of the brain and spinal cord. The combined volumes remained elevated at one-year postflight, suggesting permanent alteration.

“What we identified that no one has really identified before is that there is a significant increase of volume in the brain’s white matter from preflight to postflight,” Dr. Kramer said. “White matter expansion in fact is responsible for the largest increase in combined brain and cerebrospinal fluid volumes postflight.”

MRI also showed alterations to the pituitary gland, a pea-sized structure at the base of the skull often referred to as the “master gland” because it governs the function of many other glands in the body. Most of the astronauts had MRI evidence of pituitary gland deformation suggesting elevated intracranial pressure during spaceflight.

“We found that the pituitary gland loses height and is smaller postflight than it was preflight,” Dr. Kramer said. “In addition, the dome of the pituitary gland is predominantly convex in astronauts without prior exposure to microgravity but showed evidence of flattening or concavity postflight. This type of deformation is consistent with exposure to elevated intracranial pressures.”

The researchers also observed a postflight increase in volume, on average, in the astronauts’ lateral ventricles, spaces in the brain that contain CSF. However, the overall resulting volume would not be considered outside the range of healthy adults. The changes were similar to those that occur in people who have spent long periods of bed rest with their heads tilted slightly downward in research studies simulating headward fluid shift in microgravity.

Additionally, there was increased velocity of CSF flow through the cerebral aqueduct, a narrow channel that connects the ventricles in the brain. A similar phenomenon has been seen in normal pressure hydrocephalus, a condition in which the ventricles in the brain are abnormally enlarged. Symptoms of this condition include difficulty walking, bladder control problems and dementia. To date, these symptoms have not been reported in astronauts after space travel.

The researchers are studying ways to counter the effects of microgravity. One option under consideration is the creation of artificial gravity using a large centrifuge that can spin people in either a sitting or prone position. Also under investigation is the use of negative pressure on the lower extremities as a way to counteract the headward fluid shift due to microgravity.

Dr. Kramer said the research could also have applications for non-astronauts.

“If we can better understand the mechanisms that cause ventricles to enlarge in astronauts and develop suitable countermeasures, then maybe some of these discoveries could benefit patients with normal pressure hydrocephalus and other related conditions,” he said.

Go to Source
Author:

Categories
ProgrammableWeb

Google’s Cloud Vision API Removes Gender Labels

The Google Cloud Vision API will no longer provide gender determination as part of its image analysis functionality. The announcement directly affects the API’s ‘LABEL_DETECTION’ function, which previously returned labels such as ‘woman’ or ‘man’.

The API update was announced to developers via email and then subsequently shared on Twitter by journalist Sriram Sharma. Google’s email noted that gender cannot be determined based on appearance, and that any attempt to do so could lead to the creation or reinforcement of unfair bias.

Going forward, the API will return a label such as ‘person’ in place of the previously gendered responses. The email update was sent to all Google Cloud Vision API users that had implemented the ‘LABEL_DETECTION’ function in the past six months. Additionally, the email urged developers to test the new functionality and noted the possibility that the new implementation may fail to label people correctly. Google recommends that, in the event labeling fails, developers consider other options, such as training custom models using AutoML Vision.
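For developers checking how their integrations behave after the change, a minimal label-detection request with the Python client looks roughly like the sketch below; the bucket path is a placeholder, and labels describing people should now come back as ‘person’ rather than gendered terms:

```python
from google.cloud import vision

# Minimal label-detection sketch (the gs:// URI is a hypothetical placeholder).
client = vision.ImageAnnotatorClient()
image = vision.Image()
image.source.image_uri = "gs://your-bucket/photo.jpg"

response = client.label_detection(image=image)
for label in response.label_annotations:
    # After this update, person-related labels should read "person"
    # rather than "man" or "woman".
    print(label.description, round(label.score, 2))
```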

Go to Source
Author: KevinSundstrom

Categories
Hackster.io

Smart Potato @ CES 2020

Nicolas Baldeck had a vision: to create the first artificially intelligent, wireless, brainwave-enabled smart potato. We got the lowdown at CES 2020.

// Read the interview: https://www.hackster.io/news/smart-potato-seen-at-ces-94e958ac89db
// Back the potato: https://www.indiegogo.com/projects/the-world-s-first-smart-potato-smartpotato#/

Categories
3D Printing Industry

Engineering fashion: GE Additive engineers on the haute-couture of 3D printing

In May this year, 3D printing made its red carpet debut. The vision of New York fashion designer Zac Posen, several unique fashion pieces caused a stir at the 2019 Met Gala. Worn by British supermodel Jourdan Dunn and Canadian actor Nina Dobrev, arguably the most iconic pieces in Posen’s collection were the so-called “rose gown,” […]

Go to Source
Author: Beau Jackson

Categories
ProgrammableWeb

Google Enhances Vision AI Portfolio with Object and Logo Detection

Google has announced a number of updates to its Vision AI portfolio. Specific updates include enhancements to AutoML Vision Edge, AutoML Video, and the Video Intelligence API. All three machine learning products continue to gain features that expand the range of use cases they can address.

AutoML Vision Edge can now detect objects. Edge devices such as connected sensors and cameras can now utilize AutoML Vision Edge to detect changes, anomalies, failures, and other trigger events based on object detection. AutoML Vision Edge can both classify images and detect certain objects directly on the edge device (no need to rely on core technology for decision making). Currently supported hardware includes devices built on NVIDIA and ARM chipsets, as well as Android and iOS devices, among others.

AutoML Video Intelligence can also detect objects now. Users can leverage this functionality to train models to create labels for certain content within a video. This allows users to track the movement of labeled objects from frame to frame. Specific use cases include traffic management, sports analytics, robotic navigation and other scenarios where objects move beyond the scope of a single frame.

The Video Intelligence API can now track and recognize logos. Out of the box, it recognizes over 100,000 popular business and organization logos. Logo identification adds to the existing recognition of objects, scenes, and actions. To learn more about any of these new features, visit the Vision website.
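As a rough illustration of the new logo feature, a request with the Python client would look something like the sketch below; the video URI is a placeholder, and the exact call and response shapes may differ between client library versions:

```python
from google.cloud import videointelligence

# Logo-recognition sketch (illustrative; the gs:// path is a placeholder).
client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "input_uri": "gs://your-bucket/clip.mp4",
        "features": [videointelligence.Feature.LOGO_RECOGNITION],
    }
)
result = operation.result(timeout=300)

# Each annotation names a detected logo and the segments/tracks where it appears.
for annotation in result.annotation_results[0].logo_recognition_annotations:
    print(annotation.entity.description)
```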

Go to Source
Author: ecarter

Categories
Hackster.io

FlexLED 2.0 Uses Flexible PCBs for POV Display

Persistence of vision (POV) displays have been around for quite a while; they use LED light and rapid motion to create the optical illusion of a display that can show anything from words to animated animals and people. While there are plenty of POV displays on the market, some makers prefer to design their own, including engineer Carl Bugeja, who is looking to improve on his original FlexLED. That original design featured a single LED on a flexible PCB, which was actuated back and forth using a magnet to create the POV effect.

The FlexLED 2.0 features LEDs on a flexible PCB and uses a magnet to create a persistence of vision (POV) display. (📷: Carl Bugeja)

For his FlexLED 2.0, Bugeja is creating a larger version with more LEDs, which uses a pattern of traces forming a pair of coils that interact with magnetic fields. The original used just one coil, since it only needed to produce the POV effect with a single LED. With the right amount of current, the flexible PCB flaps up and down in the presence of a neodymium magnet.

The FlexLED 2.0 features 24 RGB LEDs, along with an LED driver, an H-bridge to drive the coils, and a microcontroller that controls both and communicates via UART. Bugeja used Altium Designer to design the PCB and place all the hardware components neatly before sending it to a manufacturer. This time he also created a 3D-printed case that keeps the electronics secure and houses the magnet that actuates the flexible PCB. While the FlexLED 2.0 isn’t yet completed, you can see Bugeja’s design process in the video above.

Go to Source
Author: Cabe Atwell