Medical robotic hand? Rubbery semiconductor makes it possible

A medical robotic hand could allow doctors to more accurately diagnose and treat people from halfway around the world, but currently available technologies aren’t good enough to match the in-person experience.

Researchers report in Science Advances that they have designed and produced a smart electronic skin and a medical robotic hand capable of assessing vital diagnostic data by using a newly invented rubbery semiconductor with high carrier mobility.

Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston and corresponding author for the work, said the rubbery semiconductor material also can be easily scaled for manufacturing, based upon assembly at the interface of air and water.

That interfacial assembly and the rubbery electronic devices described in the paper suggest a pathway toward soft, stretchy rubbery electronics and integrated systems that mimic the mechanical softness of biological tissues, suitable for a variety of emerging applications, said Yu, who also is a principal investigator at the Texas Center for Superconductivity at UH.

The smart skin and medical robotic hand are just two potential applications created by the researchers to illustrate the discovery’s utility.

In addition to Yu, authors on the paper include Ying-Shi Guan, Anish Thukral, Kyoseung Sim, Xu Wang, Yongcao Zhang, Faheem Ershad, Zhoulyu Rao, Fengjiao Pan and Peng Wang, all of whom are affiliated with UH. Co-authors Jianliang Xiao and Shun Zhang are affiliated with the University of Colorado.

Traditional semiconductors are brittle, and using them in otherwise stretchable electronics has required special mechanical accommodations. Previous stretchable semiconductors have had drawbacks of their own, including low carrier mobility — the speed at which charge carriers can move through a material — and complicated fabrication requirements.

Yu and collaborators last year reported that adding minute amounts of metallic carbon nanotubes to the rubbery semiconductor, a composite of P3HT (poly(3-hexylthiophene)) and polydimethylsiloxane, improves carrier mobility, which governs the performance of semiconductor transistors.

Yu said the new scalable manufacturing method for these high performance stretchable semiconducting nanofilms and the development of fully rubbery transistors represent a significant step forward.

The production is simple, he said. A commercially available semiconductor material is dissolved in a solution and dropped on water, where it spreads; the chemical solvent evaporates from the solution, resulting in improved semiconductor properties.

It is a new way to create high-quality composite films, he said, allowing for consistent production of fully rubbery semiconductors.

Electrical performance is retained even when the semiconductor is stretched by 50%, the researchers reported. Yu said the ability to stretch the rubbery electronics by 50% without degrading the performance is a notable advance. Human skin, he said, can be stretched only about 30% without tearing.

Story Source:

Materials provided by University of Houston. Original written by Jeannie Kever. Note: Content may be edited for style and length.



Assembling the NVIDIA Jetson Nano “JetBot”

We don’t always have exactly what we need on hand, but that won’t stop us from building our little AI robot! Based on the NVIDIA Jetson Nano, the JetBot is a useful platform for developing your own AI applications.



From 3D to 2D and back: Reversible conversion of lipid spheres into ultra-thin sheets

An astonishing number of recent technological advances and novel engineering applications go hand in hand with progress in the field of materials science. The design and manipulation of materials at the nanoscale (that is, on the order of billionths of a meter) has become a hot topic. In particular, nanosheets, which are ultra-thin 2D planar structures with a surface ranging from several micrometers to millimeters, have recently attracted much attention because of their outstanding mechanical, electrical, and optical properties. For example, organic nanosheets have great potential as biomedical or biotechnological tools, while inorganic nanosheets could be useful for energy storage and harvesting.

But what if we could go from a 2D nanosheet structure to a molecular 3D structure in a controllable and reversible way? Scientists from Tokyo Tech and The University of Tokyo have conducted a study on such a reversible 2D-3D conversion process, motivated by its potential applications. In their study, published in Advanced Materials, they first focused on converting spherical lipid vesicles (bubble-like structures) into 2D nanosheets through the cooperative action of two compounds: a membrane-disruptive acidic peptide called E5 and a cationic copolymer called poly(allylamine)-graft-dextran (or PAA-g-Dex, for short). They then attempted to revert the lipid nanosheets back to their 3D vesicle form by modifying specific conditions, such as pH, or using an enzyme, and found that the reaction was reversible.

Thus, through various experiments, the scientists elucidated the mechanisms and molecular interactions that make this reversible conversion possible. In aqueous media, planar lipid bilayers tend to be unstable because some of their hydrophobic (water-repelling) tails are exposed on the edges, leading to the formation of vesicles, which are much more stable. However, peptide E5, when folded into a helical structure with the aid of PAA-g-Dex, can disrupt the membrane of these vesicles to form 2D nanosheets. This pair of compounds combines into a belt-like structure on the edges of the nanosheets, in a process that is key to stabilizing them. Professor Atsushi Maruyama, who led this research, explains: “In the sheet structures observed in the presence of E5 and PAA-g-Dex, the assembly of E5 and the copolymer at the sheet edges likely prevents the exposure of the hydrophobic edges to the water phase, thus stabilizing the nanosheets.” The sheets can be converted back to spherical vesicles by disrupting the belt-like structure. This can be done by, for example, adding the sodium salt of poly(vinylsulfonic acid), which alters the helical shape of E5.

The scientists’ experiments showed that the nanosheet is very stable, flexible, and thin, properties that are valuable in biomembrane studies and applications. For instance, the 2D-3D conversion process can be used to encapsulate molecules, such as drugs, in the vesicles by converting them into sheets and then back into spheres. “Lipid vesicles are used for both basic studies and practical applications in pharmaceutical, food, and cosmetic sciences. The ability to control the formation of nanosheets and vesicles will be useful in these fields,” concludes Prof. Maruyama. Undoubtedly, improving our ability to manipulate the nanoscopic world will bring about positive macroscopic changes to our lives.

Story Source:

Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.


IEEE Spectrum

An Internet of Tires? Pirelli Marries 5G And Automobile Wheels

The term “5G” typically makes people think of the smartphone in their hand, not the tires on their car. But Pirelli has developed the Cyber Tire, a smart tire that reads the road surface and transmits key data — including the potential risk of hydroplaning — along a 5G communications network. 

Pirelli demonstrated its Cyber Tire (also known as the Cyber Tyre) at a conference hosted by the 5G Automotive Association, atop the architect Renzo Piano’s reworking of the landmark Lingotto Building in Turin, Italy. That’s the former Fiat factory where classic models such as the Torpedo and 500 (the latter known as Topolino, or “Little Mouse”) barrelled around its banked, three-quarter-mile rooftop test track beginning in the 1920s. Engineers of that era, of course, couldn’t begin to fathom how digital technology would transform automobiles, let alone the revolution in tires that has dramatically boosted their performance, durability and safety.

Using an Audi A8 as its test car, Pirelli’s network-enabled tires sent real-time warnings of slippery conditions to a following Audi Q8, taking advantage of the ultra-high bandwidth and low latency of 5G. Corrado Rocca, head of Cyber R&D for Pirelli, said that an accelerometer mounted within the tire itself—rather than the wheel rims that send familiar tire-pressure readouts in many modern cars—precisely measures handling forces along three axes. That includes the ability to sense water, ice or other low-coefficient of friction roadway conditions.

The sensor data can be used to the immediate benefit of safety and autonomous systems onboard a car. It can also be used in the growing realm of vehicle-to-vehicle (V2V) or vehicle-to-x (V2X) communications, which means the once-humble tire could become a critical player in a wider ecosystem of networked safety and traffic management. Obvious scenarios include a car on the freeway that suddenly encounters ice, with tires that instantly send visual or audio hazard warnings not only to that car but also to nearby vehicles and pedestrians, as well as to networked roadway signs that announce the potential danger, or adjust prevailing speed limits accordingly.

“No other element of a car is as connected to the road as the tire,” Rocca reminds us. “There are many modern sensors: lidar, sonar, cameras, but nothing on the ‘touching’ side of the car.”

Virtually every new car is equipped with anti-lock brakes (ABS) and electronic stability control (ESC) systems, which also spring into action when a car’s wheels begin to slip, or when a car begins to slide off the driver’s intended course. But the Cyber Tire could further improve those systems, Rocca said, allowing a car to proactively adjust those safety systems, or automatically slow itself down in response to changing roadway conditions. 

“Because we’re sensing the ground constantly, we can warn of the risk of hydroplaning well before you lose control,” Rocca says. “The warning could appear on a screen, or the car could automatically decide to correct it with ABS or ESC.” 
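Pirelli has not published the Cyber Tire’s detection logic, so the following Python sketch is purely illustrative: the threshold values, field names, and message shape are all assumptions, meant only to show how a low-grip estimate from the in-tire sensor could gate a V2X-style warning of the kind Rocca describes.

```python
from typing import Optional

# Illustrative values only; Pirelli's real thresholds are not public.
LOW_GRIP_THRESHOLD = 0.3   # assumed friction-coefficient cutoff
MIN_RISK_SPEED_KPH = 60.0  # assumed speed below which hydroplaning risk is low

def hazard_message(estimated_friction: float, speed_kph: float) -> Optional[dict]:
    """Return a V2X-style hazard payload when estimated grip is low at speed,
    or None when conditions look safe. Estimating friction itself (from the
    in-tire accelerometer) is outside the scope of this sketch."""
    if estimated_friction < LOW_GRIP_THRESHOLD and speed_kph > MIN_RISK_SPEED_KPH:
        return {
            "type": "HYDROPLANING_RISK",
            "estimated_friction": round(estimated_friction, 2),
            "speed_kph": speed_kph,
        }
    return None

print(hazard_message(0.2, 110.0))  # warning payload
print(hazard_message(0.9, 110.0))  # None: plenty of grip
```

In a real system, a payload like this would feed both the car’s own ABS/ESC logic and the broadcast side of the V2X network.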

Aside from data on dynamic loads, the Cyber Tire’s internal sensor might also communicate in-car information specific to that tire model, or the kilometers of travel it has absorbed. 

Pirelli is also developing the technology for race circuits and driving enthusiasts, with its Italia Track Adrenaline tire. With tire temperatures dramatically affecting traction, wear, and safety, this version monitors temperature, pressure, and handling forces in real time. That combines with onboard GPS and telemetry data to help drivers improve their on-track skills. The system could deliver simple real-time instructions, such as color-coded screen readouts as a tire rises to or beyond its optimal operating temperature, or, using popular telemetry tools, a granular analysis of the tire’s performance after a lapping session. (At the highest levels of Formula One racing, cars are equipped with roughly 140 sensors, which collect 20 to 30 megabytes of telemetry data every lap.)
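The article doesn’t give the actual temperature bands behind those color-coded readouts, so the bands in this small Python sketch are invented; it only illustrates the kind of threshold-to-color mapping such a display implies.

```python
def temp_color(temp_c: float) -> str:
    """Map a tire temperature to a color-coded readout.
    The band boundaries are invented for illustration; the real
    optimal window varies by tire compound and conditions."""
    if temp_c < 70:
        return "blue"    # below the operating window: reduced grip
    if temp_c <= 100:
        return "green"   # within the assumed optimal window
    if temp_c <= 120:
        return "yellow"  # running hot: accelerated wear
    return "red"         # well beyond optimal: back off

print(temp_color(85))   # green
print(temp_color(130))  # red
```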

With 5G, V2V and V2X systems still in the development phase, Pirelli can’t say when its sensor-enabled hunks of rubber will reach the market. Automakers ultimately lead the adoption of new tire technology, and many are leery of new tech until they’re sure consumers will pay for it. Car companies are also cautious about ceding the networked space in their cars to outside suppliers—witness their glacial, grudging adoption of Apple CarPlay and Android Auto. But Pirelli says it’s working with major automakers on integrating the technology. And Rocca says that, like ABS in its nascent stages, smart tires could become common on vehicles within a decade. It’s almost enough to get us wishing for a winter storm to try them out.

IEEE Spectrum

Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong

Editor’s note: This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House.

AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.

Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”

Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.

Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:

Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.

Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.

No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.

Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled…. This new danger…is certainly something which can give us anxiety.

Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.

Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.

Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.

Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.

Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.

This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.

The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030,” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”

To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.

What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.

If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.

For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.

Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.

Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:

At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.

Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.

The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:

Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.

And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:

If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.

The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.

The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.

Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.

Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:

AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.

Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:

There is no reason for AIs to have self-preservation instincts, jealousy, etc…. AIs will not have these destructive “emotions” unless we build these emotions into them.

Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.

A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:

Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.

By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
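As a minimal illustration of that point (not taken from the article), here is a toy optimizer in Python: the selection algorithm is identical whichever objective the reward function happens to encode, which is the orthogonality thesis in miniature.

```python
from typing import Callable, List

def greedy_agent(actions: List[str], reward: Callable[[str], float]) -> str:
    """Choose the action the supplied reward function rates highest.
    Nothing in this code depends on what the reward actually measures."""
    return max(actions, key=reward)

actions = ["make_paperclips", "compute_pi_digits", "write_poetry"]

# Two arbitrary, interchangeable objectives:
def paperclip_reward(a: str) -> float:
    return 1.0 if a == "make_paperclips" else 0.0

def pi_reward(a: str) -> float:
    return 1.0 if a == "compute_pi_digits" else 0.0

print(greedy_agent(actions, paperclip_reward))  # make_paperclips
print(greedy_agent(actions, pi_reward))         # compute_pi_digits
```

Real reinforcement learning systems are vastly more sophisticated, but they share this structure: the optimizer is generic, and the objective is whatever signal it is handed.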

The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”

Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.

In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.

Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.

This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”

About the Author

Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. This month, Viking Press is publishing Russell’s new book, Human Compatible: Artificial Intelligence and the Problem of Control, on which this article is based. He is also active in the movement against autonomous weapons, and he instigated the production of the highly viewed 2017 video Slaughterbots.


Using WebUSB with Arduino and TinyUSB

Adafruit recently uploaded a handy guide on how to use Google’s WebUSB API with the TinyUSB stack and Arduino-based projects and how to interact with them in Chrome. Google’s WebUSB platform is a JavaScript programming interface that allows web pages to interact with USB devices. The TinyUSB library enables Arduino-based development boards to appear as USB devices, making it easy to connect them to browsers through the WebUSB standard.

The guide shows users how to use Google’s WebUSB with TinyUSB/Arduino projects and interact with them in a web browser. (📷: Adafruit)

“Adafruit has worked to ensure TinyUSB works with WebUSB. Together, they allow Adafruit and compatible microcontrollers to work with WebUSB browsers like Chrome with no drivers on the host computer / tablet / phone /Chromebook. Super simple and this works well in environments like schools.”

The guide lays out the necessary computer and mobile hardware that’s compatible with the project, which includes Chromebooks and PCs that run Windows 10, macOS, and Linux, along with both Android and iPhone smartphones. Of course, all of those require a USB port (Apple devices with a Lightning port will need an adapter).

Adafruit tested several boards for compatibility with the platform, but most seem to be limited to those outfitted with Microchip’s SAM D21 and SAM D51 microcontrollers, as well as Nordic’s nRF52840 SoCs. For the guide, Adafruit chose the Circuit Playground Express, as it packs a host of hardware, including sensors, buttons, NeoPixel LEDs, and USB connectivity.

The rest of the guide provides instructions on installing the Arduino IDE, libraries, the TinyUSB stack, and interfacing with WebUSB. The write-up provides an in-depth walkthrough from start to finish, and even those who are new to Arduino boards and coding will be able to complete the project example.

Author: Cabe Atwell


Electronic glove offers ‘humanlike’ features for prosthetic hand users

People with hand amputations face difficult daily challenges, often leading to lifelong use of prosthetic hands and services.

An electronic glove, or e-glove, developed by Purdue University researchers can be worn over a prosthetic hand to provide humanlike softness, warmth, appearance and sensory perception, such as the ability to sense pressure, temperature and hydration. The technology is published in the Aug. 30 edition of NPG Asia Materials.

While a conventional prosthetic hand helps restore mobility, the new e-glove advances the technology by offering realistic, humanlike features in daily activities and life roles, with the potential to improve users’ mental health and well-being by helping them integrate more naturally into social contexts.

The e-glove uses thin, flexible electronic sensors and miniaturized silicon-based circuit chips on a commercially available nitrile glove. It is connected to a specially designed wristwatch that displays sensory data in real time and transmits it remotely to the user for post-processing.
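This summary does not describe the paper’s actual data format, so the field names and units in the following Python sketch are assumptions; it only illustrates the idea of bundling multimodal readings (pressure, temperature, humidity, biosignals) for display and remote transmission.

```python
import json
import time

def make_packet(pressure_kpa: float, temperature_c: float,
                humidity_pct: float, biosignal_uv: float) -> dict:
    """Bundle one multimodal e-glove reading for the wristwatch display
    and for remote transmission. Field names are illustrative only."""
    return {
        "timestamp": time.time(),
        "pressure_kpa": pressure_kpa,
        "temperature_c": temperature_c,
        "humidity_pct": humidity_pct,
        "biosignal_uv": biosignal_uv,  # electrophysiological biosignal
    }

packet = make_packet(12.5, 33.1, 40.0, 85.0)
payload = json.dumps(packet)  # serialized for transmission to the watch
print(payload)
```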

Chi Hwan Lee, an assistant professor in Purdue’s College of Engineering, in collaboration with other researchers at Purdue, the University of Georgia and the University of Texas, worked on the development of the e-glove technology.

“We developed a novel concept of the soft-packaged, sensor-instrumented e-glove built on a commercial nitrile glove, allowing it to seamlessly fit on arbitrary hand shapes,” Lee said. “The e-glove is configured with a stretchable form of multimodal sensors to collect various information such as pressure, temperature, humidity and electrophysiological biosignals, while simultaneously providing realistic human hand-like softness, appearance and even warmth.”

Lee and his team hope that the appearance and capabilities of the e-glove will improve the well-being of prosthetic hand users by allowing them to feel more comfortable in social contexts. The glove is available in a range of skin tones and has lifelike fingerprints and artificial fingernails.

“The prospective end user could be any prosthetic hand users who have felt uncomfortable wearing current prosthetic hands, especially in many social contexts,” Lee said.

The fabrication process of the e-glove is cost-effective and scalable to high-volume manufacturing, making it an affordable option, unlike other emerging technologies that embed mind, voice, or muscle control within the prosthetic at high cost. Those technologies also do not provide the humanlike features that the e-glove offers.

Lee and Min Ku Kim, an engineering doctoral student at Purdue and a co-author on the paper, have worked to patent the technology with the Purdue Research Foundation Office of Technology Commercialization. The team is seeking partners to collaborate in clinical trials or experts in the prosthetics field to validate the use of the e-glove and to continue optimizing the design of the glove.

A video about the technology is available at

Story Source:

Materials provided by Purdue University. Note: Content may be edited for style and length.
