With ultracold chemistry, researchers get first look at exactly what happens during a chemical reaction

The coldest chemical reaction in the known universe took place in what appears to be a chaotic mess of lasers. The appearance deceives: Deep within that painstakingly organized chaos, in temperatures millions of times colder than interstellar space, Kang-Kuen Ni achieved a feat of precision. Forcing two ultracold molecules to meet and react, she broke and formed the coldest bonds in the history of molecular couplings.

“Probably in the next couple of years, we are the only lab that can do this,” said Ming-Guang Hu, a postdoctoral scholar in the Ni lab and first author on their paper published today in Science. Five years ago, Ni, the Morris Kahn Associate Professor of Chemistry and Chemical Biology and a pioneer of ultracold chemistry, set out to build a new apparatus that could achieve the lowest temperature chemical reactions of any currently available technology. But they couldn’t be sure their intricate engineering would work.

Now, they not only performed the coldest reaction yet, they also discovered their new apparatus can do something even they did not predict. In such intense cold — 500 nanokelvin, or just half a millionth of a degree above absolute zero — their molecules slowed to such glacial speeds that Ni and her team could see something no one has been able to see before: the moment when two molecules meet to form two new molecules. In essence, they captured a chemical reaction in its most critical and elusive act.

Chemical reactions are responsible for literally everything: breathing, cooking, digesting, creating energy, pharmaceuticals, and household products like soap. So, understanding how they work at a fundamental level could help researchers design combinations the world has never seen. With an almost infinite number of new combinations possible, these new molecules could have endless applications from more efficient energy production to new materials like mold-proof walls and even better building blocks for quantum computers.

In her previous work, Ni used colder and colder temperatures to work this chemical magic: forging molecules from atoms that would otherwise never react. Cooled to such extremes, atoms and molecules slow to a quantum crawl, their lowest possible energy state. There, Ni can manipulate molecular interactions with utmost precision. But even she could only see the start of her reactions: two molecules go in, but then what? What happened in the middle and the end was a black hole only theories could try to explain.

Chemical reactions occur in just millionths of a billionth of a second, better known in the scientific world as femtoseconds. Even today’s most sophisticated technology can’t capture something so short-lived, though some come close. In the last twenty years, scientists have used ultra-fast lasers like fast-action cameras, snapping rapid images of reactions as they occur. But they can’t capture the whole picture. “Most of the time,” Ni said, “you just see that the reactants disappear and the products appear in a time that you can measure. There was no direct measurement of what actually happened in these chemical reactions.” Until now.

Ni’s ultracold temperatures force reactions to a comparatively numbed speed. “Because [the molecules] are so cold,” Ni said, “now we kind of have a bottleneck effect.” When she and her team reacted two potassium-rubidium (KRb) molecules — chosen for their pliability — the ultracold temperatures forced the molecules to linger in the intermediate stage for microseconds. Microseconds — mere millionths of a second — may seem short, but that’s millions of times longer than usual and long enough for Ni and her team to investigate the phase when bonds break and form, in essence, how one molecule turns into another.

With this intimate vision, Ni said she and her team can test theories that predict what happens in a reaction’s black hole to confirm if they got it right. Then, her team can craft new theories, using actual data to more precisely predict what happens during other chemical reactions, even those that take place in the mysterious quantum realm.

Already, the team is exploring what else they can learn in their ultracold test bed. Next, for example, they could manipulate the reactants, exciting them before they react to see how their heightened energy impacts the outcome. Or, they could even influence the reaction as it occurs, nudging one molecule or the other. “With our controllability, this time window is long enough, we can probe,” Hu said. “Now, with this apparatus, we can think about this. Without this technique, without this paper, we cannot even think about this.”

Story Source:

Materials provided by Harvard University. Original written by Caitlin McDermott-Murphy. Note: Content may be edited for style and length.



Virtual human hand simulation holds promise for prosthetics

Whatever our hands do — reaching, grabbing or manipulating objects — it always appears simple. Yet our hands are among the most complicated, and important, parts of the body.

Despite this, little is understood about the complexity of the hand’s underlying anatomy and, as such, animating human hands has long been considered one of the most challenging problems in computer graphics.

That’s because it has been impossible to capture the internal movement of the hand in motion — until now.

Using magnetic resonance imaging (MRI) and a technique inspired by the visual effects industry, a team of USC researchers, comprising two computer scientists and a radiologist, has developed the world’s most realistic model of the human hand’s musculoskeletal system in motion.

The musculoskeletal system includes muscles, bones, tendons and joints. The breakthrough has implications not only for computer graphics, but also prosthetics, medical education, robotics and virtual reality.

“The hand is very complicated, but prior to this work, nobody had built a precise computational model for how anatomical structures inside the hand actually move as it is articulated,” said study co-author Jernej Barbic, an Andrew and Erna Viterbi Early Career Chair and Associate Professor of Computer Science.

Designing better prosthetics

To tackle this problem, Barbic, a computer animation and physically-based simulation expert, and his PhD student, Bohan Wang, the study’s lead author, teamed up with George Matcuk, MD, an associate professor of clinical radiology at Keck School of Medicine of USC. The result: the most precise anatomically based model of the hand in motion.

“This is currently the most accurate hand animation model available and the first to combine laser scanning of the hand’s surface features and to incorporate an underlying bone rigging model based on MRI,” said Matcuk.

In addition to creating more realistic hands for computer games and CGI movies, where hands are often exposed, this system could also be used in prosthetics, to design better fingers and hand prostheses.

“Understanding the motion of internal hand anatomy opens the door for biologically-inspired robotic hands that look and behave like real hands,” said Barbic.

“In the not-so-distant future, the work may contribute to the development of anatomically realistic hands and improved hand prosthetics.”

The study, titled Hand Modelling and Simulation using Stabilized Magnetic Resonance Imaging, was presented at ACM SIGGRAPH.

A long-standing challenge

To improve realism, virtual hands should be modeled similarly to biological hands, which requires building precise anatomical and kinematic models of real human hands. But we still know surprisingly little about how bones and muscles move inside the hand.

One of the reasons is that, until now, there have been no methods to systematically acquire the motion of internal hand anatomy. Although MRI scanners can provide anatomical details, a previously unaddressed practical challenge exists: the hand must be kept perfectly still in the scanner for around 10 minutes.

“Holding the hand still in a fixed pose for 10 minutes is practically impossible,” said Barbic. “A fist is easier to hold steady, but try semi-closing your hand and you’ll find you start to shake after about a minute or two. You can’t hold it still for 10 minutes.”

To overcome this challenge, the researchers developed a manufacturing process using lifecasting materials from the special effects industry to stabilize the hand during the MRI scanning process. Lifecasting involves making a mold of the human form and then reproducing it in various media, including plastic or silicone.

Barbic, who worked on the Oscar-nominated film The Hobbit: The Desolation of Smaug, landed on the idea after seeing an inexpensive hand-cloning product in a visual effects store in Los Angeles while working on a previous project. “That was the eureka moment,” said Barbic, who has long pondered a solution for creating more realistic virtual human hands.

First, the team used the lifecasting material to create a plastic replica of the model’s hand. This replica captured extremely detailed features, down to individual pores and tiny lines on the hand’s surface, which were then scanned using a laser scanner.

Then, the lifecasting process was used again, this time on the plastic hand, to create a negative 3D mold of the hand out of a rubber-like elastic material. The mold stabilizes the hand in the required pose. The mold was cut in two parts, and then the subject placed their real hand into the mold for MRI scanning.


With assistance from radiology expert Matcuk, a practicing medical doctor at USC, the hand was then scanned by the MRI scanner for 10 minutes. This procedure was repeated 12 times, each time in a different hand pose. Two subjects, one male and one female, were captured in this way. Now, for every pose, the researchers knew exactly where the bones, muscles and tendons were positioned.

After discussing the anatomical features of the MRI scans with Matcuk, Barbic and Wang set to work building a data-driven skeleton kinematic model that captures complex real-world rotations and translations of bones in any pose.

They then added soft tissue simulation, using the finite element method (FEM) to compute the motion of the hand’s muscles, tendons and fat tissue, consistent with the bone motion. This model, combined with the surface detail, allowed them to create a highly realistic moving hand. The hand can be animated in any motion, even movements very different from the captured poses.

Going forward

The team, which recently received a grant from the National Science Foundation to take their work to the next stage, plans to build a public dataset of multi-pose hand MRI scans for 10 subjects over the next three years. This will be the first dataset of its kind and will enable researchers from around the world to better simulate, model and re-create human hands. The team also plans to integrate the research into education, to train PhD students at USC and for K-12 outreach programs.

“As we refine this work, I think this could be an excellent teaching tool for my students and other doctors who need an understanding of the complex anatomy and biomechanics of the hand,” said Matcuk.

The team is currently working on adding better awareness of muscles and tendons into the model and making it real-time. Right now, it takes the computer about an hour to create a minute-long simulation. Barbic and Wang hope to make the system faster, without losing quality.



First systematic review and meta-analysis suggests artificial intelligence may be as effective as health professionals at diagnosing disease

Artificial intelligence (AI) appears to detect diseases from medical imaging with similar levels of accuracy as health-care professionals, according to the first systematic review and meta-analysis synthesising all the available evidence from the scientific literature, published in The Lancet Digital Health journal.

Nevertheless, only a few studies were of sufficient quality to be included in the analysis, and the authors caution that the true diagnostic power of the AI technique known as deep learning — the use of algorithms, big data, and computing power to emulate human learning and intelligence — remains uncertain because of the lack of studies that directly compare the performance of humans and machines, or that validate AI’s performance in real clinical environments.

“We reviewed over 20,500 articles, but less than 1% of these were sufficiently robust in their design and reporting that independent reviewers had high confidence in their claims. What’s more, only 25 studies validated the AI models externally (using medical images from a different population), and just 14 studies actually compared the performance of AI and health professionals using the same test sample,” explains Professor Alastair Denniston from University Hospitals Birmingham NHS Foundation Trust, UK, who led the research. 

“Within those handful of high-quality studies, we found that deep learning could indeed detect diseases ranging from cancers to eye diseases as accurately as health professionals. But it’s important to note that AI did not substantially out-perform human diagnosis.” 

With deep learning, computers can examine thousands of medical images to identify patterns of disease. This offers enormous potential for improving the accuracy and speed of diagnosis. Reports of deep learning models outperforming humans in diagnostic testing have generated much excitement and debate, and more than 30 AI algorithms for healthcare have already been approved by the US Food and Drug Administration.

Despite strong public interest and market forces driving the rapid development of these technologies, concerns have been raised about whether study designs are biased in favour of machine learning, and the degree to which the findings are applicable to real-world clinical practice.

To provide more evidence, researchers conducted a systematic review and meta-analysis of all studies comparing the performance of deep learning models and health professionals in detecting diseases from medical imaging published between January 2012 and June 2019. They also evaluated study design, reporting, and clinical value.

In total, 82 articles were included in the systematic review. Data were analysed for 69 articles which contained enough data to calculate test performance accurately. Pooled estimates from 25 articles that validated the results in an independent subset of images were included in the meta-analysis.

Analysis of data from 14 studies comparing the performance of deep learning with humans in the same sample found that, at best, deep learning algorithms correctly detected disease in 87% of cases (sensitivity), compared with 86% achieved by health-care professionals.

The ability to accurately exclude patients who don’t have disease was also similar for deep learning algorithms (93% specificity) compared to health-care professionals (91%).
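The two headline figures are standard diagnostic-accuracy metrics computed from a confusion matrix. As a minimal sketch of how they are defined (the counts below are hypothetical round numbers chosen to reproduce the pooled percentages, not data from the study):

```python
# Sensitivity and specificity -- the two metrics pooled in the meta-analysis
# (87% sensitivity and 93% specificity for the deep learning models).

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of diseased cases correctly detected (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of healthy cases correctly excluded (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical test sample: 1,000 diseased and 1,000 healthy images.
tp, fn = 870, 130   # model flags 870 of the 1,000 diseased cases
tn, fp = 930, 70    # model clears 930 of the 1,000 healthy cases

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 87%
print(f"specificity = {specificity(tn, fp):.0%}")  # 93%
```

The trade-off between the two is why the authors stress comparisons on the same test sample: a model can trivially raise one metric at the expense of the other by shifting its decision threshold.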

Importantly, the authors note several limitations in the methodology and reporting of AI-diagnostic studies included in the analysis. Deep learning was frequently assessed in isolation in a way that does not reflect clinical practice. For example, only four studies provided health professionals with additional clinical information that they would normally use to make a diagnosis in clinical practice. Additionally, few prospective studies were done in real clinical environments, and the authors say that to determine diagnostic accuracy requires high-quality comparisons in patients, not just datasets. Poor reporting was also common, with most studies not reporting missing data, which limits the conclusions that can be drawn.

“There is an inherent tension between the desire to use new, potentially life-saving diagnostics and the imperative to develop high-quality evidence in a way that can benefit patients and health systems in clinical practice,” says Dr Xiaoxuan Liu from the University of Birmingham, UK. “A key lesson from our work is that in AI — as with any other part of healthcare — good study design matters. Without it, you can easily introduce bias which skews your results. These biases can lead to exaggerated claims of good performance for AI tools which do not translate into the real world. Good design and reporting of these studies is a key part of ensuring that the AI interventions that come through to patients are safe and effective.”

“Evidence on how AI algorithms will change patient outcomes needs to come from comparisons with alternative diagnostic tests in randomised controlled trials,” adds Dr Livia Faes from Moorfields Eye Hospital, London. “So far, there are hardly any such trials where diagnostic decisions made by an AI algorithm are acted upon to see what then happens to outcomes which really matter to patients, like timely treatment, time to discharge from hospital, or even survival rates.”

Writing in a linked Comment, Dr Tessa Cook from the University of Pennsylvania, USA, discusses whether AI can be effectively compared to the human physician working in the real world, where data are “messy, elusive, and imperfect.” She writes: “Perhaps the better conclusion is that, in the narrow public body of work comparing AI to human physicians, AI is no worse than humans, but the data are sparse and it may be too soon to tell.”

Story Source:

Materials provided by The Lancet. Note: Content may be edited for style and length.



Bird droppings defy expectations

For every question about bird poop, uric acid appears to be the answer.

Why are bird droppings so hard to remove from buildings? Uric acid.

Why are they white and pasty? Uric acid.

Why are they corrosive to car paint and metal structures? Uric acid.

These answers are based on the prevailing wisdom that ranks uric acid as the primary ingredient in bird “poop,” which is composed mostly of urine. (Birds release both solid and liquid waste at the same time; the white substance is the urine.)

But according to Nick Crouch, a scientist at The University of Texas at Austin, uric acid can’t be the answer. That’s because there is no uric acid in excreted bird urine.

And after analyzing the excretions from six different bird species — from the Great Horned Owl to the humble chicken — he’s pretty positive of that statement.

“It was easy to tell that what we had was not uric acid,” Crouch said.

The results were published in the Journal of Ornithology in August 2019. The study’s co-authors are Julia Clarke, a professor at the Jackson School of Geosciences, where Crouch is currently a postdoctoral researcher, and Vincent Lynch, a chemist and research scientist at the UT College of Natural Sciences.

Crouch studies bird evolution and biodiversity — the chemistry of bird waste is not his usual research wheelhouse. However, Crouch decided to investigate the uric acid question after a conversation in 2018 with the late Jackson School Professor Bob Folk, who claimed that bird waste didn’t contain uric acid.

“Sometimes you just get presented with a really weird question and you want to know the answer,” Crouch said. “That was this — I had no idea if [Folk] was right or wrong beforehand, but I was really interested to have a look.”

Folk had looked into the question himself in the 1960s and found no sign of the substance in samples collected from 17 species.

“Bob Folk was a creative and boundary-pushing scientist who primarily was interested in rocks,” Clarke said. “It is a testament to his limitless creativity that he took on what he referred to as his ‘bird paper.'”

Folk published a paper in 1969 describing the X-ray diffraction workup and solubility tests that comprised his analysis. But his work was challenged by a 1971 paper that found evidence for uric acid in waste from Budgies, a type of parrot, using the same sort of X-ray diffraction analysis used by Folk.

Crouch thought that running the analyses again using modern technology could help settle the question. Although X-ray diffraction hasn’t changed much over the past 50 years, the technology for analyzing its results — which consist of distinctive scattering patterns created when X-rays are deflected by different chemicals present in a substance — has become much more accurate and accessible over the decades.
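The core arithmetic behind X-ray diffraction has not changed since Folk’s day: peak angles are converted to interplanar spacings via Bragg’s law, nλ = 2d sin θ, and the spacings are matched against reference patterns for candidate compounds. A minimal sketch of that conversion (this is the textbook relation, not the study’s actual analysis pipeline, and the peak angles below are hypothetical):

```python
import math

# Bragg's law: n * lambda = 2 * d * sin(theta), so each diffraction peak at
# angle 2-theta corresponds to an interplanar spacing d in the crystal.
WAVELENGTH = 1.5406  # Cu K-alpha X-ray wavelength, in angstroms (a common lab source)

def d_spacing(two_theta_deg: float, n: int = 1) -> float:
    """Interplanar spacing (angstroms) for a peak at the given 2-theta angle."""
    theta = math.radians(two_theta_deg / 2)
    return n * WAVELENGTH / (2 * math.sin(theta))

# Hypothetical peak positions (degrees 2-theta) from a powder scan:
for peak in (20.0, 27.5, 31.2):
    print(f"2-theta = {peak:5.1f} deg  ->  d = {d_spacing(peak):.3f} angstroms")
```

Identifying a compound then amounts to checking whether the measured set of d-spacings and intensities matches a known reference pattern — which is where modern software has made the technique far more accurate than in 1969.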

As for the samples themselves, most came fresh from birds kept at the Austin Zoo, while the chicken waste sample came from a backyard flock owned by Crouch’s neighbors. Altogether, the samples covered a good swath of bird diversity — including species from the three major groupings of birds, a variety of diets and flightless species. But none of the samples produced an X-ray diffraction pattern consistent with uric acid. The analysis found ammonium urate, struvite and two unknown compounds.

Based on findings from other research, Crouch said that the substances are probably the result of bacteria inside the bird’s gut breaking down uric acid before it is excreted. Research conducted by other scientists has identified a diverse array of bacteria inside the digestive organs of birds that do just that.

Sushma Reddy, an associate professor and the Breckenridge Chair of Ornithology at the University of Minnesota, said she was surprised by the research findings and thinks they will spur more research into bird physiology.

“It goes against the old doctrine that we learn,” Reddy said. “It’s pretty incredible that we live in this time where we can reanalyze with incredible technologies these things that we took for granted.”

Crouch said that this research opens the door to new research questions, from the power of the bird microbiome to identifying the two unknown substances. He said that most of all, it shows the value of taking the time to question conventional wisdom.

“I had no idea I was going to work on bird pee,” Crouch said, “but I find myself with so many new questions about the avian microbiome, which shows how our research can take us in unexpected and exciting directions.”



Engineers develop ‘blackest black’ material to date

With apologies to “Spinal Tap,” it appears that black can, indeed, get more black.

MIT engineers report today that they have cooked up a material that is 10 times blacker than anything that has previously been reported. The material is made from vertically aligned carbon nanotubes, or CNTs — microscopic filaments of carbon, like a fuzzy forest of tiny trees, that the team grew on a surface of chlorine-etched aluminum foil. The foil captures more than 99.995 percent of any incoming light, making it the blackest material on record.

The researchers have published their findings today in the journal ACS Applied Materials &amp; Interfaces. They are also showcasing the cloak-like material as part of a new exhibit today at the New York Stock Exchange, titled “The Redemption of Vanity.”

The artwork, a collaboration between Brian Wardle, professor of aeronautics and astronautics at MIT, and his group, and MIT artist-in-residence Diemut Strebe, features a 16.78-carat natural yellow diamond, estimated to be worth $2 million, which the team coated with the new, ultrablack CNT material. The effect is arresting: The gem, normally brilliantly faceted, appears as a flat, black void.

Wardle says the CNT material, aside from making an artistic statement, may also be of practical use, for instance in optical blinders that reduce unwanted glare, to help space telescopes spot orbiting exoplanets.

“There are optical and space science applications for very black materials, and of course, artists have been interested in black, going back well before the Renaissance,” Wardle says. “Our material is 10 times blacker than anything that’s ever been reported, but I think the blackest black is a constantly moving target. Someone will find a blacker material, and eventually we’ll understand all the underlying mechanisms, and will be able to properly engineer the ultimate black.”

Wardle’s co-author on the paper is former MIT postdoc Kehang Cui, now a professor at Shanghai Jiao Tong University.

Into the void

Wardle and Cui didn’t intend to engineer an ultrablack material. Instead, they were experimenting with ways to grow carbon nanotubes on electrically conducting materials such as aluminum, to boost their electrical and thermal properties.

But in attempting to grow CNTs on aluminum, Cui ran up against a barrier, literally: an ever-present layer of oxide that coats aluminum when it is exposed to air. This oxide layer acts as an insulator, blocking rather than conducting electricity and heat. As he cast about for ways to remove aluminum’s oxide layer, Cui found a solution in salt, or sodium chloride.

At the time, Wardle’s group was using salt and other pantry products, such as baking soda and detergent, to grow carbon nanotubes. In their tests with salt, Cui noticed that chloride ions were eating away at aluminum’s surface and dissolving its oxide layer.

“This etching process is common for many metals,” Cui says. “For instance, ships suffer from corrosion in chlorine-containing ocean water. Now we’re using this process to our advantage.”

Cui found that if he soaked aluminum foil in saltwater, he could remove the oxide layer. He then transferred the foil to an oxygen-free environment to prevent reoxidation, and finally, placed the etched aluminum in an oven, where the group carried out techniques to grow carbon nanotubes via a process called chemical vapor deposition.

By removing the oxide layer, the researchers were able to grow carbon nanotubes on aluminum, at much lower temperatures than they otherwise would, by about 100 degrees Celsius. They also saw that the combination of CNTs on aluminum significantly enhanced the material’s thermal and electrical properties — a finding that they expected.

What surprised them was the material’s color.

“I remember noticing how black it was before growing carbon nanotubes on it, and then after growth, it looked even darker,” Cui recalls. “So I thought I should measure the optical reflectance of the sample.”

“Our group does not usually focus on optical properties of materials, but this work was going on at the same time as our art-science collaborations with Diemut, so art influenced science in this case,” says Wardle.

Wardle and Cui, who have applied for a patent on the technology, are making the new CNT process freely available to any artist to use for a noncommercial art project.

“Built to take abuse”

Cui measured the amount of light reflected by the material, not just from directly overhead, but also from every other possible angle. The results showed that the material absorbed greater than 99.995 percent of incoming light, from every angle. In essence, if the material contained bumps or ridges, or features of any kind, no matter what angle it was viewed from, these features would be invisible, obscured in a void of black.
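“Blackness” claims like “10 times blacker” are most naturally compared through reflectance, which is one minus the absorbed fraction. A quick sketch of the arithmetic (the prior-record reflectance below is an assumed round figure for illustration, not a value from the paper):

```python
# The new material absorbs more than 99.995% of incoming light from every
# angle, i.e. it reflects less than 0.005%.
new_absorption = 0.99995
new_reflectance = 1 - new_absorption     # 0.00005, i.e. 0.005%

# Assumed reflectance for a previously reported ultrablack CNT coating
# (illustrative value only -- roughly the 0.05% class of prior materials).
prior_reflectance = 0.0005

print(f"new reflectance: {new_reflectance:.3%}")
print(f"times blacker:   {prior_reflectance / new_reflectance:.0f}x")
```

Framed this way, a “10 times blacker” material is one that reflects a tenth as much light — which is why tiny differences in absorption percentage (99.95 vs. 99.995) matter so much at the extreme end of the scale.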

The researchers aren’t entirely sure of the mechanism contributing to the material’s opacity, but they suspect that it may have something to do with the combination of etched aluminum, which is somewhat blackened, with the carbon nanotubes. Scientists believe that forests of carbon nanotubes can trap and convert most incoming light to heat, reflecting very little of it back out as light, thereby giving CNTs a particularly black shade.

“CNT forests of different varieties are known to be extremely black, but there is a lack of mechanistic understanding as to why this material is the blackest. That needs further study,” Wardle says.



Newly discovered comet is likely interstellar visitor

A newly discovered comet has excited the astronomical community this week because it appears to have originated from outside the solar system. The object — designated C/2019 Q4 (Borisov) — was discovered on Aug. 30, 2019, by Gennady Borisov at the MARGO observatory in Nauchnij, Crimea. The official confirmation that comet C/2019 Q4 is an interstellar comet has not yet been made, but if it is interstellar, it would be only the second such object detected. The first, ‘Oumuamua, was observed and confirmed in October 2017.

The new comet, C/2019 Q4, is still inbound toward the Sun, but it will remain farther than the orbit of Mars and will approach no closer to Earth than about 190 million miles (300 million kilometers).

After the initial detections of the comet, the Scout system, which is located at NASA’s Jet Propulsion Laboratory in Pasadena, California, automatically flagged the object as possibly being interstellar. Davide Farnocchia of NASA’s Center for Near-Earth Object Studies at JPL worked with astronomers and the European Space Agency’s Near-Earth Object Coordination Center in Frascati, Italy, to obtain additional observations. He then worked with the NASA-sponsored Minor Planet Center in Cambridge, Massachusetts, to estimate the comet’s precise trajectory and determine whether it originated within our solar system or came from elsewhere in the galaxy.

The comet is currently 260 million miles (420 million kilometers) from the Sun and will reach its closest point, or perihelion, on Dec. 8, 2019, at a distance of about 190 million miles (300 million kilometers).

“The comet’s current velocity is high, about 93,000 mph [150,000 kph], which is well above the typical velocities of objects orbiting the Sun at that distance,” said Farnocchia. “The high velocity indicates not only that the object likely originated from outside our solar system, but also that it will leave and head back to interstellar space.”
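Farnocchia’s reasoning can be checked with basic orbital mechanics: an object gravitationally bound to the Sun cannot move faster than the local escape velocity, sqrt(2GM/r), at its current distance. A rough sketch using the distance and speed quoted in the article:

```python
import math

# Standard gravitational parameter of the Sun (G * M_sun), in m^3/s^2.
GM_SUN = 1.32712440018e20

def escape_velocity(r_m: float) -> float:
    """Solar escape velocity (m/s) at heliocentric distance r_m, in meters."""
    return math.sqrt(2 * GM_SUN / r_m)

r = 420e9                  # the comet's quoted distance: ~420 million km, in meters
v_comet = 150_000 / 3.6    # the quoted 150,000 km/h converted to m/s (~41.7 km/s)

v_esc = escape_velocity(r)
print(f"escape velocity at 420 million km: {v_esc / 1000:.1f} km/s")
print(f"comet speed: {v_comet / 1000:.1f} km/s -> unbound: {v_comet > v_esc}")
```

The escape velocity at that distance works out to roughly 25 km/s, well below the comet’s ~42 km/s — which is why the measured speed alone strongly suggests a hyperbolic, interstellar trajectory.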

Currently on an inbound trajectory, comet C/2019 Q4 is heading toward the inner solar system and will enter it on Oct. 26 from above at roughly a 40-degree angle relative to the ecliptic plane. That’s the plane in which the Earth and planets orbit the Sun.

C/2019 Q4 was established as being cometary due to its fuzzy appearance, which indicates that the object has a central icy body that is producing a surrounding cloud of dust and particles as it approaches the Sun and heats up. Its location in the sky (as seen from Earth) places it near the Sun — an area of sky not usually scanned by the large ground-based asteroid surveys or NASA’s asteroid-hunting NEOWISE spacecraft.

C/2019 Q4 can be seen with professional telescopes for months to come. “The object will peak in brightness in mid-December and continue to be observable with moderate-size telescopes until April 2020,” said Farnocchia. “After that, it will only be observable with larger professional telescopes through October 2020.”

Observations completed by Karen Meech and her team at the University of Hawaii indicate the comet nucleus is somewhere between 1.2 and 10 miles (2 and 16 kilometers) in diameter. Astronomers will continue to collect observations to further characterize the comet’s physical properties (size, rotation, etc.) and to refine its trajectory.

The Minor Planet Center is hosted by the Harvard-Smithsonian Center for Astrophysics and is a sub-node of NASA’s Planetary Data System Small Bodies Node at the University of Maryland. JPL hosts the Center for Near-Earth Object Studies. All are projects of NASA’s Near-Earth Object Observations Program and elements of the agency’s Planetary Defense Coordination Office within NASA’s Science Mission Directorate.


Story Source:

Materials provided by NASA/Jet Propulsion Laboratory. Note: Content may be edited for style and length.


IEEE Spectrum

Mercedes Unveils Its Vision EQS Electric Super Car

The Vision EQS appears to have the design, luxury, and driving range to match anything in the electric space