Categories
ScienceDaily

Unraveling a spiral stream of dusty embers from a massive binary stellar forge

With almost two decades of mid-infrared (IR) imaging from some of the largest observatories in the world, including the Subaru Telescope, a team of astronomers has captured the spiral motion of newly formed dust streaming from the massive, evolved binary star system Wolf-Rayet (WR) 112. Massive binary star systems, along with supernova explosions, are regarded as sources of dust in the Universe from its early history, but how that dust is produced and how much of it is ejected remain open questions. WR 112 is a binary composed of a massive star in a very late stage of stellar evolution, shedding a large amount of mass, and another massive star on the main sequence. Dust is expected to form in the region where the stellar winds from these two stars collide. The study reveals the motion of the dusty outflow from the system and identifies WR 112 as a highly efficient dust factory that produces an entire Earth mass of dust every year.

Dust formation, which is typically seen in the gentle outflows from cool stars with a Sun-like mass, is somewhat unusual in the extreme environment around massive stars and their violent winds. However, interesting things happen when the fast winds of two massive stars in a binary interact.

“When the two winds collide, all Hell breaks loose, including the release of copious shocked-gas X-rays, but also the (at first blush surprising) creation of copious amounts of carbon-based aerosol dust particles in those binaries in which one of the stars has evolved to He-burning, which produces 40% C in their winds,” says co-author Anthony Moffat (University of Montreal). This dust formation process is exactly what is occurring in WR 112.

This binary dust formation phenomenon has been revealed in other systems such as WR 104 by co-author Peter Tuthill (University of Sydney). WR 104, in particular, reveals an elegant trail of dust resembling a ‘pinwheel’ that traces the orbital motion of the central binary star system.

However, the dusty nebula around WR 112 is far more complex than a simple pinwheel pattern. Decades of multi-wavelength observations yielded conflicting interpretations of the dusty outflow and orbital motion of WR 112. After almost 20 years of uncertainty, images taken in October 2019 with the COMICS instrument on the Subaru Telescope provided the final — and unexpected — piece of the puzzle.

“We published a study in 2017 on WR 112 that suggested the dusty nebula was not moving at all, so I thought our COMICS observation would confirm this,” explained lead author Ryan Lau (ISAS/JAXA). “To my surprise, the COMICS image revealed that the dusty shell had definitely moved since the last image we took with the VLT in 2016. It confused me so much that I couldn’t sleep after the observing run — I kept flipping through the images until it finally registered in my head that the spiral looked like it was tumbling towards us.”

Lau collaborated with researchers at the University of Sydney including Prof. Peter Tuthill and undergraduate Yinuo Han, who are experts at modeling and interpreting the motion of the dusty spirals from binary systems like WR 112. “I shared the images of WR 112 with Peter and Yinuo, and they were able to produce an amazing preliminary model that confirmed that the dusty spiral stream is revolving in our direction along our line of sight,” said Lau.

An animation accompanying the study shows a comparison between the models of WR 112 created by the research team and the actual mid-IR observations. The model images show remarkable agreement with the real images of WR 112. The models and the series of imaging observations revealed that the rotation period of this dusty “edge-on” spiral (and the orbital period of the central binary system) is 20 years.

With the revised picture of WR 112, the research team was able to deduce how much dust this binary system is forming. “Spirals are repetitive patterns, so since we understand how much time it takes to form one full dusty spiral turn (~20 years), we can actually trace the age of dust produced by the binary stars at the center of the spiral,” says Lau. He points out that “there is freshly formed dust at the very central core of the spiral, while the dust we see that’s 4 spiral turns away is about 80 years old. Therefore, we can essentially trace out an entire human lifetime along the dusty spiral stream revealed in our observations. So I could actually pinpoint on the images the dust that was formed when I was born (right now, it’s somewhere in between the first and second spiral turns).”

To their surprise, the team found that WR 112 is a highly efficient dust factory, producing dust at a rate of 3×10⁻⁶ solar masses per year, which is equivalent to an entire Earth mass of dust every year. This was unexpected given WR 112’s 20-year orbital period: the most efficient dust producers among this type of WR binary tend to have orbital periods of less than a year, like WR 104 with its 220-day period. WR 112 therefore demonstrates the diversity of WR binary systems that are capable of efficiently forming dust and highlights their potential role as significant sources of dust, not only in our Galaxy but in galaxies beyond our own.
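
A quick back-of-the-envelope check ties the quoted figures together. The sketch below is illustrative only: the solar and Earth masses are standard reference values not given in the article, and the dust ages simply multiply the roughly 20-year turn period by the number of spiral turns.

```python
# Rough check of the dust figures quoted above. The solar and Earth masses are
# standard reference values (an assumption here; the paper may use slightly
# different numbers).
M_SUN_KG = 1.989e30
M_EARTH_KG = 5.972e24

dust_rate_msun_per_yr = 3e-6                       # quoted dust production rate
dust_rate_kg_per_yr = dust_rate_msun_per_yr * M_SUN_KG

print(f"{dust_rate_kg_per_yr:.2e} kg/yr")                          # ~6.0e24 kg/yr
print(f"{dust_rate_kg_per_yr / M_EARTH_KG:.2f} Earth masses/yr")   # ~1.0

# Age of dust seen N spiral turns from the core, given the ~20-year turn period.
turn_period_yr = 20
for n_turns in (1, 2, 4):
    print(f"dust {n_turns} turn(s) out is ~{n_turns * turn_period_yr} years old")
```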

Lastly, these results demonstrate the discovery potential of multi-epoch mid-IR imaging with the MIMIZUKU instrument on the upcoming Tokyo Atacama Observatory (TAO). The mid-IR results from this study notably utilize the largest observatories in the world and set the stage for the next decade of astronomical discoveries with 30-m class telescopes and the upcoming James Webb Space Telescope.

Categories
3D Printing Industry

Formnext launches 2020 3D printing Start-up Challenge 

Formnext, 3D printing’s largest European-based trade show, has launched the 2020 edition of its Start-up Challenge. Since it was announced earlier this month that Formnext 2020 will go ahead as expected, albeit with updated safety precautions due to the ongoing COVID-19 pandemic, preparations have begun for this year’s contest.  The competition is once again inviting […]

Author: Paul Hanaphy

Categories
ScienceDaily

Rogue’s gallery of dusty star systems reveals exoplanet nurseries

Astronomers this month released the largest collection of sharp, detailed images of debris disks around young stars, showcasing the great variety of shapes and sizes of stellar systems during their prime planet-forming years. Surprisingly, nearly all showed evidence of planets.

The images were obtained over a period of four years by a precision instrument, the Gemini Planet Imager (GPI), mounted on the 8-meter Gemini South telescope in Chile. The GPI uses a state-of-the-art adaptive optics system to remove atmospheric blur, providing the sharpest images to date of many of these disks.

Ground-based instruments like GPI, which is being upgraded to conduct similar observations in the northern sky from the Gemini North Telescope in Hawaii, can be a way to screen stars with suspected debris disks to determine which are worth targeting by more powerful, but expensive, telescopes to find planets — in particular, habitable planets. Several 20-, 30- and 40-meter telescopes, such as the Giant Magellan Telescope and the Extremely Large Telescope, will come online in the next couple of decades, while the orbiting James Webb Space Telescope is expected to be launched in 2021.

“It is often easier to detect the dust-filled disk than the planets, so you detect the dust first and then you know to point your James Webb Space Telescope or your Nancy Grace Roman Space Telescope at those systems, cutting down the number of stars you have to sift through to find these planets in the first place,” said Tom Esposito, a postdoctoral fellow at the University of California, Berkeley.

Esposito is first author of a paper describing the results that appeared June 15 in The Astronomical Journal.

Comet belts around other stars

The debris disks in the images are the equivalent of the Kuiper Belt in our solar system, a frigid realm about 40 times farther from the sun than Earth — beyond the orbit of Neptune — and full of rocks, dust and ice that never became part of any planet in our solar system. Comets from the belt — balls of ice and rock — periodically sweep through the inner solar system, occasionally wreaking havoc on Earth, but also delivering life-related materials like water, carbon and oxygen.

Of the 26 images of debris disks obtained by the Gemini Planet Imager (GPI), 25 had “holes” around the central star that likely were created by planets sweeping up rocks and dust. Seven of the 26 were previously unknown; earlier images of the other 19 were not as sharp as those from GPI and often didn’t have the resolution to detect an inner hole. The survey doubles the number of debris disks imaged at such high resolution.

“One of the things we found is that these so-called disks are really rings with inner clearings,” said Esposito, who is also a researcher at the SETI Institute in Mountain View, California. “GPI had a clear view of the inner regions close to the star, whereas in the past, observations by the Hubble Space Telescope and older instruments from the ground couldn’t see close enough to the star to see the hole around it.”

The GPI incorporates a coronagraph that blocks the light from the star, allowing it to see as close as one astronomical unit (AU) from the star, or the distance of the Earth from our sun: 93 million miles.

The GPI targeted 104 stars that were unusually bright in infrared light, indicating they were surrounded by debris reflecting the light of the star or warmed by the star. The instrument recorded polarized near-infrared light scattered by small dust particles, about a thousandth of a millimeter (1 micron) in size, likely the result of collisions among larger rocks in a debris disk.

“There has been no systematic survey of young debris disks nearly this large, looking with the same instrument, using the same observing modes and methods,” Esposito said. “We detected these 26 debris disks with very consistent data quality, where we can really compare the observations, something that is unique in terms of debris disk surveys.”

The seven debris disks never before imaged in this manner were among 13 disks around stars moving together through the Milky Way, members of a group called the Scorpius-Centaurus stellar association, which is located between 100 and 140 parsecs from Earth, or some 400 light years.

“It is like the perfect fishing spot; our success rate was much greater than anything else we have ever done,” said Paul Kalas, a UC Berkeley adjunct professor of astronomy who is second author of the paper. Because all seven are around stars that were born in the same region at roughly the same time, “that group itself is a mini-laboratory where we can compare and contrast the architectures of many planetary nurseries developing simultaneously under a range of conditions, something that we really didn’t have before,” Esposito added.

Of the 104 stars observed, 75 had no disk of a size or density that GPI could detect, though they may well be surrounded by debris left over from planet formation. Three other stars were observed to host disks belonging to the earlier “protoplanetary” phase of evolution.

What did our solar system look like in its infancy?

The extent of the debris disks varied widely, but most ranged between 20 and 100 AU. They surrounded stars that ranged in age from tens of millions of years to a few hundred million years, a very dynamic period for the evolution of planets. Most of these stars were larger and brighter than the sun.

The one star, HD 156623, that did not have a hole in the center of the debris disk was one of the youngest in the group, which fits with theories of how planets form. Initially, the protoplanetary disk should be relatively uniform, but as the system ages, planets form and sweep out the inner part of the disk.

“When we look at younger circumstellar disks, like protoplanetary disks that are in an earlier phase of evolution, when planets are forming, or before planets have started to form, there is a lot of gas and dust in the areas where we find these holes in the older debris disks,” Esposito said. “Something has removed that material over time, and one of the ways you can do that is with planets.”

Because polarized light from debris disks can theoretically tell astronomers the composition of the dust, Esposito is hoping to refine models to predict the composition — in particular, to detect water, which is thought to be a condition for life.

Studies like these could help answer a lingering question about our own solar system, Kalas said.

“If you dial back the clock for our own solar system by 4.5 billion years, which one of these disks were we? Were we a narrow ring, or were we a fuzzy blob?” he said. “It would be great to know what we looked like back then to understand our own origins. That is the great unanswered question.”

Categories
ScienceDaily

Engineers develop new fuel cells with twice the operating voltage as hydrogen

Electrification of the transportation sector — one of the largest consumers of energy in the world — is critical to future energy and environmental resilience. Electrification of this sector will require high-power fuel cells (either stand alone or in conjunction with batteries) to facilitate the transition to electric vehicles, from cars and trucks to boats and airplanes.

Liquid-fueled fuel cells are an attractive alternative to traditional hydrogen fuel cells because they eliminate the need to transport and store hydrogen. They can help to power unmanned underwater vehicles, drones and, eventually, electric aircraft — all at significantly lower cost. These fuel cells could also serve as range-extenders for current battery-powered electric vehicles, thus advancing their adoption.

Now, engineers at the McKelvey School of Engineering at Washington University in St. Louis have developed high-power direct borohydride fuel cells (DBFC) that operate at double the voltage of conventional hydrogen fuel cells. Their research was published June 17 in the journal Cell Reports Physical Science.

The research team, led by Vijay Ramani, the Roma B. and Raymond H. Wittcoff Distinguished University Professor, has pioneered a reactant-transport engineering approach: identifying an optimal range of flow rates, flow field architectures and residence times that enable high-power operation. This approach addresses key challenges in DBFCs, namely proper fuel and oxidant distribution and the mitigation of parasitic reactions.

Importantly, the team has demonstrated a single-cell operating voltage of 1.4 V or greater, double that obtained in conventional hydrogen fuel cells, with peak powers approaching 1 watt/cm². Doubling the voltage would allow for a smaller, lighter, more efficient fuel cell design, which translates to significant gravimetric and volumetric advantages when assembling multiple cells into a stack for commercial use. Their approach is broadly applicable to other classes of liquid/liquid fuel cells.
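
To see why doubling the cell voltage shrinks the stack, consider a rough sizing sketch. Only the 1.4 V per cell and roughly 1 W/cm² peak power density come from the article; the ~0.7 V figure for a conventional hydrogen cell under load, the 48 V target bus voltage, and the per-cell power target are illustrative assumptions.

```python
# Rough stack-sizing sketch, not the study's design. 1.4 V per cell and
# ~1 W/cm2 peak power density are from the article; the 0.7 V conventional
# cell voltage, 48 V bus target and 500 W per-cell target are assumptions.
import math

target_stack_voltage = 48.0   # V, hypothetical bus voltage

for name, cell_voltage in [("conventional H2 cell", 0.7), ("DBFC (this work)", 1.4)]:
    n_cells = math.ceil(target_stack_voltage / cell_voltage)
    print(f"{name}: {n_cells} cells in series for ~{target_stack_voltage:.0f} V")

peak_power_density = 1.0      # W/cm2, from the article
cell_power_target = 500.0     # W per cell, hypothetical
print(f"active area per cell: ~{cell_power_target / peak_power_density:.0f} cm2")
```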

“The reactant-transport engineering approach provides an elegant and facile way to significantly boost the performance of these fuel cells while still using existing components,” Ramani said. “By following our guidelines, even current, commercially deployed liquid fuel cells can see gains in performance.”

The key to improving any existing fuel cell technology is reducing or eliminating side reactions. The majority of efforts to achieve this goal involve developing new catalysts that face significant hurdles in terms of adoption and field deployment.

“Fuel cell manufacturers are typically reluctant to spend significant capital or effort to adopt a new material,” said Shrihari Sankarasubramanian, a senior staff research scientist on Ramani’s team. “But achieving the same or better improvement with their existing hardware and components is a game changer.”

“Hydrogen bubbles formed on the surface of the catalyst have long been a problem for direct sodium borohydride fuel cells, and it can be minimized by the rational design of the flow field,” said Zhongyang Wang, a former member of Ramani’s lab who earned his PhD from WashU in 2019 and is now at the Pritzker School of Molecular Engineering at the University of Chicago. “With the development of this reactant-transport approach, we are on the path to scale-up and deployment.”

Ramani added: “This promising technology has been developed with the continuous support of the Office of Naval Research, which I acknowledge with gratitude. We are at the stage of scaling up our cells into stacks for applications in both submersibles and drones.”

The technology and its underpinnings are the subject of a patent filing and are available for licensing.

Story Source:

Materials provided by Washington University in St. Louis. Original written by Shrihari Sankarasubramanian.

Categories
ProgrammableWeb

CreditRegistry API Aims to Fight Dud Checks in Nigeria

CreditRegistry, Nigeria’s largest credit bureau, has launched an API to help financial institutions run status checks on potential customers. The CreditRegistry Dud Cheque API should help lenders, retailers, and other businesses avoid accepting checks from serial dishonored (dud) check issuers. The API will also help lenders comply with new Nigerian directives and prevent check fraud, which is expected to increase in the fallout from COVID-19.

“Everyone needs credit at some time,” CreditRegistry’s Chief Executive Officer, Mrs. Jameelah Sharrieff-Ayedun, commented. “This service simply helps to improve transparency to reduce the barrier so more people can access credit using post-dated cheques as a means for payment… Each institution can integrate the API to return a file that will specify the number of dud cheques written by any offender.”

The API-based solution is simple, but the product is praised as solving a big problem in the Nigerian economy. Dating back to 2009, institutions such as PwC have urged Nigeria to implement more stringent credit check requirements into the Nigerian economy. CreditRegistry has been a leader in credit solutions, and the API is the latest example.

Checks through the API are conducted in real time, and the API returns a dud check risk score for each customer. The API can integrate directly into a financial institution’s systems, greatly shortening the end-to-end credit status process. For more information, visit the CreditRegistry site.
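
As a concrete illustration of how a lender might call such a service, here is a minimal, hypothetical sketch. The endpoint URL, authentication scheme, parameters, and response fields are all invented for illustration and are not taken from CreditRegistry’s documentation.

```python
# Hypothetical sketch of calling a dud-cheque risk API from a lender's system.
# The base URL, auth scheme, parameters and response fields are placeholders,
# not CreditRegistry's actual interface.
import requests

API_BASE = "https://api.example-creditregistry.test"   # placeholder, not a real host
API_KEY = "YOUR_API_KEY"                                # placeholder credential

def dud_cheque_risk(account_number: str, bank_code: str) -> dict:
    """Fetch a (hypothetical) dud-cheque risk report for a prospective customer."""
    resp = requests.get(
        f"{API_BASE}/v1/dud-cheque-check",
        params={"account_number": account_number, "bank_code": bank_code},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # e.g. {"dud_cheque_count": 3, "risk_score": 710}

if __name__ == "__main__":
    report = dud_cheque_risk("0123456789", "058")
    if report.get("dud_cheque_count", 0) > 0:
        print("Caution: applicant has a history of dishonoured cheques.")
```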

Author: ecarter

Categories
ScienceDaily

Biomechanics of skin can perform useful tactile computations

As our body’s largest and most prominent organ, the skin also provides one of our most fundamental connections to the world around us. From the moment we’re born, it is intimately involved in every physical interaction we have.

Though scientists have studied the sense of touch, or haptics, for more than a century, many aspects of how it works remain a mystery.

“The sense of touch is not fully understood, even though it is at the heart of our ability to interact with the world,” said UC Santa Barbara haptics researcher Yon Visell. “Anything we do with our hands — picking up a glass, signing our name or finding keys in our bag — none of that is possible without the sense of touch. Yet we don’t fully understand the nature of the sensations captured by the skin or how they are processed in order to enable perception and action.”

We have better models for how our other senses, such as vision and hearing, work, but our understanding of how the sense of touch works is much less complete, he added.

To help fill that gap, Visell and his research team, including Yitian Shao and collaborator Vincent Hayward at the Sorbonne, have been studying the physics of touch sensation — how touching an object gives rise to signals in the skin that shape what we feel. In a study published in the journal Science Advances, the group reveals how the intrinsic elasticity of the skin aids tactile sensing. Remarkably, they show that far from being a simple sensing material, the skin can also aid the processing of tactile information.

To understand this significant but little-known aspect of touch, Visell thinks it is helpful to think about how the eye, our visual organ, processes optical information.

“Human vision relies on the optics of the eye to focus light into an image on the retina,” he said. “The retina contains light-sensitive receptors that translate this image into information that our brain uses to decompose and interpret what we’re looking at.”

An analogous process unfolds when we touch a surface with our skin, Visell continued. Similar to the structures such as the cornea and iris that capture and focus light onto the retina, the skin’s elasticity distributes tactile signals to sensory receptors throughout the skin.

Building on previous work which used an array of tiny accelerometers worn on the hand to sense and catalog the spatial patterns of vibrations generated by actions such as tapping, sliding or grasping, the researchers here employed a similar approach to capture spatial patterns of vibration that are generated as the hand feels the environment.

“We used a custom device consisting of 30 three-axis sensors gently bonded to the skin,” explained lead author Shao. “And then we asked each participant in our experiments to perform many different touch interactions with their hands.” The research team collected a dataset of nearly 5,000 such interactions and analyzed it to determine how the touch-produced vibration patterns transmitted throughout the hand shaped the information content of the tactile signals. These vibration patterns arose from the elastic coupling within the skin itself.

The team then analyzed these patterns in order to clarify how the transmission of vibrations in the hand shaped information in the tactile signals. “We used a mathematical model in which high-dimensional signals felt throughout the hand were represented as combinations of a small number of primitive patterns,” Shao explained. The primitive patterns provided a compact lexicon, or dictionary, that compressed the size of the information in the signals, enabling them to be encoded more efficiently.
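
The decomposition described here resembles sparse dictionary learning. The sketch below is a minimal illustration of that general idea on synthetic data; the library, feature layout, and sparsity settings are assumptions, not the study’s actual model.

```python
# Minimal dictionary-learning sketch: represent high-dimensional "vibration"
# signals as sparse combinations of a small number of primitive patterns.
# Synthetic data; the study's real preprocessing and model may differ.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Synthetic stand-in for the sensor data: touch interactions summarized by
# 90 features (30 three-axis accelerometers).
n_interactions, n_features, n_true = 2000, 90, 8
true_patterns = rng.standard_normal((n_true, n_features))
weights = rng.exponential(1.0, (n_interactions, n_true))
weights *= rng.random((n_interactions, n_true)) < 0.3          # sparse activations
X = weights @ true_patterns + 0.05 * rng.standard_normal((n_interactions, n_features))

# Learn a compact dictionary of about a dozen primitive patterns.
dico = MiniBatchDictionaryLearning(n_components=12, alpha=0.5,
                                   batch_size=64, random_state=0)
codes = dico.fit_transform(X)          # sparse coefficients per interaction
primitives = dico.components_          # learned primitive patterns

print(primitives.shape)                                          # (12, 90)
print(f"mean fraction of active primitives: {np.mean(codes != 0):.2f}")
```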

This analysis generated a dozen or fewer primitive wave patterns — vibrations of the skin throughout the hand that could be used to capture information in the tactile signals felt by the hand. The striking feature of these primitive vibration patterns, Visell said, is that they automatically reflected the structure of the hand and the physics of wave transmission in the skin.

“Elasticity plays this very basic function in the skin of engaging thousands of sensory receptors for touch in the skin, even when contact occurs at a small skin area,” he explained. “This allows us to use far more sensory resources than would otherwise be available to interpret what it is that we’re touching.” The remarkable finding of their research is that this process also makes it possible to more efficiently capture information in the tactile signals, Visell said. Information processing of this kind is normally considered to be performed by the brain, rather than the skin.

The role played by mechanical transmission in the skin is in some respects similar to the role of the mechanics of the inner ear in hearing, Visell said. In 1961, von Bekesy received the Nobel Prize for his work showing how the mechanics of the inner ear facilitate auditory processing: by spreading sounds with different frequency content to different sensory receptors in the ear, they aid the encoding of sounds by the auditory system. The team’s work suggests that similar processes may underlie the sense of touch.

These findings, according to the researchers, not only contribute to our understanding of the brain, but may also suggest new approaches for the engineering of future prosthetic limbs for amputees that might be endowed with skin-like elastic materials. Similar methods also could one day be used to improve tactile sensing by next-generation robots.

Categories
ProgrammableWeb

Nium Opens Innovation Lab to Help Encourage Innovation in Fintech

NIUM, one of the world’s largest Global Financial Infrastructure Platform providers, today announced the launch of “BOLT”, its platform to boost innovation in the global Fintech space and help entrepreneurs speed their products and services to market.

The platform has been engineered to act as a regional hub for budding Fintech entrepreneurs as well as seed-stage start-ups. They can now build on NIUM’s success and benefit from its core infrastructure.

Designed as an intense 26-week collaborative program, BOLT, the R&D Fintech Hub, is a first-of-its-kind independent platform. It offers entrepreneurs unrestricted access to the business ecosystem of NIUM. Fintech entrepreneurs can leverage the opportunity to collaborate with NIUM and connect to its existing API stack.

At the heart of the BOLT program is a unique innovation ecosystem led by Nium’s high-density Fintech expertise and the full range of NIUM’s ‘Send’ (55+ countries via direct ACH coverage), ‘Spend’ (card issuance in 32+ countries) and ‘Receive’ (30+ countries via direct ACH coverage) capabilities. Entrepreneurs with an idea for the next big FinTech solution can now gain complete access to Nium’s API network through BOLT.

Discussing the launch, Prajit Nanu, Co-founder, and CEO of NIUM said, “BOLT focuses on accelerating the translation of innovative ideas & concepts into tangible prototypes & products. We provide a truly independent platform with absolutely no predatory equity-in-lieu, or IP-in lieu, or even any lien on the future line of sight profits. Your success is yours alone, we are here only to enable your path to success.”

Co-located at NIUM’s global headquarters, BOLT offers world-class infrastructure spanning nearly 2,000 square feet of dedicated space in the heart of Singapore’s CBD. Fully equipped with state-of-the-art infrastructure and amenities, BOLT’s vision is to be an enabler as well as a 360-degree engagement and learning hotspot for fintech entrepreneurs.

Start-ups also benefit from BOLT’s location in Singapore – renowned globally as an international financial center with a pro-business environment that attracts businesses and start-ups from all over the world. The resultant pool of a highly skilled and cosmopolitan populace provides the perfect environment for cross-pollination of ideas and innovations.

BOLT also provides key components that most first-time entrepreneurs lack. These fundamental enablers of early-stage start-ups include resources and skillsets for UI/UX training, fundraising, and capital structuring, as well as advice on the vast range of technologies available to enable rapid deployment.

BOLT enables start-ups to scale up rapidly by providing a ready platform with the essential tools a Fintech start-up needs to grow. These include a ‘tinker-ready’ sandbox, API docs, easy connection to a range of its platform APIs, and co-location alongside the NIUM workspace ecosystem.

Prajit said, “As an entrepreneur, I have journeyed through successes, failures, and complexities of setting up a successful Fintech company. InstaReM, the first product of NIUM, took five years to establish its network which now spans over 100 countries. With BOLT, entrepreneurs can now focus on their innovative products and solutions alone.”

“We believe in optimizing business processes and shortening the timeframe between ideation and execution,” he added.

Author: ProgrammableWeb PR

Categories
3D Printing Industry

Additive manufacturing at scale with the largest U.S. 3D printing facilities

What is the largest 3D printing facility in North America? It seems like a straightforward question, but depending on whether you are speaking to the marketing or the engineering department, the answer can differ. 3D Printing Industry takes a look behind headline-grabbing phrases such as the world’s largest 3D printing factory and spoke to […]

Author: Tia Vialva

Categories
IEEE Spectrum

Cerebras’s Giant Chip Will Smash Deep Learning’s Speed Barrier

Artificial intelligence today is much less than it could be, according to Andrew Feldman, CEO and cofounder of AI computer startup Cerebras Systems.

The problem, as he and his fellow Cerebras founders see it, is that today’s artificial neural networks are too time-consuming and compute-intensive to train. For, say, a self-driving car to recognize all the important objects it will encounter on the road, the car’s neural network has to be shown many, many images of all those things. That process happens in a data center where computers consuming tens or sometimes hundreds of kilowatts are dedicated to what is too often a weeks-long task. Assuming the resulting network can carry out the task with the needed accuracy, the many coefficients that define the strength of connections in the network are then downloaded to the car’s computer, which performs the other half of deep learning, called inference.

Cerebras’s customers—and it already has some, despite emerging from stealth mode only this past summer—complain that training runs for big neural networks on today’s computers can take as long as six weeks. At that rate, they are able to train only maybe six neural networks in a year. “The idea is to test more ideas,” says Feldman. “If you can [train a network] instead in 2 or 3 hours, you can run thousands of ideas.”

When IEEE Spectrum visited Cerebras’s headquarters in Los Altos, Calif., those customers and some potential new ones were already pouring their training data into four CS-1 computers through orange-jacketed fiber-optic cables. These 64-centimeter-tall machines churned away, while the heat exhaust of the 20 kilowatts being consumed by each blew out into the Silicon Valley streets through a hole cut into the wall.

The CS-1 computers themselves weren’t much to look at from the outside. Indeed, about three-quarters of each chassis is taken up with the cooling system. What’s inside that last quarter is the real revolution: a hugely powerful computer made up almost entirely of a single chip. But that one chip extends over 46,255 square millimeters—more than 50 times the size of any other processor chip you can buy. With 1.2 trillion transistors, 400,000 processor cores, 18 gigabytes of SRAM, and interconnects capable of moving 100 million billion bits per second, Cerebras’s Wafer Scale Engine (WSE) defies easy comparison with other systems.

The statistics Cerebras quotes are pretty astounding. According to the company, a 10-rack TPU2 cluster—the second of what are now three generations of Google AI computers—consumes five times as much power and takes up 30 times as much space to deliver just one-third of the performance of a single computer with the WSE. Whether a single massive chip is really the answer the AI community has been waiting for should start to become clear this year. “The [neural-network] models are becoming more complex,” says Mike Demler, a senior analyst with the Linley Group, in Mountain View, Calif. “Being able to quickly train or retrain is really important.”

Customers such as supercomputing giant Argonne National Laboratory, near Chicago, already have the machines on their premises, and if Cerebras’s conjecture is true, the number of neural networks doing amazing things will explode.

When the founders of Cerebras—veterans of SeaMicro, a server business acquired by AMD—began meeting in 2015, they wanted to build a computer that perfectly fit the nature of modern AI workloads, explains Feldman. Those workloads are defined by a few things: They need to move a lot of data quickly, they need memory that is close to the processing core, and those cores don’t need to work on data that other cores are crunching.

This suggested a few things immediately to the company’s veteran computer architects, including Gary Lauterbach, its chief technical officer. First, they could use thousands and thousands of small cores designed to do the relevant neural-network computations, as opposed to fewer more general-purpose cores. Second, those cores should be linked together with an interconnect scheme that moves data quickly and at low energy. And finally, all the needed data should be on the processor chip, not in separate memory chips.

The need to move data to and from these cores was, in large part, what led to the WSE’s uniqueness. The fastest, lowest-energy way to move data between two cores is to have them on the same silicon substrate. The moment data has to travel from one chip to another, there’s a huge cost in speed and power because distances are longer and the “wires” that carry the signals must be wider and less densely packed.

The drive to keep all communications on silicon, coupled with the desire for small cores and local memory, all pointed to making as big a chip as possible, maybe one as big as a whole silicon wafer. “It wasn’t obvious we could do that, that’s for sure,” says Feldman. But “it was fairly obvious that there were big benefits.”

For decades, engineers had assumed that a wafer-scale chip was a dead end. After all, no less a luminary than the late Gene Amdahl, chief architect of the IBM System/360 mainframe, had tried and failed spectacularly at it with a company called Trilogy Systems. But Lauterbach and Feldman say that any comparison with Amdahl’s attempt is laughably out-of-date. The wafers Amdahl was working with were one-tenth the size of today’s, and features that made up devices on those wafers were 30 times the size of today’s.

More important, Trilogy had no way of handling the inevitable errors that arise in chip manufacturing. Everything else being equal, the likelihood of there being a defect increases as the chip gets larger. If your chip is nearly the size of a sheet of letter-size paper, then you’re pretty much asking for it to have defects.

But Lauterbach saw an architectural solution: Because the workload they were targeting favors having thousands of small, identical cores, it was possible to fit in enough redundant cores to account for the defect-induced failure of even 1 percent of them and still have a very powerful, very large chip.
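
The scale of that redundancy budget is easy to sketch. The core count comes from the specifications quoted earlier; treating "1 percent" as the worst-case defect loss is a simplification, and Cerebras's actual sparing scheme is not described here.

```python
# Rough redundancy budget implied by the figures above: ~400,000 cores and a
# worst-case defect-induced loss of 1 percent. Cerebras's actual sparing and
# routing-around-defects scheme is more involved and not described here.
total_cores = 400_000
defect_rate = 0.01                       # "even 1 percent" from the text

expected_failed = int(total_cores * defect_rate)
usable = total_cores - expected_failed
print(f"expected failed cores: {expected_failed}")                     # 4,000
print(f"usable cores remaining: {usable} ({usable / total_cores:.1%})")
```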

Of course, Cerebras still had to solve a host of manufacturing issues to build its defect-tolerant giganto chip. For example, photolithography tools are designed to cast their feature-defining patterns onto relatively small rectangles, and to do that over and over. That limitation alone would keep a lot of systems from being built on a single wafer, because of the cost and difficulty of casting different patterns in different places on the wafer.

But the WSE doesn’t require that. It resembles a typical wafer full of the exact same chips, just as you’d ordinarily manufacture. The big challenge was finding a way to link those pseudochips together. Chipmakers leave narrow edges of blank silicon called scribe lines around each chip. The wafer is typically diced up along those lines. Cerebras worked with Taiwan Semiconductor Manufacturing Co. (TSMC) to develop a way to build interconnects across the scribe lines so that the cores in each pseudochip could communicate.

With all communications and memory now on a single slice of silicon, data could zip around unimpeded, producing a core-to-core bandwidth of 1,000 petabits per second and an SRAM-to-core bandwidth of 9 petabytes per second. “It’s not just a little more,” says Feldman. “It’s four orders of magnitude greater bandwidth, because we stay on silicon.”

Scribe-line-crossing interconnects weren’t the only invention needed. Chip-manufacturing hardware had to be modified. Even the software for electronic design automation had to be customized for working on such a big chip. “Every rule and every tool and every manufacturing device was designed to pick up a normal-sized chocolate chip cookie, and [we] delivered something the size of the whole cookie sheet,” says Feldman. “Every single step of the way, we have to invent.”

Wafer-scale integration “has been dismissed for the last 40 years, but of course, it was going to happen sometime,” he says. Now that Cerebras has done it, the door may be open to others. “We think others will seek to partner with us to solve problems outside of AI.”

Indeed, engineers at the University of Illinois and the University of California, Los Angeles, see Cerebras’s chip as a boost to their own wafer-scale computing efforts using a technology called silicon-interconnect fabric [see “Goodbye, Motherboard. Hello, Silicon-Interconnect Fabric,” IEEE Spectrum, October 2019]. “This is a huge validation of the research we’ve been doing,” says the University of Illinois’s Rakesh Kumar. “We like the fact that there is commercial interest in something like this.”

The CS-1 is more than just the WSE chip, of course, but it’s not much more. That’s both by design and necessity. What passes for the motherboard is a power-delivery system that sits above the chip and a water-cooled cold plate below it. Surprisingly enough, it was the power-delivery system that was the biggest challenge in the computer’s development.

The WSE’s 1.2 trillion transistors are designed to operate at about 0.8 volts, pretty standard for a processor. There are so many of them, though, that in all they need 20,000 amperes of current. “Getting 20,000 amps into the wafer without significant voltage drop is quite an engineering challenge—much harder than cooling it or addressing the yield problems,” says Lauterbach.
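
Those two numbers are consistent with the roughly 20 kW per CS-1 quoted earlier, as a one-line check shows; attributing the remainder to power conversion, cooling, and other system overhead is an assumption rather than a published breakdown.

```python
# Consistency check: 20,000 A at ~0.8 V is ~16 kW dissipated in the wafer,
# in line with the ~20 kW per CS-1 quoted earlier once system overheads
# (power conversion, cooling, I/O) are added.
core_voltage = 0.8         # volts
supply_current = 20_000    # amperes

wafer_power_kw = core_voltage * supply_current / 1000
print(f"wafer power: ~{wafer_power_kw:.0f} kW")   # ~16 kW
```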

Power can’t be delivered from the edge of the WSE, because the resistance in the interconnects would drop the voltage to zero long before it reached the middle of the chip. The answer was to deliver it vertically from above. Cerebras designed a fiberglass circuit board holding hundreds of special-purpose chips for power control. One million copper posts bridge the millimeter or so from the fiberglass board to points on the WSE.

Delivering power in this way might seem straightforward, but it isn’t. In operation, the chip, the circuit board, and the cold plate all warm up to the same temperature, but they expand when doing so by different amounts. Copper expands the most, silicon the least, and the fiberglass somewhere in between. Mismatches like this are a headache in normal-size chips because the change can be enough to shear away their connection to a printed circuit board or produce enough stress to break the chip. For a chip the size of the WSE, even a small percentage change in size translates to millimeters.

“The challenge of [coefficient of thermal expansion] mismatch with the motherboard was a brutal problem,” says Lauterbach. Cerebras searched for a material with the right intermediate coefficient of thermal expansion, something between those of silicon and fiberglass. Only that would keep the million power-delivery posts connected. But in the end, the engineers had to invent one themselves, an endeavor that took a year and a half to accomplish.

The WSE is obviously bigger than competing chips commonly used for neural-network calculations, like the Nvidia Tesla V100 graphics processing unit or Google’s Tensor Processing Unit. But is it better?

In 2018, Google, Baidu, and some top academic groups began working on benchmarks that would allow apples-to-apples comparisons among systems. The result, MLPerf, released training benchmarks in May 2018.

According to those benchmarks, the technology for training neural networks has made some huge strides in the last few years. On the ResNet-50 image-classification problem, the Nvidia DGX SuperPOD—essentially a 1,500-GPU supercomputer—finished in 80 seconds. It took 8 hours on Nvidia’s DGX-1 machine (circa 2017) and 25 days using the company’s K80 from 2015.
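
Put on a common scale, those ResNet-50 results span more than four orders of magnitude; the short calculation below simply converts the quoted times and expresses them as speedups over the 2015 K80 run.

```python
# Convert the quoted ResNet-50 training times to seconds and express them as
# speedups relative to the 2015-era K80 result.
times_s = {
    "Nvidia K80 (2015)": 25 * 24 * 3600,      # 25 days
    "Nvidia DGX-1 (circa 2017)": 8 * 3600,    # 8 hours
    "Nvidia DGX SuperPOD": 80,                # 80 seconds
}
baseline = times_s["Nvidia K80 (2015)"]
for system, t in times_s.items():
    print(f"{system}: {t:>9,.0f} s  (speedup vs. K80: {baseline / t:,.0f}x)")
```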

Cerebras hasn’t released MLPerf results or any other independently verifiable apples-to-apples comparisons. Instead the company prefers to let customers try out the CS-1 using their own neural networks and data.

This approach is not unusual, according to analysts. “Everybody runs their own models that they developed for their own business,” says Karl Freund, an AI analyst at Moor Insights. “That’s the only thing that matters to buyers.”

Early customer Argonne National Labs, for one, has some pretty intense needs. In training a neural network to recognize, in real time, different types of gravitational-wave events, scientists recently used one-quarter of the resources of Argonne’s megawatt-consuming Theta supercomputer, the 28th most powerful system in the world.

Cutting power consumption down to mere kilowatts seems like a key benefit in supercomputing. Unfortunately, Lauterbach doubts that this feature will be much of a selling point in data centers. “While a lot of data centers talk about [conserving] power, when it comes down to it…they don’t care,” he says. “They want performance.” And that’s something a processor nearly the size of a dinner plate can certainly provide.

This article appears in the January 2020 print issue as “Huge Chip Smashes Deep Learning’s Speed Barrier.”

Categories
ScienceDaily

CRISPR-Cas9 datasets analysis leads to largest genetic screen resource for cancer research

A comprehensive map of genes necessary for cancer survival is one step closer, following the validation of the two largest CRISPR-Cas9 genetic screens in 725 cancer models, across 25 different cancer types. Scientists at the Wellcome Sanger Institute and the Broad Institute of MIT and Harvard compared the consistency of the two datasets, independently verifying the methodology and findings.

The results, published today (20 December 2019) in Nature Communications, mean that the two datasets can be integrated to form the largest genetic screen of cancer cell lines to date, which will provide the basis for the Cancer Dependency Map in around 1,000 cancer models. The scale of this combined dataset will help to speed up the discovery and development of new cancer drugs.

The Cancer Dependency Map (Cancer DepMap) initiative aims to create a detailed rulebook of precision cancer treatments for patients. Two key elements of the Cancer DepMap are the mapping of the genes critical for the survival of cancer cells and analytics of the resulting datasets. Despite recent advances in cancer research, making precision medicine widely available to cancer patients requires many new drug targets.

To find these drug targets, Cancer DepMap researchers take tumour cells from patients to create cell lines that can be grown in the laboratory. They then use CRISPR-Cas9 technology to edit the genes in these cancer cells, turning them off one-by-one to measure how critical they are for the cancer to survive. The results of these experiments indicate which genes are the most likely to make viable drug targets.

In this new study, researchers analysed data from two recently published CRISPR-Cas9 genetic screens performed on cancer cell lines at the Broad and Sanger Institutes. Despite significant differences in experimental protocols, the team found that the screen results were consistent with one another. Crucially, the same genes essential to cancer survival — known as dependencies — were found in both datasets.
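
As an illustration of the kind of cross-dataset comparison involved, the sketch below correlates per-gene dependency scores from two screens of shared cell lines. The file names and table layout are hypothetical, and the published analysis is considerably more sophisticated than this.

```python
# Hedged sketch of comparing two CRISPR-Cas9 screens: correlate per-gene
# dependency scores across the cell lines screened at both institutes.
# File names and table layout (rows = genes, columns = cell lines) are
# hypothetical; the published analysis is far more involved.
import pandas as pd

broad = pd.read_csv("broad_dependency_scores.csv", index_col=0)
sanger = pd.read_csv("sanger_dependency_scores.csv", index_col=0)

# Restrict to genes and cell lines present in both datasets.
genes = broad.index.intersection(sanger.index)
lines = broad.columns.intersection(sanger.columns)
broad, sanger = broad.loc[genes, lines], sanger.loc[genes, lines]

# Per-gene Pearson correlation of dependency profiles across cell lines.
per_gene_r = broad.corrwith(sanger, axis=1)
print(per_gene_r.describe())

# Candidate dependencies that reproduce well across the two screens.
consistent = per_gene_r[per_gene_r > 0.5].sort_values(ascending=False)
print(consistent.head(20))
```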

Dr Clare Pacini, a first author of the study from the Wellcome Sanger Institute and Open Targets, said: “The Sanger and Broad Institute CRISPR-Cas9 screens were created using slightly different protocols, such as cell growth duration and reagents used. To verify each Institute’s dataset, we have repeated CRISPR-Cas9 screens using the protocols originally employed at the other Institute. Importantly, we have found the same genetic dependencies in each, meaning the new drug targets originally identified are consistent.”

Aviad Tsherniak, of the Broad Institute of MIT and Harvard, said: “This is the first analysis of its kind and is really important for the whole cancer research community. Not only have we reproduced common and specific dependencies across the two datasets, but we have taken biomarkers of gene dependency found in one dataset and recovered them in the other. Our analysis has been unbiased, rigorous and proves the veracity of the approach and the drug targets identified.”

In 2013, results comparing two large pharmacogenomic datasets employing the cancer models used in this study raised concerns about the reproducibility of the experiments performed. Further independently-published analyses eventually proved the two resources to be reliable and consistent, restoring confidence in the robustness of large-scale drug screens, but the episode slowed the progress of cancer research.

This study validates the reproducibility of CRISPR-Cas9 functional genetic screens in order to remove any doubt about their efficacy. It sets rigorous standards for assessing these new types of dataset, facilitating the comparison and integration of large databases of cancer dependencies.

Dr Francesco Iorio, of the Wellcome Sanger Institute and Open Targets, said: “It is worth remembering that when these datasets were originally produced we were dealing with a new, unproven technology. This study is important because it demonstrates the validity of the experimental methods and the consistency of the data that they produce. It also means that two large cancer dependency datasets are compatible. By joining them together, we will have access to much greater statistical power to narrow down the list of targets for the next generation of cancer treatments.”

Story Source:

Materials provided by Wellcome Trust Sanger Institute.
