Categories
Hackster.io

Loud Objects: “Broccoli” // What The… Wednesday?!

These 1-bit noise toys, from Loud Objects, are really something. Can we take a peek under the hood?
// http://www.physicaleditions.com/
// http://www.physicaleditions.com/product/noise_toys

Also mentioned:
// https://bela.io/products/
// https://www.pjrc.com/teensy/

Categories
ScienceDaily

New study provides maps, ice favorability index to companies looking to mine the moon

The 49ers who panned for gold during California’s Gold Rush didn’t really know where they might strike it rich. They had word of mouth and not much else to go on.

Researchers at the University of Central Florida want to give prospectors looking to mine the moon better odds of striking gold, which on the moon means rich deposits of water ice that can be turned into resources, like fuel, for space missions.

A team led by planetary scientist Kevin Cannon created an Ice Favorability Index. The geological model explains the process of ice formation at the poles of the moon and maps the terrain, including craters that may hold ice deposits. The model, which has been published in the peer-reviewed journal Icarus, accounts for what asteroid impacts on the surface of the moon may do to deposits of ice found meters beneath the surface.

“Despite being our closest neighbor, we still don’t know a lot about water on the moon, especially how much there is beneath the surface,” Cannon says. “It’s important for us to consider the geologic processes that have gone on to better understand where we may find ice deposits and how to best get to them with the least amount of risk.”

The team was inspired by mining companies on Earth, which conduct detailed geological work, and take core samples before investing in costly extraction sites. Mining companies conduct field mappings, take core samples from the potential site and try to understand the geological reasons behind the formation of the particular mineral they are looking for in an area of interest. In essence they create a model for what a mining zone might look like before deciding to plunk down money to drill.

The team at UCF followed the same approach using data collected about the moon over the years and ran simulations in the lab. While they couldn’t collect core samples, they had data from satellite observations and from the first trip to the moon.

Why Mine the Moon

In order for humans to explore the solar system and beyond, spacecraft have to be able to launch and continue on their long missions. One of the challenges is fuel. There are no gas stations in space, which means spacecraft have to carry extra fuel with them for long missions, and that fuel weighs a lot. Mining the moon could produce fuel, which would help ease the cost of flights since spacecraft wouldn’t have to haul the extra fuel.

Water ice can be purified and processed to produce both hydrogen and oxygen for propellant, according to several previously published studies. Sometime in the future, this process could be carried out on the moon, effectively producing a gas station for spacecraft. Asteroids may also provide similar resources for fuel.
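The conversion from ice to propellant rests on simple stoichiometry: splitting water (2 H2O -> 2 H2 + O2) yields hydrogen and oxygen in a fixed mass ratio. The short Python sketch below works out that ideal split; it ignores energy cost, electrolysis losses and purification, and the one-tonne input is just an example figure, not one taken from the study.

```python
# Ideal stoichiometric split of water into hydrogen and oxygen (2 H2O -> 2 H2 + O2).
# Energy cost, losses and purification are ignored; this is only a rough sketch.
M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998   # molar masses, g/mol

def propellant_from_ice(water_kg):
    """Mass of H2 and O2 obtainable from a given mass of water, ideal case."""
    mol_h2o = water_kg * 1000 / M_H2O
    h2_kg = mol_h2o * M_H2 / 1000        # one H2 molecule per H2O
    o2_kg = mol_h2o / 2 * M_O2 / 1000    # one O2 molecule per two H2O
    return h2_kg, o2_kg

print(propellant_from_ice(1000))  # roughly 112 kg of H2 and 888 kg of O2 per tonne of water
```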

Some believe a system of these “gas stations” would be the start of the industrialization of space.

Several private companies are exploring mining techniques to employ on the moon. Both Luxembourg and the United States have adopted legislation giving citizens and corporations ownership rights over resources mined in space, including the moon, according to the study.

“The idea of mining the moon and asteroids isn’t science fiction anymore,” says UCF physics Professor and co-author Dan Britt. “There are teams around the world looking to find ways to make this happen and our work will help get us closer to making the idea a reality.”

The study was supported by NASA’s Solar System Exploration Research Virtual Institute cooperative agreement with the Center for Lunar and Asteroid Surface Science (CLASS) based at UCF.

Story Source:

Materials provided by University of Central Florida. Original written by Zenaida Gonzalez Kotala. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Downsizing the McMansion: Study gauges a sustainable size for future homes

What might homes of the future look like if countries were really committed to meeting global calls for sustainability, such as the recommendations advanced by the Paris Agreement and the U.N.’s 2030 Agenda for Sustainable Development?

Much wider adoption of smart design features and renewable energy for low- to zero-carbon homes is one place to start — the U.N. estimates households consume 29% of global energy and are responsible for about 21% of the resulting CO2 emissions, a share that will only rise as the global population increases.

However, a new scholarly paper authored at New Jersey Institute of Technology (NJIT) assesses another big factor in the needed transformation of our living spaces toward sustainability — the size of our homes.

The paper published in the journal Housing, Theory & Society makes the case for transitioning away from the large, single-family homes that typify suburban sprawl, offering new conceptions for what constitutes a more sustainable and sufficient average home size in high-income countries going forward.

The article surveys more than 75 years of housing history and provides estimates for the optimal spatial dimensions that would align with an “environmentally tenable and globally equitable amount of per-person living area” today. It also spotlights five emerging cases of housing innovation around the world that could serve as models for effectively adopting more space-efficient homes of the future.

“There is no question that if we are serious about embracing our expressed commitments to sustainability, we will in the future need to live more densely and wisely,” said Maurie Cohen, the paper’s author and professor at NJIT’s Department of Humanities. “This will require a complete reversal in our understanding of what it means to enjoy a ‘good life’ and we will need to start with the centerpiece of the ‘American Dream,’ namely the location and scale of our homes.

“The notion of ‘bigger is better’ will need to be supplanted by the question of ‘how much is enough?’ Fortunately, we are beginning to see examples of this process unfolding in some countries around the world, including the United States.”

Reimagining “Sufficient” Size of Sustainable Homes

Cohen’s article explores the concept of “sufficiency limits” for the average contemporary home — or, a rough baseline metric of “enough” living space to meet one’s individual needs while considering various environmental and social factors, such as global resource availability and equitable material usage.

In the paper, Cohen reports that standardized building codes used in the United States and many other countries define minimally “sufficient” home size as 150 square feet for a single individual and 450 square feet for a four-person household.

However, from the standpoint of resource utilization and global equity, the maximally sufficient threshold is more significant.

Based on assessments of global resource availability and so-called total material consumption calculations developed by industrial ecologists and others, Cohen estimates that sustainability and equity considerations require that a home for a single person should be no larger than 215 square feet, and for a four-person family the maximum size should be approximately 860 square feet.

As a striking point of comparison, average home size in the U.S. today is 1,901 square feet — more than twice what could be considered sustainable.

Applying these sufficiency limits in the real world would mean a radical departure from the mindset common in the American homebuilding industry today, with its large cathedral-ceiling foyers, expansive porches, spare bedrooms and extra dining rooms, and a fundamental rethink of the McMansion-style homes that line the cul-de-sacs of the country’s suburbs. However, it could also spur innovation in the design of more space-efficient homes, a trend gaining popularity particularly among younger generations.

“Lifestyle magazines and websites, television programs, and other media today regularly highlight the benefits of smaller homes,” said Cohen. “One of the most popular contemporary design trends focuses on minimalism and especially Millennials express a desire to live in cosmopolitan urban centers rather than car-dependent suburbs. In some cities, micro-luxury apartments are becoming a fashionable alternative.”

Along with making the critical transition toward greener technologies, Cohen says exploring sufficiency limits in the design of future homes would help to begin aligning infrastructure planning with global sustainability targets, and address two interrelated — and in many ways perplexing — trends in wealthy countries like the U.S. ongoing since the 1950s: home size has been increasing while household size has been declining.

Over the past seven decades, the average size of a newly built single-family home in the country nearly tripled from 983 square feet in 1950 to 2,740 square feet in 2015. Meanwhile, the average number of people per household has decreased 24% (3.3 persons to 2.52 persons) due to falling fertility rates and the fading of residential arrangements in which extended families lived under a single roof.

So, what would the average newly built U.S. home look like if architects and the building industry followed the numbers and adopted sufficiency limits?

In the U.S., average floor space per person would need to be reduced from 754 square feet to 215 square feet, which, perhaps surprisingly, is roughly comparable to the amount of space available during the baby boom of the 1950s.
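The quoted figures are easy to sanity-check. A few lines of Python, using only the numbers cited above, reproduce the "more than twice" comparison and the scale of the per-person reduction:

```python
# Quick check of the figures quoted in the article; the rounding is mine.
us_avg_home_sqft      = 1901   # average U.S. home size today
sustainable_4person   = 860    # estimated maximum for a four-person household
per_person_now        = 754    # average floor space per person in the U.S.
per_person_sufficient = 215    # estimated per-person sufficiency ceiling

print(us_avg_home_sqft / sustainable_4person)        # ~2.2x the sustainable size
print(1 - per_person_sufficient / per_person_now)    # ~71% reduction per person
```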

While Cohen acknowledges the myriad political, commercial and cultural challenges of imparting such a sufficiency ceiling on current housing practices, he highlights five examples that he asserts point to shifting sensibilities: the tiny-house movement in the United States; the niche market for substantially smaller houses and apartments in the Nordic countries; the construction of accessory dwelling units in west coast cities of North America; the growing popularity of micro-apartments in New York City and San Francisco; and the emergence of co-living/co-working facilities in Europe.

“Downsizing at such a radical scale may seem unrealistic today, but lifestyles are continually in flux and when looking back on our recent practices of spending such vast sums of money on overly large houses and creating vast separations between neighbors, thirty years from now we will in all likelihood be utterly dumbfounded,” said Cohen. “The idea of spending endless hours mindlessly driving around in cars to reach houses with rooms that we rarely use, we can only hope, will become a faint memory.”

Go to Source
Author:

Categories
ScienceDaily

Magnets for the second dimension

If you’ve ever tried to put several really strong, small cube magnets right next to each other on a magnetic board, you’ll know that you just can’t do it. What happens is that the magnets always arrange themselves in a column sticking out vertically from the magnetic board. Moreover, it’s almost impossible to join several rows of these magnets together to form a flat surface. That’s because magnets are dipolar. Equal poles repel each other, with the north pole of one magnet always attaching itself to the south pole of another and vice versa. This explains why they form a column with all the magnets aligned the same way.

Now, scientists at ETH Zurich have managed to create magnetic building blocks in the shape of cubes that — for the first time ever — can be joined together to form two-dimensional shapes. The new building blocks, which the scientists call modules, are not dipolar but quadrupolar, which means they each have two north poles and two south poles. Inside each of the modules, which are 3D printed in plastic, there are two small conventional dipole magnets with their equal poles facing each other (see picture). The building blocks can be assembled like little chess boards to form any two-dimensional shape. It works like this: because south and north poles attract each other, a quadrupole building block with its two south poles facing left and right will attract, on each of its four sides, a building block that is rotated by 90 degrees so that its north poles face left and right.
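A rough way to see why the checkerboard assembly works is to approximate each module as two point dipoles with their moments head-to-head and then sum the pairwise dipole-dipole interaction energies with a neighbouring module. The Python sketch below does exactly that; the dimensions and magnetic moments are illustrative placeholders, not values from the paper.

```python
# Minimal sketch: each module = two point dipoles with like poles facing each other.
# Summing pairwise dipole-dipole energies shows that an identically oriented neighbour
# is repelled (positive energy) while a 90-degree-rotated one is attracted (negative).
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi), SI units

def dipole_energy(m1, p1, m2, p2):
    """Interaction energy of two point dipoles (moments m1, m2 at positions p1, p2)."""
    r = p2 - p1
    d = np.linalg.norm(r)
    return MU0_4PI * (np.dot(m1, m2) / d**3 - 3 * np.dot(m1, r) * np.dot(m2, r) / d**5)

def module(center, angle, half_sep=0.5e-3, moment=1e-3):
    """Two bar magnets with their moments pointing at each other -> a quadrupole module."""
    c, s = np.cos(angle), np.sin(angle)
    axis = np.array([c, s, 0.0])
    return [(+moment * axis, center - half_sep * axis),   # moment points toward the centre
            (-moment * axis, center + half_sep * axis)]   # moment points toward the centre

def interaction(mod_a, mod_b):
    return sum(dipole_energy(m1, p1, m2, p2) for m1, p1 in mod_a for m2, p2 in mod_b)

base    = module(np.array([0.0, 0.0, 0.0]), angle=0.0)
aligned = module(np.array([2.2e-3, 0.0, 0.0]), angle=0.0)        # same orientation
rotated = module(np.array([2.2e-3, 0.0, 0.0]), angle=np.pi / 2)  # rotated by 90 degrees

print("aligned neighbour:", interaction(base, aligned))   # positive -> repulsion
print("rotated neighbour:", interaction(base, rotated))   # negative -> attraction
```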

Building on this principle, the scientists made coloured modules with an edge length of just over two millimetres. They assembled them into pixel art emojis to demonstrate what the modules can do. However, possible use cases go way beyond such gimmicks. “We’re particularly interested in applications in the field of soft robotics,” says Hongri Gu, a doctoral student in Professor Bradley Nelson’s group at ETH and lead author of the paper that the scientists recently published in Science Robotics.

Quadrupole and dipole in the same building block

The quadrupole dominates the magnetic properties of the modules. It is a little more complicated than that, though, because in addition to the strong quadrupole, the scientists also built a weak dipole into the building blocks. They achieved this by arranging the little magnets in the module at a slight angle to each other rather than parallel (see picture).

“This causes the modules to align themselves with an external magnetic field, like a compass needle does,” Gu explains. “With a variable magnetic field, we can then move the shapes we have built out of the modules. Add in some flexible connectors and it’s even possible to build robots that can be controlled by a magnetic field.”

Gu says that their work was initially about developing the new principle. It is size-independent, he says, meaning that there is no reason why much smaller quadrupole modules couldn’t be developed. The scientists are also studying how the modules could be used to assemble a linear structure into a multidimensional object with the help of a magnetic field. This is something that could be of use in medicine in the future: it is conceivable that objects such as stents could be formed from a thread consisting of such modules. The thread could be inserted into the body in a relatively simple, minimally invasive procedure through a tiny opening, and then a magnetic field applied to assemble it into the final multidimensional structure inside the body.

Story Source:

Materials provided by ETH Zurich. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

2D antimony holds promise for post-silicon electronics

Not everything is bigger in Texas — some things are really, really small. A group of engineers at The University of Texas at Austin may have found a new material for manufacturing even smaller computer chips that could replace silicon and help overcome one of the biggest challenges facing the tech industry in decades: the inevitable end of Moore’s Law.

In 1965, Gordon Moore, co-founder of Intel, predicted the number of transistors that could fit on a computer chip would double every two years, while the cost of computers would be cut in half. More than half a century later, Moore’s Law continues to be surprisingly accurate. Except for one glitch.

Silicon has been used in most electronic devices because of its wide availability and ideal semiconductor properties. But chips have shrunk so much that silicon is no longer capable of carrying more transistors. So, engineers believe the era of Moore’s Law may be coming to an end, for silicon at least. There simply isn’t enough room on existing chips to keep doubling the number of transistors.

Researchers in the Cockrell School of Engineering are searching for other materials with semiconducting properties that could form the basis for an alternative chip. Yuanyue Liu, an assistant professor in the Walker Department of Mechanical Engineering and a member of UT’s Texas Materials Institute, may have found that material.

In a paper published in the Journal of the American Chemical Society, Liu and his team, postdoctoral fellow Long Cheng and graduate student Chenmu Zhang, outline their discovery that, in its 2D form, the chemical element antimony may serve as a suitable alternative to silicon.

Antimony is a semi-metal already used in electronics for some semiconductor devices, such as infrared detectors. In its 2D form, it is only a couple of atomic layers thick and has a high charge mobility — the speed at which a charge moves through a material when pulled by an electric field. Antimony’s charge mobility is much higher than that of other semiconductors of similar size, including silicon. This property makes it promising as a building block for post-silicon electronics.
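As a quick illustration of what charge mobility means in practice, drift velocity is simply mobility times the applied electric field. In the sketch below, the silicon number is the commonly cited room-temperature electron mobility; the "high mobility" figure is a hypothetical placeholder, not a value from Liu's paper.

```python
# Drift velocity = mobility x electric field. A higher-mobility material moves charge
# faster at the same field, which is why mobility matters for fast transistors.
def drift_velocity(mobility_cm2_per_Vs, field_V_per_cm):
    return mobility_cm2_per_Vs * field_V_per_cm   # result in cm/s

field = 1e3                        # V/cm, an arbitrary example field
si_electron_mobility = 1400        # cm^2/(V*s), approximate room-temperature bulk silicon
hypothetical_2d_mobility = 5000    # cm^2/(V*s), placeholder for a higher-mobility 2D material

print(drift_velocity(si_electron_mobility, field))      # 1.4e6 cm/s
print(drift_velocity(hypothetical_2d_mobility, field))  # 5.0e6 cm/s
```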

So far, Liu has demonstrated its potential only through theoretical computational methods, but he is confident the material will exhibit the same properties when tested with physical antimony samples, which is the team’s next step. But the findings have even broader significance than simply identifying a potential replacement for silicon in the race to maintain Moore’s Law into the future.

“More importantly, we have uncovered the physical origins of why antimony has a high mobility,” Liu said. “These findings could be used to potentially discover even better materials.”

Story Source:

Materials provided by University of Texas at Austin. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
IEEE Spectrum

Skydio’s Dock in a Box Enables Long-Term Autonomy for Drone Applications

The word “autonomy” in the context of drones (or really any other robot) can mean a whole bunch of different things. Skydio’s newest drone, which you can read lots more about here, is probably the most autonomous drone that we’ve ever seen, in the sense that it can fly itself while tracking subjects and avoiding obstacles. But as soon as the Skydio 2 lands, it’s completely helpless, dependent on a human to pick it up, pack it into a case, and take it back home to recharge.

For consumer applications, this is not a big deal. But for industry, a big part of the appeal of autonomy is being able to deliver results with a minimum of human involvement, since humans are expensive and almost always busy doing other things.

Today, Skydio is announcing the Skydio 2 Dock, a (mostly) self-contained home base that a Skydio 2 drone can snuggle up inside to relax and recharge in between autonomous missions, meaning that you can set it up almost anywhere and get true long-term full autonomy from your drone.

Categories
Hackster.io

576 Ping Pong Balls Form Gigantic “Hard Disk Defrag Visualiser”

Manoj Nathwani and his brother Dhiresh came up with a really neat display for EMF Camp 2016 using WS2811 LED strips, ping pong balls, and a wooden fiberboard backing with a huge number (576, to be exact) of holes in it to form a grid. As with many projects, actually documenting it remained a to-do item for quite some time, but as 2019 crawled to its end, Nathwani finally decided to write it up.

The display was cleverly divided into two 16×8 halves for easy transportation, and after quite a bit of gluing, cutting, and wiring work, the brothers hooked each assembly up to a NodeMCU for testing. Things looked great, so they proceeded to the camp, where they would assemble the device for display in all its glory…

Unfortunately, they found that things don’t always work the same way in the real world as in a controlled environment, and after hooking it up, it didn’t behave as planned. It could start to run through a color wipe animation, but around 200 LEDs in, things got a bit wonky. Even after swapping out the NodeMCU for an Arduino Uno and checking connections, it still misbehaved. On further analysis, they realized that by daisy-chaining all of these strips together they were transmitting data over an effective cable length of 48 meters.
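For context on how a grid of ping pong balls ends up "200 LEDs in" on a single data line, matrices like this are usually wired as one serpentine daisy chain, so every (x, y) pixel maps to an index along the strip. The exact dimensions and wiring of this particular display aren't documented here, so the layout in the Python sketch below is an assumption chosen only to illustrate the mapping.

```python
# Minimal sketch: map grid coordinates to an LED index on a serpentine-wired strip.
# A 24x24 grid is one possible 576-pixel layout; the real panel may be wired differently.
WIDTH, HEIGHT = 24, 24

def strip_index(x, y):
    """Return the position of pixel (x, y) along a single daisy-chained strip."""
    if y % 2 == 0:
        return y * WIDTH + x               # even rows run left to right
    return y * WIDTH + (WIDTH - 1 - x)     # odd rows run right to left (wiring doubles back)

print(strip_index(0, 0))    # 0   -> first LED on the strip
print(strip_index(0, 1))    # 47  -> the chain has already snaked back across the row
print(strip_index(23, 23))  # 552 -> deep into the chain, where signal problems showed up
```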

The solution would have been to add a new data cable incrementally throughout the setup, or to use a specialized LED strip driver such as the FadeCandy from Adafruit. The good news, however, is that people still seemed to love this glitchy display, especially after it was jokingly renamed the “Hard Disk Defrag Visualiser.”

Setting aside the issues around not being able to fully program the matrix to display what we wanted, this was still a super fun project to work on! There’s something really enjoyable about building a large physical installation for a festival and sharing your creations with others who really take time to appreciate what you’ve built. There’s also something therapeutic about drilling 576 holes in a wooden board, carefully cutting an X onto a ping pong ball and then feeding an LED into it piece by piece. The end result was also really pretty to look at, especially outdoors at night, and it really came out much better than we originally imagined. At some point in the future I’ll likely take out the LED strips again and get all 48 meters of it working correctly!

Go to Source
Author: Jeremy S. Cook

Categories
IEEE Spectrum

FarmWise Raises $14.5 Million to Teach Giant Robots to Grow Our Food

We humans spend most of our time getting hungry or eating, which must be really inconvenient for the people who have to produce food for everyone. For a sustainable and tasty future, we’ll need to make the most of what we’ve got by growing more food with less effort, and that’s where the robots can help us out a little bit.

FarmWise, a California-based startup, is looking to enhance farming efficiency by automating everything from seeding to harvesting, starting with the worst task of all: weeding. And they’ve just raised US $14.5 million to do it.

Categories
Hackster.io

Use This Adapter to Connect a Retro Keyboard to Your Modern Computer

As far as the average person is concerned, the keyboard they use doesn’t really matter. But for those of us who take typing seriously, our keyboards are very important. While there are certainly many high-quality mechanical keyboards on the market today, many people still prefer retro keyboards like the legendary IBM Model M from the ’80s. However, those often require ports that are no longer in use today. Fortunately there is a solution, and you can use this XTiny adapter to connect XT/AT keyboards to a modern computer’s USB port.

Many vintage keyboards from the ’80s and ’90s, including the IBM Model M and Model F keyboards, utilized either the XT or AT interface. Even if your computer today has a PS/2 port, it isn’t as simple as rewiring the connector — the protocol is also different. That’s why this XTiny adapter is necessary. It accepts the connector — a large 5-pin DIN — from a keyboard that uses the XT or AT protocol, and has a micro USB-B port for output. All you have to do is plug your keyboard into the XTiny, connect a USB cable from that to your computer, and you can start typing in retro style.

To accomplish that, the XTiny board utilizes a Microchip ATmega32u4 microcontroller — the same one found in the Arduino Micro and Leonardo development boards. That microcontroller processes the input from your vintage keyboard, and then repeats it out through the USB port to your computer. When connected, it shows up just like any other USB HID keyboard. If you want the XTiny adapter, it costs just $19.99 through the Future Retro Tindie store. However, it is currently out of stock, so you’ll have to join the wait list to get one.
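Conceptually, the conversion boils down to a lookup: read an XT "set 1" scan code from the keyboard and report the matching USB HID usage ID to the host. The Python sketch below illustrates only that translation step; it is not the adapter's firmware (which runs on the ATmega32u4), and the handful of codes shown are the commonly documented values, worth checking against the full scan-code and HID tables before relying on them.

```python
# Illustrative XT (scan code set 1) to USB HID translation. Only a few keys are shown,
# and the values should be verified against the official tables; this is not firmware.
XT_TO_HID = {
    0x01: 0x29,  # Esc
    0x1C: 0x28,  # Enter
    0x39: 0x2C,  # Space
    0x1E: 0x04,  # A
    0x1F: 0x16,  # S
    0x20: 0x07,  # D
}

def translate(xt_code):
    """Return (hid_usage, pressed). In set 1, a set high bit marks a key release."""
    pressed = not (xt_code & 0x80)
    hid = XT_TO_HID.get(xt_code & 0x7F)
    return hid, pressed

print(translate(0x1E))  # (0x04, True)  -> 'A' pressed
print(translate(0x9E))  # (0x04, False) -> 'A' released
```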

Go to Source
Author: Cameron Coward

Categories
ScienceDaily

Spreading light over quantum computers

Scientists at Linköping University have shown how a quantum computer really works and have managed to simulate quantum computer properties in a classical computer. “Our results should be highly significant in determining how to build quantum computers,” says Professor Jan-Åke Larsson.

The dream of superfast and powerful quantum computers has again been brought into focus, and large resources have been invested in research in Sweden, Europe and the world. A Swedish quantum computer is to be built within ten years, and the EU has designated quantum technology one of its flagship projects.

At the moment, few useful algorithms are available for quantum computers, but it is expected that the technology will be hugely significant in simulations of biological, chemical and physical systems that are far too complicated for even the most powerful computers currently available. A bit in a classical computer can take only the value one or zero, but a quantum bit can exist in a superposition of both at once. Simply put, this means that quantum computers do not need as many operations for each calculation they carry out.

Professor Jan-Åke Larsson and his doctoral student Niklas Johansson, in the Division for Information Coding at the Department of Electrical Engineering, Linköping University, have come to grips with what happens in a quantum computer and why it is more powerful than a classical computer. Their results have been published in the scientific journal Entropy.

“We have shown that the major difference is that quantum computers have two degrees of freedom for each bit. By simulating an additional degree of freedom in a classical computer, we can run some of the algorithms at the same speed as they would achieve in a quantum computer,” says Jan-Åke Larsson.

They have constructed a simulation tool, Quantum Simulation Logic, QSL, that enables them to simulate the operation of a quantum computer in a classical computer. The simulation tool contains one, and only one, property that a quantum computer has that a classical computer does not: one extra degree of freedom for each bit that is part of the calculation.

“Thus, each bit has two degrees of freedom: it can be compared with a mechanical system in which each part has two degrees of freedom — position and speed. In this case, we deal with computation bits — which carry information about the result of the function, and phase bits — which carry information about the structure of the function,” Jan-Åke Larsson explains.
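To make the "two degrees of freedom per bit" idea concrete, here is a minimal toy sketch in Python: each bit carries a computation value and a phase value, a Hadamard-like gate swaps the two, and a CNOT writes forward on the computation bits while phase information flows backward onto the control. These gate rules are a simplified illustration of the description above, not the published Quantum Simulation Logic definitions.

```python
# Toy illustration of giving each classical bit an extra "phase" degree of freedom.
# Simplified gate rules for illustration only; not the authors' published QSL.
from dataclasses import dataclass

@dataclass
class Bit:
    x: int = 0   # computation bit: carries the value/result of the function
    p: int = 0   # phase bit: carries information about the structure of the function

def NOT(a: Bit):
    a.x ^= 1                      # flip the computation value

def Z(a: Bit):
    a.p ^= 1                      # flip the phase value

def H(a: Bit):
    a.x, a.p = a.p, a.x           # swap the two degrees of freedom

def CNOT(ctrl: Bit, tgt: Bit):
    tgt.x ^= ctrl.x               # value flows from control to target
    ctrl.p ^= tgt.p               # phase information flows back from target to control

# Tiny demo: the extra degree of freedom lets structure "kick back" onto the control bit.
a, b = Bit(x=1), Bit(p=1)
CNOT(a, b)
print(a, b)   # Bit(x=1, p=1) Bit(x=1, p=1) -- the control picked up phase info from the target
```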

They have used the simulation tool to study some of the quantum algorithms that manage the structure of the function. Several of the algorithms run as fast in the simulation as they would in a quantum computer.

“The result shows that the higher speed in quantum computers comes from their ability to store, process and retrieve information in one additional information-carrying degree of freedom. This enables us to better understand how quantum computers work. Also, this knowledge should make it easier to build quantum computers, since we know which property is most important for the quantum computer to work as expected,” says Jan-Åke Larsson.

Jan-Åke Larsson and his co-workers have also supplemented their theoretical simulations with a physical version built with electronic components. The gates are similar to those used in quantum computers, and the toolkit simulates how a quantum computer works. With its help, students can, for example, simulate and understand how quantum cryptography and quantum teleportation work, as well as some of the most common quantum computing algorithms, such as Shor’s algorithm for factorisation. (The algorithm works in the current version of the simulation, but only as fast — or slow — as it would on a classical computer.)

Story Source:

Materials provided by Linköping University. Note: Content may be edited for style and length.

Go to Source
Author: