Categories
ScienceDaily

New process for efficient removal of steroid hormones from water

Micropollutants contaminate water worldwide. Among them are steroid hormones that cannot be eliminated efficiently by conventional processes. Researchers at Karlsruhe Institute of Technology (KIT) have now developed an innovative filtration system that combines a polymer membrane with activated carbon. Because the carbon particles are very small, the system can reach the reference value of 1 nanogram of estradiol — the physiologically most effective estrogen — per liter of drinking water proposed by the European Commission. The improved method is reported in Water Research.

Supplying people with clean water is one of the biggest challenges of the 21st century worldwide. Often, drinking water is contaminated with micropollutants. Among them are steroid hormones that are used as medical substances and contraceptives. Their concentration in one liter of water into which treated wastewater is fed may be only a few nanograms, but even this small amount can damage human health and affect the environment. Because of their low concentration and the small size of the molecules, steroid hormones are not only difficult to detect but also difficult to remove. Conventional sewage treatment technologies are not sufficient.

Reference Value of the European Commission Is Reached

Professor Andrea Iris Schäfer, Head of KIT’s Institute for Advanced Membrane Technology (IAMT), and her team have now developed an innovative method for the quick and energy-efficient elimination of steroid hormones from wastewater. Their technology combines a polymer membrane with activated carbon. “First, water is pressed through a semipermeable membrane that eliminates larger impurities and microorganisms,” Schäfer explains. “Then, water flows through the layer of carbon particles behind it, which bind the hormone molecules.” At IAMT, researchers have further developed and improved this process together with the filter manufacturer Blücher GmbH, Erkrath. Colleagues at KIT’s Institute of Functional Interfaces (IFG), Institute for Applied Materials (IAM), and the Karlsruhe Nano Micro Facility (KNMF) supported this further development by characterizing the material. The scientists report their results in Water Research. “Our technology makes it possible to reach the reference value of 1 nanogram of estradiol per liter of drinking water proposed by the European Commission,” says the Professor of Water Process Engineering.

Particle Size and Oxygen Concentration Are Important

Scientists studied the processes in the activated carbon layer in more detail and used modified carbon particles (polymer-based spherical activated carbon — PBSAC). “It all depends on the diameter of the carbon particles,” explains Matteo Tagliavini of IAMT, first author of the publication. “The smaller the particle diameter, the larger the external surface of the activated carbon layer available for adsorption of hormone molecules.” In an activated carbon layer of 2 mm thickness, the researchers decreased the particle diameter from 640 to 80 µm and succeeded in eliminating 96% of the estradiol contained in the water. By increasing the oxygen concentration in the activated carbon, the adsorption kinetics were further improved and an estradiol separation efficiency of more than 99% was achieved. “The method allows for a high water flow rate at low pressure, is energy-efficient, and separates many molecules without producing any harmful by-products. It can be used flexibly in systems of variable size, from the tap to industrial facilities,” Schäfer says.
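As a rough illustration of why smaller beads help: for ideal spherical particles, the external surface per unit volume of carbon scales as 6/d, so shrinking the diameter from 640 to 80 µm increases the outer surface available for hormone adsorption roughly eightfold. Below is a minimal sketch of that estimate, assuming monodisperse ideal spheres (a simplification of the real PBSAC material, not the study's own calculation).

```python
# Back-of-the-envelope estimate: external surface per unit volume of a sphere
# of diameter d is (pi * d^2) / (pi * d^3 / 6) = 6 / d.

def external_area_per_volume(d_m: float) -> float:
    """Surface-to-volume ratio (1/m) of an ideal sphere of diameter d_m (meters)."""
    return 6.0 / d_m

for d_um in (640, 80):
    a = external_area_per_volume(d_um * 1e-6)
    print(f"d = {d_um:3d} um -> {a:,.0f} m^2 of outer surface per m^3 of carbon")

# Ratio 640/80 = 8: eight times more external surface for the same carbon volume.
print(external_area_per_volume(80e-6) / external_area_per_volume(640e-6))
```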

Story Source:

Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.


Categories
ScienceDaily

Physicists find misaligned carbon sheets yield unparalleled properties

A material composed of two one-atom-thick layers of carbon has grabbed the attention of physicists worldwide for its intriguing — and potentially exploitable — conductive properties.

Dr. Fan Zhang, assistant professor of physics in the School of Natural Sciences and Mathematics at The University of Texas at Dallas, and physics doctoral student Qiyue Wang published an article in June with Dr. Fengnian Xia’s group at Yale University in Nature Photonics that describes how the ability of twisted bilayer graphene to conduct electrical current changes in response to mid-infrared light.

From One to Two Layers

Graphene is a single layer of carbon atoms arranged in a flat honeycomb pattern, where each hexagon is formed by six carbon atoms at its vertices. Since graphene’s first isolation in 2004, its unique properties have been intensely studied by scientists for potential use in advanced computers, materials and devices.

If two sheets of graphene are stacked on top of one another, and one layer is rotated so that the layers are slightly out of alignment, the resulting physical configuration, called twisted bilayer graphene, yields electronic properties that differ significantly from those exhibited by a single layer alone or by two aligned layers.

“Graphene has been of interest for about 15 years,” Zhang said. “A single layer is interesting to study, but if we have two layers, their interaction should render much richer and more interesting physics. This is why we want to study bilayer graphene systems.”

A New Field Emerges

When the graphene layers are misaligned, a new periodic design in the mesh emerges, called a moiré pattern. The moiré pattern is also a hexagon, but it can be made up of more than 10,000 carbon atoms.

“The angle at which the two layers of graphene are misaligned — the twist angle — is critically important to the material’s electronic properties,” Wang said. “The smaller the twist angle, the larger the moiré periodicity.”
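Wang's rule of thumb can be made concrete with the standard small-angle relation for the moiré period, L ≈ a / (2 sin(θ/2)), where a ≈ 0.246 nm is graphene's lattice constant; at the 1.1-degree magic angle this gives a period of roughly 13 nm and on the order of 10,000 atoms per moiré cell, consistent with the figure quoted above. A short sketch follows (the formula and the atom-count estimate are textbook approximations, not taken from the paper):

```python
# Moiré period of twisted bilayer graphene from the twist angle,
# using the standard small-twist approximation L = a / (2 * sin(theta / 2)).
import math

A_GRAPHENE_NM = 0.246  # graphene lattice constant in nanometers

def moire_period_nm(twist_deg: float) -> float:
    return A_GRAPHENE_NM / (2.0 * math.sin(math.radians(twist_deg) / 2.0))

for theta in (1.8, 1.1, 0.5):
    L = moire_period_nm(theta)
    atoms = 4 * (L / A_GRAPHENE_NM) ** 2  # ~4 atoms per graphene cell area (2 layers x 2 atoms)
    print(f"twist {theta} deg -> moiré period ≈ {L:5.1f} nm, ≈ {atoms:,.0f} atoms per moiré cell")
```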

The unusual effects of specific twist angles on electron behavior were first proposed in a 2011 article by Dr. Allan MacDonald, professor of physics at UT Austin, and Dr. Rafi Bistritzer. Zhang witnessed the birth of this field as a doctoral student in MacDonald’s group.

“At that time, others really paid no attention to the theory, but now it has become arguably the hottest topic in physics,” Zhang said.

In that 2011 research MacDonald and Bistritzer predicted that electrons’ kinetic energy can vanish in a graphene bilayer misaligned by the so-called “magic angle” of 1.1 degrees. In 2018, researchers at the Massachusetts Institute of Technology proved this theory, finding that offsetting two graphene layers by 1.1 degrees produced a two-dimensional superconductor, a material that conducts electrical current with no resistance and no energy loss.

In a 2019 article in Science Advances, Zhang and Wang, together with Dr. Jeanie Lau’s group at The Ohio State University, showed that when offset by 0.93 degrees, twisted bilayer graphene exhibits both superconducting and insulating states, thereby widening the magic angle significantly.

“In our previous work, we saw superconductivity as well as insulation. That’s what’s making the study of twisted bilayer graphene such a hot field — superconductivity. The fact that you can manipulate pure carbon to superconduct is amazing and unprecedented,” Wang said.

New UT Dallas Findings

In the most recent research, published in Nature Photonics, Zhang and his collaborators at Yale investigated whether and how twisted bilayer graphene interacts with mid-infrared light, which humans can’t see but can detect as heat. “Interactions between light and matter are useful in many devices — for example, converting sunlight into electrical power,” Wang said. “Almost every object emits infrared light, including people, and this light can be detected with devices.”

Zhang is a theoretical physicist, so he and Wang set out to determine how mid-infrared light might affect the conductance of electrons in twisted bilayer graphene. Their work involved calculating the light absorption based on the moiré pattern’s band structure, a concept that determines how electrons move in a material quantum mechanically.

“There are standard ways to calculate the band structure and light absorption in a regular crystal, but this is an artificial crystal, so we had to come up with a new method,” Wang said. Using resources of the Texas Advanced Computing Center, a supercomputer facility on the UT Austin campus, Wang calculated the band structure and showed how the material absorbs light.

The Yale group fabricated devices and ran experiments showing that the mid-infrared photoresponse — the increase in conductance due to the light shining — was unusually strong and largest at the twist angle of 1.8 degrees. The strong photoresponse vanished for a twist angle less than 0.5 degrees.

“Our theoretical results not only matched well with the experimental findings, but also pointed to a mechanism that is fundamentally connected to the period of the moiré pattern, which itself is connected to the twist angle between the two graphene layers,” Zhang said.

Next Step

“The twist angle is clearly very important in determining the properties of twisted bilayer graphene,” Zhang added. “The question arises: Can we apply this to tune other two-dimensional materials to get unprecedented features? Also, can we combine the photoresponse and the superconductivity in twisted bilayer graphene? For example, can shining a light induce or somehow modulate superconductivity? That will be very interesting to study.”

“This new breakthrough will potentially enable a new class of infrared detectors based on graphene with high sensitivity,” said Dr. Joe Qiu, program manager for solid-state electronics and electromagnetics at the U.S. Army Research Office (ARO), an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “These new detectors will potentially impact applications such as night vision, which is of critical importance for the U.S. Army.”

In addition to the Yale researchers, other authors included scientists from the National Institute for Materials Science in Japan. The ARO, the National Science Foundation and the Office of Naval Research supported the study.


Categories
ProgrammableWeb

Torry Harris Integration Solutions Launches New Product Portal

Torry Harris Integration Solutions (THIS) is an advisor to enterprises worldwide on extending the power of digital access through integration. The company recently announced the launch of its product portal, with new SaaS plans to ease customers’ purchasing journey.

“In a post-COVID world, building digital ecosystems takes on more significance, especially to SMEs, who rely on physical contact often within tight geographical boundaries and are more severely impacted economically. We are working with enterprise customers to equip them with tools, training and strategy for empowering the SME segment,” said Karthik TS, Head of Center of Excellence at Torry Harris.

According to Shuba Sridhar, Vice President – Strategic Initiatives, “Our products enable digital ecosystems through the adoption and use of API-driven digital opportunities. With our SaaS plans, we provide more flexibility and accelerated go-to-market with white-label options. We are currently offering contextual training services for digital upskilling of customer teams.”

The Torry Harris product portfolio includes:

Tools to enable online trade, bundled offerings with partners

Digital Marketplace – Marketplace-in-a-box, for a quicker go-to-market
IoT Glue® – A mobile-enabled IoT Integration platform to connect disparate things
Concierge Bank™ – A marketplace banking accelerator
DigitMarket™ API Manager – A complete package to manage & monetise APIs
Developer Portal – An API store to discover and expedite API adoption
API Gateway – To manage API connections seamlessly and securely
Publisher Portal – An all-encompassing hub to control API activities

RepoPro™ – Enterprise Repository, simplifies asset storage and tracking
Automaton™ – No-code testing automation tool
AutoStub® – Fast-tracks API testing and delivery
Digit IT Services – Systems integration services include consultancy, strategy, execution and support

Author: ProgrammableWeb PR

Categories
ScienceDaily

Supercomputing future wind power rise

Wind power surged worldwide in 2019, but can that growth be sustained? More than 340,000 wind turbines supplied over 591 gigawatts of capacity globally. In the U.S., wind powered the equivalent of 32 million homes and sustained 500 U.S. factories. What’s more, in 2019 wind power grew by 19 percent, thanks to booming offshore and onshore projects in both the U.S. and China.

A study by Cornell University researchers used supercomputers to look into the future of how to make an even bigger jump in wind power capacity in the U.S.

“This research is the first detailed study designed to develop scenarios for how wind energy can expand from the current levels of seven percent of U.S. electricity supply to achieve the 20 percent by 2030 goal outlined by the U.S. Department of Energy National Renewable Energy Laboratory (NREL) in 2014,” said study co-author Sara C. Pryor, a professor in the Department of Earth and Atmospheric Sciences at Cornell University. Pryor and co-authors published the wind power study in Scientific Reports in February 2020.

The Cornell study investigated plausible scenarios for how the expansion of installed wind turbine capacity could be achieved without using additional land. Their results showed that the U.S. could double or even quadruple the installed capacity with little change to system-wide efficiency. What’s more, the additional capacity would have only very small impacts on local climate. This is achieved in part by deploying next-generation, larger wind turbines.

The study focused on a potential pitfall: whether adding more turbines in a given area might decrease their output or even disrupt the local climate, effects caused by what are referred to as ‘wind turbine wakes.’ Like the water wake behind a motorboat, wind turbines create a wake of slower, choppy air that eventually spreads out and recovers its momentum.

“This effect has been subject to extensive modelling by the industry for many years, and it is still a highly complex dynamic to model,” Pryor said.

The researchers conducted simulations with the widely used Weather Research and Forecasting (WRF) model, developed by the National Center for Atmospheric Research. They applied the model over the eastern part of the U.S., where half of the current national wind energy capacity is located.

“We then found the locations of all 18,200 wind turbines operating within the eastern U.S. along with their turbine type,” Pryor said. She added that those locations are from 2014 data, when the NREL study was published.

“For each wind turbine in this region, we determined their physical dimensions (height), power, and thrust curves so that for each 10-minute simulation period we can use a wind farm parameterization within WRF to compute how much power each turbine would generate and how extensive their wake would be,” she said. Power output and wake extent are both functions of the wind speed striking the turbines, and together they determine the local near-surface climate impact. The simulations were conducted at a grid resolution of 4 km by 4 km in order to provide detailed local information.
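In spirit, the per-turbine power calculation is a lookup on each turbine's power curve at the simulated hub-height wind speed for every 10-minute step. The sketch below uses an entirely hypothetical power curve and synthetic wind speeds to illustrate that bookkeeping; it is not the WRF wind-farm parameterization or the turbine data used in the study.

```python
# Illustrative only: converting 10-minute hub-height wind speeds into energy
# via a made-up turbine power curve.
import numpy as np

# Hypothetical power curve: cut-in ~3 m/s, rated ~3 MW at 13 m/s, cut-out at 25 m/s
curve_speeds_ms = np.array([0.0, 3.0, 5.0, 8.0, 11.0, 13.0, 25.0, 25.01])
curve_power_mw  = np.array([0.0, 0.0, 0.4, 1.5, 2.9, 3.0, 3.0, 0.0])

def turbine_power_mw(u_ms):
    """Interpolate the power curve at hub-height wind speed u (m/s)."""
    return np.interp(u_ms, curve_speeds_ms, curve_power_mw)

# One synthetic day of 10-minute wind speeds (144 values)
u_series = 8.0 + 3.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 144))
energy_mwh = turbine_power_mw(u_series).sum() * (10.0 / 60.0)  # MW x 10-minute steps
print(f"Energy for this synthetic day: {energy_mwh:.1f} MWh")
```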

The authors chose two sets of simulation years because wind resources vary from year to year as a result of natural climate variability. “Our simulations are conducted for a year with relatively high wind speeds (2008) and one with lower wind speeds (2015/16),” Pryor said, citing the interannual variability in climate from the El Niño-Southern Oscillation. “We performed simulations for a base case in both years without the presence/action of wind turbines so we can use this as a reference against which to describe the impact from wind turbines on local climates,” Pryor said.

The simulations were then repeated for a wind turbine fleet as of 2014, then for doubled installed capacity and quadrupled installed capacity, which represents the capacity necessary to achieve the 20 percent of electricity supply from wind turbines in 2030.

“Using these three scenarios we can assess how much power would be generated from each situation and thus if the electrical power production is linearly proportional to the installed capacity or if at very high penetration levels the loss of production due to wakes starts to decrease efficiency,” Pryor said.

These simulations are massively computationally demanding. The simulation domain covers 675 by 657 grid cells in the horizontal and 41 layers in the vertical. “All our simulations were performed on the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) computational resource known as Cori. Simulations presented in our paper consumed over 500,000 CPU hours on Cori and took over a calendar year to complete on the NERSC Cray. That resource is designed for massively parallel computing but not for analysis of the resulting simulation output,” Pryor said.

“Thus, all of our analyses were performed on the XSEDE Jetstream resource using parallel processing and big data analytics in MATLAB,” Pryor added. The Extreme Science and Engineering Discovery Environment (XSEDE) awards supercomputer resources and expertise to researchers and is funded by the National Science Foundation (NSF).

The NSF-funded Jetstream cloud environment is supported by Indiana University, the University of Arizona, and the Texas Advanced Computing Center (TACC). Jetstream is a configurable large-scale computing resource that leverages both on-demand and persistent virtual machine technology to support a much wider array of software environments and services than current NSF resources can accommodate.

“Our work is unprecedented in the level of detail in the wind turbine descriptions, the use of self-consistent projections for increased installed capacity, study domain size, and the duration of the simulations,” Pryor said. However, she acknowledged uncertainty about the best way to parameterize the action of the wind turbines on the atmosphere, and specifically the downstream recovery of wakes.

The team is currently working on how to design, test, develop, and improve wind farm parameterizations for use in WRF. The Cornell team recently published on this topic in the Journal of Applied Meteorology and Climatology, with all the analysis performed on XSEDE resources (this time on Wrangler, a TACC system), and has requested additional XSEDE resources to further advance that research.

Wind energy could play a bigger role in reducing carbon dioxide emissions from energy production, according to the study authors. Wind turbines repay the carbon emissions associated with their fabrication and deployment within three to seven months of operation, leaving nearly 30 years of virtually carbon-free electricity generation over their lifetime.
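A quick back-of-the-envelope check of that claim, assuming a roughly 30-year turbine lifetime (our assumption, consistent with the figure in the text):

```python
# Rough arithmetic check, not a figure from the study: share of a ~30-year
# lifetime that remains after the carbon payback period.
lifetime_months = 30 * 12
for payback_months in (3, 7):
    remaining = lifetime_months - payback_months
    print(f"payback {payback_months} months -> ~{remaining / 12:.1f} years "
          f"({remaining / lifetime_months:.1%} of lifetime) effectively carbon-free")
```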

“Our work is designed to inform the expansion of this industry and ensure it’s done in a way that maximizes the energy output from wind and thus continues the trend towards lower cost of energy from wind. This will benefit commercial and domestic electricity users by ensuring continued low electricity prices while helping to reduce global climate change by shifting toward a low-carbon energy supply,” Pryor said.

Said Pryor: “Energy systems are complex, and the atmospheric drivers of wind energy resources vary across time scales from seconds to decades. To fully understand where best to place wind turbines, and which wind turbines to deploy requires long-duration, high-fidelity, and high-resolution numerical simulations on high performance computing systems. Making better calculations of the wind resource at locations across the U.S. can ensure better decision making and a better, more robust energy supply.”


Categories
ScienceDaily

Magnetic monopoles detected in Kagome spin ice systems

Magnetic monopoles were detected for the first time worldwide at the Berlin Neutron Source BER II in 2008. At that time, they were observed in a one-dimensional spin system of a dysprosium compound. About 10 years ago, monopole quasi-particles could also be detected in two-dimensional spin-ice systems consisting of tetrahedral crystal units. However, these spin-ice materials were electrical insulators.

Now: Magnetic monopoles in a metal

Dr. Kan Zhao and Prof. Philipp Gegenwart from the University of Augsburg, together with teams from the Heinz Maier-Leibnitz Centre, Forschungszentrum Jülich, the University of Colorado, the Academy of Sciences in Prague, and the Helmholtz-Zentrum Berlin, have now shown for the first time that a metallic compound can also form such magnetic monopoles. The team in Augsburg prepared crystalline samples from the elements holmium, silver and germanium for this purpose.

Kagome spin-ice system means frustration

In the HoAgGe crystals, the magnetic moments (spins) of the holmium atoms form a so-called two-dimensional Kagome pattern. This name comes from the Japanese Kagome braiding art, in which the braiding bands are not woven at right angles to each other, but in such a way that triangular patterns are formed.

In the Kagome pattern, the spins of neighbouring atoms cannot all be aligned antiparallel to each other, as they usually would be. Instead, there are two permitted spin configurations on each triangle: either the spins of two of the three atoms point towards the center of the triangle while the spin of the third points away from it, or it is exactly the other way round: one spin points towards the center and the other two point away from it. This limits the possible spin arrangements — hence the name “Kagome spin ice.” One consequence is that the system behaves as if magnetic monopoles were present in it.
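Stated compactly, the ice rule keeps only six of the eight possible orientations of the three spins on a triangle: two-in/one-out or one-in/two-out. A tiny enumeration sketch (the sign convention and labels are ours, purely for illustration):

```python
# Enumerate the 2^3 spin configurations on one Kagome triangle and keep those
# obeying the ice rule. +1 = spin points toward the triangle's center, -1 = outward.
from itertools import product

allowed = [cfg for cfg in product((+1, -1), repeat=3) if abs(sum(cfg)) == 1]
print(f"{len(allowed)} of 8 configurations are allowed:")
for cfg in allowed:
    label = "two-in/one-out" if sum(cfg) == +1 else "one-in/two-out"
    print(cfg, label)
```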

This behaviour has now been experimentally demonstrated for the first time in HoAgGe crystals by the collaboration led by the Augsburg researchers. They cooled the samples to temperatures near absolute zero and examined them under external magnetic fields of varying strength. Some of the experiments were carried out at the Heinz Maier-Leibnitz Centre in Garching near Munich. They were supported by the HZB’s sample environment department, which provided a superconducting cryomagnet for the experiments at the FRM-II.

Data on the spin energy spectrum at NEAT

In this way they were able to generate the different spin arrangements expected in a Kagome spin-ice system. Model calculations by the Augsburg research team showed what the energy spectrum of the spins should look like. This energy spectrum could then be measured using inelastic neutron scattering at the NEAT instrument at the Berlin neutron source. “This was the final building block for detecting the magnetic monopoles in this system. The agreement with the theoretically predicted spectra is really excellent,” says Dr. Margarita Russina, who is responsible for the NEAT instrument at HZB.

Story Source:

Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.


Categories
ProgrammableWeb

New Coronavirus API Tracks Reported Case Data Worldwide

The coronavirus (COVID-19) outbreak continues to spread in the US and worldwide, with nearly 100,000 reported cases globally. Good information is extremely valuable during times like these and APIs can help make such information widely available. We previously covered a number of APIs that let developers leverage any available data about the virus; today we want to spotlight yet another one.

The NovelCOVID project on GitHub has released an API that tracks data about the number of current cases of the virus. It is a simple RESTful API with two resources. The first, /all, retrieves the number of worldwide reported cases, deaths, and recovered patients. The /countries resource breaks the data down by country and includes the number of new cases, newly reported deaths, and the number of cases deemed critical. There are currently over 80 countries represented in this data set.
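Below is a minimal sketch of querying the two resources described above. The base URL is an assumption (the NovelCOVID project has hosted the API at more than one address), and the field names follow the repository's documented response shape at the time of writing; check the GitHub repository before relying on either.

```python
# Hedged example: fetch worldwide totals and the top countries by reported cases.
import requests

BASE_URL = "https://corona.lmao.ninja"  # assumed host; verify against the NovelCOVID repo

world = requests.get(f"{BASE_URL}/all", timeout=10).json()
print(f"Worldwide: {world['cases']} cases, {world['deaths']} deaths, {world['recovered']} recovered")

countries = requests.get(f"{BASE_URL}/countries", timeout=10).json()
for entry in sorted(countries, key=lambda c: c["cases"], reverse=True)[:5]:
    print(entry["country"], entry["cases"], entry.get("todayCases"), entry.get("critical"))
```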

The API can be found on GitHub, and a Node.js SDK has also been made available. The ProgrammableWeb directory has a coronavirus category that you can follow to keep up with all of the latest API updates about the outbreak.

Author: wsantos

Categories
3D Printing Industry

Researchers create roadmap for 3D bioprinting

A worldwide collective of researchers and scientists from universities, institutions, and hospitals have come together to produce a roadmap for 3D bioprinting.  Published in Biofabrication, the paper details the current state of bioprinting, including recent advances of the technology in selected applications as well as the present developments and challenges. It also envisions how the […]

Author: Anas Essop

Categories
IEEE Spectrum

New AI System Predicts Seizures With Near-Perfect Accuracy

For the roughly 50 million people worldwide with epilepsy, the exchange of electrical signals between cells in their brain can sometimes go haywire and cause a seizure—often with little to no warning. Two researchers at the University of Louisiana at Lafayette have developed a new AI-powered model that can predict the occurrence of seizures up to one hour before onset with 99.6 percent accuracy.

“Due to unexpected seizure times, epilepsy has a strong psychological and social effect on patients,” explains Hisham Daoud, a researcher who co-developed the new model.

Detecting seizures ahead of time could greatly improve the quality of life for patients with epilepsy and provide them with enough time to take action, he says. Notably, seizures are controllable with medication in up to 70 percent of these patients.

Categories
IEEE Spectrum

4D Bioprinting Smart Constructs for the Heart

Cardiovascular disease is the leading cause of mortality worldwide, accounting for nearly 18 million deaths each year, according to the World Health Organization. In recent years, scientists have looked to regenerative therapies – including those that use 3D-printed tissue – to repair damage done to the heart and restore cardiac function.

Thanks to advancements in 3D-printing technology, engineers have applied cutting-edge bioprinting techniques to create scaffolds and cardiac tissue that, once implanted, can quickly integrate with native tissues in the body. While 3D bioprinting can create 3D structures made of living cells, the final product is static – it cannot grow or change in response to changes in its environment.

Conversely, in 4D bioprinting, time is the fourth dimension. Engineers apply 4D printing strategies to create constructs using biocompatible, responsive materials or cells that can grow or even change functionalities over time and in response to their environment. This technology could be a game-changer for human health, particularly in pediatrics, where 4D-printed constructs could grow and change as children age, eliminating the need for future surgeries to replace tissues or scaffolds that fail to do the same.

But, 4D bioprinting technology is still young. One of the critical challenges impacting the field is the lack of advanced 4D-printable bioinks – material used to produce engineered live tissue using printing technology – that not only meet the requirements of 3D bioprinting, but also feature smart, dynamic capabilities to regulate cell behaviors and respond to changes in the environment wherever they’re implanted in the body.

Recognizing this, researchers at George Washington University (GWU) and the University of Maryland’s A. James Clark School of Engineering are working together to shed new light on this burgeoning field. GWU Department of Mechanical and Aerospace Engineering Associate Professor Lijie Grace Zhang and UMD Fischell Department of Bioengineering Professor and Chair John Fisher were recently awarded a joint $550,000 grant from the National Science Foundation to investigate 4D bioprinting of smart constructs for cardiovascular study.  

Their main goal is to design novel and reprogrammable smart bioinks that can create dynamic 4D-bioprinted constructs to repair and control the muscle cells that make up the heart and pump blood throughout the body. The muscle cells they’re working with – human induced pluripotent stem cell (iPSC) derived cardiomyocytes – represent a promising stem cell source for cardiovascular regeneration.

In this study, the bioinks, and the 4D structures they’re used to create, are considered “reprogrammable” because they can be precisely controlled by external stimuli – in this case, by light – to contract and elongate on command in the same way that native heart muscle cells do with each and every heartbeat.

The research duo will use long-wavelength near-infrared (NIR) light to serve as the stimulus that prompts the 4D bioprinted structures into action. Unlike ultraviolet or visible light, long-wavelength NIR light could efficiently penetrate the bioprinted structures without causing harm to surrounding cells.

“4D bioprinting is at the frontier of the field of bioprinting,” Zhang said. “This collaborative research will expand our fundamental understanding of iPSC cardiomyocyte development in a dynamic microenvironment for cardiac applications. We are looking forward to a fruitful collaboration between our labs in the coming years.” 

“We are thrilled to work with Dr. Zhang and her lab to continue to develop novel bioinks for 3D- and 4D- printing,” Fisher said. “We are confident that the collaborative research team will continue to bring to light untapped printing strategies, particularly in regards to stem cell biology.” 

Moving forward, Zhang and Fisher hope to apply their 4D bioprinting technique to further study of the fundamental interactions between 4D structures and cardiomyocyte behaviors.

“The very concept of 4D bioprinting is so new that it opens up a realm of possibilities in tissue engineering that few had ever imagined,” Fisher said. “While scientists and engineers have a lot of ground to cover, 4D bioprinted tissue could one day change how we treat pediatric heart disease, or even pave the way to alternatives to donor organs.”

At GWU, Zhang leads the Bioengineering Laboratory for Nanomedicine and Tissue Engineering. At UMD, Fisher leads the Center for Engineering Complex Tissues, a joint research collaboration between UMD, Rice University, and the Wake Forest Institute for Regenerative Medicine. Fisher is also the principal investigator of the Tissue Engineering and Biomaterials Lab, housed within the UMD Fischell Department of Bioengineering.

Categories
ScienceDaily

Grouping ‘smart cities’ into types may help aspiring city planners find a path

A comparative analysis of “smart cities” worldwide reveals four distinct types, according to an international team of researchers. The categories may help city planners to identify and emulate models that are close to their own socio-economic circumstances and policy aspirations.

“Smart cities are those that use new information and communication technologies to solve pressing problems — such as housing, transportation, and energy — in urban planning and governance,” said Krishna Jayakar, professor of telecommunications, Penn State. “Yet the term ‘smart city’ remains more of a buzzword than a clearly articulated program of action. Our research seeks to identify models of the smart city from the bottom up, by looking at programs municipal planners have actually implemented.”

In a paper published online on July 5 in the journal Telecommunications Policy, Jayakar and his colleagues identified the types of smart city with a goal of creating a basis for the study and implementation of smart-city components.

“The construction of smart cities has been actively implemented all over the world,” said Rachel Peng, doctoral candidate in communications, Penn State and a co-author on the paper. “For different types of cities, different strategies are adopted to make them ‘smart.’ We not only aim to present a comparative analysis of municipal smart city plans, but also seek to put forward targeted suggestions for the construction of smart cities based on our findings.”

Specifically, the team conducted a comparative analysis of 60 municipal smart-city plans drawn from countries around the world. They used a statistical tool called cluster analysis to identify the combinations of projects that are most often used together.
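As an illustration of this kind of analysis, each city plan can be coded as a vector of implemented projects and then grouped so that cities adopting similar combinations fall into the same cluster. The sketch below is a generic example on synthetic data: the feature names, the data, and the choice of k-means are our assumptions, not the paper's actual variables or clustering method.

```python
# Generic cluster-analysis sketch on synthetic "smart city plan" data.
import numpy as np
from sklearn.cluster import KMeans

projects = ["emergency_mgmt", "digital_health", "smart_transit", "car_sharing",
            "water_waste", "pollution_control", "civic_participation",
            "digital_skills", "startup_support"]

rng = np.random.default_rng(0)
city_matrix = rng.integers(0, 2, size=(60, len(projects)))  # 60 synthetic plans, 0/1 per project

model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(city_matrix)
for k in range(4):
    members = city_matrix[model.labels_ == k]
    common = [p for p, share in zip(projects, members.mean(axis=0)) if share > 0.6]
    print(f"cluster {k}: {len(members)} cities, frequently shared projects: {common}")
```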

Their results reveal four major types of smart city:

Essential Services Model

Cities within the Essential Services Model are characterized by their use of mobile networks in their emergency management programs and by their digital healthcare services. These cities, which may already have good communications infrastructure, prefer to put their money into a few well-chosen smart city programs. Examples include Tokyo and Copenhagen.

Smart Transportation Model

Smart Transportation Model cities encompass those that are densely populated and face problems with moving goods and people within the city. Cities in this group emphasize initiatives to control urban congestion — through smart public transportation, car sharing and/or self-driving cars — as well as the use of information and communication technologies. Singapore and Dubai are included in this group.

Broad Spectrum Model

Cities falling within the Broad Spectrum Model emphasize urban services, such as water, sewage and waste management, and seek technological solutions for pollution control. They are also characterized by a high level of civic participation. Examples include Barcelona, Vancouver and Beijing.

Business Ecosystem Model

The Business Ecosystem Model seeks to use the potential of information and communication technologies to jumpstart economic activity. It includes cities that emphasize digital skills training as a necessary accompaniment to create a trained workforce and aim to foster high-tech businesses. Amsterdam, Edinburgh and Cape Town are examples.

“Our findings can provide city planners with information on specific projects and templates implemented in the field by other planners,” said Jayakar. “Cities hoping to implement smart city plans may also consult the four models to identify cities that match their socio-economic circumstances the most closely to use as an aid in devising their own plans.”

The National Science Foundation of China supported this work.

Story Source:

Materials provided by Penn State. Note: Content may be edited for style and length.
