After more than a decade, ChIP-seq may be quantitative after all

For more than a decade, scientists studying epigenetics have used a powerful method called ChIP-seq to map changes in proteins and other critical regulatory factors across the genome. While ChIP-seq provides invaluable insights into the underpinnings of health and disease, it also faces a frustrating challenge: its results are often viewed as qualitative rather than quantitative, making interpretation difficult.

But, it turns out, ChIP-seq may have been quantitative all along, according to a recent report selected as an Editors’ Pick and featured on the cover of the Journal of Biological Chemistry.

“ChIP-seq is the backbone of epigenetics research. Our findings challenge the belief that additional steps are required to make it quantitative,” said Brad Dickson, Ph.D., a staff scientist at Van Andel Institute and the study’s corresponding author. “Our new approach provides a way to quantify results, thereby making ChIP-seq more precise, while leaving standard protocols untouched.”

Previous attempts to quantify ChIP-seq results have led to additional steps being added to the protocol, including the use of “spike-ins,” which are additives designed to normalize ChIP-seq results and reveal histone changes that otherwise may be obscured. These extra steps increase the complexity of experiments while also adding variables that could interfere with reproducibility. Importantly, the study also identifies a sensitivity issue in spike-in normalization that has not previously been discussed.

Using a predictive physical model, Dickson and his colleagues developed a novel approach called the sans-spike-in method for Quantitative ChIP-sequencing, or siQ-ChIP. It allows researchers to follow the standard ChIP-seq protocol, eliminating the need for spike-ins, and also outlines a set of common measurements that should be reported for all ChIP-seq experiments to ensure reproducibility as well as quantification.

By leveraging the binding reaction at the immunoprecipitation step, siQ-ChIP defines a physical scale for sequencing results that allows comparison between experiments. The quantitative scale is based on the binding isotherm of the immunoprecipitation products.
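The role of the binding isotherm can be illustrated with a toy calculation. The sketch below is a simplified illustration, not the published siQ-ChIP equations: a Langmuir-style isotherm relates captured chromatin to antibody concentration, and measured masses and volumes set a proportionality constant between IP and input. All names and numbers are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's actual math: a Langmuir-style binding
# isotherm of the kind siQ-ChIP builds on.

def bound_fraction(antibody_conc, kd):
    """Fraction of target chromatin captured at equilibrium
    (simple Langmuir isotherm)."""
    return antibody_conc / (kd + antibody_conc)

def quantitative_scale(ip_mass, input_mass, ip_volume, input_volume):
    """A simplified proportionality constant relating IP material to input,
    in the spirit of normalizing sequencing output by measured masses
    and volumes."""
    return (ip_mass / input_mass) * (input_volume / ip_volume)
```

Because every quantity of this kind is directly measurable at the bench, such a scale can in principle be computed without adding spike-in material to the protocol.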

Story Source:

Materials provided by Van Andel Research Institute. Note: Content may be edited for style and length.

Go to Source


Biologists create new genetic systems to neutralize gene drives

In the past decade, researchers have engineered an array of new tools that control the balance of genetic inheritance. Based on CRISPR technology, such gene drives are poised to move from the laboratory into the wild, where they are being engineered to suppress devastating mosquito-borne diseases such as malaria, dengue, Zika, chikungunya, yellow fever and West Nile. Gene drives carry the power to immunize mosquitoes against malarial parasites, or act as genetic insecticides that reduce mosquito populations.

Although the newest gene drives have been proven to spread efficiently as designed in laboratory settings, concerns have been raised regarding the safety of releasing such systems into wild populations. Questions have emerged about the predictability and controllability of gene drives and whether, once let loose, they can be recalled in the field if they spread beyond their intended application region.

Now, scientists at the University of California San Diego and their colleagues have developed two new active genetic systems that address such risks by halting or eliminating gene drives in the wild. On Sept. 18, 2020, in the journal Molecular Cell, research led by Xiang-Ru Xu, Emily Bulger and Valentino Gantz in the Division of Biological Sciences offers two new solutions based on elements developed in the common fruit fly.

“One way to mitigate the perceived risks of gene drives is to develop approaches to halt their spread or to delete them if necessary,” said Distinguished Professor Ethan Bier, the paper’s senior author and science director for the Tata Institute for Genetics and Society. “There’s been a lot of concern that there are so many unknowns associated with gene drives. Now we have saturated the possibilities, both at the genetic and molecular levels, and developed mitigating elements.”

The first neutralizing system, called e-CHACR (erasing Constructs Hitchhiking on the Autocatalytic Chain Reaction), is designed to halt the spread of a gene drive by “shooting it with its own gun.” e-CHACRs use the CRISPR enzyme Cas9 carried on a gene drive to copy themselves, while simultaneously mutating and inactivating the Cas9 gene. Xu says an e-CHACR can be placed anywhere in the genome.

“Without a source of Cas9, it is inherited like any other normal gene,” said Xu. “However, once an e-CHACR confronts a gene drive, it inactivates the gene drive in its tracks and continues to spread across several generations ‘chasing down’ the drive element until its function is lost from the population.”
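The “chasing down” dynamic can be caricatured with a toy deterministic model. This is an illustration under stated assumptions, not the study’s population genetics: suppose that each generation, a fixed fraction of still-active drive alleles encounter an e-CHACR and have their Cas9 permanently mutated.

```python
# Toy illustration (not the paper's model): each generation, a fraction
# `encounter_rate` of drive alleles with active Cas9 meet an e-CHACR and
# are permanently inactivated.

def chase_down(active_drive=0.3, encounter_rate=0.5, generations=10):
    """Return (still-active, inactivated) drive allele frequencies after
    the e-CHACR has 'chased down' the drive for some generations."""
    inactivated = 0.0
    for _ in range(generations):
        hit = active_drive * encounter_rate
        active_drive -= hit
        inactivated += hit
    return active_drive, inactivated
```

Under these assumptions the active drive decays geometrically, so after ten generations essentially all drive copies have been inactivated while the total allele frequency is conserved.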

The second neutralizing system, called ERACR (Element Reversing the Autocatalytic Chain Reaction), is designed to eliminate the gene drive altogether. ERACRs are designed to be inserted at the site of the gene drive, where they use the gene drive’s own Cas9 to cut on either side of the drive, excising it. Once the gene drive is deleted, the ERACR copies itself and replaces the gene drive.

“If the ERACR is also given an edge by carrying a functional copy of a gene that is disrupted by the gene drive, then it races across the finish line, completely eliminating the gene drive with unflinching resolve,” said Bier.

The researchers rigorously tested and analyzed e-CHACRs and ERACRs, as well as the resulting DNA sequences, in meticulous detail at the molecular level. Bier estimates that the research team, which includes mathematical modelers from UC Berkeley, spent a combined 15 years of effort to comprehensively develop and analyze the new systems. Still, he cautions there are unforeseen scenarios that could emerge, and the neutralizing systems should not be used with a false sense of security for field-implemented gene drives.

“Such braking elements should just be developed and kept in reserve in case they are needed since it is not known whether some of the rare exceptional interactions between these elements and the gene drives they are designed to corral might have unintended activities,” he said.

According to Bulger, gene drives have enormous potential to alleviate suffering, but responsibly deploying them depends on having control mechanisms in place should unforeseen consequences arise. ERACRs and eCHACRs offer ways to stop the gene drive from spreading and, in the case of the ERACR, can potentially revert an engineered DNA sequence to a state much closer to the naturally-occurring sequence.

“Because ERACRs and e-CHACRs do not possess their own source of Cas9, they will only spread as far as the gene drive itself and will not edit the wild type population,” said Bulger. “These technologies are not perfect, but we now have a much more comprehensive understanding of why and how unintended outcomes influence their function and we believe they have the potential to be powerful gene drive control mechanisms should the need arise.”



Study shows difficulty in finding evidence of life on Mars

In a little more than a decade, samples of rover-scooped Martian soil will rocket to Earth.

While scientists are eager to study the red planet’s soils for signs of life, researchers must ponder a considerable new challenge: Acidic fluids — which once flowed on the Martian surface — may have destroyed biological evidence hidden within Mars’ iron-rich clays, according to researchers at Cornell University and at Spain’s Centro de Astrobiología.

The researchers conducted simulations involving clay and amino acids to draw conclusions regarding the likely degradation of biological material on Mars. Their paper, “Constraining the Preservation of Organic Compounds in Mars Analog Nontronites After Exposure to Acid and Alkaline Fluids,” published Sept. 15 in Nature Scientific Reports.

Alberto G. Fairén, a visiting scientist in the Department of Astronomy in the College of Arts and Sciences at Cornell, is a corresponding author.

NASA’s Perseverance rover, launched July 30, will land at Mars’ Jezero Crater next February; the European Space Agency’s Rosalind Franklin rover will launch in late 2022. The Perseverance mission will collect Martian soil samples and send them to Earth by the 2030s. The Rosalind Franklin rover will drill into the Martian surface, collect soil samples and analyze them in situ.

In the search for life on Mars, the red planet’s clay surface soils are a preferred collection target since the clay protects the molecular organic material inside. However, the past presence of acid on the surface may have compromised the clay’s ability to protect evidence of previous life.

“We know that acidic fluids have flowed on the surface of Mars in the past, altering the clays and its capacity to protect organics,” Fairén said.

He said the internal structure of clay is organized into layers, where the evidence of biological life — such as lipids, nucleic acids, peptides and other biopolymers — can become trapped and well preserved.

In the laboratory, the researchers simulated Martian surface conditions by attempting to preserve an amino acid called glycine in clay that had previously been exposed to acidic fluids. “We used glycine because it could rapidly degrade under the planet’s environmental conditions,” he said. “It’s a perfect informer to tell us what was going on inside our experiments.”

After a long exposure to Mars-like ultraviolet radiation, the experiments showed photodegradation of the glycine molecules embedded in the clay. Exposure to acidic fluids erases the interlayer space, turning it into a gel-like silica.

“When clays are exposed to acidic fluids, the layers collapse and the organic matter can’t be preserved. They are destroyed,” Fairén said. “Our results in this paper explain why searching for organic compounds on Mars is so sorely difficult.”

The paper’s lead author was Carolina Gil-Lozano of Centro de Astrobiología, Madrid and the Universidad de Vigo, Spain. The European Research Council funded this research.

Story Source:

Materials provided by Cornell University. Original written by Blaine Friedlander. Note: Content may be edited for style and length.



Study rules out dark matter destruction as origin of extra radiation in galaxy center

The detection more than a decade ago by the Fermi Gamma Ray Space Telescope of an excess of high-energy radiation in the center of the Milky Way convinced some physicists that they were seeing evidence of the annihilation of dark matter particles, but a team led by researchers at the University of California, Irvine has ruled out that interpretation.

In a paper published recently in the journal Physical Review D, the UCI scientists and colleagues at Virginia Polytechnic Institute and State University and other institutions report that — through an analysis of the Fermi data and an exhaustive series of modeling exercises — they were able to determine that the observed gamma rays could not have been produced by what are called weakly interacting massive particles, most popularly theorized as the stuff of dark matter.

By eliminating these particles, whose annihilation could generate energies of up to 300 gigaelectronvolts, the paper’s authors say they have put the strongest constraints yet on dark matter properties.

“For 40 years or so, the leading candidate for dark matter among particle physicists was a thermal, weakly interacting and weak-scale particle, and this result for the first time rules out that candidate up to very high-mass particles,” said co-author Kevork Abazajian, UCI professor of physics & astronomy.

“In many models, this particle ranges from 10 to 1,000 times the mass of a proton, with more massive particles being less attractive theoretically as a dark matter particle,” added co-author Manoj Kaplinghat, also a UCI professor of physics & astronomy. “In this paper, we’re eliminating dark matter candidates over the favored range, which is a huge improvement in the constraints we put on the possibilities that these are representative of dark matter.”

Abazajian said that dark matter signals could be crowded out by other astrophysical phenomena in the Galactic Center that also produce excess gamma rays detectable by the Fermi space telescope, such as star formation, cosmic ray deflection off molecular gas and, most notably, neutron stars and millisecond pulsars.

“We looked at all of the different modeling that goes on in the Galactic Center, including molecular gas, stellar emissions and high-energy electrons that scatter low-energy photons,” said co-author Oscar Macias, a postdoctoral scholar in physics and astronomy at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo whose visit to UCI in 2017 initiated this project. “We took over three years to pull all of these new, better models together and examine the emissions, finding that there is little room left for dark matter.”

Macias, who is also a postdoctoral researcher with the GRAPPA Centre at the University of Amsterdam, added that this result would not have been possible without data and software provided by the Fermi Large Area Telescope collaboration.

The group tested all classes of models used in the Galactic Center region for excess emission analyses, and its conclusions remained unchanged. “One would have to craft a diffuse emission model that leaves a big ‘hole’ in them to relax our constraints, and science doesn’t work that way,” Macias said.

Kaplinghat noted that physicists have predicted that radiation from dark matter annihilation would be represented in a neat spherical or elliptical shape emanating from the Galactic Center, but the gamma ray excess detected by the Fermi space telescope after its June 2008 deployment shows up as a triaxial, bar-like structure.

“If you peer at the Galactic Center, you see that the stars are distributed in a boxy way,” he said. “There’s a disk of stars, and right in the center, there’s a bulge that’s about 10 degrees on the sky, and it’s actually a very specific shape — sort of an asymmetric box — and this shape leaves very little room for additional dark matter.”

Does this research rule out the existence of dark matter in the galaxy? “No,” Kaplinghat said. “Our study constrains the kind of particle that dark matter could be. The multiple lines of evidence for dark matter in the galaxy are robust and unaffected by our work.”

Far from considering the team’s findings to be discouraging, Abazajian said they should encourage physicists to focus on concepts other than the most popular ones.

“There are a lot of alternative dark matter candidates out there,” he said. “The search is going to be more like a fishing expedition where you don’t already know where the fish are.”

Also contributing to this research project — which was supported by the National Science Foundation, the U.S. Department of Energy Office of Science and Japan’s World Premier International Research Center Initiative — were Ryan Keeley, who earned a Ph.D. in physics & astronomy at UCI in 2018 and is now at the Korea Astronomy and Space Science Institute, and Shunsaku Horiuchi, a former UCI postdoctoral scholar in physics & astronomy who is now an assistant professor of physics at Virginia Tech.



Portrait of a virus

More than a decade ago, electronic medical records were all the rage, promising to transform health care and help guide clinical decisions and public health response.

With the arrival of COVID-19, researchers quickly realized that electronic medical records (EMRs) had not lived up to their full potential — largely due to widespread decentralization of records and clinical systems that cannot “talk” to one another.

Now, in an effort to circumvent these impediments, an international group of researchers has successfully created a centralized medical records repository that, in addition to rapid data collection, can perform data analysis and visualization.

The platform, described Aug. 19 in npj Digital Medicine, contains data from 96 hospitals in five countries and has yielded intriguing, albeit preliminary, clinical clues about how the disease presents, evolves and affects different organ systems across different categories of patients with COVID-19.

For now, the platform represents more of a proof-of-concept than a fully evolved tool, the research team cautions, adding that the initial observations enabled by the data raise more questions than they answer.

However, as data collection grows and more institutions begin to contribute such information, the utility of the platform will evolve accordingly, the team said.

“COVID-19 caught the world off guard and has exposed important deficiencies in our ability to use electronic medical records to glean telltale insights that could inform response during a shapeshifting pandemic,” said Isaac Kohane, senior author on the research and chair of the Department of Biomedical Informatics in the Blavatnik Institute at Harvard Medical School. “The new platform we have created shows that we can, in fact, overcome some of these challenges and rapidly collect critical data that can help us confront the disease at the bedside and beyond.”

In its report, the Harvard Medical School-led multi-institutional research team provides insights from early analysis of records from 27,584 patients and 187,802 lab tests collected in the early days of the epidemic, from Jan. 1 to April 11. The data came from 96 hospitals in the United States, France, Italy, Germany and Singapore, as part of the 4CE Consortium, an international research repository of electronic medical records used to inform studies of the COVID-19 pandemic.

“Our work demonstrates that hospital systems can organize quickly to collaborate across borders, languages and different coding systems,” said study first author Gabriel Brat, HMS assistant professor of surgery at Beth Israel Deaconess Medical Center and a member of the Department of Biomedical Informatics. “I hope that our ongoing efforts to generate insights about COVID-19 and improve treatment will encourage others from around the world to join in and share data.”
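Collaborating “across borders, languages and different coding systems” hinges on mapping each hospital’s local lab codes onto a shared vocabulary before aggregation. The sketch below is a hypothetical illustration of that harmonization step; the site names and codes are invented, not the consortium’s actual mappings.

```python
# Hypothetical harmonization step: each hospital reports labs under local
# codes, which must be translated to a shared vocabulary before pooling.
# All site names, codes and values below are invented examples.

LOCAL_TO_COMMON = {
    ("hospital_a", "CRP-HS"): "c_reactive_protein",
    ("hospital_b", "PCR"): "c_reactive_protein",  # a French-style CRP label
    ("hospital_a", "DDIM"): "d_dimer",
}

def harmonize(site, records):
    """Map a site's (local_code, value) pairs onto common lab names,
    dropping codes with no known mapping."""
    out = {}
    for local_code, value in records:
        common = LOCAL_TO_COMMON.get((site, local_code))
        if common:
            out.setdefault(common, []).append(value)
    return out
```

Only once every site’s results are expressed in the same vocabulary can counts and lab trajectories be compared across countries, which is the step the consortium’s shared platform automates.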

The new platform underscores the value of such agile analytics in the rapid generation of knowledge, particularly during a pandemic that places extra urgency on answering key questions, but such tools must also be approached with caution and be subject to scientific rigor, according to an accompanying editorial penned by leading experts in biomedical data science.

“The bar for this work needs to be set high, but we must also be able to move quickly. Examples such as the 4CE Collaborative show that both can be achieved,” writes Harlan Krumholz, senior author on the accompanying editorial and professor of medicine and cardiology and director of the Center for Outcomes Research and Evaluation at Yale-New Haven Hospital.

What kind of intel can EMRs provide?

In a pandemic, particularly one involving a new pathogen, rapid assessment of clinical records can provide information not only about the rate of new infections and the prevalence of disease, but also about key clinical features that can portend good or bad outcomes, disease severity and the need for further testing or certain interventions.

These data can also yield clues about differences in disease course across various demographic groups and indicative fluctuations in biomarkers associated with the function of the heart, kidney, liver, immune system and more. Such insights are especially critical in the early weeks and months after a novel disease emerges and public health experts, physicians and policymakers are flying blind. Such data could prove critical later: Indicative patterns can tell researchers how to design clinical trials to better understand the underlying drivers that influence observed outcomes. For example, if records show consistent changes in the footprints of a protein that heralds aberrant blood clotting, the researchers can choose to focus their monitoring and treatments on organ systems whose dysfunction is associated with these abnormalities, or focus on organs that could be damaged by clots, notably the brain, heart and lungs.

The analysis of the data collected in March demonstrates that it is possible to quickly create a clinical sketch of the disease that can later be filled in as more granular details emerge, the researchers said.

In the current study, researchers tracked the following data:

  • Total number of COVID-19 patients
  • Number of intensive care unit admissions and discharges
  • Seven-day average of new cases per 100,000 people by country
  • Daily death toll
  • Demographic breakdown of patients
  • Laboratory tests assessing cardiac, immune, kidney and liver function; red and white blood cell counts; inflammatory markers such as C-reactive protein; and two proteins related to blood clotting (D-dimer) and cardiac muscle injury (troponin)

Telltale patterns

The report’s observations included:

  • Demographic analyses by country showed variations in the age of hospitalized patients, with Italy having the largest proportion of elderly patients (over 70 years) diagnosed with COVID-19.
  • At initial presentation to the hospital, patients showed remarkable consistency in lab tests measuring cardiac, immune, blood-clotting and kidney and liver function.
  • On day one of admission, most patients had relatively moderate disease as measured by lab tests, with initial tests showing moderate abnormalities but no indication of organ failure.
  • Major abnormalities were evident on day one of diagnosis for C-reactive protein — a measure of inflammation — and the D-dimer protein, a marker of blood clotting, with test results progressively worsening in patients who went on to develop more severe disease or died.
  • Levels of bilirubin, a marker of liver function, were initially normal across hospitals but worsened among persistently hospitalized patients, a finding suggesting that most patients did not have liver impairment on initial presentation.
  • Creatinine levels — which measure how well the kidneys are filtering waste — showed wide variations across hospitals, a finding that may reflect cross-country variations in testing, in the use of fluids to manage kidney function or differences in timing of patient presentation at various stages of the disease.
  • On average, white blood cell counts — a measure of immune response — were within normal ranges for most patients but showed elevations among those who had severe disease and remained hospitalized longer.

Even though the findings of the report are observations and cannot be used to draw conclusions, the trends they point to could provide a foundation for more focused and in-depth studies that get to the root of these observations, the team said.

“It’s clear that amid an emerging pathogen, uncertainty far outstrips knowledge,” Kohane said. “Our efforts establish a framework to monitor the trajectory of COVID-19 across different categories of patients and help us understand response to different clinical interventions.”

Co-investigators included Griffin Weber; Nils Gehlenborg; Paul Avillach; Nathan Palmer; Luca Chiovato; James Cimino; Lemuel Waitman; Gilbert Omenn; Alberto Malovini; Jason Moore; Brett Beaulieu-Jones; Valentina Tibollo; Shawn Murphy; Sehi L’Yi; Mark Keller; Riccardo Bellazzi; David Hanauer; Arnaud Serret-Larmande; Alba Gutierrez-Sacristan; John Holmes; Douglas Bell; Kenneth Mandl; Robert Follett; Jeffrey Klann; Douglas Murad; Luigia Scudeller; Mauro Bucalo; Katie Kirchoff; Jean Craig; Jihad Obeid; Vianney Jouhet; Romain Griffier; Sebastien Cossin; Bertrand Moal; Lav Patel; Antonio Bellasi; Hans Prokosch; Detlef Kraska; Piotr Sliz; Amelia Tan; Kee Yuan Ngiam; Alberto Zambelli; Danielle Mowery; Emily Schiver; Batsal Devkota; Robert Bradford; Mohamad Daniar; Christel Daniel; Vincent Benoit; Romain Bey; Nicolas Paris; Patricia Serre; Nina Orlova; Julien Dubiel; Martin Hilka; Anne Sophie Jannot; Stephane Breant; Judith Leblanc; Nicolas Griffon; Anita Burgun; Melodie Bernaux; Arnaud Sandrin; Elisa Salamanca; Sylvie Cormont; Thomas Ganslandt; Tobias Gradinger; Julien Champ; Martin Boeker; Patricia Martel; Loic Esteve; Alexandre Gramfort; Olivier Grisel; Damien Leprovost; Thomas Moreau; Gael Varoquaux; Jill-Jênn Vie; Demian Wassermann; Arthur Mensch; Charlotte Caucheteux; Christian Haverkamp; Guillaume Lemaitre; Silvano Bosari; Ian Krantz; Andrew South; and Tianxi Cai.

Relevant disclosures:

Co-authors Riccardo Bellazzi of the University of Pavia and Arthur Mensch of PSL University are shareholders in Biomeris, a biomedical data analysis company.



Sun and rain transform asphalt binder into potentially toxic compounds

A dramatic oil spill, such as the Deepwater Horizon accident in the Gulf of Mexico a decade ago, can dominate headlines for months while scientists, policymakers and the public fret over what happens to all that oil in the environment. However, far less attention is paid to the fate of a petroleum product that has been spread deliberately across the planet for decades: asphalt binder.

Now a study by chemists at the Florida State University-headquartered National High Magnetic Field Laboratory shows that asphalt binder, when exposed to sun and water, leaches thousands of potentially toxic compounds into the environment. The study was published in the journal Environmental Science & Technology.

Asphalt binder, also called asphalt cement, is the glue that holds together the stones, sand and gravel in paved roads. The heavy, black, sticky goo is derived from bottom-of-the-barrel crude oil at the tail end of the distillation process.

The MagLab, funded by the National Science Foundation and the State of Florida, is a world leader in the field of petroleomics, which studies the mind-numbingly complex hydrocarbons that make up crude oil and its byproducts. Using high-resolution ion cyclotron resonance (ICR) mass spectrometers, chemists there have developed expertise in identifying the tens of thousands of different types of molecules that a single drop can contain, and how that composition can be changed by time, bacteria or environmental conditions.

Ryan Rodgers, director of petroleum applications and of the Future Fuels Institute at the MagLab, had wanted for years to study asphalt binder using the ICR instruments. It was a logical next step in his group’s years-long effort to better understand the structure and behavior of petroleum molecules and their potentially toxic effects. Previous studies had shown that soils and runoff near paved roads exhibit higher concentrations of polycyclic aromatic hydrocarbons (PAHs), which are known to be carcinogenic. Rodgers suspected there were dots connecting those PAHs and asphalt binder, and he wanted to find them.

“The long-term stability of petroleum-derived materials in the environment has always been a curiosity of mine,” said Rodgers, who grew up on the Florida Gulf Coast. “Knowing their compositional and structural complexity, it seemed highly unlikely that they would be environmentally benign. How do silky smooth black roads turn into grey, rough roads? And where the heck did all the asphalt go?”

He finally acquired a jug of asphalt binder from a local paving company and handed the project off to Sydney Niles, a Ph.D. candidate in chemistry at Florida State, and MagLab chemist Martha Chacón-Patiño. They designed an experiment in which they created a film of binder on a glass slide, submerged it in water, and irradiated it in a solar simulator for a week, sampling the water at different timepoints to see what was in it. They suspected that the sun’s energy would cause the reactive oxygen-containing compounds in the water to interact with the hydrocarbons in the binder, a process called photooxidation, thus creating new kinds of molecules that would leach into the water.

“We had this road sample and we shined fake sunlight on it in the presence of water,” explained Niles, lead author on the paper. “Then we looked at the water and we found that there are all these compounds that are derived from petroleum, and probably toxic. We also found that more compounds are leached over time.”

The hydrocarbons they found in the water contained more oxygen atoms. The scientists were confident that the sun was indeed the mechanism behind the process because far fewer compounds leached into a control sample that had been kept in the dark, and those had fewer oxygen atoms. In fact, the amount of water-soluble organic compounds per liter that the team found in the water of the irradiated sample after a week was more than 25 times higher than in the sample that had been left in the dark. And, using the lab’s ICR magnets, they detected more than 15,000 different carbon-containing molecules in the water from the irradiated sample.

Given the general toxicity of PAHs, these results are cause for concern, Niles and Rodgers said. But the team will need to do more experiments to investigate that toxicity.

“We have definitively shown that asphalt binder has the potential to generate water-soluble contaminants, but the impact and fate of these will be the subject of future research,” Rodgers said.

They also plan more studies to look at exactly how the compounds are transforming and if different categories of petroleum molecules behave differently.

Niles worries about hydrocarbons in and out of the lab. If she forgets to bring her reusable produce bags to the grocery store, she’d rather juggle her veggies on the way to the register than use a store-furnished plastic bag. Although these findings aren’t good news for the planet, she said, they could lead to positive change.

“Hopefully it’s motivation for a solution,” she said. “I hope that engineers can use this information to find a better alternative, whether it’s a sealant you put on the asphalt to protect it or finding something else to use to pave roads.”


3D Printing Industry

Three unknown facts about continuous fiber 3D printing from Anisoprint CEO Fedor Antonov

With a decade of expertise, it’s safe to say Anisoprint knows a thing or two about continuous fiber 3D printing. The Russian manufacturer specializes in composite fiber co-extrusion 3D printers capable of fabricating high-strength reinforced parts. The company also develops and supplies the fiber-reinforced materials that enable the multitude of applications of the niche technology. […]

Author: Kubi Sertoglu


Ahead of Davos meeting World Economic Forum publishes 3D printing white paper for leaders, gives update on digital file tax

3D printing is on the agenda for Davos 2020, cementing the technology as crucial for the new decade. The World Economic Forum has published two new items that invite discussion. The first, “Would a digital border tax slow down adoption of 3D printing?”, is a continuation of the conversation around how to treat digital goods. […]

Author: Tia Vialva


Beyond the ‘replication crisis,’ does research face an ‘inference crisis’?

For the past decade, social scientists have been unpacking a “replication crisis” that has revealed how findings of an alarming number of scientific studies are difficult or impossible to repeat. Efforts are underway to improve the reliability of findings, but cognitive psychology researchers at the University of Massachusetts Amherst say that not enough attention has been paid to the validity of theoretical inferences made from research findings.

Using an example from their own field of memory research, they designed a test for the accuracy of theoretical conclusions made by researchers. The study was spearheaded by associate professor Jeffrey Starns, professor Caren Rotello, and doctoral student Andrea Cataldo, who has now completed her Ph.D. They shared authorship with 27 teams or individual cognitive psychology researchers who volunteered to submit their expert research conclusions for data sets sent to them by the UMass researchers.

“Our results reveal substantial variability in experts’ judgments on the very same data,” the authors state, suggesting a serious inference problem. Details are newly released in the journal Advancing Methods and Practices in Psychological Science.

Starns says that objectively testing whether scientists can make valid theoretical inferences by analyzing data is just as important as making sure they are working with replicable data patterns. “We want to ensure that we are doing good science. If we want people to be able to trust our conclusions, then we have an obligation to earn that trust by showing that we can make the right conclusions in a public test.”

For this work, the researchers first conducted an online study testing recognition memory for words, “a very standard task” in which people decide whether or not they saw a word on a previous list. The researchers manipulated memory strength by presenting items once, twice, or three times and they manipulated bias — the overall willingness to say things are remembered — by instructing participants to be extra careful to avoid certain types of errors, such as failing to identify a previously studied item.

Starns and colleagues were interested in one tricky interpretation problem that arises in many recognition studies: the need to correct for differences in bias when comparing memory performance across populations or conditions. This complication can arise whether memory in the population of interest is equal to, better than, or worse than controls. Recognition researchers use a number of analysis tools to distinguish these possibilities, some of which have been around since the 1950s.
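One of the oldest such tools is signal detection theory, which splits recognition performance into a sensitivity measure (d', reflecting memory strength) and a criterion measure (c, reflecting response bias). The sketch below is a hypothetical illustration of that calculation, not code from the study itself; the hit and false-alarm rates are made-up numbers chosen to show two conditions with similar memory strength but different bias.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity: memory strength
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: willingness to say "old"
    return d_prime, criterion

# Two hypothetical conditions: roughly equal d', but the second responder
# is more liberal (negative criterion), saying "old" more often overall.
neutral = sdt_measures(0.800, 0.200)
liberal = sdt_measures(0.893, 0.329)
print(neutral)
print(liberal)
```

Raw hit rates alone would suggest the second condition has better memory; the d' values show the difference is mostly bias, which is exactly the kind of inference the study's data sets probed.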

To determine whether researchers can use these tools to accurately distinguish memory and bias, the UMass researchers created seven two-condition data sets and sent them to contributors without labels, asking them to indicate whether the conditions came from the same or different levels of the memory strength and response bias manipulations. Rotello explains, “These are the same sort of data they’d be confronted with in an experiment in their own labs, but in this case we knew the answers. We asked, ‘did we vary memory strength, response bias, both or neither?'”

The volunteer cognitive psychology researchers could use any analyses they thought were appropriate, Starns adds, and “some applied multiple techniques, or very complex, cutting-edge techniques. We wanted to see if they could make accurate inferences and whether they could accurately gauge uncertainty. Could they say, ‘I think there’s a 20 percent chance that you only manipulated memory in this experiment,’ for example.”

Starns, Rotello and Cataldo were mainly interested in the reported probability that memory strength was manipulated between the two conditions. What they found was “enormous variability between researchers in what they inferred from the same sets of data,” Starns says. “For most data sets, the answers ranged from 0 to 100 percent across the 27 responders,” he adds, “that was the most shocking.”

Rotello reports that about one-third of responders “seemed to be doing OK,” one-third did a bit better than pure guessing, and one-third “made misleading conclusions.” She adds, “Our jaws dropped when we saw that. How is it that researchers who have used these tools for years could come to completely different conclusions about what’s going on?”

Starns notes, “Some people made a lot more incorrect calls than they should have. Some incorrect conclusions are unavoidable with noisy data, but they made those incorrect inferences with way too much confidence. But some groups did as well as can be expected. That was somewhat encouraging.”

In the end, the UMass Amherst researchers “had a big reveal party” and gave participants the option of removing their responses or removing their names from the paper, but none did. Rotello comments, “I am so impressed that they were willing to put everything on the line, even though the results were not that good in some cases.” She and colleagues note that this shows a strong commitment to improving research quality among their peers.

Rotello adds, “The message here is not that memory researchers are bad, but that this general tool can assess the quality of our inferences in any field. It requires teamwork and openness. It’s tremendously brave what these scientists did, to be publicly wrong. I’m sure it was humbling for many, but if we’re not willing to be wrong we’re not good scientists.” Further, “We’d be stunned if the inference problems that we observed are unique. We assume that other disciplines and research areas are at risk for this problem.”

Go to Source


Add a Nerf Dart Launcher to Your Robot Rover

Building a robot was a serious undertaking just a decade or two ago, one that required a lot of technical skill and a non-trivial budget. Fortunately, the recent proliferation of inexpensive and easy-to-use maker tools and components has turned constructing a robot into a fun weekend project. Those robots are most often simple rovers, and the excitement of operating them tends to wear off pretty quickly. That’s why you may want to follow Markus Purtz’s tutorial and add a Nerf dart launcher to your robot rover.

This project is intended to work with Purtz’ FPV Rover v2.0 tank design, which is a small and relatively simple robot with provisions for first-person view control. Once you’re done 3D printing and creating the FPV Rover v2.0, you can follow this guide to add a turret to your tank so you can shoot Nerf darts at enemy robots or nefarious intruders. Build two of them, and you can have miniature tank battles in your living room! This turret is designed specifically to be compatible with the FPV Rover v2.0, so no tricky modifications are needed.

To make the Nerf dart launcher, you’ll first need to 3D print the provided mechanical parts. Then you’ll need to purchase a pair of servo motors (a standard hobby servo and a 360-degree servo), two motors, two ESCs (electronic speed controllers), three LiPo batteries and a protection circuit, various mounting hardware, and, of course, a pack of Nerf darts. Once you have all of the parts ready, assembly is easy. After the build is done, you’ll be able to pan and tilt the turret with the servos, and the two motors will launch the Nerf darts from the magazine at a respectable velocity.

Go to Source
Author: Cameron Coward