They called it The Great Stink. In the summer of 1858, London was hit with a heatwave of noxious consequence. The city filled with a stench emanating from the opaque, pale-brown fluid flowing along what was once poetically known as “the Silver Thames.” Politicians whose offices overlooked the river doused their curtains with chloride of lime to mask the smell; it was the first time they had been truly incentivized to act. At the time, close-quarters living arrangements and poor hygiene were contributing to a rise in illnesses and epidemics. But residents of what was then the world’s largest city believed it was unpleasant smells that directly transmitted contagions like the plague, chlamydia and cholera.
Their belief, the miasma theory of disease transmission, had some truth to it—it just wasn’t precise. The smell of stagnant, contaminated water is indicative of a perfect breeding ground for microorganisms that can cause water-borne diseases. But it’s the germs in the water—not the stench emanating from it—that are really the problem, and, at the time, scientists had limited technologies and tools to understand the difference. So they found themselves focusing on solutions that couldn’t actually stop the spread of disease.
Now, disease also spreads via Facebook statuses and Google results—not just the droplets from a sneeze or the particles that linger in the air when we forget to cough properly into our elbow crease—and around the world, digital health misinformation is having increasingly catastrophic impacts on physical health. Recent research found Twitter bots were sharing content that contributed to positive sentiments about e-cigarettes. In West Africa, online health misinformation added to the Ebola death toll. In New South Wales, Australia, where conspiracy theories about water fluoridation run rampant, children suffering from tooth decay are hospitalized for mass extractions at higher rates than in regions where water fluoridation exists. Over the last several weeks, new cases of measles—which the CDC declared eliminated from the United States in 2000—have emerged in places like Portland, Boston, Chicago, and the state of Michigan; researchers worry that the reemergence of preventable diseases like this one is related to a drop in immunization rates due to declining trust in vaccines, which is in turn tied to misleading content encountered on the internet. With new tools and technologies now available to help identify where and how health misinformation spreads, evidence is building that the health misinformation we encounter online can motivate decisions and behaviors that actually make us more susceptible to disease.
You might call these phenomena misinfodemics—the spread of a particular health outcome or disease facilitated by viral misinformation.
Much of the origin of today’s vaccine hesitancy can be traced to a single, retracted article that met the viral power of the internet. The lead scientist of the original piece was in the process of filing a patent for an alternative measles vaccine, and he led a campaign to link the competing measles-mumps-rubella vaccine to autism. The article he published is now widely recognized to have been the result of serious financial conflicts of interest, unethical data collection (including the lead author paying children for their blood samples during his son’s 10th birthday party) and fraud. His medical license has since been revoked, but the virus his article produced has continued to infect our information channels. The fraudulent study has been referenced as a basis for health hoaxes related to flu vaccines, misinformed advice to refuse the provision of vitamin K to newborns for the prevention of bleeding, and calls to modify evidence-based immunization schedules.
Vaccines are just one part of this story. Researchers led by Dr. Brittany Seymour mapped the direct relationship between viral health misinformation and growing advocacy against water fluoridation. Their findings demonstrated that strong ties on digital social networks, galvanized by a severely flawed study about fluoridation, led people to form group identities online that continue to fuel the spread of health misinformation. Misinformation based on discredited studies continues to mutate and spread online—in memes, articles and videos, through platforms including Pinterest, Instagram and Facebook. Like the germs running through the River Thames, toxic information now flows through our digital channels.
In the U.S., aggregate data seem to imply that vaccination rates are stable. But this optimism may be short-sighted in today’s digital age, where younger populations—future vaccine decision-makers, in some states—are becoming sensitized to vaccine misinformation online. For example, diseases like measles have long been thought to spread in communities with insufficient “herd immunity”—i.e., not enough vaccinated people to prevent the spread of highly infectious disease. But herd immunity is no longer just a matter of quality public health ecosystems, where vaccinations and antibiotics alone can prevent the spread of disease, but also of quality public information ecosystems. We now know, for example, that social media-based rumors made Ebola spread faster—and that when crisis responders adapted their communications strategies, more communities began receiving vital treatment and taking action for prevention.
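The arithmetic behind herd immunity helps explain why measles is such a sensitive indicator. Epidemiologists summarize contagiousness with a basic reproduction number, R0 (the average number of people one sick person infects in a fully susceptible population); the classic herd-immunity threshold is 1 - 1/R0. Here is a minimal sketch in Python, using standard textbook R0 ranges for illustration rather than figures from the research described above:

```python
# Herd-immunity threshold: the fraction of a population that must be
# immune before each infection leads, on average, to fewer than one
# new infection. Classic approximation: threshold = 1 - 1/R0.

def herd_immunity_threshold(r0: float) -> float:
    """Return the immune fraction needed to halt sustained spread."""
    return 1 - 1 / r0

# Illustrative, textbook-range R0 values (not from this article):
for disease, r0 in [("measles", 15), ("polio", 6), ("seasonal flu", 1.5)]:
    print(f"{disease}: R0={r0} -> ~{herd_immunity_threshold(r0):.0%} must be immune")
```

Because measles sits at the extreme end of contagiousness, its threshold lands above 90 percent, which is why even a modest misinformation-driven dip in vaccination can reopen the door to outbreaks.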
And yet, our understanding of exactly how digital infections happen remains focused more on symptoms, looking at the number of shares a given vaccine-hesitancy tweet receives, than on some of the underlying causes, like the digital infrastructure that makes some internet users more susceptible to encountering false information about immunization. Additionally, as Richard Carpiano and Nicholas Fitz have argued, “anti-vaxx” as a concept describing a group or individual lacking confidence in evidence-based immunization practices creates a stigma that focuses on the person—the parent as a decision-maker or the unvaccinated child—and the community. More often, as Seymour has noted, the problem is rooted in the virality of the message and the environments in which it spreads.
Public health authorities are not yet explicitly paying attention to the information ecosystem and how it may impact the spread of vaccine-preventable diseases in the near future. When 75% of vaccine-related Pinterest posts discuss the false link between measles vaccines and autism, what does it mean for future herd immunity? And what about when state-sponsored disinformation campaigns exploit the vulnerabilities our systems have already created? Just this week, scientists at George Washington University found that a number of Russian bot and troll accounts on Twitter posted about vaccines 22 times more often than the average user.
To date, many public health interventions seem to address the outward signs of a misinfodemic by debunking myths and recommending that scientists collect more data and publish more papers. Likewise, much of the field remains focused on providing communications guidelines and engaging in traditional broadcast diffusion strategies, rather than on search-engine optimization, viral marketing campaigns, or reaching populations through social diffusion approaches. Research demonstrates that public health digital outreach uses language and strategies that are often inaccessible to the populations it is trying to reach. This has created what the researchers Michael Golebiewski and danah boyd call “data voids”: search terms where “available relevant data is limited, non-existent, or deeply problematic.” In examining these environments, researchers like Renee DiResta at Data for Democracy have documented the sorts of algorithmic rabbit holes that can lead someone into the depths of disturbing, anxiety-inducing, scientific-sounding (albeit unvalidated and potentially harmful) content that often profits by selling quick fixes.
To its credit, Google has made important progress in this regard. Its search-related guidelines prioritize expertise, authoritativeness, and trustworthiness; now, when you search for something like “flu symptoms,” you’ll find Harvard- and Mayo Clinic–backed knowledge-graph information panels on the right-hand side, complete with downloadable PDFs for more information. Facebook also says it’s working to address misinfodemics through a new feature that shares additional context for articles, allowing users to click on an article’s image and see links to related articles, maps visualizing where the article has been shared, source information, and related Wikipedia pages.
It’s not just the big platforms working to stop misinfodemics. Our work on the Credibility Coalition, an effort to develop web-wide standards for online content credibility, and PATH, a project aimed at translating and surfacing scientific claims in new ways, are two of many efforts to think about data standards and information access across different platforms. The Trust Project, meanwhile, has developed a set of machine-readable trust indicators for news platforms; Hypothesis is a tool used by scientists and others to annotate content online; and Hoaxy visualizes the spread of claims online.
Even the CDC and the Mayo Clinic maintain Instagram presences, though their collective following is 160,000 people, or 0.1% of Kim Kardashian’s follower base. Health advocates are gaining recognition online, too: Dr. Jennifer Gunter (“Twitter’s Resident Gynecologist”) blogs about women’s health and debunks celebrity-endorsed myths for a broad audience, while the Canadian professor Timothy Caulfield’s health video series about extreme remedies around the world was recently picked up by Netflix. Doctors around the world are bridging gaps by borrowing strategies from marketing, and scientists are advocating for collaboration between social influencers and public health experts.
Misinfodemics can seem devastating. One lesson learned from urbanization, like what happened in 19th-century London, is that when people come together, the risk of disease spread increases. Our understanding of how diseases spread is still incomplete, and new evidence continues to reshape scientific consensus over time. After London’s Great Stink, researchers found enough evidence to develop a new understanding of disease transmission, replacing the dominant idea that smells caused illness with the germ theory of disease. Ultimately, there was not one solution but an ecosystem of solutions: People started using antiseptics to keep surgical procedures sanitary, taking antibiotics to treat bacterial diseases like cholera and tuberculosis, participating in community vaccination campaigns to protect against (and eradicate) diseases like polio and smallpox, and creating sewage systems separate from drinking-water sources. Today, though the Thames is still polluted, it is no longer a consistent origin of catastrophic epidemics.
Now we know that disease also spreads when people cluster in digital spaces. We know that memes—whether about cute animals or health-related misinformation—spread like viruses: mutating, shifting, and adapting rapidly until one idea finds an optimal form and spreads quickly. What we have yet to develop are effective ways to identify, test, and vaccinate against these misinfo-memes. One of the great challenges ahead is developing a memetic theory of disease that takes into account how digital virality and its surprising, unexpected spread can in turn have real-world public health effects. Until that happens, we should expect more misinfodemics that engender outbreaks of measles, Ebola, and tooth decay, in which public health practitioners must simultaneously battle the spread of disease and the spread of misinformation.
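What might such a memetic theory look like in its crudest form? One starting point is to borrow the standard SIR (susceptible, infected, recovered) model from epidemiology and apply it to a piece of misinformation: “infected” users are actively sharing a false claim, and “recovered” users have encountered a correction and stopped. The sketch below is purely illustrative; the population size, sharing rate, and correction rate are assumptions chosen for demonstration, not measurements from any study cited here.

```python
# A toy SIR-style model of misinformation spread. "Sharing" users
# spread a false claim to "susceptible" users; "corrected" users have
# seen a debunk and stopped sharing. All parameters are illustrative.

def simulate_misinfodemic(pop=10_000, share_rate=0.35,
                          correction_rate=0.1, days=90):
    susceptible, sharing, corrected = pop - 1, 1.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_shares = share_rate * susceptible * sharing / pop
        new_corrections = correction_rate * sharing
        susceptible -= new_shares
        sharing += new_shares - new_corrections
        corrected += new_corrections
        peak = max(peak, sharing)
    return peak, corrected

peak, _ = simulate_misinfodemic()
print(f"Peak active sharers: ~{peak:,.0f} of 10,000 users")

# Raising the correction rate (faster, more visible debunking)
# shrinks the outbreak, much like a vaccination campaign would:
peak2, _ = simulate_misinfodemic(correction_rate=0.3)
print(f"Peak with stronger corrections: ~{peak2:,.0f}")
```

Even a sketch this crude maps onto the levers described above: lowering the sharing rate corresponds to platform design and data-void fixes, while raising the correction rate corresponds to credible public health content that travels as fast as the misinformation it chases.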