Stone's Throw

My Visit to The Trust Project and How to Fix Facebook

The Trust Project founder Sally Lehrman is seen here at right gathered with other members of the board for a meeting last Tuesday at One World Trade Center.
Root Cause of Misinformation Age: Platforms More Than People

Before I dive into this column, I just want to be clear that weekday trips to fancy Manhattan buildings, where I meet media bigwigs and thought leaders from academia, are many miles from my norm.

An average Tuesday in the 11 o’clock hour features me, say, sitting on a couch in my living room, tapping away at work and arguing with my dogs while debating whether to consume my noon caffeine jolt a few minutes early or shoehorn in time for my unfinished stab at the day’s Wordle.

But this past Tuesday was quite different.

I found myself on the 44th floor of One World Trade Center, in a conference room inside the Manhattan offices of Condé Nast/Advance Publications, where The Trust Project (TTP) was holding a board meeting.

The organization’s director, the inimitable Sally Lehrman, had graciously invited me and another journalist to speak to the board about our experiences as recent entrants to the international initiative.

TTP’s mission centers on fortifying journalism’s commitment to transparency. Participating Trust Project media outlets must complete a rigorous process, led by Lehrman and her diligent team, in which applying organizations learn how to incorporate an array of standards, best practices and protocols developed to engender trust in digital news sites.

You Can Label Me

To name one representative example: Trust Project outlets, on their digital platforms, must label each piece of content; participating news organizations are required to clearly display for readers what is news versus opinion versus branded content and all the rest.

Anyway, bigger picture, the organization’s board and wider team are working to combat the disease of misinformation and disinformation, laser-focused on how TTP can play its part in addressing the metastasizing cancers of both the intentional and unintentional promulgation of propaganda.

A trio of academics spoke to the group last Tuesday about the nature of agenda-driven rhetoric, the way it takes root, and how our fractured information infrastructure fosters inequities and sows distrust and discord.

The professors’ presentations got me thinking about the vastness of the problem, and how difficult a challenge organizations like The Trust Project face in trying to fight the far-reaching tentacles of propagandists and those wittingly and unwittingly amplifying messages designed to divide.

State actors and communication-savvy ideologues seeking to advance destructive agendas reside in an era ripe for rhetorical menace. And they’re aided by millions of accomplices among the masses, of both the suspecting and unsuspecting variety. (Journalists with overly quick social media-sharing trigger fingers are most definitely included among the accomplices. I’ve been guilty of premature posting myself.)

While the magnitude of the problem could darken the optimism of even the most hopeful tech reform activist, some flickers of light appear to illuminate a pathway forward.

Social Scene

Poking around after the Trust Project meeting for relevant findings, I stumbled across new USC research suggesting that the architecture of social media platforms deserves our focus when identifying what caused our current problems and how to fix them. When debating the way to repair our broken communications landscape, direct your attention to the tech wizards who manufacture the social web’s manipulative incentive systems.

The USC social scientists push back against the notion that misinformation largely spreads because people are ill-wired to distinguish fact from fiction.

We know misinformation is a worldwide hazard carrying socioeconomic and political consequences, the researchers note. But what primarily drives its spread? Even after acknowledging the power of confirmation bias, the researchers point to a different factor as the fuel of misinformation’s digital engine.

“The answer lies in the reward structure on social media that encourages users to form habits of sharing news that engages others and attracts social recognition,” the researchers conclude in their findings, published last week in PNAS, a scientific journal. “Once users form these sharing habits, they respond automatically to recurring cues within the site and are relatively insensitive to the informational consequences of the news shared, whether the news is false or conflicts with their own political beliefs.”

It’s good news because people can fix the wiring of our algorithmically structured digital machines, whereas the core operating systems of us flesh-and-blood humans are more hardwired. (Although, admittedly, other studies have shown that social media consumption can also rejigger our brains.)

If the researchers’ conclusion about what drives misinformation’s spread in our muddied information ecosystem is true (and, yeah, it’s true, although “no duh!” isn’t exactly a scientific argument), they deliver an important and encouraging empirical finding. It means, in essence, that human nature isn’t necessarily the unfixable, dominant cause of the spread of lies and unintentional falsehoods online.

You’ve Got Mail

The cause at hand is the reward structure of today’s social web, a digital universe with minimal resemblance to the early internet. We subsist in a worldwide web of digital communication that could, at least theoretically, be reengineered to operate in a fundamentally different and better way.

“In this research, we show that the structure of online sharing built into social platforms is more important than individual deficits in critical reasoning and partisan bias – commonly cited drivers of misinformation,” the paper states. “Due to the reward-based learning systems on social media, users form habits of sharing information that attracts others’ attention. Once habits form, information sharing is automatically activated by cues on the platform without users considering response outcomes such as spreading misinformation.”

But you might be asking yourself: what exactly can the social media sites even be expected to do? The short answer is plenty.

I caught up with the lead author of the paper, Dr. Gizem Ceylan, a behavioral scientist and postdoctoral researcher at the Yale School of Management who received her Ph.D. from USC, where her group started the project.

For starters, she noted how social platforms should stop taking popularity as a signal of quality. Currently, the social sites prioritize content that receives popular attention in the form of likes and shares. Algorithms focus on this content and distribute it widely.

“But what we find is that the popular content is also likely to be sensational, emotionally provoking, and false,” Ceylan told me in an e-mail interview last Friday.

Platform Pivots

She also stressed that algorithmic de-prioritization of unverified news is needed. When posts generate quick likes and shares, the algorithm amplifies the content’s visibility.

The platforms should introduce an embargo or verification period, Ceylan suggested. Once information is verified, the platforms would continue making it visible. If it proves false, the algorithm can be designed to de-prioritize it.

“This is in line with the notion of ‘friction’ in habits literature,” she said. “If a behavior is habitual, you add friction and make that behavior hard to execute. Over time, this leads to extinction of that behavior.”

She also suggested the platforms add buttons, such as “Fact-Check” or “Skip,” to incentivize truthful sharing. People would then see how many others had clicked those buttons before them.

There’s an added benefit to this tactic, too. The buttons would serve as digital signals to readers that they should always keep the notion of truth in mind when consuming content online, Ceylan observed.

“Also, if so many people fact-checked a piece of news, then it will signal that there is something fishy about it or something that needs to be verified,” Ceylan argued. “Currently, with only likes, shares, emoji buttons, the only thing on people’s minds is whether their content is going to generate popularity and social approval they seek out on the platform.”

Power Play 

To those who fret about giving tech platforms too much influence over the flow of information, I say you’re very right to worry. It’s unsettling to consider the power these massive corporations possess to stifle the spread of unpopular ideas from left, right or center.

But also understand that Zuckerberg and Elon and their colleagues already manipulate and maneuver the flow. And the status quo version of that manipulation provokes outrage to generate clicks and line pockets, even if less blatantly so than a few years ago on Facebook.

The platforms are already light-years from agnostic. So this isn’t about wiring in structures for the first time that incorporate incentives and algorithmic preferences. It’s about rewiring.

Also, why should we expect traditional news media outlets to exercise prudent discretion but not demand similar rigor from “social” media? If you boil the argument down, it’s not about whether discretion should be exercised. It’s about how to exercise it.

Most everyone wants newspapers to show some level of judgment in which letters to the editor they do or do not publish. Unpopular opinions are fine, lies are not. That general principle should apply to social media publishers, too, not just traditional media publishers.

As for the ability to detect blatant, easily provable lies immediately, that capacity, at least from a technical standpoint, has already arrived. Anyone who has played around with ChatGPT must intuitively know the mammoth role artificial intelligence will undoubtedly play in our digital future, whether we like it or not.

Unsweet Emotion 

And sure, we might have all already known that social media platforms exploit our emotions. But this USC research tackles a slightly different point: put simply, the study tells us the onus is on platforms more than people.

Ceylan also insists it’s reasonable to believe the social media companies can be persuaded to change, especially by appealing to their bottom-line self-interest.

For instance, some users are leaving Facebook, displeased with the quality of content. And user complaints about misinformation have risen while charitable donations through Facebook’s fundraising tool declined in the second half of last year, Ceylan noted. Enhancing trust is good for business.

She was also intent on distinguishing the approach she’s proposing from the focus others might place on entirely removing objectionable content.

“We are saying that just do not amplify the content just because it receives lots of likes if it is not fully verified,” she said. “It will be contained in the system but it will be buried eventually.”

When information quality on the platforms is seen as poor, many avoid sharing. Although social media posts continue to serve as the beating heart of the world’s lightning-speed news cycle, you’ve probably noticed certain friends disengage in recent years, given the avalanche of misinformation and anger.

Silence is a Virtue?

Separating fact from fiction when reading your friends’ posts has become exhausting for some. It can be easier to decline comment.

“They are hesitant and constantly asking themselves whether I should even like this content as it may or may not be true,” Ceylan said. “Some people in our surveys even said they do not share anything rather than being worried that what they share might be fake.”

That sentiment does not serve Facebook’s ambitions.

It’s also important to emphasize how significant an issue this has all become for local communities. Despite the incredibly constructive civic conversations social media can admittedly spawn, some of the public dialogue has been poisoned by the nature of the medium.

As for the free speech argument, depending on the framing, it’s basically bogus. The First Amendment protects us from the government censoring our speech. The founders were not looking to guarantee the legal right to reach billions of people with the click of a button. We’re entitled to our uncensored opinions, not a worldwide audience.

And sometimes when you need to interview the ideal source, the best person is sitting right next to you, at least remotely speaking.

Inside Baseball

I chatted about the topic with our very own Examiner Editor-in-Chief Martin Wilbur last Friday, knowing his four decades of experience covering local news (before and after the emergence of the web and then the social web) gave him a front-row seat to how civic affairs and community conversations have changed.

During a brief phone talk, he provided a couple of examples of instances when misinformation complicated people’s understanding of a local story we covered.

In one case, residents began to post online that an educator was injured while trying to protect a student from self-harm. Martin’s eventual reporting established the more precise record, but who knows how many people only ever read and shared the original, incorrect version of the story on social media.

He separately recounted how the maximum capacity of residential development permitted by New Castle’s proposed Form Based Code was taken out of context and perpetuated through social posts, creating a false narrative that never died, ultimately dooming the plan last year.

There were legitimate concerns about the code, and exaggerated concerns. The exaggerated concerns were introduced into the debate as facts, Martin recalled, and were never vanquished from the rhetorical stage, or the public understanding.

He also lamented how combative keyboard warriors can, for example, attack proposed development projects from the comfort of their homes, hiding behind screens, not needing to show up in person to learn pros and cons at Planning Board meetings and the like as in the olden days.

Devil in the Details

So when a false or misleading detail is shared on social media, intentionally or unintentionally, Martin said people then feel emboldened to shout their own self-righteous, angry, ill-informed digital responses. And they often do so in a confrontational, fact-free manner they’d have been less likely to employ if face-to-face with friends and foes at town hall 20 years ago.

“There are even some very intelligent people who post if not intentional omissions then from a particular point of view on an issue and they’re not going to include information from another perspective and it’s taken very often as gospel because someone is seen in the community as reputable or intelligent,” he said. “A lot of times it’s something that might be 90 percent true but you don’t know whether there’s an omission or something that isn’t reported that could cast a whole different light on it.”

As news organizations like ours compete for the digital public square with cacophonous but potent community forums on platforms like Facebook, the local dialogue takes a certain shape as a direct result of the online sharing tendencies of your friends and neighbors.

If John Doe feels like he’ll earn a dopamine drip via social media likes and flattering comments by sharing a certain flavor of content, the accuracy of the content might subconsciously become less critical to him than whether it’ll secure, say, some smiley face emojis from digital “friends,” or even angry GIF faces from detractors if he possesses troll-like sensibilities.

How many times have you seen people share a link to an article the moment the piece publishes, based on the headline alone, so quickly that the sharer couldn’t have even skimmed the article?

Is that person looking to be a responsible steward of information or are they seeking digital slaps on the back? (Or, yes, they might just be motivated by a desire to advance a preconceived notion or political agenda.)

Wookin’ Pa Nub

But, more to the point, did that person most likely share the piece prematurely because they’re unable to distinguish fact from fiction? Or is it because they’re looking for love? The USC research suggests users are habitually trained to secure affirmation with certain online shares.

Don’t misunderstand, enhancing news literacy is critical. Teaching 21st century students how to discern between facts and falsehoods online should be a top priority in education. But if users are, broadly speaking, relatively insensitive to the consequences of news they share, and are primarily fueled by a platform-manufactured desire for digital attention, advocates should prioritize the lobbying of social media companies to reform.

Facebook in 2023 looks little like it did when it was founded 18 years ago, let alone compared to even a couple of years ago. The degree to which the company allowed its platform to be a fertile breeding ground for misinformation changed, at least to an extent, after the Jan. 6, 2021 Capitol riot. So the Facebook of tomorrow isn’t fated to operate the way it functions today.

In fact, the USC research showed how habitual users sometimes shared information that challenged their own political beliefs, seemingly more motivated by generating response than any other factor. What if the techies in Silicon Valley built algorithms that rewarded spreading truth more than the ability to stimulate emotion?

While sharing of falsehoods might often stem from bias and/or laziness, the habitual nature of the sharing is central to understanding the best avenues to explore when seeking to mitigate misinformation’s spread.

“Social media sites could be restructured to build habits to share accurate information,” state the USC researchers, whose study featured more than 2,400 Facebook users.

Trust But Verify 

As for assessing the role tech can play in helping to create a better information universe, the topic isn’t just in Lehrman’s wheelhouse. The question over tech’s role in decontaminating the worldwide digital cesspool was essentially the driving intellectual force behind TTP’s creation in 2014. The award-winning journalist began asking why technology couldn’t support news trustworthiness and integrity instead of driving them down.

The Trust Project partners with news sites that display strong integrity, and aims to amplify their work in order to illustrate a sharp contrast for the world on what credible online journalism does and does not look like.

TTP developed eight of what it calls Trust Indicators®, a tool to assess whether a site should earn your confidence. Labeling content, describing journalist expertise and offering the opportunity to provide feedback are among the “Indicators” that trusted sites use.

Another key aspect of The Trust Project involves efforts by the organization to work with social media and search companies to, as Lehrman put it, “enhance the ability of their algorithms and human teams to tell the difference between real news and the imposters.”

Those initiatives are especially critical amidst the emergence in recent years of what’s known as “pink-slime journalism.”

Basically, pink-slime sites have the general look and feel of news websites but are just propaganda pushers masquerading as credible news organizations.

In fact, Lehrman pointed to a slime site that’s now publishing in her Bay Area backyard. A legitimate newspaper, The San Francisco Chronicle, reported on Friday about a site calling itself “The San Francisco Inquirer,” which is manufacturing cheap digital slop and packaging it as local journalism.

“Disinformation is an insidious pollutant in our information systems,” Lehrman stated to me in an e-mail on Sunday. “It undermines our trust in one another and in our institutions.”

‘Russia, If You’re Listening’

It’s also important to remember there are those who intelligently argue that the impact of misinformation and disinformation in influencing world events has been widely exaggerated, or at least misunderstood.

The highest-profile debate on this topic, of course, involves the role Russia’s influence campaign played in swaying votes in Donald Trump’s 2016 election.

A half dozen researchers from universities in four countries – the United States, Denmark, Germany and Ireland – published a study two weeks ago that says no evidence was found “of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization or voting behavior.”

The study analyzed survey data from almost 1,500 U.S. respondents who provided access to their Twitter accounts and answered questions about their political attitudes and beliefs at multiple points during the 2016 campaign. (It’s not clear why researchers analyzed Twitter, instead of Facebook, where the influence seemed dramatically more significant, given the nature of the platform and the volume of use).

But to me, it’s more about how bad actors aim to coarsen the culture and upend norms. (Russia’s primary goal was creating division among Americans and distrust in democratic institutions. The Kremlin never believed Trump was likely to win; his surprise victory was merely seen as a long-shot cherry on top, University of Washington Associate Professor and Center for an Informed Public co-founder Kate Starbird emphasized for us at The Trust Project meeting.)

Sure, most Democrats would vote for Democrats and most Republicans would vote Republican whether or not a Russian bot infiltrated their digital chats with hateful anti-police or racist anti-immigrant rhetoric.

But they’d be less inclined to view their politically different neighbors as evildoers without the additional layers of rhetorical heat burning away lingering vestiges of civility and shared facts. The past half-dozen or so years have been marked by a normalization of extremist language. (Not to mention the fact that even a seemingly nominal amount of actual vote-changing impact from foreign interference shouldn’t be tolerated or pooh-poohed, and could be result-altering.)

Tools of the Trade

After hearing Starbird and the other speakers discuss various aspects of understanding misinformation, and the role journalists can play to address related problems, I got the chance to chat with The Trust Project board Secretary/Treasurer Larry Kramer, who retired as president and publisher of USA Today in 2015.

The world, Kramer said in a subsequent e-mail exchange, needs sophisticated new tools to identify misinformation and its sources.

He also commented that trusted watchdogs are desperately needed “to help people understand where content is coming from.”

Because disinformation is a burgeoning political strategy, and because current technology helps spread falsehoods like a virus, the efforts of groups like The Trust Project to treat the sickness have become increasingly vital.

This battle needs to be war-planned at an institutional level, with sophisticated but nimble organizations coordinating the fight. A loose collection of critics isn’t enough.

“So the perfect storm is that many more people are weaponizing information at the same time technological advances have given them the ability to reach massive audiences at little cost,” said Kramer, the founder of MarketWatch, which he created in 1995 and later sold to Dow Jones.

“The end result is that the public needs to quickly develop and use new, more sophisticated tools to help identify misinformation and its sources,” he also told me in an e-mail on Saturday.

You’re False News

Meanwhile, there’s also research concluding that humans, not bots, are mainly responsible for misinformation’s spread. False news spreads faster and more widely on social media than true news, with false information 70 percent more likely to be retweeted than true information, a 2018 study by a trio of M.I.T. scholars illustrated.

Even while stipulating that there’s an ongoing dispute over the implications and impact of false news, we know it can produce violence, famously illustrated by the so-called “Pizzagate” conspiracy theory, which raged like wildfire on social media in 2016.

A claim that a Washington D.C. pizzeria was the site of a child sex-trafficking ring run by high-level Democratic Party officials went viral. The theory was entirely unfounded – it did not even contain a kernel of misunderstood truth – but it led to a man firing shots inside the restaurant and several death threats to the restaurant’s owner and employees.

And if you fear a future world where emotionally fragile corporate overlords from a small handful of multibillion-dollar enterprises maintain a tight grip on the way information is consumed and shared, I’ve got news for you – that day has already arrived. Today is about agitating for the best possible manifestation of that dystopian-like reality.

Beefing up the use of third-party fact-checkers, increased transparency around paid content and an investment in media literacy are also among the ways that platforms can help construct a healthier information infrastructure. We need a Web 3.0 built with better guardrails against the potential future excesses of A.I. and virtual reality.

With all of this in mind, I was curious to learn, anecdotally, if people I know also subscribe to the belief that distrust of social media has grown.

News Hound

One of the most voracious and enthusiastic local news consumers I know is my friend Ken Diorio, a Bedford Corners resident. It’s only a slight exaggeration to say Ken is usually at least a half-step ahead of The Examiner on breaking stories. For instance, he was the first to alert me to the plane crash near Westchester Airport last week, giving us extra time to publish a piece by Friday afternoon.

I asked for his general thoughts on misinformation, disinformation and social media’s role.

“Twitter used to be a great source of crowd source information versus corporate media,” Ken replied over text on Friday. “However, it’s been turned into a weapon of misinformation. Confirming sources is a thing of the past. Now (it’s) a race to report, not if it’s accurate.”  

While he said social media remains everyone’s “first look at an issue,” he also remarked how the platforms are usually full of falsehoods.

“To me and friends, text strings have replaced Facebook and Twitter,” he said. “Too many things taken out of proportion.”

All that being said, there is a giant elephant in this room. In today’s world, one man’s fact can be another man’s fiction. But industry leaders must embrace the reality that actual truth exists. And, in fairness, there’s been movement in recent years away from false equivalencies, both-sides-ism and contrived objectivity. You can’t hedge on calling clear balls and strikes in order to comfort those who believe Pizzagate was real, or COVID is fake.

Eye Test

I also must acknowledge the weirdness of writing about the ills of social media in a piece I’ll ultimately be sharing to Examiner social media. I guess the bottom line is that people who spend their work time preparing professionally-reported, fact-checked information still pine for platforms where readers can more often trust their eyes. I only hate on social because I’d love to love it.

But it really doesn’t have to be this way, or at least not this bad.

And hey, don’t forget, Mark Zuckerberg is a Westchester native. This is just a relatively useless hunch, belied by lots of conflicting evidence, but I can’t shake the feeling that some part of Zuck still wants to do right, amidst all the wreckage he’s wrought.

So, Mark, if you’re reading this piece, get cracking on fixing your “Social Network.”

Just think, Jesse Eisenberg can play you in a more flattering sequel about your middle-aged heroics.

Now that would be a Meta moment the trust-building world could “like.”
