There are people who will never be happy with anything the Arkansas Democrat-Gazette or any other responsible newspaper does, and a comment on last week’s column was indicative of why.
(I know I shouldn’t read the comments, but sometimes it’s educative.)
A frequent combatant in the troll wars noted, “The future of news is media like Elon Musk’s X and citizen journalists being fact-checked by their peers (Community Notes).”
On the surface, it would seem such an endeavor would be fruitful. The site formerly known as Twitter (yeah, I’m sorry, but “X” makes it sound like a porn site aimed at 12-year-old boys) states, “In the face of misleading information, we aim to create a better informed world so people can engage in healthy public conversation. We work to mitigate detected threats and also empower customers with credible context on important issues.”
One of the ways it says it’s doing that is with Community Notes, formerly known as Birdwatch (because, of course, Elon Musk isn’t happy unless he’s making others miserable with weird demands and rebrands). Sarah Perez of TechCrunch wrote in August, when the service announced that the “extra context” notes would be removed for those already experienced with Community Notes, that the system has been refined “so the ‘wisdom of the crowds,’ so to speak, couldn’t be easily gamed by someone or a group of people wanting to spread misinformation. The system is not as simple as having a post or fact check upvoted or downvoted for accuracy. If that were the case, brigades of like-minded contributors could team up to promote their own viewpoints.

“Instead, Community Notes uses a ‘bridging’ algorithm that attempts to find consensus among people who don’t usually share the same views. Not everyone can immediately become a contributor to Community Notes, either. They first have to prove they’re capable of writing helpful ‘notes’ by correctly assessing other notes as either Helpful or Not Helpful, which earns them points.”
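To make that “earning points” gate concrete, here’s a minimal sketch of how such a system might work. The class name, point values, write-unlock threshold and the penalty for off-base ratings are my assumptions for illustration; X hasn’t published its scoring in this form.

```python
# A toy sketch of the contributor gate Perez describes: new users must
# first rate existing notes, and only unlock note-writing once their
# ratings have tracked where the community eventually lands.
# The threshold and point values here are invented for illustration.

class Contributor:
    WRITE_THRESHOLD = 5  # hypothetical points needed to unlock note-writing

    def __init__(self, name):
        self.name = name
        self.points = 0

    def rate_note(self, my_rating, eventual_consensus):
        """Earn a point when a rating matches the eventual consensus."""
        if my_rating == eventual_consensus:
            self.points += 1
        else:
            self.points -= 1  # assumed penalty for off-base ratings

    def can_write_notes(self):
        return self.points >= self.WRITE_THRESHOLD


newbie = Contributor("new_user")
for consensus in ["Helpful", "Helpful", "Not Helpful", "Helpful", "Helpful", "Helpful"]:
    newbie.rate_note("Helpful", consensus)
print(newbie.can_write_notes())  # False: five good calls and one bad one leave 4 points
```

The point is simply that writing privileges are earned by demonstrating good judgment on other people’s notes first, not handed out on signup.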
Sure, I guess that’s one way to get your news if you don’t trust “mainstream media,” but crowdsourcing doesn’t really scream “credibility.” It does tend to scream, though …
So how does Community Notes work in real life?
Not very well, according to a story by Madison Czopek of the Poynter Institute, a nonprofit media institute and newsroom that provides fact-checking, media literacy and journalism ethics training to citizens and journalists. The institute’s MediaWise director, Alex Mahadevan, told a summit of fact-checkers in June that the experiment thus far had been lackluster, with only about 8.5 percent of the notes showing up for regular users. (And that was before the announcement that notes would be removed for some, soooo …)

“To determine which Community Notes see the light of day,” Czopek wrote, “Twitter relies on ratings from other Community Notes users, who can read notes and evaluate whether those notes cite high-quality sources, are easy to understand, provide important context and more. If notes get enough ratings stating that they are helpful, they are more likely to be displayed publicly.” However, for a note to become public, it has to be accepted by a consensus across the political spectrum.
Ideological consensus … sure, that’s easy!
Mahadevan noted that getting a “cross-ideological agreement on truth” is virtually impossible in the hyperpartisan environment we now have. Plus, the site uses past behavior to determine political leanings and waits till a similar number of users on the right and left weigh in on a note.
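Putting those pieces together, here’s a toy sketch of that gate: a note surfaces only once enough raters from each inferred leaning have weighed in and both sides find it helpful. The two-bucket “left/right” split, the counts and the thresholds are my simplifications for illustration; X’s actual bridging algorithm is considerably more involved.

```python
# A toy illustration of the cross-spectrum gate described above -- not
# X's actual algorithm. Leanings, minimum counts and thresholds are
# assumptions made purely for illustration.

from collections import defaultdict

def note_goes_public(ratings, min_per_side=3, helpful_rate=0.6):
    by_side = defaultdict(list)
    for leaning, helpful in ratings:       # leaning inferred from past behavior
        by_side[leaning].append(helpful)

    for side in ("left", "right"):
        votes = by_side[side]
        if len(votes) < min_per_side:      # wait for a similar number on each side
            return False
        if sum(votes) / len(votes) < helpful_rate:
            return False                   # no consensus from this side
    return True

# A "brigaded" note: one side loves it, the other doesn't buy it.
brigaded = [("left", True)] * 8 + [("right", False), ("right", False), ("right", True)]
print(note_goes_public(brigaded))  # False, even though raw votes run 9-2 in favor
```

That’s the anti-gaming property Perez describes, but it’s also exactly why Mahadevan says so few notes ever clear the bar.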

How that works with people like me, who are not only moderate but also read tweets from across the spectrum in the course of their work … well, I’m sure the algorithms would figure that out, just like they’ve figured out that I don’t eat beef because of my IBS. Heck yeah, keep on showing me ads for Omaha Steaks!
One of the biggest problems is that 60 percent of the most-rated notes aren’t public, meaning the tweets most in need of a context note don’t get one. “So this algorithm that was supposed to solve the problem of biased fact-checkers basically means there is no fact-checking,” Mahadevan said. “So crowd-sourced fact-checking in the style the Community Notes wants to do [is] essentially non-existent.”
But maybe being engineered to fail is the point.
Valerie Wirtschafter and Sharanya Majumder found a mixed bag in an analysis of Community Notes published in the Journal of Online Trust and Safety. While they found some bright spots, there weren’t noticeable declines in engagement with tweets marked with Community Notes, nor were misleading tweets more likely to be deleted. (Mahadevan found that it tended to do better on things like pop culture than politics.)

Yoel Roth resigned as head of trust and safety at Twitter after Musk bought the company. Before Musk, Czopek wrote, “Twitter had a two-pronged approach to addressing misinformation—one method that was centralized, where the company applied labels to misinformation, and another community-sourced method. Now, it has put ‘all of its eggs in the basket of Birdwatch,’ Roth said.
“’I think that’s a failure,’ he said. ‘I think Community Notes is an interesting concept. I think it has some areas where it’s successful. I think we’re seeing some interesting applications of the product. But there’s many other areas where it is not a robust solution for harm mitigation.’”
Along similar lines, I was also struck by a response last week to our little troll “friend” above, from a commenter who agreed that X and similar media are the future: “As it should be. This paper, much like the mainstream media, spins stories around false narratives instead of just reporting facts. The editors try to interweave the narratives and the facts get omitted and it all becomes a big pile of propaganda that leftists eagerly consume as ‘news.’”
I would be eager to learn what, exactly, they believe the news side of this paper has been doing. I came from the news side, and I don’t recall any of this propagandizing going on. (Dude, you have the burden of proof, so please, bring the receipts.) When reporting reality seems like propaganda, something is seriously wrong with the picture, and a big part of it, I believe, is due to both media illiteracy (being unable to discern the differences between reliable and unreliable news sources) and hyperpartisanship. The two feed off each other, which is an unhealthy situation all around.
It leads to people believing that the attack on the Capitol we saw happening in real time was just a normal tourist visit, or that an election was stolen because their guy didn’t win, despite the dearth of evidence of that happening. It also leads, as in the example of the troll and his buddy, to mistrust of any media source that doesn’t feed their biases.

But that’s just not how the newspaper business works, at least for responsible media companies. Reporters gather information through research, interviews, etc., write and fact-check their stories, then send them to an editor who reads, fact-checks, asks further questions and edits before sending them on to at least two other editors (at our newspaper, typically the rim and slot on the copy desk) who go through the same process to prepare the story for publication. Stories are proofread again on the page, any needed changes are made, and pages are reread and signed off on, so that by the time they reach the reader, they should be clean. (For opinion at our paper, at least two editors read almost everything; we have fewer editors than the news side does, but we still want to make sure that we get things right.)
Not all newspapers work as ours does, where the digital edition (not the website) is a copy of the paper in digital form, with the same news hole and space restrictions as the physical paper (which is partly why the extended version of my column is on the blog rather than in the paper, besides letting me put in pictures of fur-nephews Spike and Charlie). But I’d argue that it’s a cleaner read than papers that put no length restrictions on articles.
Narrowing the focus makes it easier to produce a readable story. Throwing everything in implies that all the facts in a story have equal weight when that’s not really true, and besides, how would you know when to stop (never mind that in breaking news, facts come out in dribs and drabs)? We know that not everything can be, or should be, included; those “omissions” some of these guys rail about often have little to do with the actual news, and more to do with what a cable “news” channel has whipped up to fill time on air. Be thankful for editors who keep you from having to plow through walls of text (another reason I tend to use photos and editorial cartoons to break things up on the blog). I still have nightmares about the Starr Report, which we ran in our paper when I was on the clerks’ desk.
We have had training not only in the nuts and bolts of the craft (writing, editing, story flow, etc.) but also in legalities such as libel and the Freedom of Information Act, and because of that we endeavor to fulfill our tasks as completely and responsibly as possible; not doing so could result in being sued and/or being fired. If an error is made, it’s corrected as soon as possible if for no other reason than credibility’s sake. If you can’t be bothered to admit your errors, you shouldn’t be in the business in the first place.

But you know, the company formerly known as Twitter doesn’t have to exercise the same responsibility as publishers do. Plus, with Community Notes, it’s getting free labor from people who are most likely paying to use the service, because Musk cut the people actually trained to ride herd on things like fact-checking.
📰📰📰📰📰
Crowdsourcing has good potential, but it also can fail miserably (the Internet detectives of the Boston Marathon bombing come to mind, misidentifying missing student Sunil Tripathi as one of the bombers; Tripathi had died by suicide the month before, but his body wasn’t found till after the bombing).
For now I’d much rather put my trust in people who’ve done the research and been fact-checked by layers of editors. People who are willing to stand up for correct information using their own name, and who provide documentation to back up their research.
Pardon me if I trust people who do this for a living more than some armchair Internet sleuths with no culpability for being wrong.



