Thanks @endomorphosis for raising an important point that we’ve been acutely aware of, but that the website (still a WIP, partly because it covers too many different use cases) does not highlight, for want of a more nuanced discussion and clarity on the goals (maybe a blog post is warranted!). The end, in our view, is an ability for the public* to audit the on-platform sharing behaviors of users in relation to articles or topics of interest. We originally framed it as a tool for newsrooms and their audiences, which obscures the fact that our information landscapes tool is designed to be an open audit tool anyone can use, as long as there are articles** they care about tracking in online discourse. If your point is about not framing it as “disinformation” tracking, that is absolutely taken: we should make clear that we aren’t the arbiters of truth, including by rephrasing the “fight” part on the website.
A side note about our motivations: as external researchers, we have severely limited ability to look into how purported propaganda spreads on a platform, and what harms have actually befallen other users, until the platforms themselves disclose the behaviors they track, as in the multiple instances published by Meta and Twitter. That data is typically released only months after such networks have been removed, hindering research into the public harms from these ‘influence operations’. Again, we aren’t the arbiters of truth; our goal is simply to introduce more transparency into the risks from online disinformation.
Now, to your point about journalists misspecifying what “misinformation” is, or abusing the system we will create (in this case, mislabeling your own website as misinformation). I don’t disagree that there will be false positives on what constitutes misinformation, and that consensus among a single subgroup of incentivized people is generally a poor metric for identifying it. However, the purpose of a tool like this is to lower the barrier to information access: to let the everyday researcher (or journalist) access historical data on events where the establishment has acted against public interests, and cite it as evidence to limit the establishment’s ability to abuse that power in the future. I could also cite examples where researchers quickly assembled externally verified sets of URLs constituting misinformation (see CoAID and the ESOC COVID-19 data) which, despite potential false positives, substantially accelerated the dispelling of rumors. So there is clearly potential for good to come out of this. The merits of third-party verification, and “who fact-checks the fact-checkers”, are an important but separate discussion.
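To make the URL-set idea concrete, here is a minimal sketch of how shared links could be checked against an externally verified list (e.g. one exported from CoAID or the ESOC data). The file contents, site names, and record layout below are hypothetical illustrations, not the actual schema of those datasets:

```python
# Sketch: flag shared URLs that appear in an externally verified
# misinformation URL set, with light normalization so trivial variants
# (scheme, www., trailing slash, query strings) don't defeat the lookup.
# Requires Python 3.9+ for str.removeprefix.
from urllib.parse import urlparse, urlunparse

def normalize(url: str) -> str:
    """Canonicalize a URL for exact-match comparison."""
    p = urlparse(url.strip().lower())
    return urlunparse(("https", p.netloc.removeprefix("www."),
                       p.path.rstrip("/"), "", "", ""))

def flag_shares(shared_urls, verified_misinfo_urls):
    """Return the shared URLs found in the verified misinformation set."""
    bad = {normalize(u) for u in verified_misinfo_urls}
    return [u for u in shared_urls if normalize(u) in bad]

# Hypothetical example data:
verified = ["https://example-rumor-site.com/miracle-cure/"]
shares = ["http://www.example-rumor-site.com/miracle-cure?utm_source=tw",
          "https://example-news.org/fact-check"]
print(flag_shares(shares, verified))  # -> only the first shared URL
```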
For instance, if you can track multiple instances of accounts being wrong about what constitutes misinformation (which you can do with the tool we are creating), you can cite that record as evidence to place lower confidence in what such journalists or so-called fact-checkers say in the future. Even better, if you can establish from public data that there are networks of collusive accounts, that strengthens your ability to make a case for yourself later. This idea of historical credibility is similar, in principle, to Birdwatch, a tool Twitter is developing to try to improve its misinformation and online-harm detection systems.
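As a minimal sketch of this “historical credibility” idea, suppose we keep a public ledger of an account’s past misinformation calls together with whether each call later held up. The Laplace-smoothed accuracy score below is our illustrative choice, not Birdwatch’s actual algorithm:

```python
# Sketch: a ledger of past misinformation calls per account, scored by
# Laplace-smoothed accuracy. Unseen accounts default to a neutral 0.5.
from collections import defaultdict

class CredibilityLedger:
    def __init__(self):
        # account -> [calls that held up, calls that did not]
        self.record = defaultdict(lambda: [0, 0])

    def log_call(self, account: str, verdict_held_up: bool) -> None:
        self.record[account][0 if verdict_held_up else 1] += 1

    def credibility(self, account: str) -> float:
        """Smoothed accuracy in (0, 1); 0.5 for accounts with no history."""
        correct, wrong = self.record[account]
        return (correct + 1) / (correct + wrong + 2)

ledger = CredibilityLedger()
ledger.log_call("@factchecker_a", verdict_held_up=True)
ledger.log_call("@factchecker_a", verdict_held_up=False)
ledger.log_call("@factchecker_a", verdict_held_up=False)
print(round(ledger.credibility("@factchecker_a"), 2))  # 0.4, below the 0.5 prior
```

The smoothing keeps a single mistake from zeroing out an account, while a consistent record of bad calls still pulls its score well below the neutral prior.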
Our proposed system isn’t a perfect solution, and misinformation is a complex, multi-headed beast to tackle. Introduce bad actors on top of that, and every system becomes a potential attack vector. But seeing as no one, including platforms worth billions and researchers with decades of experience, has been able to solve the misinformation problem in a significant way, I think this is a small step in a direction worth exploring to improve the public accountability of platforms.
*or the independent oversight boards commissioned by regulatory authorities, or disillusioned collectives of tech employees hoping to make platforms accountable, such as the Integrity Institute
**in future projects, any URLs or keywords