No, it’s not to eat them. What’s wrong with you?
RationalWiki defines concern troll as follows:
A concern troll is someone who disingenuously visits sites of an opposing ideology to disrupt conversation by offering unwanted advice on how to solve problems which do not really exist. Topics of "concern" usually involve tactical use of rhetoric, site rules, or philosophical consistency. The concern troll's posts are almost exclusively intended to derail the normal functions of their targeted website.
Alternatively, a concern troll is someone who enters a discussion with a pre-formed opinion contrary to the majority opinion, but pretends to conform in order to subtly sow dissent and doubt without being called on it. Such attempts often begin with the troll raising "concerns" about the topic of the discussion, hence the name.
It’s the latter definition I’m going to concern myself with, and I’m going to limit myself to discussing them in the context of Twitter.
The cleverest concern trolls can create very effective parodies of their opponent’s positions. The way they do this is by walking a fine line between ridiculousness and credibility. Ideally, a concern troll wants their trolling to be both:
1. Deranged or cringe enough that outsiders following the discussion are thoroughly put off from the position being parodied.
2. Close enough to their opponent's actual position that the opponent can't easily call the troll out without undermining their own position, i.e., plausible deniability.
It says something either about me or about how far the Overton window has shifted that I would be only mildly surprised to hear someone sincerely say either of the above examples. I think it's probably the Overton window.
Keep in mind that these two examples are just two of the ones that got exposed. How many more trolls are convincing enough to go undetected? I confess I tried something like this myself once: I made a parody Twitter account, primarily to see how long it would take for someone to call me out, and secondarily out of profound boredom. Mea culpa. It wasn't my finest hour. But I learned that you can get away with saying far more ridiculous things for far longer than you might think. I suspect, but can't confirm, that there are entire Twitter communities devoted to this sort of non-obvious trolling.
Of course, on an ordinary internet forum, if you're the victim of a concern troll you can, if you feel so inclined, invest the potentially not insubstantial time and energy needed to clarify the differences between the troll's position and yours without undermining the latter (assuming, that is, the troll hasn't pinpointed a genuine problem with your position).
On Twitter, with the character limit, delving into the nuances of an issue can be prohibitively difficult.
On the other hand, assuming bad faith without good evidence is also counterproductive. That sort of assumption fosters an atmosphere thick with distrust.
Why is trust essential in discourse? It's not entirely obvious to me, a priori, that it should be. I don't need, for example, to trust that my conversation partner is a good person, or has my best interests in mind, or would water my plants if I asked him to. To have a productive discourse, we need a particular kind of trust: participants need to be able to trust that their conversation partners are engaging in good-faith discussion.
Let’s suppose, to see why this might be, that we have a toy model of two conversation participants, A and B. A believes normative proposition p and B believes normative proposition q. And let’s further stipulate that as a society we want to incentivize truth-seeking, which is to say, we want both participants to marshal their intellectual powers toward finding the truth of a given matter.
Keep in mind that both conversation participants also have a motivation to advance their agendas for p and q.
If A has good reason to believe that p is true and that B is primarily engaged in good faith truth-seeking, then A has good reason to engage in truth-seeking himself; after all, by his reckoning, if they collaboratively reach the truth, then they will end up agreeing on p, which will further A’s agenda. If A has reason to suspect B isn’t engaging in good faith truth-seeking, then it may be in A’s rational interest to disengage from the discourse; it won’t likely yield any useful insights toward the truth and could in fact hurt A’s cause of advancing p if B is primarily interested in pushing his agenda. And vice-versa from B’s perspective.
So in order to have an efficient truth-seeking discourse, it’s generally necessary for both sides to believe the other side is engaging in good faith. And the easiest way to achieve this is for both sides to actually engage in good faith.
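The incentive structure above can be sketched as a toy payoff matrix. The numbers below are illustrative choices of mine, not part of the original argument; the only feature that matters is that each participant's best response flips with their belief about the other side's good faith.

```python
# Toy payoff model (hypothetical numbers) for the good-faith argument above.
# Each participant picks a strategy: "truth_seek" or "push_agenda".
# Mutual truth-seeking is rewarded; a truth-seeker paired with an
# agenda-pusher wastes effort and loses ground.

PAYOFFS = {  # (A's strategy, B's strategy) -> (A's payoff, B's payoff)
    ("truth_seek", "truth_seek"): (3, 3),    # truth collaboratively reached
    ("truth_seek", "push_agenda"): (-2, 2),  # A's honest effort exploited
    ("push_agenda", "truth_seek"): (2, -2),
    ("push_agenda", "push_agenda"): (0, 0),  # sound bites and gotchas
}

def best_response_for_A(b_strategy):
    """A's payoff-maximizing strategy, given a belief about B's strategy."""
    return max(("truth_seek", "push_agenda"),
               key=lambda a: PAYOFFS[(a, b_strategy)][0])

# If A trusts B to engage in good faith, truth-seeking is rational for A...
assert best_response_for_A("truth_seek") == "truth_seek"
# ...but if A suspects bad faith, A rationally abandons truth-seeking too.
assert best_response_for_A("push_agenda") == "push_agenda"
```

The point is only that A's optimal behavior depends on A's belief about B, which is why mutual good faith is self-reinforcing and mutual distrust is too.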
Repeated failures of good faith engagement, as on platforms like Twitter where there is little in the way of an enforcement mechanism, can foster distrust and turn the place into a cesspool of sound bites and gotchas.
Hence I pose the problem: how do we weed out trolls on Twitter and signal good faith to other conversation participants? Surely the solution can’t be heavy-handed Big Brother censorship.
Proposal: the green checkmark.
This is a first pass solution that rests upon the assumption that trolls are rational actors, which admittedly might not be the case.
The implementation might not be simple, but the idea is. Twitter could make concern trolling a losing proposition, counterproductive to the troll's own agenda, by giving accounts a green checkmark signalling their allegiance on the issue of p vs. ~p, which the account holder would prove with a charitable donation to an organization on their side of the debate. The required donation would be set just high enough to offset whatever gains a user could reap for their agenda by concern trolling. A rational concern troll would thus simply elect not to troll, or to troll without a green checkmark, which other users would read as a signal that the troll has no skin in the game and might not be engaging in good faith.
Objection 1: Wouldn’t this just lock poor people out of debates?
Answer 1: To the extent that one's Twitter reach correlates with both one's potential to damage the discourse by concern trolling and one's net wealth, the donation required for a green checkmark would be chosen to scale with one's Twitter reach. Someone with ten followers might, for example, only have to donate a dollar to get a checkmark. If they gained a million followers overnight, they'd have to make up the difference with a further donation (after a grace period) or risk losing their checkmark.
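To make the scaling concrete, here is a minimal sketch of reach-scaled checkmark pricing. All numbers, the square-root scaling rule, and the function names are hypothetical choices of mine, not part of the proposal; a real implementation would calibrate the curve against estimated trolling value.

```python
# Hypothetical reach-scaled pricing for the green checkmark: the required
# donation grows with follower count, so cost roughly tracks the trolling
# value of a larger audience. Numbers below are purely illustrative.

BASE_DONATION = 1.00      # dollars for a small account
FOLLOWERS_AT_BASE = 10    # follower count covered by the base donation

def required_donation(followers: int) -> float:
    """Donation (in dollars) required to hold a green checkmark."""
    if followers <= FOLLOWERS_AT_BASE:
        return BASE_DONATION
    # Sub-linear (square-root) scaling: grows with reach, but stays
    # affordable for mid-sized accounts.
    return BASE_DONATION * (followers / FOLLOWERS_AT_BASE) ** 0.5

def top_up_owed(followers: int, already_donated: float) -> float:
    """Extra donation owed after a follower jump (post grace period)."""
    return max(0.0, required_donation(followers) - already_donated)

# A ten-follower account pays a dollar; an overnight jump to a million
# followers incurs a top-up rather than a fresh full payment.
```

Sub-linear scaling is a design choice: linear scaling would price big accounts out entirely, while a flat fee would make trolling cheap at exactly the reach where it does the most damage.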
Objection 2: Couldn’t a motivated troll just donate to their own side, get a green checkmark, and then concern troll, LARPing as the other side?
Answer 2: The green checkmark would be designed with a drop-down menu showing which organization(s) the user has donated to. A mismatch between the user’s rhetoric and the organization(s) would signal to other users that they aren’t engaging in good faith.
Objection 3: Doesn’t this rely on the assumption that trolls will behave rationally?
Answer 3: Yes. And not all of them would, but some certainly would. I never claimed that this would be a perfect solution, just that it’s a first pass attempt to make the problem less bad.
Objection 4: Isn’t this idea more odious big tech censorship?
Answer 4: Only to the extent that the blue checkmark already is. Nobody would be deplatformed. I’m not proposing any new rules governing speech on Twitter.
Paging Elon Musk. I have an idea to turn your investment into less of a shithole.