
[Question] Everything in moderation, or: Damn, I wish those spams offered Handmade sneakers.

Hi all,

I think we can count ourselves lucky to have seen only around 10 spams so far in the lifetime of this site. For some peculiar reason known only to spammers, our little community is held to be part of the demographic eagerly in search of new Nike sneakers. Hardly Handmade* merchandise, as I think you'll agree.

Thoughtful as they are, these spams usually come in pairs: a post in English and one in French. Just like sneakers, if you will. I'll leave it to the reader to determine which of the two languages is the left shoe and which the right.

Nevertheless, as site admins we don't feel these shoes fit. For that reason we're working on a patently obvious system and a method ;-) to combat these nuisance posts. First I'll outline what this will look like initially, then how we want to improve on it going forward. Lastly, there are a few questions for project owners and the community at large.


Sometime soon, hopefully next week, posts will gain a new button letting our members flag them for moderation.
This button will bring you to a form with a few reasons to choose from as to why you want to report the post, along with a field for optional feedback.

By default we'll have "Spam" selected as the reason. If you believe a post to be spam, all you have to do to report it is hit the report button on the post, then hit the report button on the form that submits it to the moderation queue. Note that we may streamline this in the future and incorporate this into the thread view itself, but in the first iteration it'll take you away for a moment and then return you to the thread after you're done.

Even so, for something that's obviously spam, it'll take maybe 5 seconds at most to hit the button and then confirm it on the next screen.
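
For the technically curious, here's a minimal sketch of what that two-step flow might look like server-side, in Python for brevity. Everything in it (the Report type, the reason list, the queue) is an assumption of mine for illustration, not the actual site code:

from dataclasses import dataclass, field
from datetime import datetime, timezone

REASONS = ("Spam", "Hateful content", "Plagiarism", "Other")  # hypothetical list

@dataclass
class Report:
    post_id: int
    reporter_id: int
    reason: str = "Spam"   # pre-selected default, as described above
    comments: str = ""     # the optional feedback field
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

moderation_queue: list[Report] = []

def submit_report(post_id: int, reporter_id: int,
                  reason: str = "Spam", comments: str = "") -> Report:
    # Step two: the confirmation form submits the report to the queue.
    if reason not in REASONS:
        raise ValueError(f"unknown reason: {reason}")
    report = Report(post_id, reporter_id, reason, comments)
    moderation_queue.append(report)
    return report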

So why this interstitial form? Isn't one click enough? Well, it would be if moderating forums were all about weeding out spam. Like it or not, we have to consider the possibility that someone might post inappropriate material that's not what you or I would consider spam; think hateful rhetoric, for example. If all someone does is post hateful screeds, we may as well treat them as spammers. Nevertheless, it's a different form of speech.

Other examples why you might want to report a post when it's not spam:
- Someone instead posts relatively decent quality material, but it's sprinkled with invective.
- A debate turns into a stack of ad-hominem attacks, overflowing and drowning out an otherwise good thread.
- You believe a post to contain plagiarised material, whether text or code.

If you have a good-faith reason to question a post and want us to caution its author, and/or edit the post to remove the offending language/material, you can also use this button. In this case you'd switch the reason from Spam to whichever reason best describes why you believe action should be taken. The optional field mentioned earlier is where you could then elaborate on why you believe the author should be cautioned and what action, if any, should be taken in your opinion.

So, having submitted a moderation request, what then happens? I'm glad you asked.
An email will go out to the site team (Abner Coimbre, Andrew Chronister, Matt Mascarenhas, Martin Cohen and yours truly). It'll be roughly formatted like so:

So and so reported a (blog/forum) post by Such and Such titled Say what now? to project Ah, right.
With the following comments:
  Spam (or other reason and elaboration)

Contents:
---
  Post contents
---

Member details (Such and Such)
Registered on: today
Number of posts: 1
Email: [email protected]

Quick actions:
- Perma-ban its author and delete all their other posts.
- Temporarily ban its author.
- This was misreported. Unflag the post and send a message to the reporter.

Or:
- Visit the thread and decide from there
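
As a sketch of how such an email might be assembled, with a hypothetical dict standing in for the real report, post and author records:

def format_moderation_email(r: dict) -> str:
    # r is a hypothetical dict bundling report, post and author details.
    comments = f": {r['comments']}" if r["comments"] else ""
    return (
        f"{r['reporter']} reported a {r['kind']} post by {r['author']} "
        f"titled {r['title']} to project {r['project']}.\n"
        f"With the following comments:\n  {r['reason']}{comments}\n\n"
        f"Contents:\n---\n{r['body']}\n---\n\n"
        f"Member details ({r['author']})\n"
        f"Registered on: {r['registered_on']}\n"
        f"Number of posts: {r['post_count']}\n"
        f"Email: {r['email']}\n"
    )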


By the way, in case it isn't clear: we will allow a post to be flagged more than once, but only once per member. This is mainly in case different views exist on a controversial post. To avoid being deluged in reports when spam appears, however, we'll only send an email the first time a post is reported.
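
In sketch form, the once-per-member and first-report-only rules could be as simple as this; the names are again invented:

reports_by_post: dict[int, set[int]] = {}

def send_moderation_email(post_id: int) -> None:
    print(f"[mail to staff] post {post_id} was reported")  # stand-in for real email

def file_report(post_id: int, reporter_id: int) -> None:
    reporters = reports_by_post.setdefault(post_id, set())
    if reporter_id in reporters:
        return                     # each member may flag a given post only once
    first_report = not reporters
    reporters.add(reporter_id)
    if first_report:
        send_moderation_email(post_id)   # staff hear about it exactly once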

For posts that clearly aren't spam and have been flagged for other reasons, we'll be able to check whether more than one report was filed before we make a decision. No worries :)

We'll try to find a clear way to communicate that a post has already been flagged, so that you don't have to flag it again. In fact, for posts flagged as spam we may even remove the report button. We'll see what turns out to be the sensible thing to do here.

What we probably won't do is hide the post from view until the moderation request has been reviewed. Then again, we may do so and trust our members not to abuse this power, in the full knowledge that Abner will have a stern word with anyone who uses this ability on posts that clearly had no business being reported.

Edit: d7samurai suggests that a post flagged by more than n members can be given the benefit of the doubt and be hidden. It can always be restored if it turns out multiple members (or sock puppets) are in collusion against an otherwise upstanding member.
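
A sketch of that fail-safe, with a placeholder threshold since n hasn't been decided:

HIDE_THRESHOLD = 3              # placeholder; the actual n is undecided
hidden_posts: set[int] = set()

def maybe_auto_hide(post_id: int, report_count: int) -> bool:
    # Hide a heavily flagged post until a moderator catches up.
    if report_count >= HIDE_THRESHOLD:
        hidden_posts.add(post_id)
        return True
    return False

def restore_post(post_id: int) -> None:
    hidden_posts.discard(post_id)   # easily undone if reports were in bad faith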

That's one thing we'd like feedback on. While I think the membership at large will behave honourably, we are approaching a time where we'll become more visible to the world and attract newcomers. Not all those newcomers may be as mature as our present community has shown itself to be.

So, do we hide reported posts from view, in the knowledge that if they were misflagged we can restore them and reprimand the reporter? Knowing that a troll might one day come along and report one of your posts: no harm, no foul? That would be erring on the side of caution.

Or do we - and by this I mean all of us as the community - instead trust that between the 5 site staff members, anything truly obnoxious will be gone the same day, and more likely than not within a few hours at most? That would be erring on the side of free speech.

We wouldn't mind some feedback on this.

Also, not unimportantly: Once this functionality goes in, project owners will gain a setting along these lines:
- Notify me of reported posts, I'll take action
- Notify me of reported posts, but let HMN staff decide what to do
- I trust the HMN staff to moderate my forums and blogs on my behalf

The difference being that, because accounts are shared across the entire network, project owners won't be given the ban hammer, but they can, if they want, delete the post from their part of the site. When using that quick action, they'll be given the chance to send a mail off to the staff asking for follow-up action, all from the same page.
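
Sketched as code, the setting and the resulting notification routing might look like this; the names are my own invention:

from enum import Enum, auto

class ModerationPref(Enum):
    OWNER_ACTS = auto()      # "Notify me of reported posts, I'll take action"
    BOTH_NOTIFIED = auto()   # "Notify me, but let HMN staff decide what to do"
    STAFF_ONLY = auto()      # "I trust the HMN staff to moderate on my behalf"

STAFF = "site-staff"         # placeholder recipient

def report_recipients(owner: str, pref: ModerationPref) -> list[str]:
    # Owners never get the ban hammer; at most they can remove the post
    # from their own project and escalate to staff from the same page.
    if pref is ModerationPref.STAFF_ONLY:
        return [STAFF]
    if pref is ModerationPref.BOTH_NOTIFIED:
        return [owner, STAFF]
    return [owner]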

That is how we see phase one of the upcoming moderation ability.

One of the future enhancements we're thinking of adding down the line will be the ability for a project owner to ban somebody just from their neck of the woods without it affecting that member's ability to use the rest of the site.

In addition, here are a few things we have thought of but haven't decided on yet.

Should posts by new members (and existing ones with no history of posting) go into a moderation queue automatically, hidden from view until released? This would inconvenience newcomers but would effectively stop spam dead in its tracks. Should we do so only if the post contains links? Or should we leave things as they are and trust the first person to come across spam to flag it, thereby being more welcoming to newcomers who might otherwise be caught in this net temporarily through no fault of their own?
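
If we did go the automatic route, the links-only variant could be sketched like so; the definition of "new" here is a placeholder:

import re

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def needs_premoderation(author_post_count: int, body: str,
                        links_only: bool = True) -> bool:
    # Hold a newcomer's post for review, optionally only when it contains links.
    is_newcomer = author_post_count == 0   # placeholder definition of "new"
    if not is_newcomer:
        return False
    return bool(URL_RE.search(body)) if links_only else True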

Those questions, again, we pose to you, our community and project owners. Essentially, the question is: what type of community do you want to be? :) We'll obviously take your feedback under advisement. Incidentally, this is also something we can let project owners decide on a per-project basis. Some may want to enable the moderation queue for new posters; others might trust that any nuisance posts will be flagged and dealt with quickly enough.


With that I think I've bored all of you for long enough. Please leave your feedback here, or in confidence in an email to [email protected]

Thanks,
Jeroen
Jeremiah Goerdt,
First of all, I can't seem to find the button for reporting your post. You've used at least one word that I had to look up in the dictionary, and that's inappropriate.

Second, I am a tyrant and usually lean towards the most strict option for moderation. Hiding posts when reported (d7's idea is perfect for this) and preventing new accounts from burning down the house sound like good measures to me.

That being said, I'm also a big fan of free speech, so going the lenient route is good as well. You have a nice little balancing act ahead, and we'll definitely get a better feel for what needs tweaking as the trolls come marching in.

Thumbs up, you're doing the gods' work.

-Jeremiah
David Butler, Edited by David Butler
A couple random thoughts:

To ensure free speech, and also to encourage as much richness as possible from a wide variety of personalities, I think there should always be some place for things that some people might find offensive, but that place may very well be a separate thread. For example: an argument might be considered off-topic for a thread, but people may have good points to raise in it. So probably the best thing to do is to move the posts to a new thread...

The idea of moving off-topic posts also has more straightforward uses, like non-Handmade Hero code questions in the Handmade Hero forums, for example.

Also, one intermediate step between banning/hiding and not would be to mark the post as possibly offensive and hide it, either in a thread just for that kind of stuff, or in the same thread where you have to click to un-hide it; or perhaps there could be some personal settings to define the default behavior.

Also, if one person in particular tends to offend you, then maybe you could block just their posts from just your view?

I think another thing that might be needed is a way to automatically suggest fixes for, or at least point out, spelling/grammar mistakes, which would help maintain a high level of quality over time... Those notifications could simply go to the author, making for a kind of self-moderation...

@CaptainKraft: :p I had to look up two words, and then look up one of the words in one of the definitions
Matt Mascarenhas, Edited by Matt Mascarenhas. Reason: Correct attribution
To nip spam in the bud, I reckon we could go for the sending newcomers' posts straight into a "moderation queue […] only if the post contains links" option. I'd clarify that, though, to only moderate on external URLs, and also, rather than holding the whole post, let the post go live just with the URL(s) obfuscated to be reinstated once it's passed moderation. I had this "URL-based moderation" happen for my first post on the Phoronix forums (the whole post held back), and it was a little frustrating having to wait for it to go live. Probably just obfuscating the URL would have alleviated that frustration. Naturally, spam needn't even involve URLs at all, rather planting the seeds of pleasant ideas simply by mentioning stuff – the mere-exposure effect – but while we don't have the tech to detect that kind of thing (do we?), a URL-based approach would probably be the way to go.
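
To make that concrete, here's a rough sketch of the obfuscation; the regex, the trusted-host check and the placeholder text are all just assumptions on my part:

import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def obfuscate_external_urls(body: str,
                            trusted: tuple = ("handmade.network",)) -> str:
    # Let the post go live, but blank external links until moderation passes.
    def replace(match: re.Match) -> str:
        url = match.group(0)
        if urlparse(url).netloc.lower().endswith(trusted):
            return url                       # internal links pass through
        return "[link held for moderation]"
    return URL_RE.sub(replace, body)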

For the other types of reporting, maybe auto-hiding the post once n people have reported it would be reasonable. The trick there, though, would be guarding against someone creating n accounts just for the occasion. Personally, when I join a community I tend to feel that I ought to settle in and get the lay of the land before reporting folks, so maybe it would be reasonable to take into account people's post count and / or membership length to determine whether to count them as one of the n reporters when auto-hiding, thus guarding against that one person–multiple new accounts situation.
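
In sketch form, that "reputable reporter" test might be something like the following, with thresholds invented purely for illustration:

from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=30)   # invented threshold, tune in practice
MIN_POST_COUNT = 5                     # likewise invented

def counts_toward_auto_hide(registered_on: datetime, post_count: int) -> bool:
    # Only members of some standing count toward the n needed to auto-hide,
    # guarding against one person registering n fresh accounts.
    age = datetime.now(timezone.utc) - registered_on
    return age >= MIN_ACCOUNT_AGE and post_count >= MIN_POST_COUNT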

Also, a thought regarding your suggestion in IRC, Jeroen, that we may want to limit the number of reports people can have open at once: as I see it, this would guard against a deluge of reports for the team to deal with in the event that an otherwise reputable member goes rogue. If we're more likely to receive reportable posts (i.e. spam-like) and their accompanying (legitimate) reports than legitimate posts with accompanying reports from a rogue member, then limiting the number of reports may actually be detrimental to the fight against spam-like stuff.

So, in short, although I'm not the biggest fan of the notion of reputation, my vote goes for auto-obfuscation of external URLs in the posts of "newcomers", auto-hiding of posts reported by n "reputable members", and monitoring to see which of the two situations we're in regarding the multiple concurrent reports limit.
Sean Barrett,
URL obfuscation will prevent people from getting phished, but I don't think it'll stop the forum getting spammed; the spammers will just post the spam with the obfuscated URLs anyway, possibly not even noticing that it's not working.

But yes, I'm in favor of the "N reports from people of sufficient age/posts".

I do worry about the possibility that some project's forums may have too few viewers to hit the threshold though.
Jeroen van Rijn, Edited by Jeroen van Rijn
nothings
URL obfuscation will prevent people from getting phished, but I don't think it'll stop the forum getting spammed; the spammers will just post the spam with the obfuscated URLs anyway, possibly not even noticing that it's not working.

But yes, I'm in favor of the "N reports from people of sufficient age/posts".

I do worry about the possibility that some project's forums may have too few viewers to hit the threshold though.


The post would still receive moderation. The email sent on the first report would make sure of that, going either to the site staff or the project owner depending on the project's settings.

What the threshold is for is those occasions when the moderators are all otherwise engaged for longer than normal, for example while attending HMHcon. It could then be the case that there's a 24-hour turnaround instead of the usual, much shorter response time.

In that case, if enough members of sufficient standing flag a post, it'll be hidden from view so that it's no longer a site/sight for sore eyes, until moderation catches up and acts accordingly. It's a fail-safe, if you will. Truly egregious posts, I imagine, will be flagged several times and as such would get hidden. If a post is made to a less active part of the site, fewer people will see it to begin with, but even the first report would still be acted upon.

Just one person flagging the post is always going to be enough to have a pair of eyeballs look at it and decide what action if any should be taken. I hope that addresses your concerns, Sean.
Andrew Chronister,
These ideas all seem quite salubrious. There's definitely room to experiment here -- human moderation is still the most practical way to solve the spam problem on our scale, so I think our best avenue is making the tools for our moderators as good as possible.

And maybe in a few years, Google will open-source their Gmail spam filtering tools?
Jeroen van Rijn,
ChronalDragon
These ideas all seem quite salubrious. There's definitely room to experiment here -- human moderation is still the most practical way to solve the spam problem on our scale, so I think our best avenue is making the tools for our moderators as good as possible.

And maybe in a few years, Google will open-source their Gmail spam filtering tools?


I approve of your use of the word salubrious. It's a salubrious use of the word.

Also, demetrispanos has kindly offered to assist if our spam-fighting needs should reach the point where we'd want to exploit natural language processing and probability. I also had Bayesian methods in mind as an avenue of exploration should the need arise.
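
For the curious, the Bayesian angle amounts to something like this toy word-count naive Bayes, trained on a corpus of flagged posts. It's illustrative only, not a plan of record:

import math
from collections import Counter

class NaiveBayesSpamFilter:
    # Toy word-count naive Bayes with Laplace smoothing.

    def __init__(self) -> None:
        self.words = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        self.docs[label] += 1
        self.words[label].update(text.lower().split())

    def spam_probability(self, text: str) -> float:
        vocab = len(self.words["spam"].keys() | self.words["ham"].keys()) + 1
        total_docs = sum(self.docs.values())
        scores = {}
        for label in ("spam", "ham"):
            score = math.log((self.docs[label] + 1) / (total_docs + 2))
            total = sum(self.words[label].values())
            for word in text.lower().split():
                score += math.log((self.words[label][word] + 1) / (total + vocab))
            scores[label] = score
        diff = min(scores["ham"] - scores["spam"], 700.0)  # avoid overflow
        return 1.0 / (1.0 + math.exp(diff))

nb = NaiveBayesSpamFilter()
nb.train("buy nike sneakers cheap", "spam")
nb.train("handmade hero memory allocator question", "ham")
print(nb.spam_probability("cheap nike sneakers"))   # ~0.9 on this tiny corpus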

Mainly, the feedback I've been asking for is to judge what our illustrious community believes to be the correct balance to strike, whether complete handmade moderation or a more automated form (beginning with holding posts from new members for moderation as a possibility to explore). That is, what part of the spectrum of countermeasures should we be aiming for?

But I agree that for now we'll certainly stick with fighting spam manually using the first phase of moderation tools outlined above, meanwhile building a corpus of such posts to train any possible future methods on. I will start building the simple flagging method outlined in the original post.

Meanwhile, more people can chip in with their thoughts on the matter, and after our next meeting we'll come back with a strategy, based on those responses, for what other moderation capabilities we might wish to employ going forward. That seems like a sensible thing to do.
Jeremiah Goerdt,
Jeroen and Chronal, you've both been reported for forcing at least 1 user of HMN to open a dictionary. Despicable.