Draft: MSC3215: Aristotle - Moderation in all things #3215
I'm not sure what the moderators of a room should do with a report that the contents of the room violate the ToS of a server. Should it be up to the room moderators to ACL the server, or up to the homeserver to pull its users out of the room?
If a user from a homeserver with a very restrictive ToS happens to join your public room, it probably shouldn't be up to the room moderators to deal with that.
Yes, that's where the current abuse report API comes into play. I'll clarify this.
Hard-coding a list seems destined to fail. Maybe the list of forbidden content should be in the room somewhere? For example, what if adult content isn't allowed? Or discussion of drugs? I think it makes sense for the moderators to make this list. In particular, "a Client may give a user the opportunity to think a little about whether the behavior they report truly is abuse" is very difficult when this spec-provided list may not be aligned in any way with what is actually allowed in the room.
I love the idea of making the list extensible. It definitely makes sense.
On the other hand, if we think about tooling, I believe that having a standardized list is the way to go because:
Additionally:
What do you think about the following?
While points 2+ don't seem very complicated at first, I feel that they deserve their own MSC. I can rephrase the current MSC to leave room for them.
This is a good point. Maybe we can have "well known" types of abuse that can be translated by clients, for example `"nature": ["m.abuse.nature.toxic", "custom.Sent a message that wasn't the phrase \"Cat.\""]`. This way the well-known ones can be auto-translated, but the room doesn't need to use all of the well-known types and can add their own. Of course, making a consistent UX across well-known and custom categories may be hard.

On the other hand, this probably isn't much of an issue. The moderators that create these rules will enter them in the language(s) that their community uses, just like they will need to translate the ToS at signup, set the room topic, or similar.
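For concreteness, the content of such a report might look roughly like this (a hedged TypeScript sketch; the `event_id` and `comment` fields are illustrative assumptions, not something this MSC pins down):

```typescript
// Hedged sketch of m.abuse.report content mixing a well-known nature
// (which clients could translate) with a room-specific custom one
// (shown verbatim). Fields other than "nature" are assumptions.
const reportContent = {
  event_id: "$offending-event:example.org", // assumed: the reported event
  nature: [
    "m.abuse.nature.toxic",
    'custom.Sent a message that wasn\'t the phrase "Cat."',
  ],
  comment: "Optional free-form context from the reporter", // assumed field
};
```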
I'm not sure that this provides a huge benefit. I think the reason would usually just be shown. If there is any filtering this is probably customized by the mod team anyways. I would be interested in any examples where this would be helpful.
What do you have in mind? I can't think of an example here.
I think this is a client issue. The client can provide a "quick list" of categories if they think it will be useful. They can also save it to the user account or pull a server default list if required. I don't think the spec is the best place to store a helpful list of abuse types to be honest.
I think this makes sense. I'm still not sure how much value the well-known list provides, but I don't think it hurts much if it is optional to use and you can add custom ones. I agree that future MSCs can extend the list and add mechanisms to help clients suggest good defaults.
Agreed with both points, but I still feel that this deserves its own MSC :)
It's basically the same thing as translation, just for moderators.
Actually, I can't think of a good example, either (I was thinking for instance of using the report button to report calls for help in case of suicide threats, but it's not very convincing).
On the other hand, having a standard list would be very useful for bots/tools. We can imagine a bot lurking both in the Moderation Room and in the Community Room that watches specifically for `m.abuse.nature.spam` reports. Whenever it receives one in the Moderation Room, it checks in the Community Room whether the message truly looks like spam, using whatever heuristics are at hand, and may either take decisions such as auto-kicking or ping a moderator with a human-readable message to suggest kicking the offender. Similarly, a sufficiently smart bot could use libnsfw to discard or deprioritize `m.abuse.nature.porn` reports that don't seem to be porn, etc.

In a very different scenario, we can imagine a bot that files reports on GitLab. In certain cases, the bot should file the abuse report with as much context as it can possibly find - if the bot is present in the Community Room, it can attach the content of messages, copy links to images, etc. Except if the abuse is, say, `m.abuse.nature.gore` or `m.abuse.nature.rape` or ..., the moderators probably don't want to see the image.

Agreed. Although I believe that we need a few entries to bootstrap testing.
Are we in agreement that this MSC can start with a short list and that the customization mechanism can wait for a followup MSC?
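To make the spam-bot scenario above concrete, here is a minimal sketch (assuming matrix-bot-sdk; the room IDs, the `looksLikeSpam` heuristic, and the report's content fields are all illustrative assumptions):

```typescript
import { MatrixClient, SimpleFsStorageProvider } from "matrix-bot-sdk";

// Illustrative room IDs: the bot lurks in both rooms.
const MODERATION_ROOM = "!moderation:example.org";
const COMMUNITY_ROOM = "!community:example.org";

const client = new MatrixClient(
  "https://example.org",
  "ACCESS_TOKEN",
  new SimpleFsStorageProvider("spam-bot.json"),
);

// Placeholder heuristic; a real bot would use whatever is at hand.
function looksLikeSpam(event: any): boolean {
  return /https?:\/\//.test(event?.content?.body ?? "");
}

client.on("room.event", async (roomId: string, event: any) => {
  // Only react to spam reports arriving in the Moderation Room.
  if (roomId !== MODERATION_ROOM || event.type !== "m.abuse.report") return;
  const natures: string[] = event.content?.nature ?? [];
  if (!natures.includes("m.abuse.nature.spam")) return;

  // Check the reported event in the Community Room before acting.
  const reported = await client.getEvent(COMMUNITY_ROOM, event.content.event_id);
  if (looksLikeSpam(reported)) {
    await client.kickUser(reported.sender, COMMUNITY_ROOM, "Auto-kick: spam report confirmed");
  } else {
    await client.sendNotice(
      MODERATION_ROOM,
      `Spam report for ${event.content.event_id} looks dubious; please review.`,
    );
  }
});

client.start();
```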
One downside of translation is that it is basically putting words in the moderators' mouths. Every client and every translation will interpret the boundaries of each `nature` differently. For a mod team, it seems desirable to know exactly what categories you provide and exactly what they mean.

I don't find these examples too convincing, as they can easily be set up when configuring the bot. Even with a custom list, the bot is still receiving predictable `nature`s, as the list is picked by the moderators. I guess it makes a bot slightly easier to use across mod rooms or policies, but I still think it is a minor benefit at best.

What type of testing do you think you need? I'm confused by this comment.
I don't think I agree. It sounds like this is a breaking change. I would rather get the customization mechanism specified from the outset so that clients don't need to be changed when it gets added. At the least, we would need to explicitly reserve some format for the custom messages.
I don't think we can escape translation for the end-user.
I disagree on this point. Making natures non-standard by default feels like a footgun to me.
Once the MSC feels stable enough, I'm planning to prototype this MSC (probably as part of develop.element.io and matrix.org) to gather feedback from actual moderators and end-users. That's what I meant by "testing".
I don't think this is a breaking change if we specify that clients that do not support customization may fallback to the standard list.
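That fallback could be as simple as the following sketch (assuming matrix-bot-sdk, and a hypothetical `org.example.abuse.natures` state event carrying the room's custom list; both the event type and its shape are assumptions):

```typescript
import { MatrixClient } from "matrix-bot-sdk";

// The standard natures; the entries shown are the ones discussed above.
const STANDARD_NATURES = ["m.abuse.nature.toxic", "m.abuse.nature.spam"];

// Hedged sketch: prefer the room's customized list, and fall back to the
// standard one when the room has none (or the client doesn't support
// customization and never looks for it).
async function naturesForRoom(client: MatrixClient, roomId: string): Promise<string[]> {
  try {
    // Hypothetical state event carrying the room's custom list.
    const custom = await client.getRoomStateEvent(roomId, "org.example.abuse.natures", "");
    if (Array.isArray(custom?.natures)) return custom.natures;
  } catch {
    // No customization present; use the standard list.
  }
  return STANDARD_NATURES;
}
```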
I'm not convinced. If we step away from creating a single moderation policy for the whole world, it becomes very reasonable that the acceptable content policy is written in just the languages in which the moderators operate. I'm not sure how effective a mod can be if they don't speak the same language as the user anyway.
What is the danger you see in this?
I meant that clients will still need to know how to display these custom messages; otherwise, their users will no longer be able to report abuse.
Any case in which we'd need two tools to communicate with each other. I don't have specific examples yet, but I'd be really surprised if it didn't show up quickly.
Agreed. I still believe that this can be done in a followup MSC, though.
Signed-off-by: Erkin Alp Güney <erkinalp9035@gmail.com>
(I'm writing this regardless of the status of the MSC in case it gets picked up again later by someone else, even if that's in another form.)
It would be really useful for the client to give the room a distinct type. Currently in Mjolnir (which has a partial implementation of the routing bot) this behaviour is problematic, as it clashes with the `acceptInvitesFromSpace` behaviour and also `protectAllJoinedRooms`. matrix-org/mjolnir#475
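For illustration, the distinct type could be set in the room's `m.room.create` content (a hedged sketch; `org.example.moderation` is a placeholder name that nothing has reserved):

```typescript
import { MatrixClient } from "matrix-bot-sdk";

// Hedged sketch: mark the Moderation Room with a distinct room type so
// tools like Mjolnir can tell it apart from the rooms they protect.
async function createModerationRoom(client: MatrixClient): Promise<string> {
  return client.createRoom({
    name: "Moderation Room",
    preset: "private_chat",
    creation_content: {
      type: "org.example.moderation", // placeholder room type
    },
  });
}
```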
I wouldn't be happy with a solution that requires a bot on my homeserver joining all rooms it's invited into. This seems too abusable. I want my server only participating in rooms that my users explicitly joined.
To be clear, this is an event with type `m.abuse.report`, rather than an `m.room.message` event with `"msgtype": "m.abuse.report"`, correct?
Right, it's a message event with type `m.abuse.report`.
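In matrix-bot-sdk terms, sending one would look roughly like this (a sketch; the content fields follow the assumed shape from earlier):

```typescript
import { MatrixClient } from "matrix-bot-sdk";

// Hedged sketch: the report is its own event type, not an m.room.message
// carrying "msgtype": "m.abuse.report". Content fields are assumptions.
async function sendReport(client: MatrixClient, moderationRoomId: string): Promise<string> {
  return client.sendEvent(moderationRoomId, "m.abuse.report", {
    event_id: "$offending-event:example.org",
    nature: ["m.abuse.nature.spam"],
  });
}
```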
Hmm, building encryption support into a bot that's part of the homeserver may be tricky...
Actually, it doesn't really have to be part of the homeserver. In fact, I'd probably prefer it if it wasn't, because it would make all the retry logic easier to write without complicating the homeserver.
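As a small illustration, a standalone bot can own its retry policy outright (a hedged sketch; the attempt count and backoff parameters are arbitrary):

```typescript
// Hedged sketch: retry logic lives entirely in the bot rather than in
// the homeserver. Attempt count and backoff are arbitrary choices.
async function withRetries<T>(action: () => Promise<T>, attempts = 5): Promise<T> {
  let delayMs = 1_000;
  for (let attempt = 1; ; attempt++) {
    try {
      return await action();
    } catch (err) {
      if (attempt >= attempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2; // exponential backoff
    }
  }
}

// Usage: e.g. await withRetries(() => sendReport(client, "!moderation:example.org"));
```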