
Draft: MSC3215: Aristotle - Moderation in all things #3215

Draft · wants to merge 12 commits into base: old_master
Conversation

@Yoric Yoric commented May 24, 2021

A step towards decentralized moderation/reputation.

Rendered.

@Yoric Yoric changed the title from "MSCXXXX: Aristotle - Moderation in all things" to "MSC3215: Aristotle - Moderation in all things" May 24, 2021
Yoric commented May 24, 2021

I realized that there is a big hole around "message routing". Marking as draft until I have a better idea of how to handle that.

@Yoric Yoric closed this May 24, 2021
@Yoric Yoric changed the title from "MSC3215: Aristotle - Moderation in all things" to "Draft: MSC3215: Aristotle - Moderation in all things" May 24, 2021
@Yoric Yoric reopened this May 24, 2021
@Yoric Yoric marked this pull request as draft May 24, 2021 14:13
@turt2live turt2live added the kind:core (MSC which is critical to the protocol's success), proposal (a matrix spec change proposal) and proposal-in-review labels, and removed the proposal-in-review label, May 24, 2021
proposals/3215-towards-decentralized-moderation.md (outdated review thread, resolved)
Comment on lines 162 to 164
Users should not need to join the moderation room to be able to send `m.abuse.report` messages to it, as it would
let them snoop on reports from other users. Rather, we introduce a built-in bot as part of this specification: the
Routing Bot. This Routing Bot is part of the server and has access to privileged information such as room membership.
Contributor

Again, I think this is a bad idea when the focus of Matrix wants to shift away from servers and onto users and rooms. Maybe specify that such a bot can either be server-controlled or "self-hosted"; that would solve a lot of problems down the line (such as the "permissions bot" in #2962, which both needs to be verifiable in a decentralized manner and needs to be built into every server it touches).

State that the bot can be any valid matrix user, then it only has to follow below behaviour to be "acceptable" as a cog in this MSC.

Note that this is a point of centralisation, though i think this is less of a problem than requiring built-in server bots

Author

Not sure I follow what you're suggesting.

I've removed the "privileged" part, though.

proposals/3215-towards-decentralized-moderation.md (outdated review thread, resolved)
Yoric commented May 25, 2021

Recent changes:

  • introducing the Routing Bot to fill the hole around routing;
  • analysis of a number of possible attacks, what could be done to mitigate them, and which attacks still work.

@@ -127,16 +114,15 @@ Community Room. This is materialized as deleting `m.room.moderated_by`.

#### Rejecting moderation

A member of a Moderation Room may disconnect the Moderation Room from a Community Room by removing state event
`m.room.moderation.moderator_of.XXX`. This may serve to reconfigure moderation if a Community Room is deleted
A member of a Moderation Room may disconnect the Moderation Room from a Community Room by removing state event `m.room.moderation.moderator_of`, `XXX`. This may serve to reconfigure moderation if a Community Room is deleted
Contributor

Suggested change
A member of a Moderation Room may disconnect the Moderation Room from a Community Room by removing state event `m.room.moderation.moderator_of`, `XXX`. This may serve to reconfigure moderation if a Community Room is deleted
A member of a Moderation Room may disconnect the Moderation Room from a Community Room by removing state event `("m.room.moderation.moderator_of", "XXX")`. This may serve to reconfigure moderation if a Community Room is deleted

state event "keys" are often displayed as a tuple
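For illustration, a minimal sketch of how the two linking state events discussed in this thread might be keyed, with the Community Room's ID as the state key of `m.room.moderation.moderator_of`. The `content` fields shown are assumptions rather than spec; later in this thread the author suggests leaving the content unspecified.

```python
# Hypothetical sketch of the two linking state events, based on the event
# names used in this proposal. Content fields are assumptions, not spec.

# In the Moderation Room MR: one state event per moderated Community Room,
# keyed by that Community Room's ID, i.e. displayed as the tuple
# ("m.room.moderation.moderator_of", "!community:example.org").
moderator_of = {
    "type": "m.room.moderation.moderator_of",
    "state_key": "!community:example.org",      # the moderated Community Room
    "content": {
        "user_id": "@routing-bot:example.org",  # assumed: bot forwarding reports
    },
}

# In the Community Room CR: a single state event pointing back at MR.
moderated_by = {
    "type": "m.room.moderated_by",
    "state_key": "m.room.moderated_by",
    "content": {
        "room_id": "!moderation:example.org",   # the Moderation Room
        "user_id": "@routing-bot:example.org",  # bot in charge of forwarding reports
    },
}

# Removing ("m.room.moderation.moderator_of", "!community:example.org") from MR
# is what "rejecting moderation" refers to above.
print(moderator_of["state_key"], moderated_by["content"]["room_id"])
```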

Member

Or simply "by removing the corresponding m.room.moderation.moderator_of state event".

Are a Moderation room and Community room still connected if you have a m.room.moderation.moderator_of state event with an empty state key in the moderation room?

Author

It's connected at one end but not the other, so I'd say no.

proposals/3215-towards-decentralized-moderation.md (outdated review threads, resolved)
Comment on lines +208 to +209
We could expose an API to provide a certificate that Alice has witnessed
an event and another API to check the certificate.
Contributor

I'm sceptical of the benefits of this, but I'll keep it to that; no further comment right now.

Note that adding an endpoint increases complexity on the server, which you may want to avoid in an "on-Matrix"-only abuse report system.

Author

Anyway, it's not part of the spec, just an illustration of how we could improve things without breaking the MSC.

proposals/3215-towards-decentralized-moderation.md (outdated review thread, resolved)
### Interfering administrator (moderator homeserver)

Consider the following case:
Variant:
Contributor

Suggested change
Variant:
Also consider another variant;

- every abuse report in room _CR_ is deanonymized by EvilBot.
- Marvin has access to all abuse reports in _MR_.

Variant:
Contributor

Suggested change
Variant:
Another variant;

@anoadragon453 anoadragon453 left a comment (Member)

Thank you for taking the time to write all of this out. It's clear you've thought about a lot of the edge cases.

I think it's a noble goal to try and reduce homeserver admin interference as much as possible, though that can be a difficult goal with our current architecture.

I think the current design is essentially 90% of the way there, though I acknowledge that we're sailing on relatively uncharted waters w.r.t built-in bots and sharing information cleanly between separate rooms.

proposals/3215-towards-decentralized-moderation.md (outdated review thread, resolved)
Comment on lines 60 to 61
- As there may still be a need to report entire rooms, the current abuse report API remains in place for
reporting entire rooms, although it is expected that further MSCs will eventually deprecate this API.
Member

Reporting an entire room to a room moderator probably doesn't make too much sense. I suppose reporting these to admins of homeservers that are in the room is the better route?

Author

The idea is that, for the time being, we keep the reporting API (reporting only to our homeserver admin) and we'll work on improving this in future MSCs.

proposals/3215-towards-decentralized-moderation.md (outdated review threads, resolved)
- Marvin can impersonate @alice:compromised.org and invite an evil moderator or bot to _MR_ ;
- Marvin has access to all abuse reports in _MR_.

I cannot find any solution to these problems: as long as an administrator can impersonate a moderator, they can access all moderation data past the date of impersonation.
Member

Hopefully these actions will at least be somewhat obvious to Alice (or other moderators) if state events in the room start appearing that she never sent herself.

Matrix's threat model definitely already includes "I trust my homeserver admin".

Author

Should I add a comment to this effect in the MSC?

proposals/3215-towards-decentralized-moderation.md (outdated review thread, resolved)
Comment on lines 364 to 366
However, this would require implementing yet another new communication protocol based on PDUs/EDUs, including a
(small) custom encryption/certificate layer and another retry mechanism. The author believes that this would entail
a higher risk and result in code that is harder to test and trust.
Member

Thanks for including this - I see now that the weight that comes with this solution is not just a new API, but all the scaffolding that goes along with it (including message secrecy).

Contributor

including message secrecy

From what I can tell this is not currently part of the MSC, but it could be in the future.

On the point against adding another handshake, I am not too sure what the problem is: a certificate is not only absent from the MSC as it currently stands, but arguably isn't that useful, since any bad actor could instead spam reports for events they can see, which doesn't make a big difference to the moderators dealing with them.

Not too sure what the trust and testing part is about, though: there is already a precedent for using handshakes to send state to rooms a server is not a part of, so the code to perform it would likely not be much of a new thing, especially compared to creating the concept of a Routing Bot.

proposals/3215-towards-decentralized-moderation.md (outdated review thread, resolved)

If the Routing Bot was attached to a specific homeserver, giving it the ability to check whether a user from the same homeserver is sending a legitimate abuse report would be simple and most likely riskless.

However, this means that one Routing Bot per homeserver member of the Community Room needs to be invited to each Moderation Room. In particular, this would expose all the content of the Moderation Room to this Routing Bot and to the administrator of every homeserver member of the Community Room.
Member

Why would you need to invite every routing bot of every homeserver in the community room? Why not just 1+ routing bots that belong to the homeservers of the moderator(s)?

Author

I'm assuming that the Routing Bot needs to be on the same homeserver as the user to be able to check whether the abuse report is forged. You're probably right that we only need it to be on the same homeserver as a moderator, if we assume that the moderator was a member of the Community Room at the time of the reported event. This looks like a complex invariant, though.

Yoric added 4 commits May 25, 2021 15:25
This proposal is not about automated banning, so let's not focus
on automated banning in intro and examples.
- `m.abuse.nature.toxic`: toxic behavior, including insults, unsolicited invites;
- `m.abuse.nature.illegal`: illegal behavior, including child pornography, death threats, ...;
- `m.abuse.nature.spam`: commercial spam, propaganda, ... whether from a bot or a human user;
- `m.abuse.nature.other`: doesn't fit in any category above.


Hard-coding a list seems destined to fail. Maybe the list of forbidden content should be in the room somewhere? For example what if adult content isn't allowed? Or discussion of drugs? I think it makes sense for the moderators to make this list. Especially "a Client may give a user the opportunity to think a little about whether the behavior they report truly is abuse" is very difficult when this spec-provided list may not be aligned in any way with what is actually allowed in the room.

Author
@Yoric Yoric May 31, 2021

I love the idea of making the list extensible. It definitely makes sense.

On the other hand, if we think about tooling, I believe that having a standardized list is the way to go because:

  1. it makes internationalization possible;
  2. it makes it easier to write bots or other tools to display abuse reports in a human-readable manner for non-technical moderators;
  3. it makes it easier to customize clients to handle abuse-specific cases.

Additionally:

  • if we do not have a default list of abuse natures, we increase the difficulty of setting up new rooms;
  • my personal experience with e.g. Reddit or Twitter suggests that having a list that is too long makes it harder to pick one abuse nature;
  • if we allow full customization of the list, an evil moderator running illegal activities could probably take advantage of this to replace the genuine "report this room" action with a fake one and use it to deanonymize abuse reporters who believe that they are reporting the entire room to a homeserver administrator.

What do you think about the following?

  1. having a (possibly long) list of standardized abuse natures, initialized in this MSC and extended in future MSCs;
  2. having a mechanism that lets the moderation room and/or the community room compose a list of abuse natures from both standard and non-standard values (the latter will require the room to also specify some internationalization), and that lets the client use natures from this list;
  3. having a mechanism that will specify a default list of standardized abuse natures when creating a new community (or perhaps moderation?) room;
  4. think of (UX-based?) counter-measures to avoid the fake "report this room" button.

While points 2+ don't seem very complicated at first, I feel that they deserve their own MSC. I can rephrase the current MSC to leave room for them.


it makes internationalization possible

This is a good point. Maybe we can have "well known" types of abuse that can be translated by clients. For example "nature": ["m.abuse.nature.toxic", "custom.Sent a message that wasn't the phrase \"Cat.\""]. This way the well-known ones can be auto-translated but the room doesn't need to use all of the well-known types and can add their own. Of course making a consistent UX across well-known and custom categories may be hard.
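For concreteness, here is a small sketch of the mixed well-known/custom nature idea floated in this comment; the `custom.` prefix, the translation table and the helper function are illustrations only, not part of the MSC.

```python
# Sketch of mixing well-known and custom abuse natures, as suggested above.
# The "custom." prefix and the translation table are illustrative assumptions.
WELL_KNOWN_NATURES = {
    "m.abuse.nature.toxic": "Toxic behaviour",
    "m.abuse.nature.illegal": "Illegal content",
    "m.abuse.nature.spam": "Spam",
    "m.abuse.nature.other": "Other",
}

def display_nature(nature: str) -> str:
    """Translate well-known natures; show custom ones verbatim."""
    if nature in WELL_KNOWN_NATURES:
        return WELL_KNOWN_NATURES[nature]        # client can localize these
    if nature.startswith("custom."):
        return nature[len("custom."):]           # moderator-provided wording, shown as-is
    return nature

natures = ["m.abuse.nature.toxic", 'custom.Sent a message that wasn\'t the phrase "Cat."']
print([display_nature(n) for n in natures])
```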

On the other hand this probably isn't much of an issue. The moderators that create these rules will enter them in the language(s) that their community uses. Just like the moderators will need to translate the ToS at signup, set the room topic or similar.

it makes it easier to write bots or other tools to display abuse reports in a human-readable manner for non-technical moderators;

I'm not sure that this provides a huge benefit. I think the reason would usually just be shown. If there is any filtering this is probably customized by the mod team anyways. I would be interested in any examples where this would be helpful.

it makes it easier to customize clients to handle abuse-specific case.

What do you have in mind? I can't think of an example here.

if we do not have a default list of abuse natures, we increase the difficulty of setting up new rooms;

I think this is a client issue. The client can provide a "quick list" of categories if they think it will be useful. They can also save it to the user account or pull a server default list if required. I don't think the spec is the best place to store a helpful list of abuse types to be honest.

What do you think about the following?

I think this makes sense. I'm still not sure how much value the well-known list provides but I don't think they hurt much if they are optional to use and you can add custom ones. I agree that future MSCs can extend the list and add mechanisms to help clients suggest good defaults.

Author

it makes internationalization possible

This is a good point. Maybe we can have "well known" types of abuse that can be translated by clients. For example "nature": ["m.abuse.nature.toxic", "custom.Sent a message that wasn't the phrase \"Cat.\""]. This way the well-known ones can be auto-translated but the room doesn't need to use all of the well-known types and can add their own. Of course making a consistent UX across well-known and custom categories may be hard.

On the other hand this probably isn't much of an issue. The moderators that create these rules will enter them in the language(s) that their community uses. Just like the moderators will need to translate the ToS at signup, set the room topic or similar.

Agreed with both points, but I still feel that this deserves its own MSC :)

it makes it easier to write bots or other tools to display abuse reports in a human-readable manner for non-technical moderators;

I'm not sure that this provides a huge benefit. I think the reason would usually just be shown. If there is any filtering this is probably customized by the mod team anyways. I would be interested in any examples where this would be helpful.

It's basically the same thing as translation, just for moderators.

it makes it easier to customize clients to handle abuse-specific case.

What do you have in mind? I can't think of an example here.

Actually, I can't think of a good example, either (I was thinking for instance of using the report button to report calls for help in case of suicide threats, but it's not very convincing).

On the other hand, having a standard list would be very useful for bots/tools. We can imagine a bot lurking both in the Moderation Room and in the Community Room and that watches specifically for m.abuse.nature.spam reports. Whenever it receives one in the Moderation Room, it checks in the Community Room whether the message truly looks like spam, using whatever heuristics are at hand, and may either take decisions such as auto-kicking or ping a moderator with a human-readable message to suggest kicking the offender. Similarly, a sufficiently smart bot could use libnsfw to discard or deprioritize m.abuse.nature.porn reports that don't seem to be porn, etc.

In a very different scenario, we can imagine a bot that files reports on GitLab. In certain cases, the bot should file the abuse report with as much context as it can possibly find - if the bot is present in the Community Room, it can attach the content of messages, copy links to images, etc. Except if the abuse is, say, m.abuse.nature.gore or m.abuse.nature.rape or ..., the moderators probably don't want to see the image.
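As a rough illustration of the two bot scenarios sketched above (spam triage and external ticket filing), here is a minimal Python sketch. Every helper in it is a hypothetical stand-in, and the `event_id` field of the report is an assumption; only the nature names come from this discussion.

```python
# Rough sketch of the nature-based dispatch described above. Every helper here
# is a hypothetical stand-in; only the nature names come from the discussion.

SENSITIVE_NATURES = {"m.abuse.nature.gore"}  # media a moderator should not preview

def fetch_event(room_id: str, event_id: str) -> dict:
    # Stand-in: a real bot lurking in the Community Room would fetch the event.
    return {"sender": "@spammer:example.org", "body": "BUY CHEAP FOLLOWERS"}

def is_probably_spam(event: dict) -> bool:
    # Stand-in heuristic; use whatever classifier is at hand.
    return "BUY" in event.get("body", "").upper()

def ping_moderators(message: str) -> None:
    print("[to Moderation Room]", message)

def file_external_ticket(report: dict, include_media: bool) -> None:
    print("[ticket filed]", report["nature"], "media attached:", include_media)

def handle_abuse_report(report: dict) -> None:
    nature = report["nature"]
    if nature == "m.abuse.nature.spam":
        # Triage: only bother a human moderator if the message looks spammy.
        event = fetch_event(report["room_id"], report["event_id"])
        if is_probably_spam(event):
            ping_moderators(f"Probable spam from {event['sender']}; consider kicking")
        return
    # Otherwise file a ticket, omitting media for natures nobody should preview.
    file_external_ticket(report, include_media=nature not in SENSITIVE_NATURES)

handle_abuse_report({
    "nature": "m.abuse.nature.spam",
    "room_id": "!community:example.org",
    "event_id": "$abc123",  # assumed field: the reported event
})
```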

if we do not have a default list of abuse natures, we increase the difficulty of setting up new rooms;

I think this is a client issue. The client can provide a "quick list" of categories if they think it will be useful. They can also save it to the user account or pull a server default list if required. I don't think the spec is the best place to store a helpful list of abuse types to be honest.

Agreed. Although I believe that we need a few entries to bootstrap testing.

What do you think about the following?

I think this makes sense. I'm still not sure how much value the well-known list provides but I don't think they hurt much if they are optional to use and you can add custom ones. I agree that future MSCs can extend the list and add mechanisms to help clients suggest good defaults.

Are we in agreement that this MSC can start with a short list and that the customization mechanism can wait for a followup MSC?


It's basically the same thing as translation

One downside of translation is that it is basically putting words in the moderators' mouths. Every client and every translation will interpret the boundaries of each nature differently. For a mod team it seems desirable to know exactly what categories you provide and exactly what they mean.

On the other hand, having a standard list would be very useful for bots/tools

I don't find these examples too convincing as they can easily be set up when configuring the bot. Even with a custom list the bot is still receiving predictable natures as the list is picked by the moderators. I guess it makes a bot slightly easier to use across mod rooms or policies but I still think it is a minor benefit at best.

Agreed. Although I believe that we need a few entries to bootstrap testing.

What type of testing do you think you need? I'm confused by this comment.

Are we in agreement that this MSC can start with a short list and that the customization mechanism can wait for a followup MSC?

I don't think I agree. It sounds like this is a breaking change. I would rather get the customization mechanism specified from the onset so that clients don't need to be changed when it gets added. At the least we would need to explicitly reserve some format for the custom messages.

Author

It's basically the same thing as translation

One downside of translation is that it is basically putting words in the moderators' mouths. Every client and every translation will interpret the boundaries of each nature differently. For a mod team it seems desirable to know exactly what categories you provide and exactly what they mean.

I don't think we can escape translation for the end-user.

On the other hand, having a standard list would be very useful for bots/tools

I don't find these examples too convincing as they can easily be set up when configuring the bot. Even with a custom list the bot is still receiving predictable natures as the list is picked by the moderators. I guess it makes a bot slightly easier to use across mod rooms or policies but I still think it is a minor benefit at best.

I disagree on this point. Making natures non-standard by default feels like a footgun to me.

Agreed. Although I believe that we need a few entries to bootstrap testing.

What type of testing do you think you need? I'm confused by this comment.

Once the MSC feels stable enough, I'm planning to prototype this MSC (probably as part of develop.element.io and matrix.org) to gather feedback from actual moderators and end-users. That's what I meant by "testing".

Are we in agreement that this MSC can start with a short list and that the customization mechanism can wait for a followup MSC?

I don't think I agree. It sounds like this is a breaking change. I would rather get the customization mechanism specified from the onset so that clients don't need to be changed when it gets added. At the least we would need to explicitly reserve some format for the custom messages.

I don't think this is a breaking change if we specify that clients that do not support customization may fallback to the standard list.


I don't think we can escape translation for the end-user.

I'm not convinced. If we step away from creating a single moderation policy for the whole world it becomes very reasonable that the acceptable content policy is written in just the languages that the moderators operate. I'm not sure how effective a mod can be if they don't speak the same language as the user anyways.

I disagree on this point. Making natures non-standard by default feels like a footgun to me.

What is the danger you see in this?

I don't think this is a breaking change if we specify that clients that do not support customization may fallback to the standard list.

I meant that clients will still need to know how to display these custom messages. Otherwise these users will no longer be able to report abuse.

Author

What is the danger you see in this?

Any case in which we'd need two tools to communicate with each other. I don't have specific examples yet, but I'd be really surprised if it didn't show up quickly.

I meant that clients will still need to know how to display these custom messages. Otherwise these users will no longer be able to report abuse.

Agreed. I still believe that this can be done in a followup MSC, though.

"state_key": "m.room.moderated_by",
"type": "m.room.moderated_by",
"content": {
"room_id": XXX, // The room picked for moderation.


I fail to see the purpose of this room. IIUC users never actually use this room, they likely don't even have access.


Probably to couple with ban lists as rooms.

Author

I fail to see the purpose of this room. IIUC users never actually use this room, they likely don't even have access.

The entire room or specifying the room ID in the state event?

If the former, well, we need to send abuse reports somewhere. The current abuse API has them sent to a proprietary admin API. We replace this with a standard room. At this stage, it's up to users and tooling to decide what they do with it.

If the latter, the client needs a way to find where to post the abuse reports.


Here I am talking about in the state event.

If the latter, the client needs a way to find where to post the abuse reports.

Please elaborate, I thought they just talked to the bot.

Author

Please elaborate, I thought they just talked to the bot.

The bot is just a delivery mechanism to send a message to the Moderation Room. The same bot may be used by several Moderation Rooms. So we need both the userID of the bot (to talk to it) and the roomID of the Moderation Room (to tell it where to send the message).


Why doesn't the bot have a mapping from source room to destination?

Author

Using field room_id from the m.abuse.report message to know which room the report comes from?

Yes, this could work, too, if we require the bot to be stateful. I believe that the best way to do it, though, is to keep rooms themselves the source of truth, rather than some bot memory.


I see. Does this mean that the bot has to peek into the "community room" to see where it should send the report to? Or is the bot expected to be part of that room already?

Author

In the current status of the MSC, the user's client copies this value room_id as field moderated_by_id in the m.abuse.report. This lets the bot find out where to route the message without having to peek into the room.
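To make the routing concrete, here is a small sketch of the client-side step just described. Only `room_id`, `user_id`, `moderated_by_id` and `nature` appear in this thread; the other fields and the `send_to_bot` helper are assumptions.

```python
# Sketch of the client-side routing described above: copy room_id from
# m.room.moderated_by into the report as moderated_by_id so the bot knows
# which Moderation Room to forward to. send_to_bot is a hypothetical helper.

moderated_by_content = {            # read from the Community Room's state
    "room_id": "!moderation:example.org",
    "user_id": "@routing-bot:example.org",
}

abuse_report_content = {
    "room_id": "!community:example.org",                  # where the reported event lives
    "event_id": "$offending-event",                        # assumed field
    "moderated_by_id": moderated_by_content["room_id"],   # routing target for the bot
    "nature": "m.abuse.nature.spam",
    "comment": "Unsolicited adverts",                      # assumed field
}

def send_to_bot(bot_user_id: str, content: dict) -> None:
    # Stand-in for sending an m.abuse.report event in a room shared with the bot.
    print(f"m.abuse.report -> {bot_user_id}: {content}")

send_to_bot(moderated_by_content["user_id"], abuse_report_content)
```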


Ah, I missed that. It still feels weird to me that we need to expose this to the user but I'll consider this resolved until I have a better idea what to do here.

"type": "m.room.moderated_by",
"content": {
"room_id": XXX, // The room picked for moderation.
"user_id": XXX, // The bot in charge of forwarding reports to `room_id`.
@kevincox kevincox May 29, 2021

Why enforce that there is a bot? This seems over-complicated. Why not just provide a list of users that should be invited to a report room? This could be a bot, or it could just be the room admin(s). This way the use of a bot is not mandated. This also has a number of advantages: for example, if one of the admins is causing the issue, the user could choose to exclude that user (this benefit is lost in the case of a bot, but at least it is allowed in some cases). Furthermore, the user can use E2EE if they know the admins' keys, which removes an extra set of keys that needs to be dealt with.

It is also much simpler, especially for the case of a small team of admins that manage a room or two. Now they don't need to set up rooms, bots or anything, yet they are still prepared if a (rare) abuse report comes in. In fact I would even suggest that clients recommend reporting to the users with the highest power level in the room if this event is not present; this means that there is some sort of reasonable route for reporting abuse even if the room moderators haven't considered that abuse may be a concern. (People generally don't think about these issues until they happen.)

The process then becomes:

  1. Create a room.
  2. Invite the users listed in the m.room.moderated_by event. If there is no such event invite all of the users at the highest power level in the room.
    a. Optional: The user may instead invite only a subset of this list.
  3. Send an m.abuse.report as described below in the MSC.

At that point that report may be handled by the bot listed in the m.room.moderated_by event, it may be manually forwarded to another room for internal discussion among the moderators, or it may just be discussed in the reporting room. This approach seems much simpler and much more flexible.

Author

The main reason for there being a bot is that Matrix does not offer a built-in mechanism for users who are not members of a room (in this case, the Moderation Room) to post events to that room. The bot is the simplest routing mechanism that I can think of. If I read your counter-proposal correctly, it does not address this (rather fundamental) issue.

In fact I would even suggest that clients recommend reporting to the users with the highest power level in the room if this event is not present; this means that there is some sort of reasonable route for reporting abuse even if the room moderators haven't considered that abuse may be a concern.

Good idea. We can definitely add this as a suggestion in the MSC if the state events are not setup.

At that point that report may be handled by the bot listed in the m.room.moderated_by event, it may be manually forwarded to another room for internal discussion among the moderators, or it may just be discussed in the reporting room. This approach seems much simpler and much more flexible.

What's the "reporting room"? If it's what I call the Community Room, receiving abuse reports in the same room as they were sent leads to immediate deanonymization of the reporters.


If I read your counter-proposal correctly, it does not address this (rather fundamental) issue.

It does address it, but it does effectively sidestep it. It allows you to use a bot which forwards to a room, or it lets you just use the room that the user created to send the report. It puts the choice to the moderation team.

What's the "reporting room"?

I mean the room the reporter created with the bot.


Let me summarize the flow you are requiring in this MSC.

  1. Create a room with $bot.
  2. Send an abuse report.
  3. $bot copies report to $modroom.

That is a completely reasonable report flow. However it seems overly specified. There are many other valid moderation workflows that don't need or want this complexity. Especially if you have a smallish community abuse reports will be rare. So having a dedicated mod room for discussion is probably not necessary. And in many communities the mods may want to keep the separate reports in different rooms for organization. Also keep in mind that for many (probably most) communities the "mod team" is one person. So copying the abuse report to $modroom is quite pointless. It just lets that one mod discuss with themselves.
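For reference, here is a rough sketch of step 3 of the flow summarized above (the bot copying a report into $modroom), including a sanity check against the Moderation Room's own `m.room.moderation.moderator_of` state. Both helpers and the validation step itself are assumptions, not something this MSC pins down.

```python
# Rough sketch of step 3 above (the bot copying a report into $modroom).
# get_state_event and forward_to_room are hypothetical stand-ins, and the
# moderator_of check is an assumed sanity check, not mandated by the MSC.
from typing import Optional

def get_state_event(room_id: str, event_type: str, state_key: str) -> Optional[dict]:
    # Stand-in: a real bot would ask its homeserver for this state event.
    known_state = {
        ("!moderation:example.org", "m.room.moderation.moderator_of",
         "!community:example.org"): {"user_id": "@routing-bot:example.org"},
    }
    return known_state.get((room_id, event_type, state_key))

def forward_to_room(room_id: str, content: dict) -> None:
    print(f"[forwarded to {room_id}]", content)

def route_report(report: dict) -> None:
    mod_room = report["moderated_by_id"]
    community_room = report["room_id"]
    link = get_state_event(mod_room, "m.room.moderation.moderator_of", community_room)
    if link is None:
        return  # MR does not (or no longer) claim to moderate CR: drop the report
    forward_to_room(mod_room, report)

route_report({
    "moderated_by_id": "!moderation:example.org",
    "room_id": "!community:example.org",
    "nature": "m.abuse.nature.toxic",
})
```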

What I am suggesting is that we just drop everything about the bot and the moderation room from this MSC. It can still be implemented, but it leaves each mod team free to implement their own workflow and leaves room for different bots that work differently (and leaves room for no bot at all). In this case the MSC becomes

  1. Create a room with $reportusers.
  2. Send an abuse report.

This way we have a standardized method for reporting abuse. But the actual workflow for handling it is still flexible. It is totally valid to use a bot as suggested in this MSC to copy the report to a moderation room, however that is optional, not required.

This has a number of advantages in my mind.

  • Small mod teams don't need to configure a bot.
  • Small mod teams don't require any configuration.
  • Leaves a lot of room for flexibility. Maybe instead of forwarding to the "mod room" it files a ticket in a ticketing system.

TL;DR I don't see the benefit of mandating the bot and its behaviour.

Author
@Yoric Yoric Jun 3, 2021

Where does the reporter send the abuse report in your counter-proposal? To a userID as specified in moderated_by?

If I understand correctly, you're splitting the MSC in two. The bot and moderation room are still necessary in many (most?) cases but remain unspecified. Essentially, we're losing the specification for moderator_of and how it interacts with possible bots.


  • Small mod teams don't need to configure a bot.
  • Small mod teams don't require any configuration.

A few notes on this:

  1. Note that a good UX can make this configuration all happen in one-click.
  2. In your proposal, a bot is still needed to convert the structured report into something human-readable.
  3. In your proposal, what's the scenario/UX for a large Community Room, e.g. MatrixHQ? Since these large public rooms are the ones that attract most abuse and spam, I'm hoping to target them pretty quickly for experimentation, with smaller rooms coming later.
  • Leaves a lot of room for flexibility. Maybe instead of forwarding to the "mod room" it files a ticket in a ticketing system.

Well, that's possible with either variant. The main difference in this specific scenario is that the Moderation Room allows more than one bot to operate.

More generally, I believe that the true difference between your proposal and mine is that in yours, the abuse endpoint is a user (which may optionally connect to a Moderation Room, etc.) while in mine, the abuse endpoint is a room (which may optionally connect to users, etc.)


TL;DR I don't see the benefit of mandating the bot and its behaviour.

I believe that I understand your point and that I understand the point of minimizing. If I were to go this way, though, I'd probably come back with an MSC for the bot pretty soon :)


Where does the reporter send the abuse report in your counter-proposal? To a userID as specified in moderated_by?

Yes. You could have a moderated_by which is a list of users. (or a bot)

If I understand correctly, you're splitting the MSC in two.

Basically. I see a lot of value in specifying how the client reports abuse, but I'm not so convinced that the "report management" workflow that you have proposed is sufficient for all use cases; furthermore, I don't see as much value in standardizing it. So I think it makes sense to get the reporting flow through. Then we can consider management workflows later if we find value in standardizing them.

Note that a good UX can make this configuration all happen in one-click.

But who is expected to run the bot? Is it to be built into every homeserver?

In your proposal, a bot is still needed to convert the structured report into something human-readable.

Why can't this just be done by the client for the non-bot case?

In your proposal, what's the scenario/UX for a large Community Room, e.g. MatrixHQ? Since these large public rooms are the ones that attract most abuse and spam, I'm hoping to target them pretty quickly for experimentation, with smaller rooms coming later.

In this case we would probably want a bot because, with a lot of reports, a new room for each new report may be undesirable. So set moderated_by to the bot address. I don't really know how the mod teams of MatrixHQ work, so I don't know whether they would want reports sent to a mod room as you have described, or whether they would prefer other options such as filing tickets or sending email.

My point here is that the bot workflow works even without being part of the spec. Each mod team can use a bot that works for them instead of mandating a single bot that implements one workflow.

If I were to go this way, though, I'd probably come back with a MSC for the bot pretty soon

That sounds fine to me. If you can justify the value of specifying the bot I am all ears :)

Author
@Yoric Yoric Jun 11, 2021

a room is the abstraction where several tools (including Matrix clients) can connect, not because it's a place to discuss

I get that. But I think it would be nice if it could also be a place to discuss. Basically allow the mods and users to connect directly and discuss if desired, rather than forcing an intermediary. Again, the intermediary is still allowed, but not required.

This also raises your question about responding to the user. Does the mod team reach out directly? Do we specify a method for the bot to reply? The nice thing about not specifying the bot at all is that it lets different options be tested, multiple people can make bots that work the way they want. If we specify the bot we need to answer all of these questions upfront.

I believe that neither proposal solves that scenario, because of the difficulty of trusting DMs. I believe that this deserves its own MSC.

I also believe that both proposals allow for such experimentation (especially since each proposal can easily be extended into the other proposal :) ).

Let me try to clarify what I understand about your proposal and my proposal.

Yours:

1. Mod Room - This is where the mods live, the users know the ID of this room but can't necessarily join it.

2. Community Room - This is where the users live (and likely the mods too).

3. Reporting Room - This is created to send a report. It contains the reporter and a bot.

Mine:

1 is not specified. It may exist (like when using a forwarding, spam checking, enhancing bot), but the spec doesn't require it.

3 contains the reporter and the list of "targets", which may be one bot, moderator(s), or both. This is the key difference between our proposals: I prefer not to specify exactly where reports go and how they are handled, to allow various approaches to be evaluated and to see which approaches work for which mod teams.

Agreed on the summary.

My point here is that hardcoding the appearance in the client is actually less flexible than letting a bot decide the appearance.

The big difference is how many bots can be used by one mod team. IIUC you can only have one, so the whole team is locked into a style of formatting. If it is done by the client, each mod can pick their preferred client. Additionally, for "small" communities the mods probably won't bother to pick their favourite bot, so they will just get whatever comes with their homeserver or integration manager (or wherever the bot comes from; it isn't really specified here).

Good point.


Unfortunately, I believe that we have reached a stage at which this conversation has stopped progressing. We both have arguments that make sense, we each appreciate the other's arguments, but I feel that continuing this thread will simply block everything.

So, to summarize:

  1. I believe that we agree that both proposals make sense, that both support experimentation and that each proposal may later be amended to essentially become/encompass the other.
  2. Specifically, it feels like your proposal may be better suited to small Community Rooms but may need to encompass my proposal to be better suited to large Community Rooms. Conversely, it feels like my proposal may be better suited for large Community Rooms and may need to encompass your proposal to be better suited to small Community Rooms.
  3. My first priority is to help deal with the abuse that crops up in large Community Rooms. This is a clear, present and pressing problem.

For these reasons, I believe that continuing work on this MSC more or less as is (i.e. my proposal) does not harm your proposal and should yield clear benefits in terms of both enabling experimentation (including experimentation on your proposal) and aiding the fight against abuse.

Therefore, I'm planning to:

  • stop the current conversation (although we may very well continue it on another channel);
  • proceed by experimenting on this MSC;
  • if experimentation proves successful, remove the "draft" tag and try to move this MSC towards standardization;
  • all of this while keeping the door open for a further MSC based on your proposal.

Does this make sense?


I think the point that you are missing is that I think speccing out the "second half" of this MSC right now is (mildly) harmful. I agree that what I am proposing is a subset of yours. But I think that it makes sense to start with that subset.

Most importantly, I still do not see the benefits that you see in specifying the "second half". If there is no benefit to nailing something down in the spec then I think it is best not to specify it, to avoid unnecessary restrictions that may come back to bite us down the road.

Would it help if I put forward a stripped down version of this and we can consider deferring this for now?

each proposal may later be amended to essentially become/encompass the other

It isn't clear that your proposal can be cleanly cut down to the minimal version. Could you clarify roughly the tweaks that you would make? Is it just changing the bot name to an MXID or list of MXIDs?

My first priority is to help deal with the abuse that crops up in large Community Rooms. This is a clear, present and pressing problem.

Yes, that is why I am suggesting pushing out the "first half". Then you can write the bot and use it for these communities. This looks like the fastest path to me and avoids adding technical debt.

So to be clear this is the plan that I think makes the most sense:

  • proceed by pushing the "reporting" half of this MSC.
  • Experiment with the bot, solving the problem for these communities.
  • If we find reasons why we need to specify the "second half" of this MSC then propose those in another MSC.

Again, if you can specify clear reasons why the MSC would be worse without specifying the behaviour of the bot then I think it can go ahead. But reading back I still don't see any reasons why the bot needs to be specified. I have only seen "big communities will need it". That statement may well be true, but it doesn't mean that the bot needs to be specified. My preferred subset allows using a bot, and it may well be the case that all users would use a bot, but unless there is a downside to removing the bot from the proposal, I think we should remove it. Simpler is better.

Author

I'm ok with reducing the MSC to its first half and keeping the second half as an illustration of a possible workflow.

Before experimentation, I'm not ok with specifying that the client must be able to display abuse reports or that we can specify several targets in moderated_by.

Does this work for you?

Author
@Yoric Yoric Jun 18, 2021

A few additions:

  • I'm currently in the process of experimenting with the current MSC;
  • I believe that to make moderator_of and moderated_by as generic as possible for an MSC v1, their content probably shouldn't be specified. Rather, it is an agreement between the Community Room, the Moderation Room and the bot and/or client.


That sounds good.

I'm not ok with specifying that the client must be able to display abuse reports

Then we can consider things like a fallback with extensible events in the future, or mandating the bot. For now we can assume that anyone who adds the metadata has a way to read the reports.

that we can specify several targets in moderated_by

Can we cut down the middle for now and say that it must be a list of one element? That way we don't need to break the API to allow multiple recipients. What is your objection to allowing multiple? It doesn't seem to cause any issues in my mind.

I believe that to make moderator_of and moderated_by most generic for a MSC v1, their content probably shouldn't be specified. Rather, it is an agreement between the Community Room, the Moderation Room and the bot and/or client.

That sounds good. We can also consider adding back moderated_by as a more opaque attribute if we want that can be a room ID for a bot or whatever else is desired. But for now I think it is fine to prototype without it and see what is actually desired there. Simple is good :)

Comment on lines +30 to +34
1. If the abuse report concerns an event in an encrypted room, the homeserver administrator typically does not have access to that room, while a room moderator would, hence cannot act upon that report.
2. Many homeserver administrators do not wish to be moderators, especially in rooms in which they do not participate themselves.
3. As the mechanism does not expose an API for reading the abuse reports, it is difficult to experiment with bots that could help moderators.
4. As the mechanism is per-homeserver, reports from two users of the same room that happen to have accounts on distinct homeservers cannot be collated.
5. There is no good mechanism to route a report by a user to a moderator, especially if they live on different homeserver.


Suggested change
1. If the abuse report concerns an event in an encrypted room, the homeserver administrator typically does not have access to that room, while a room moderator would, hence cannot act upon that report.
2. Many homeserver administrators do not wish to be moderators, especially in rooms in which they do not participate themselves.
3. As the mechanism does not expose an API for reading the abuse reports, it is difficult to experiment with bots that could help moderators.
4. As the mechanism is per-homeserver, reports from two users of the same room that happen to have accounts on distinct homeservers cannot be collated.
5. There is no good mechanism to route a report by a user to a moderator, especially if they live on different homeserver.
1. If the abuse report concerns an event in an encrypted room, the homeserver administrator
typically does not have access to that room, while a room moderator would,
hence cannot act upon that report.
2. Many homeserver administrators do not wish to be moderators, especially
in rooms in which they do not participate themselves.
3. As the mechanism does not expose an API for reading the abuse reports,
it is difficult to experiment with bots that could help moderators.
4. As the mechanism is per-homeserver, reports from two users of the same room
that happen to have accounts on distinct homeservers cannot be collated without
manual effort and coordination between server administrators.
5. There is no good mechanism to route a report by a user to a moderator,
especially if they live on different homeserver.

Signed-off by: Erkin Alp Güney erkinalp9035@gmail.com

"state_key": "m.room.moderated_by",
"type": "m.room.moderated_by",
"content": {
"room_id": XXX, // The room picked for moderation.

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Probably to couple with ban lists as rooms.

- `m.abuse.nature.toxic`: toxic behavior, including insults, unsolicited invites;
- `m.abuse.nature.illegal`: illegal behavior, including child pornography, death threats, ...;
- `m.abuse.nature.spam`: commercial spam, propaganda, ... whether from a bot or a human user;
- `m.abuse.nature.other`: doesn't fit in any category above.
@erkinalp erkinalp May 30, 2021

Suggested change
- `m.abuse.nature.other`: doesn't fit in any category above.
- `m.abuse.nature.other`: abuse that doesn't fit in any category above.
- `m.filter.copying`: automated screening to account for laws like the Directive on Copyright
in the Digital Single Market.

Signed-off by: Erkin Alp Güney erkinalp9035@gmail.com

Comment on lines +402 to +405
- `m.room.moderation.moderated_by` will be prefixed `org.matrix.msc3215.room.moderation.moderated_by`;
- `m.room.moderation.moderator_of` will be prefixed `org.matrix.msc3215.room.moderation.moderator_of`;
- `m.abuse.report` will be prefixed `org.matrix.msc3215.abuse.report`;
- `m.abuse.nature.*` will be prefixed `org.matrix.msc3215.abuse.nature.*`.
@erkinalp erkinalp May 30, 2021

Slightly shorter unstable keys:

Suggested change
- `m.room.moderation.moderated_by` will be prefixed `org.matrix.msc3215.room.moderation.moderated_by`;
- `m.room.moderation.moderator_of` will be prefixed `org.matrix.msc3215.room.moderation.moderator_of`;
- `m.abuse.report` will be prefixed `org.matrix.msc3215.abuse.report`;
- `m.abuse.nature.*` will be prefixed `org.matrix.msc3215.abuse.nature.*`.
- `m.room.moderation.moderated_by` will be prefixed `org.matrix.msc3215.moderated_by`;
- `m.room.moderation.moderator_of` will be prefixed `org.matrix.msc3215.moderator_of`;
- `m.abuse.report` will be prefixed `org.matrix.msc3215.report`;
- `m.abuse.nature.*` will be prefixed `org.matrix.msc3215.nature.*`.
- `m.filter.copying` will be `org.matrix.msc3215.copying`.

Signed-off by: Erkin Alp Güney erkinalp9035@gmail.com

@sumnerevans sumnerevans left a comment (Contributor)

I really like this idea. I have just a couple of small suggestions for making it more readable and a couple small questions.

proposals/3215-towards-decentralized-moderation.md (outdated review thread, resolved)

### Invariants

- Each room MAY have a state event `m.room.moderated_by`. If specified, this is the room ID towards which abuse reports MUST be sent. As rooms may be deleted, `m.room.moderated_by` MAY be an invalid room ID. A room that has a state event `m.room.moderated_by` supports moderation. Users who wish to set this state event MUST have a Power Level sufficient to kick and ban users.
Contributor

Would these events be exposed in the stripped state (#3173)?

The scenario that I am imagining is if a client wants to display the m.room.moderated_by room to the user before joining the room.

Author

Good question.

I do not see any reason to strip this from the state. I don't think that this is classified in any way if the room is previewable.

proposals/3215-towards-decentralized-moderation.md (outdated review threads, resolved)
Yoric and others added 2 commits June 7, 2021 10:57
Co-authored-by: Sumner Evans <me@sumnerevans.com>
@turt2live turt2live added the needs-implementation label (This MSC does not have a qualifying implementation for the SCT to review. The MSC cannot enter FCP.) Jun 8, 2021
neilmiddleton commented Jun 18, 2021

I have one concern about the reporting process here, mostly around the fact that the reporting process effectively ends up putting the user into a state where we've added them to a room with the bot. IMHO we should never be changing the state of what the user sees in their own client - that's for them to control.

Thus, one potential alternative is to have the report process via a simple API call which is received by the bot and then posted to the moderation room as per this spec. A room would already have m.room.moderated_by, but we could add m.room.moderation_endpoint which defines the domain/path of the reporting service. This can then be a standardised API.

For instance, room X has m.room.moderation_endpoint defined as moderation.mydomain.org. The client receives an abuse report from the user and tries to ping that over to a standard path (for instance, POST v1/reports/new). Should this service be unavailable the client can retry on an exponential backoff basis until a point where the report is unsuccessful and is killed.

At this point, our API (aka the bot) has a report, and the details listed above so is able to post the report in the correct room.

In theory this also opens us up to being able to support multiple rooms being moderated in one particular room without having the bot join all the moderated rooms, but only the moderation room.
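As an illustration only, here is a sketch of the client-side retry loop described in this comment. The `m.room.moderation_endpoint` value, the `/v1/reports/new` path and the payload shape are the commenter's suggestion rather than anything specified, and authentication is left out entirely (which is exactly the issue raised in the reply below).

```python
# Sketch of the suggested client-side behaviour: POST the report to the room's
# moderation endpoint and retry with exponential backoff. Endpoint, path and
# payload are illustrative only; authentication is deliberately omitted here.
import time
import requests

def submit_report(endpoint_base: str, report: dict, max_attempts: int = 5) -> bool:
    delay = 1.0
    for _ in range(max_attempts):
        try:
            resp = requests.post(f"https://{endpoint_base}/v1/reports/new",
                                 json=report, timeout=10)
            if resp.status_code < 500:
                return resp.ok        # accepted or permanently rejected: stop retrying
        except requests.RequestException:
            pass                      # network error: back off and retry
        time.sleep(delay)
        delay *= 2                    # exponential backoff
    return False                      # report is dropped after the final attempt

submit_report("moderation.mydomain.org",
              {"room_id": "!community:example.org", "nature": "m.abuse.nature.spam"})
```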

Yoric commented Jun 18, 2021

I have one concern about the reporting process here, mostly around the fact that the reporting process effectively ends up putting the user into a state where we've added them to a room with the bot. IMHO we should never be changing the state of what the user sees in their own client - that's for them to control.

I agree with the issue.

It may be possible to solve this UX problem through careful client design.

> Thus, one potential alternative is to have the report submitted via a simple API call, which is received by the bot and then posted to the moderation room as per this spec. A room would already have m.room.moderated_by, but we could add m.room.moderation_endpoint, which defines the domain/path of the reporting service. This can then be a standardised API.

> For instance, room X has m.room.moderation_endpoint defined as moderation.mydomain.org. The client receives an abuse report from the user and sends it over to a standard path (for instance, POST v1/reports/new). Should this service be unavailable, the client can retry with exponential backoff until the report is deemed unsuccessful and is dropped.

Are you assuming that moderation.mydomain.org is a Matrix server? If not, authentication becomes a major issue.

Assuming that it is a Matrix server, this still means that we need to roll out yet another federation communication protocol on Matrix. I'm not a big fan of that, as I believe that it tends to increase complexity and reduce reliability.

See this paragraph for a short discussion on a similar idea.

> At this point, our API (aka the bot) has a report and the details listed above, so it is able to post the report in the correct room.

> In theory this also opens us up to supporting multiple rooms being moderated in one particular room without the bot having to join all the moderated rooms, only the moderation room.

FWIW, in the current version of the spec, the bot doesn't need to join the Community Room.


An alternative might be offering some Matrix API that lets us route a message through a bot, assuming that we have a way for users to self-identify as bots. I fear that this is complex enough that it would require its own MSC, though.

@kevincox

I think the room problem is fine. After all, not all rooms need to show up in the room list. There are other MSCs that discuss ways to mark the type of a room, such as "abuse report", which could be used to group these rooms away from the regular room list. For now I think we can rely on clients to put these rooms in an appropriate place in the UI.

@Yoric

Yoric commented Jun 20, 2021

> I think the room problem is fine. After all, not all rooms need to show up in the room list. There are other MSCs that discuss ways to mark the type of a room, such as "abuse report", which could be used to group these rooms away from the regular room list. For now I think we can rely on clients to put these rooms in an appropriate place in the UI.

Agreed. If we wish for a quick solution, we could drop a state item, e.g. `(m.abuse.moderation.request, "")`, to mark the room as having been opened to request moderation, then hide such rooms in the UX unless/until there is a response.
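To make that concrete, such a marker could be as small as the following; the empty state key and empty content are assumptions, and the event type is simply the one suggested above:

```
{
    "type": "m.abuse.moderation.request",
    "state_key": "",
    "content": {}
}
```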

@@ -0,0 +1,408 @@
# MSC3215: Aristotle: Moderation in all things

While probably not something which should be done in this proposal, extending this to spaces would be super useful. Requiring a room that is added as a child of a space with a moderation room set to also inherit that moderation room (and letting any bot verify its PL in the new room and remove the room from the space if it can't moderate it) would make it possible to have moderated spaces where many more people can be trusted to add new rooms to a space (much like people can add channels to a Slack team, etc.).


Users should not need to join the moderation room to be able to send `m.abuse.report` events to it, as it would let them snoop on reports from other users. Rather, we introduce a built-in bot as part of this specification: the Routing Bot.

1. When the Routing Bot is invited to a room, it always accepts invites.

(I'm writing this regardless of the status of the MSC in case it gets picked up again later by someone else, even if that's in another form.)

It would be really useful for the client to give the room a distinct type. Currently in Mjolnir (which has a partial implementation of the routing bot) this behaviour is problematic, as it clashes with the acceptInvitesFromSpace behaviour and also with protectAllJoinedRooms (matrix-org/mjolnir#475).


I wouldn't be happy with a solution that requires a bot on my homeserver to join every room it's invited into. This seems too abusable. I want my server to participate only in rooms that my users have explicitly joined.

"type": "m.room.moderated_by",
"content": {
"room_id": XXX, // The room picked for moderation.
"user_id": XXX, // The bot in charge of forwarding reports to `room_id`.

Having a single bot for routing moderation reports is a clear single point of failure: if the server running the bot goes down, no moderator on any server will be able to receive reports.

Comment on lines +345 to +349
### Alternative to the Routing Bot

The "knocking" protocol is an example of an API that lets users inject state events in a room in which they do not belong. It is possible that we could follow the example of this protocol and implement a similar "abuse" API.

However, this would require implementing yet another new communication protocol based on PDUs/EDUs, including a (small) custom encryption/certificate layer and another retry mechanism. The author believes that this would entail a higher risk and result in code that is harder to test and trust.
@Kladki May 20, 2024

The rationale for not using a handshake just seems a bit odd:

> this would require implementing yet another new communication protocol based on PDUs/EDUs

The routing bot uses PDUs anyway, since that is what basically all communication on Matrix is.

> including a (small) custom encryption/certificate layer

I honestly don't know where this came from, since that layer was only mentioned as a mitigation for the routing bot's issues, so it would not be needed otherwise.

> and another retry mechanism.

I would assume that they can just be done in the same way as retries for any other API?

> The author believes that this would entail a higher risk and result in code that is harder to test and trust.

Considering that there is already precedent for handshakes, but none for anything like a routing bot, I would think that the opposite is true.

On the other hand, using a join handshake would:

  • Allow for reporting even if one of the moderators' homeservers is down
  • Not require creating a special room when reporting content in a room
  • Guarantee that reports actually came from the user (or at least the server) they claim to come from
  • Not have issues with verifying that the reporter is in the room, can see the event, etc.

The only problem with this approach is that encrypting reports wouldn't be (easily) doable from what I can tell.
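For context, a rough sketch of how a knock-style handshake could be shaped, modelled on the existing make_knock/send_knock exchange; the make_report/send_report endpoints and the event below are hypothetical and defined nowhere, and note that the templated PDU necessarily carries the reporter in its sender field:

```
// Step 1 (hypothetical): the reporter's server asks a server resident in the
// moderation room for an event template.
GET /_matrix/federation/v1/make_report/{moderationRoomId}/{reporterUserId}

// Step 2 (hypothetical): the template is signed by the reporter's server and
// sent back, to be injected into the moderation room.
PUT /_matrix/federation/v1/send_report/{moderationRoomId}/{eventId}
{
    "type": "m.abuse.report",
    "sender": "@reporter:example.org",       // the reporter is visible in the PDU
    "room_id": "!moderation:example.org",
    "content": {
        "event_id": "$offending_event_id",   // event being reported (assumed field)
        "reason": "free-form text from the reporter"
    }
}
```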


The reason for avoiding a handshake can probably be found in the issues section where the author discusses how to avoid deanonymization of reports (against homeservers).
The PDU templating APIs required by make/send/knock/join will encode the reporter in the PDU.

More context: https://matrix.to/#/!NasysSDfxKxZBzJJoE:matrix.org/$0fcxVUJT37ZYVDWPuxnYhz8EygbSKEETIDRHlO0m1e0?via=matrix.org&via=envs.net&via=element.io


I commented on this part because it was explaining why handshakes were not used over the routing bot. That part is more about the issues with the routing bot as-is. But yes, some upsides of the routing bot are mentioned there as well.
