
Commit

Merge branch 'develop' of github.com:matrix-org/synapse into anoa/password_reset_confirmation

* 'develop' of github.com:matrix-org/synapse: (22 commits)
  Update the test federation client to handle streaming responses (#8130)
  Do not propagate profile changes of shadow-banned users into rooms. (#8157)
  Make SlavedIdTracker.advance have same interface as MultiWriterIDGenerator (#8171)
  Convert simple_select_one and simple_select_one_onecol to async (#8162)
  Fix rate limiting unit tests. (#8167)
  Add functions to `MultiWriterIdGen` used by events stream (#8164)
  Do not allow send_nonmember_event to be called with shadow-banned users. (#8158)
  Changelog fixes
  1.19.1rc1
  Make StreamIdGen `get_next` and `get_next_mult` async (#8161)
  Wording fixes to 'name' user admin api filter (#8163)
  Fix missing double-backtick in RST document
  Search in columns 'name' and 'displayname' in the admin users endpoint (#7377)
  Add type hints for state. (#8140)
  Stop shadow-banned users from sending non-member events. (#8142)
  Allow capping a room's retention policy (#8104)
  Add healthcheck for default localhost 8008 port on /health endpoint. (#8147)
  Fix flaky shadow-ban tests. (#8152)
  Fix join ratelimiter breaking profile updates and idempotency (#8153)
  Do not apply ratelimiting on joins to appservices (#8139)
  ...
anoadragon453 committed Aug 26, 2020
2 parents 1d79f7b + 88b9807 commit 3f4e350
Showing 98 changed files with 1,814 additions and 697 deletions.
10 changes: 10 additions & 0 deletions CHANGES.md
@@ -12,6 +12,16 @@ from Synapse as most users have updated their client. Further context can be
found at [\#6766](https://github.com/matrix-org/synapse/issues/6766).


Synapse 1.19.1rc1 (2020-08-25)
==============================

Bugfixes
--------

- Fix a bug introduced in v1.19.0 where appservices with ratelimiting disabled would still be ratelimited when joining rooms. ([\#8139](https://github.com/matrix-org/synapse/issues/8139))
- Fix a bug introduced in v1.19.0 that would cause e.g. profile updates to fail due to incorrect application of rate limits on join requests. ([\#8153](https://github.com/matrix-org/synapse/issues/8153))


Synapse 1.19.0 (2020-08-17)
===========================

1 change: 1 addition & 0 deletions changelog.d/7377.misc
@@ -0,0 +1 @@
Add filter `name` to the `/users` admin API, which filters by user ID or displayname. Contributed by Awesome Technologies Innovationslabor GmbH.
1 change: 1 addition & 0 deletions changelog.d/7991.misc
@@ -0,0 +1 @@
Don't fail `/submit_token` requests on incorrect session ID if `request_token_inhibit_3pid_errors` is turned on.
1 change: 1 addition & 0 deletions changelog.d/8104.bugfix
@@ -0,0 +1 @@
Fix a bug introduced in v1.7.2 impacting message retention policies that would allow federated homeservers to dictate a retention period lower than the minimum allowed duration set in the configuration file.
1 change: 1 addition & 0 deletions changelog.d/8130.misc
@@ -0,0 +1 @@
Update the test federation client to handle streaming responses.
1 change: 1 addition & 0 deletions changelog.d/8139.bugfix
@@ -0,0 +1 @@
Fix a bug introduced in v1.19.0 where appservices with ratelimiting disabled would still be ratelimited when joining rooms.
1 change: 1 addition & 0 deletions changelog.d/8140.misc
@@ -0,0 +1 @@
Add type hints to `synapse.state`.
1 change: 1 addition & 0 deletions changelog.d/8142.feature
@@ -0,0 +1 @@
Add support for shadow-banning users (ignoring any message send requests).
1 change: 1 addition & 0 deletions changelog.d/8147.docker
@@ -0,0 +1 @@
Add curl to the Docker image for healthcheck support, and update the README to describe the change. Contributed by @maquis196.
1 change: 1 addition & 0 deletions changelog.d/8152.feature
@@ -0,0 +1 @@
Add support for shadow-banning users (ignoring any message send requests).
1 change: 1 addition & 0 deletions changelog.d/8157.feature
@@ -0,0 +1 @@
Add support for shadow-banning users (ignoring any message send requests).
1 change: 1 addition & 0 deletions changelog.d/8158.feature
@@ -0,0 +1 @@
Add support for shadow-banning users (ignoring any message send requests).
1 change: 1 addition & 0 deletions changelog.d/8161.misc
@@ -0,0 +1 @@
Refactor `StreamIdGenerator` and `MultiWriterIdGenerator` to have the same interface.
1 change: 1 addition & 0 deletions changelog.d/8162.misc
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.
1 change: 1 addition & 0 deletions changelog.d/8163.misc
@@ -0,0 +1 @@
Add filter `name` to the `/users` admin API, which filters by user ID or displayname. Contributed by Awesome Technologies Innovationslabor GmbH.
1 change: 1 addition & 0 deletions changelog.d/8164.misc
@@ -0,0 +1 @@
Add functions to `MultiWriterIdGen` used by events stream.
1 change: 1 addition & 0 deletions changelog.d/8167.misc
@@ -0,0 +1 @@
Fix tests that were broken due to the merge of 1.19.1.
1 change: 1 addition & 0 deletions changelog.d/8171.misc
@@ -0,0 +1 @@
Make `SlavedIdTracker.advance` have the same interface as `MultiWriterIDGenerator`.
4 changes: 4 additions & 0 deletions docker/Dockerfile
@@ -55,6 +55,7 @@ RUN pip install --prefix="/install" --no-warn-script-location \
FROM docker.io/python:${PYTHON_VERSION}-slim

RUN apt-get update && apt-get install -y \
curl \
libpq5 \
xmlsec1 \
gosu \
@@ -69,3 +70,6 @@ VOLUME ["/data"]
EXPOSE 8008/tcp 8009/tcp 8448/tcp

ENTRYPOINT ["/start.py"]

HEALTHCHECK --interval=1m --timeout=5s \
CMD curl -fSs http://localhost:8008/health || exit 1
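
The `HEALTHCHECK` above probes Synapse's `/health` endpoint with curl. As a rough Python
equivalent (a sketch, assuming the container publishes port 8008 on localhost):

```python
import requests

# Assumes Synapse's client port 8008 is reachable on localhost.
resp = requests.get("http://localhost:8008/health", timeout=5)
# A 200 response indicates the server is up; anything else counts as unhealthy.
print(resp.status_code)
```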
29 changes: 29 additions & 0 deletions docker/README.md
@@ -162,3 +162,32 @@ docker build -t matrixdotorg/synapse -f docker/Dockerfile .

You can choose to build a different docker image by changing the value of the `-f` flag to
point to another Dockerfile.

## Disabling the healthcheck

If you are using a non-standard port or TLS inside Docker, you can disable the healthcheck
by adding the following flag to the `docker run` commands above:

```
--no-healthcheck
```

## Setting a custom healthcheck on docker run

If you wish to point the healthcheck at a different port when using `docker run`, add the following flag:

```
--health-cmd 'curl -fSs http://localhost:1234/health'
```

## Setting the healthcheck in a docker-compose file

You can add the following to set a custom healthcheck in a docker-compose file.
You will need docker-compose file format version 2.1 or later for this to work.

```
healthcheck:
test: ["CMD", "curl", "-fSs", "http://localhost:8008/health"]
interval: 1m
timeout: 10s
retries: 3
```
9 changes: 6 additions & 3 deletions docs/admin_api/user_admin_api.rst
@@ -108,7 +108,7 @@ The api is::

GET /_synapse/admin/v2/users?from=0&limit=10&guests=false

To use it, you will need to authenticate by providing an `access_token` for a
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.

The parameter ``from`` is optional but used for pagination, denoting the
@@ -119,8 +119,11 @@ from a previous call.
The parameter ``limit`` is optional but is used for pagination, denoting the
maximum number of items to return in this call. Defaults to ``100``.

The parameter ``user_id`` is optional and filters to only users with user IDs
that contain this value.
The parameter ``user_id`` is optional and filters to only return users with user IDs
that contain this value. This parameter is ignored when using the ``name`` parameter.

The parameter ``name`` is optional and filters to only return users with user ID localparts
**or** displaynames that contain this value.
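
For example, this endpoint could be queried from Python roughly as follows (a sketch:
the homeserver URL, admin access token and search value are placeholders, and the
response is expected to contain a ``users`` list)::

    import requests

    resp = requests.get(
        "https://homeserver.example.com/_synapse/admin/v2/users",
        params={"from": 0, "limit": 10, "name": "alice"},
        headers={"Authorization": "Bearer <admin_access_token>"},
    )
    resp.raise_for_status()
    for user in resp.json()["users"]:
        print(user["name"], user.get("displayname"))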

The parameter ``guests`` is optional and if ``false`` will **exclude** guest users.
Defaults to ``true`` to include guest users.
22 changes: 14 additions & 8 deletions docs/sample_config.yaml
@@ -378,11 +378,10 @@ retention:
# min_lifetime: 1d
# max_lifetime: 1y

# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
# Retention policy limits. If set, and the state of a room contains a
# 'm.room.retention' event in its state which contains a 'min_lifetime' or a
# 'max_lifetime' that's out of these bounds, Synapse will cap the room's policy
# to these limits when running purge jobs.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
@@ -408,12 +407,19 @@
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
# If any purge job is configured, it is strongly recommended to have at least
# a single job with neither 'shortest_max_lifetime' nor 'longest_max_lifetime'
# set, or one job without 'shortest_max_lifetime' and one job without
# 'longest_max_lifetime' set. Otherwise some rooms might be ignored, even if
# 'allowed_lifetime_min' and 'allowed_lifetime_max' are set, because capping a
# room's policy to these values is done after the policies are retrieved from
# Synapse's database (which is done using the range specified in a purge job's
# configuration).
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# - longest_max_lifetime: 3d
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 1d

# Inhibits the /requestToken endpoints from returning an error that might leak
35 changes: 27 additions & 8 deletions scripts-dev/federation_client.py
@@ -21,10 +21,12 @@
import base64
import json
import sys
from typing import Any, Optional
from urllib import parse as urlparse

import nacl.signing
import requests
import signedjson.types
import srvlookup
import yaml
from requests.adapters import HTTPAdapter
@@ -69,7 +71,9 @@ def encode_canonical_json(value):
).encode("UTF-8")


def sign_json(json_object, signing_key, signing_name):
def sign_json(
json_object: Any, signing_key: signedjson.types.SigningKey, signing_name: str
) -> Any:
signatures = json_object.pop("signatures", {})
unsigned = json_object.pop("unsigned", None)

@@ -122,7 +126,14 @@ def read_signing_keys(stream):
return keys


def request_json(method, origin_name, origin_key, destination, path, content):
def request(
method: Optional[str],
origin_name: str,
origin_key: signedjson.types.SigningKey,
destination: str,
path: str,
content: Optional[str],
) -> requests.Response:
if method is None:
if content is None:
method = "GET"
@@ -159,11 +170,14 @@ def request_json(method, origin_name, origin_key, destination, path, content):
if method == "POST":
headers["Content-Type"] = "application/json"

result = s.request(
method=method, url=dest, headers=headers, verify=False, data=content
return s.request(
method=method,
url=dest,
headers=headers,
verify=False,
data=content,
stream=True,
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
return result.json()


def main():
@@ -222,7 +236,7 @@ def main():
with open(args.signing_key_path) as f:
key = read_signing_keys(f)[0]

result = request_json(
result = request(
args.method,
args.server_name,
key,
@@ -231,7 +245,12 @@
content=args.body,
)

json.dump(result, sys.stdout)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))

for chunk in result.iter_content():
# we write raw utf8 to stdout.
sys.stdout.buffer.write(chunk)

print("")


47 changes: 47 additions & 0 deletions stubs/frozendict.pyi
@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Stub for frozendict.

from typing import (
Any,
Hashable,
Iterable,
Iterator,
Mapping,
overload,
Tuple,
TypeVar,
)

_KT = TypeVar("_KT", bound=Hashable) # Key type.
_VT = TypeVar("_VT") # Value type.

class frozendict(Mapping[_KT, _VT]):
@overload
def __init__(self, **kwargs: _VT) -> None: ...
@overload
def __init__(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def __init__(
self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
) -> None: ...
def __getitem__(self, key: _KT) -> _VT: ...
def __contains__(self, key: Any) -> bool: ...
def copy(self, **add_or_replace: Any) -> frozendict: ...
def __iter__(self) -> Iterator[_KT]: ...
def __len__(self) -> int: ...
def __repr__(self) -> str: ...
def __hash__(self) -> int: ...
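
As a rough illustration of what this stub lets a type checker verify (assuming the
`frozendict` package is installed):

```python
from frozendict import frozendict

fd = frozendict({"foo": 1, "bar": 2})  # inferred as frozendict[str, int]
value = fd["foo"]                      # __getitem__ returns the value type
print(len(fd), "bar" in fd)            # __len__ and __contains__
fd2 = fd.copy(baz=3)                   # copy() returns a new frozendict
print(sorted(fd2))                     # __iter__ yields the keys
```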
2 changes: 1 addition & 1 deletion synapse/__init__.py
@@ -48,7 +48,7 @@
except ImportError:
pass

__version__ = "1.19.0"
__version__ = "1.19.1rc1"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when
37 changes: 37 additions & 0 deletions synapse/api/ratelimiting.py
@@ -17,6 +17,7 @@
from typing import Any, Optional, Tuple

from synapse.api.errors import LimitExceededError
from synapse.types import Requester
from synapse.util import Clock


@@ -43,6 +44,42 @@ def __init__(self, clock: Clock, rate_hz: float, burst_count: int):
# * The rate_hz of this particular entry. This can vary per request
self.actions = OrderedDict() # type: OrderedDict[Any, Tuple[float, int, float]]

def can_requester_do_action(
self,
requester: Requester,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
_time_now_s: Optional[int] = None,
) -> Tuple[bool, float]:
"""Can the requester perform the action?
Args:
requester: The requester to key off when rate limiting. The user property
will be used.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions that can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
Returns:
A tuple containing:
* A bool indicating if they can perform the action now
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return True, -1.0

return self.can_do_action(
requester.user.to_string(), rate_hz, burst_count, update, _time_now_s
)
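
A minimal usage sketch of the new method (not part of this diff; `hs_clock`,
`requester` and the error handling are assumed to come from the surrounding
Synapse machinery):

```python
from synapse.api.ratelimiting import Ratelimiter

# Allow roughly one action every 6 seconds, with a burst of 3.
limiter = Ratelimiter(clock=hs_clock, rate_hz=1 / 6, burst_count=3)

allowed, time_allowed = limiter.can_requester_do_action(requester)
if not allowed:
    # time_allowed is the reactor timestamp at which the action may be retried.
    # (Requesters from appservices with rate limiting disabled always get
    # allowed=True and time_allowed=-1.0.)
    raise_rate_limit_error(time_allowed)  # hypothetical error handling
```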

def can_do_action(
self,
key: Any,
22 changes: 14 additions & 8 deletions synapse/config/server.py
@@ -961,11 +961,10 @@ def generate_config_section(
# min_lifetime: 1d
# max_lifetime: 1y
# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
# Retention policy limits. If set, and the state of a room contains a
# 'm.room.retention' event in its state which contains a 'min_lifetime' or a
# 'max_lifetime' that's out of these bounds, Synapse will cap the room's policy
# to these limits when running purge jobs.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
@@ -991,12 +990,19 @@
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
# If any purge job is configured, it is strongly recommended to have at least
# a single job with neither 'shortest_max_lifetime' nor 'longest_max_lifetime'
# set, or one job without 'shortest_max_lifetime' and one job without
# 'longest_max_lifetime' set. Otherwise some rooms might be ignored, even if
# 'allowed_lifetime_min' and 'allowed_lifetime_max' are set, because capping a
# room's policy to these values is done after the policies are retrieved from
# Synapse's database (which is done using the range specified in a purge job's
# configuration).
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# - longest_max_lifetime: 3d
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 1d
# Inhibits the /requestToken endpoints from returning an error that might leak

