The Extended Sphinx Protocol

SPHINX is a beautiful protocol for storing and querying exactly one password.

However, in real life users want to store more than one password, sometimes change or delete a password, and may have more than one account at a site. Users do not want to remember the list of their user names, nor do they want to synchronize them over all of their devices. These are things that SPHINX does not provide. Hence it is necessary to extend the original SPHINX protocol.

Terminology

Sphinx Records

Records are data structures stored at the sphinx server which contain all the necessary information to provide a user with means to query and manage passwords.

User

A user is the human attempting to manage their passwords.

Service

A service is some interface where the user wants to log in.

Account

An account is a combination of a username and a password authenticating a user at a service.

Account password

An account password is the password that authenticates the user with a service. It is derived from the SPHINX key in a way that makes the output compatible with the password rules mandated by the service.

rwd (Random word)

32 bytes of high entropy derived via the SPHINX protocol by calculating hash(pwd||hash2curve(pwd)*k), where k is a secret seed contributed by the sphinx server.

Master Password

The master password is the password which is used in SPHINX to query and derive the account password for a service. Unlike with other password managers, it is possible with sphinx to have multiple master passwords, since they are not used to open an encrypted database but only contribute to the derivation of the rwd.

Client Master-key

A client master-key is a key shared by all devices of a user which access and manage their SPHINX records on a server.

Core functions

The two core functions that are defined by SPHINX are initialize SPHINX record and query SPHINX.

Initialize SPHINX record

This core SPHINX function creates a new record at the server for later use; this is what you run when registering a new user at a service.

Query SPHINX

This core SPHINX function queries the SPHINX server (oracle) which has an associated record previously set up by the other SPHINX core function, Initialize SPHINX record.

Management functions

A password manager needs other functions besides creating and querying passwords: deletion of accounts, changing of account passwords, committing a changed password, undoing a commit to restore the previous password, and listing the usernames associated with a host.

Core Sphinx Protocol

The most essential operation in the Extended Sphinx Protocol is the Sphinx query. This is the basis for all other operations in the protocol - the only exception being the READ operation.

Let’s recap how the SPHINX query works:

  1. The user hashes their password and then blinds it using the blinding factor `r`, the resulting alpha value is passed to the server.
r = random(32)
alpha = (r*hash(pwd))

The * operation denotes scalar multiplication over an elliptic curve like Curve25519. Our concrete implementation uses ristretto255. It is also worth noting that the hash operation here is really a hash-to-curve operation.

  2. The server has a secret seed which it contributes into the blinded hash of the user's password:

beta = seed*alpha

The server returns the calculated beta value to the client.

  3. The user unblinds the server's output (beta)
pwd_k = 1/r * beta

and hashes it again with the master password, which results in the rwd:

rwd = hash(pwd,pwd_k)

In our implementation `rwd` results in a high entropy 32 byte array.
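The three steps above can be sketched end-to-end. This is an illustration only: a real implementation uses ristretto255, while here the group is the multiplicative group modulo the Mersenne prime 2^127 - 1, so all arithmetic is plain Python. Names and parameters are ours, not the reference implementation's.

```python
import hashlib
import math
import secrets

P = 2**127 - 1  # toy prime modulus; the group Z_P^* has order P - 1 (NOT secure)

def hash_to_group(pwd: bytes) -> int:
    # stand-in for the hash-to-curve operation
    return int.from_bytes(hashlib.sha256(pwd).digest(), 'big') % (P - 2) + 2

def client_blind(pwd: bytes):
    # step 1: blind H(pwd) with a random factor r invertible mod P - 1
    h = hash_to_group(pwd)
    while True:
        r = secrets.randbelow(P - 1)
        if r > 1 and math.gcd(r, P - 1) == 1:
            return r, pow(h, r, P)

def server_evaluate(seed: int, alpha: int) -> int:
    # step 2: the server contributes its secret seed to the blinded hash
    return pow(alpha, seed, P)

def client_finish(pwd: bytes, r: int, beta: int) -> bytes:
    # step 3: unblind beta and hash it together with the master password
    pwd_k = pow(beta, pow(r, -1, P - 1), P)  # beta^(1/r) = H(pwd)^seed
    return hashlib.blake2b(pwd + pwd_k.to_bytes(16, 'big'),
                           digest_size=32).digest()

seed = secrets.randbelow(P - 1) + 1      # the server's per-record secret
r, alpha = client_blind(b"master password")
beta = server_evaluate(seed, alpha)
rwd = client_finish(b"master password", r, beta)
assert len(rwd) == 32
```

Note that the server only ever sees alpha, which is uniformly random thanks to the blinding factor r, and the client only learns H(pwd)^seed, never the seed itself.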

The Extended Sphinx Protocol Details

In the following sections we specify the extensions we added to the core Sphinx Protocol and explain why we did so.

Password rules

The output of the core sphinx protocol (at least in our instantiation) is an array of 32 random bytes. Most services are not expecting binary data (although they should!) and the worse ones even have rules about which character classes to require or forbid, or how long the password may be. In order to store this per-service information without having to sync it across the different devices used by a user, we store the per-service password rules on the sphinx server.

We extended the core Sphinx Protocol by having the server send along the password rules together with the beta value at the end of step two of the core sphinx protocol - so that the client can derive the correct password.

Rules are compacted in the following way:

|---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---|
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | a | b | c | d | e | f |
|---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---|
|       password size       | U | L | D |' '| ! | " | # | $ | % |
|---------------------------+---+---+---+---+-------------------|

|---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---|
|10 |11 |12 |13 |14 |15 |16 |17 |18 |19 |1a |1b |1c |1d |1e |1f |
|---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---|
| & | ' | ( | ) | * | + | , | - | . | / | : | ; | < | = | > | ? |
|---------------------------+---+---+---+---+-------------------|

|---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---|
|20 |21 |22 |23 |24 |25 |26 |27 |28 |29 |2a |2b |2c |2d |2e |2f |
|---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---|
| @ | [ | \ | ] | ^ | _ | ` | { | | | } | ~ |    check-digit    |
|---------------------------+---+---+---+---+-------------------|

and a 32 byte long xor mask.

The longest password we can derive out of 32 bytes is only 76 chars, when only digits are allowed. Thus 7 bits are enough to encode the size of a password.

The bits 7-9 encode the character classes: upper-case, lower-case, digits.

Bits 0xa - 0x2a are a boolean array storing if a particular symbol is allowed or not.
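The bitfield described above can be packed into 6 bytes (48 bits). The following is a hypothetical sketch assuming big-endian bit order (bit 0 is the most significant); the actual wire format may order the bits differently:

```python
SYMBOLS = " !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"  # bits 0xa..0x2a, table order

def pack_rules(size: int, upper: bool, lower: bool, digits: bool,
               allowed: str, check_digit: int) -> bytes:
    v = size & 0x7f                      # bits 0-6: password size
    for flag in (upper, lower, digits):  # bits 7-9: character classes
        v = (v << 1) | int(flag)
    for sym in SYMBOLS:                  # bits 0xa-0x2a: allowed symbols
        v = (v << 1) | int(sym in allowed)
    v = (v << 5) | (check_digit & 0x1f)  # bits 0x2b-0x2f: check digit
    return v.to_bytes(6, 'big')          # the 32 byte xor mask follows this
```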

Check digit

The check-digit is a 5 bit number that is used as a simple check whether the derived password is correct or not. There is a 1/32 chance that a wrong password goes undetected. The check-digit is calculated as such:

check-digit = blake2b("sphinx check digit context", rwd, 1)[0] & 0x1f
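In Python this can be sketched with the standard library's keyed blake2b, assuming the rwd is used as the key and the context string as the hashed data (the argument order in the pseudo-code above is ambiguous):

```python
import hashlib

def check_digit(rwd: bytes) -> int:
    # 1-byte keyed blake2b digest, keep only the low 5 bits
    d = hashlib.blake2b(b"sphinx check digit context",
                        key=rwd, digest_size=1).digest()
    return d[0] & 0x1f
```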

rwd to ASCII conversion

The conversion from the 32 byte array to an ASCII printable string containing only the characters allowed by the rules interprets the 32 bytes as an integer in big-endian order and then encodes it base-n, where n is the number of characters allowed by the rules. The digits are, in this order (if allowed): upper-case, lower-case, digits, and the symbols in the order in which they are listed in the compacted rules blob.

In pseudo-code:

def derive(rwd, chars, size):
    v = int.from_bytes(rwd, 'big')
    result = ''
    while (size > 0 and len(result) < size) or (size == 0 and v > 0):
        idx = v % len(chars)
        v //= len(chars)
        result = chars[idx] + result
    return result

RWD Xor Masks and Pre-defined Account Passwords

Since the conversion from the binary rwd to the ASCII account password is simple arithmetic, it is also possible to reverse it. This allows us to calculate the binary rwd that SPHINX should output in order to generate a specified password. It is however not possible to actually make SPHINX output this value directly. In order to allow certain non-random pre-defined passwords to be output by SPHINX, we xor the rwd with the xor mask from the rules blob. Normally the xor mask is all zeroes, and thus the output of the SPHINX query is unchanged and totally random. If the user specifies a pre-defined password during Create or Change operations, we first calculate backwards the target rwd that would produce that pre-defined output, then xor this target rwd with the rwd of the SPHINX query and store the result in the xor mask. It is important to note that such pre-defined output passwords can be at most 38 characters long.

__Warning__ this mode generates passwords that violate the security guarantees of the sphinx protocol and should be avoided at all costs. This mode only exists as a convenience function and is as such a perfect example of convenience violating security.

Converting a base-n number back into an rwd is as simple as the following example pseudo-code:

def invert(string, chars):
    le_str = string[::-1]
    result = sum(chars.index(le_str[i]) * len(chars)**i for i in range(len(le_str)))
    return result.to_bytes(32, 'big')
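Putting the two conversions and the xor mask together, the following self-contained sketch shows how a pre-defined password is round-tripped (the alphabet and all names here are illustrative, not the exact character ordering of the implementation):

```python
import secrets

def derive(rwd: bytes, chars: str, size: int = 0) -> str:
    # big-endian base-len(chars) encoding of the rwd
    v = int.from_bytes(rwd, 'big')
    result = ''
    while (size > 0 and len(result) < size) or (size == 0 and v > 0):
        v, idx = divmod(v, len(chars))
        result = chars[idx] + result
    return result

def invert(password: str, chars: str) -> bytes:
    # reverse the encoding: treat the password as little-endian digits
    le = password[::-1]
    v = sum(chars.index(le[i]) * len(chars) ** i for i in range(len(le)))
    return v.to_bytes(32, 'big')

CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
sphinx_rwd = secrets.token_bytes(32)             # output of the SPHINX query
target_rwd = invert('MyPredefinedPass1', CHARS)  # rwd yielding the wanted password
mask = bytes(a ^ b for a, b in zip(sphinx_rwd, target_rwd))
# at query time, applying the stored mask recovers the pre-defined password
masked = bytes(a ^ b for a, b in zip(sphinx_rwd, mask))
assert derive(masked, CHARS) == 'MyPredefinedPass1'
```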

Encrypted blobs

Although Password Rules contain little information, we decided to keep them confidential from the server, since some special combination of rules might leak which service the password authenticates to.

The symmetric encryption key (aka sealkey) for blobs - used for protecting password rules and user records (see later) - is derived from the client master-key (see below) as follows:

enc_key = blake2b("sphinx encryption key", masterkey)

Encryption/decryption of blobs is done using `crypto_aead_xchacha20poly1305_ietf_encrypt()`. Thus all encrypted blobs are prefixed with a 24 byte nonce and extended with a 16 byte authentication tag.

Blobs are prepended with a version byte, that is authenticated but not encrypted.
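The sealkey derivation and the resulting blob layout can be sketched as follows, assuming a keyed blake2b with a 32 byte output (the size xchacha20poly1305 expects):

```python
import hashlib

masterkey = bytes(32)  # placeholder client master-key
enc_key = hashlib.blake2b(b"sphinx encryption key",
                          key=masterkey, digest_size=32).digest()
# an encrypted blob then looks like:
#   version (1 byte, authenticated but not encrypted)
#   || nonce (24 bytes) || ciphertext || auth tag (16 bytes)
```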

The client master-key

As mentioned in the previous “Encrypted blobs” section the sealkeys are derived from a client master-key.

Although SPHINX itself is a protocol which does not require any state stored at the client, it lends itself to using SPHINX itself to authenticate any management operation. Unfortunately this means that the server would be able to mount offline bruteforce attacks against the master password; hence it is not possible to use SPHINX to authenticate with a SPHINX server directly without throwing out one of the most important security guarantees of the SPHINX protocol.

One possibility would be to use a threshold-version of SPHINX to authenticate to a SPHINX server, however this brings up a few complications and availability issues.

Another solution is to introduce state in the client, which can be used to create a simple and boring-crypto wrapper around SPHINX, and it also helps solve a few other challenges.

We decided on the latter solution, and thus our Extended Sphinx Protocol requires a master-key at the client. This key must be synced to other devices belonging to the same user in case the user wants to use the same accounts on multiple devices. This master-key can (and should) be backed up by the user to retain access to their passwords and the ability to manage them.

It is important to note that the passwords themselves do not depend in any way on this master-key. The master-key is only used to generate the record IDs, to derive authentication keys for management operations and to generate encryption keys for the encryption of user and rules blobs.

For more information on what the loss of confidentiality of this master-key means see our section: Bruteforce attacks against our Sphinx implementation.

Record IDs

In order to store different accounts, we need to be able to refer to them somehow. Record IDs should be calculated by the client in the following way:

id_key = blake2b("sphinx host salt", masterkey)
id = blake2b(user||host, id_key)

The id_key is necessary to prevent guessing of IDs and pre-computation dictionary attacks against these ids.
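A sketch of this derivation, assuming keyed blake2b and simple concatenation of user and host (the concrete encoding of `user||host` is an assumption here):

```python
import hashlib

def record_id(masterkey: bytes, user: str, host: str) -> bytes:
    # id_key = blake2b("sphinx host salt", masterkey)
    id_key = hashlib.blake2b(b"sphinx host salt",
                             key=masterkey, digest_size=32).digest()
    # id = blake2b(user||host, id_key)
    return hashlib.blake2b(user.encode() + host.encode(),
                           key=id_key, digest_size=32).digest()
```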

This way of generating record ids should also protect against phishing: as long as the hostname is taken directly from the URL-bar, a phishing hostname will not match the correct one, and thus the protocol will fail because no appropriate record is found. The lack of a record where you expect one is also a warning sign of being phished.

Authentication keys for management operations

Management operations change the records stored at the sphinx server; these need to be authenticated to prevent denial of service against legitimate users. The following operations are authenticated: Change, Commit, Undo, Delete, Read.

Updates to user records need to be signed with the private key for which the corresponding pub key is already stored at the user record.

Our protocol provides two authentication mechanisms, one requiring knowledge of the master password for a record, the other one only requires knowledge of the client secret. Both have their benefits and drawbacks.

Authentication-keys without master passwords

If the rwd_keys configuration option is set to false, management operations are authenticated in the following way:

When creating a new record, the client sends along a unique pubkey, that is used to authenticate all later management operations. The pubkey is generated as such:

key0 = blake2b("sphinx signing key", masterkey)
seed = blake2b(key0, id)
pk, sk = ed25519_keypair(seed)

The parameter id used to calculate the seed is the record id; we use this to derive unique keys for each record that cannot be linked to other keys derived from the same master key.

The drawback of this method is that anyone with the master key can enumerate accounts at the sphinx server and run authenticated management operations against them. However, in this case the attacker learns neither the master nor the account password; this can only be used to cause a denial of service by deleting the record or changing the server seed.

Authentication-keys with master passwords

If the configuration option rwd_keys is enabled, then the rwd is also added to the key:

key0 = blake2b("sphinx signing key", masterkey)
key1 = blake2b(key0, id)
seed = blake2b(key1, rwd)
pk, sk = ed25519_keypair(seed)

The rwd is the raw output of the SPHINX protocol, and by mixing it into the authentication key we make sure only users knowing the master password can execute management operations.

The drawback of this authentication method is that this allows anyone with the authentication public key, the sphinx seed (both of which are on the server) and the client master key to mount an offline bruteforce attack against the master password.
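Both variants share the same key-derivation chain, only differing in whether the rwd is mixed in. A stdlib-only sketch of the seed derivation (assuming keyed blake2b with 32 byte outputs; the ed25519 keypair would then be generated from this seed using a library such as libsodium):

```python
import hashlib

def auth_seed(masterkey: bytes, record_id: bytes, rwd: bytes = None) -> bytes:
    # key0 = blake2b("sphinx signing key", masterkey)
    key0 = hashlib.blake2b(b"sphinx signing key",
                           key=masterkey, digest_size=32).digest()
    # key1 = blake2b(key0, id)
    key1 = hashlib.blake2b(key0, key=record_id, digest_size=32).digest()
    if rwd is None:          # rwd_keys disabled: seed = key1
        return key1
    # rwd_keys enabled: seed = blake2b(key1, rwd)
    return hashlib.blake2b(key1, key=rwd, digest_size=32).digest()
```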

Sphinx Records

Sphinx records are referenced by the record ID. All Sphinx Records stored at the sphinx server have the following three components:

  • the sphinx seed
  • authentication public key
  • encrypted password rules

The seed (also referred to as simply the “key”), is the secret component which during the Sphinx Query is contributed by the server to the final rwd. The authentication public key is a unique ed25519 key used to authenticate management operations on this record.

User records

A convenience function offered by password managers is to present the user with the list of usernames known by the manager when logging into a site. While not strictly necessary, it is a feature that users expect. In our extended protocol we provide a special kind of record, which we call user records; these are encrypted blobs which contain a list of usernames. The record id for these records is generated like that of normal records, with the user component set to an empty string.

Our extended protocol provides a READ primitive to fetch these blobs; however, writing these blobs is only possible implicitly through the CREATE and DELETE sphinx record management operations. It is of course possible to create a bogus sphinx entry just to store some "secret" instead of a username in the user record, but the protection of these records is not very strong; there are countless better methods to do so.

The usernames in these records are separated by 0x0 bytes, and the whole record cannot be bigger than (64KB - 40) bytes - the 40 bytes are reserved for the nonce and authentication tag of encrypted blobs. Furthermore these user records are prefixed by their length in 2 bytes in network order, but these two bytes do not count towards the maximum size of the user record. The structure of these encrypted user record blobs thus looks like this:

+--------------+----------+------------+--------------------+
| 2 bytes      | 24 bytes | n bytes    | 16 bytes           |
|--------------+----------+------------+--------------------|
| size of blob | nonce    | ciphertext | authentication tag |
+--------------+----------+------------+--------------------+
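The framing can be sketched as follows; `seal()` here is a stand-in for `crypto_aead_xchacha20poly1305_ietf_encrypt()` (returning nonce || ciphertext || tag), and we read the size limit as applying to the plaintext:

```python
import struct

MAX_RECORD = 64 * 1024 - 40  # 40 bytes reserved for nonce + auth tag

def frame_user_blob(usernames, seal):
    # join usernames with 0x00 separators, seal, prefix the 2 byte size
    plaintext = b'\x00'.join(u.encode() for u in usernames)
    if len(plaintext) > MAX_RECORD:
        raise ValueError("user record too big")
    blob = seal(plaintext)
    return struct.pack('!H', len(blob)) + blob

def split_usernames(plaintext: bytes):
    return [u.decode() for u in plaintext.split(b'\x00') if u]
```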

The Extended Sphinx Protocol Messages

The following operations make up the extended protocol:

  • Create: create a new record
  • Get: query a record
  • Change: change the seed and update the password rules and auth pubkey associated with the record
  • Commit: activate the changed seed, password rules and auth pubkey, saving a backup copy of the previous values
  • Undo: restore the backup seed, password rules and auth pubkey activated by a Commit operation.
  • Delete: delete a record
  • Read: query the list of users registered with a host

The user needs a client master-key and their master password to successfully address records and to authenticate management operations.

Initial messages

All initial messages (except the `read` and `create` operation) sent from the client to the server have the same structure:

u8   ratelimit_opcode
u8   opcode
u8   id[32]
u8   alpha[32]

All operations - except the create operation - are subject to ratelimiting, and the initial message is part of the puzzle that must be solved before the operation can be processed. For the create operation there is no ratelimit, and hence the initial message for `create` operations lacks the first `ratelimit_opcode` field.

Since no rwd is necessary for querying the user list - only the client master-key - the initial message of the read operation lacks the last `alpha` member.

The ratelimit_opcodes are the following:

CHALLENGE_CREATE = 0x5a
CHALLENGE_VERIFY = 0xa5

Here `CHALLENGE_CREATE` requests a new ratelimiting challenge, and `CHALLENGE_VERIFY` presents a solution. For more information on ratelimiting see the dedicated chapter below.

The opcodes for the messages are the following:

CREATE   =  0x00
READ     =  0x33
UNDO     =  0x55
GET      =  0x66
COMMIT   =  0x99
CHANGE   =  0xaa
DELETE   =  0xff

The `id` member of the message is the record ID as specified above, and the `alpha` value is the hashed and blinded master password as required by the sphinx Protocol.
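Packing such an initial message is straightforward; the following sketch builds the 66 byte message for a ratelimited operation (for `create` the first byte would be omitted, for `read` the alpha member):

```python
import struct

CHALLENGE_CREATE, CHALLENGE_VERIFY = 0x5a, 0xa5
GET = 0x66  # one of the opcodes listed above

def initial_message(record_id: bytes, alpha: bytes, opcode: int = GET) -> bytes:
    # ratelimit_opcode || opcode || id[32] || alpha[32]
    assert len(record_id) == 32 and len(alpha) == 32
    return struct.pack('!BB32s32s', CHALLENGE_CREATE, opcode, record_id, alpha)
```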

TLS

All messages between the client and the sphinx server are conducted over a TLS connection. The original SPHINX protocol was supposed to not need any extra encryption - since the blinding itself already provides confidentiality. However, the mere fact that the sphinx records need to be indexed by some identifier breaks this nice property of the original SPHINX protocol. Using TLS provides confidentiality against passive attackers collecting statistics about which IDs are being used.

Authentication

The following management operations require authentication: Change, Commit, Undo, Delete, Read.

Authentication always starts with a basic Sphinx query, so that in case the client uses Authentication-keys with master passwords (see above) it can derive the correct key depending on the master password. Since this is always executed the server does not learn which kind of authentication key method the client uses.

When the server sends back the `beta` value from its part of the Sphinx query, it also sends along a random nonce to the client.

The client derives its authentication key and signs the nonce, then sends the signature back to the server. The server takes the authentication public key from the Sphinx record and verifies the signature over the nonce with it. If this verification fails, the server aborts; otherwise it hands control to the requested management operation.

Management Operations

Creation of records

Creation of a record is a quite straight-forward matter:

  1. The client initiates a CREATE operation on the server - including a run of the SPHINX Query (see above) - but the server, instead of loading a sphinx seed (which doesn't exist yet), just generates one randomly.
  2. The client derives the encryption and authentication keys. The password rules are encrypted and appended to the authentication public key. The auth pubkey and the encrypted rules are signed with the auth private key and sent to the server.
  3. The client updates the user record (see below) for this host: requesting the current user record, decrypting it, appending the new user to it, and finally sending the encrypted blob back to the server.
  4. If everything went well, the server stores the pubkey and the password generation rules next to the seed already generated in the first step of the CREATE operation.
  5. Finally the client uses the rwd to derive the password using the password rules, and returns the newly generated account password to the user.

Notably, neither ratelimiting nor authentication happens during creation (note that there is authentication when updating an existing user record, but not when creating one).

Changing of passwords/records

  1. Changing of a record requires authentication.
  2. Successful authentication is followed by the client initiating a second Sphinx Query - possibly with a changed master password - also sending along a newly encrypted password rules blob. This allows a client - if required - to change either of these, but both can also stay the same.
  3. The server generates a new sphinx seed and executes its part of the Sphinx query on it - sending back the resulting `beta` value to the client.
  4. The client finishes the Sphinx query, obtaining the new rwd. Using this it generates a new authentication key-pair, signs the new public key with the new secret key - just to prove ownership of this keypair - and sends the signed public key back to the server.
  5. The server checks whether the public key verifies the signature over itself; if successful, the server stores the new sphinx seed, the new auth public key and the new encrypted password rules blob, marking them all as `new` - still keeping the original values active. If all this succeeds the server finally sends back the string "ok" to the client.
  6. Upon receiving the “ok” message from the server, the client using the possibly changed password rules derives the new rwd from the result of the second sphinx query and returns the new password to the user.

The above procedure allows a user to change or keep their master password, or to change the password rules if needed. But it is also possible to just generate a new password while keeping the old values; the fact that the server generates a new sphinx seed guarantees that the new password will be different from the old one.

Also notable is that this operation, despite its name, does not in fact change the password; it merely generates a new one, which still needs to be activated using the Commit operation. Client implementations may automatically call Commit after a successful Change operation.

Commit new record

To allow for errors during the changing of passwords on a service, the old password is still active until the user commits the change, which effectively replaces the current record with the new one.

The Commit operation is a simple flow: it starts with an authentication using the current master password, and if that succeeds the server replaces the current record with the new one. The old password, authentication public key and password rules are marked as old; in case the change of the password at the service fails and the account is still stuck with the old password, the Commit operation can be reverted by the Undo operation.

Undo commit record

To allow for errors during the changing of passwords on a service, the old user record is retained after the user commits the change. This allows reverting the Commit and using the old password, in case the password change at the service fails for some reason.

Undoing is a simple flow, very similar to the Commit operation: it starts with an authentication using the currently active master password. If the authentication succeeds, the server marks the current record as new. The current password, authentication public key and password rules are replaced by the old ones. The Undo operation can be redone by the Commit operation, to accommodate confusion when updating an account password.

Deletion of keys

The Delete operation deletes a sphinx record and updates the user record. The operation starts with an authentication; if it succeeds, the server updates the user record and finally deletes the sphinx record.

User Record Operations

The record id for user records is calculated similarly to sphinx record ids, with the only difference being that the username is set to an empty string.

Reading of user records

Reading of user records is a simple flow which, after successful authentication, returns the encrypted user record blob.

Updating of user records

Updating user records can only be done by creating or deleting sphinx records. During a create or delete operation:

  1. An update is initiated by sending the user record id to the server.
  2. The server responds with the user record if there is such, or an empty user record if there is none. User records are always prefixed with 2 bytes representing their size, empty user records are thus signaled by responding with two zero bytes.

If there was no existing user record, then a create user record flow is executed:

The client

  1. derives an authentication key-pair for this user record.
  2. it encrypts the user name as an encrypted blob.
  3. This blob is prefixed with its size represented by 2 bytes in network order.
  4. The prefixed blob is concatenated after the public authentication key.
  5. This is then signed by the authentication secret key.
  6. And finally this is sent to the server.

In pseudo-code this looks as such

id = getid(host)
authkey, pubkey = getauthkey(id)
send(signed_message(authkey, pubkey || (uint16_t) sizeof(blob) || blob))

The server:

  1. receives the authentication pubkey, the size of the blob, the blob itself, and the signature over the whole message.
  2. using the authentication pubkey the server verifies the signature, if this verification fails the server aborts.
  3. the server stores the auth pubkey and the user record blob under the user record id.

If there already was an existing user record, then an update user record flow is executed which is simpler than the create flow, since we do not have to generate or send authentication keys.

The client

  1. decrypts the user record blob sent by the server
  2. it adds the new user to the decrypted list of users
  3. it encrypts the list of users into an encrypted user record blob
  4. the encrypted user record blob is prefixed by its size represented in two bytes in network order.
  5. the size-prefixed blob is signed by the authentication key
  6. the signed blob is sent to the server.

In pseudo-code this looks as such

id = getid(host)
authkey, pubkey = getauthkey(id)
send(signed_message(authkey, (uint16_t) sizeof(blob) || blob))

The server:

  1. receives the size of the blob, the blob, and the signature over this.
  2. it loads the authentication public key from the user record
  3. it then verifies the signature over the blob, if this verification fails the server aborts.
  4. it stores the user record blob under the user record id.

Weaknesses

  1. When using server side user-lists the server can correlate which records belong to the same user and target server.
  2. Server can collect usage statistics on sphinx records.
  3. Management operations have unique communication patterns, even through the TLS encryption it can be deduced which operation is being run. The info leakage is due to the size and direction of data being passed between the server and the client.
  4. When updating user records, the requested record (if it exists) is returned without any authentication. It is thus possible to start a create sphinx record operation and then send an arbitrary user record id; the update user record flow can then be aborted by just closing the connection, or by sending an invalid user record that cannot be authenticated by the pubkey known to the server. The server in this case will abort the update user record and the create sphinx record operations without changing anything. Thus it is possible for an attacker to circumvent the authentication required during the READ operation.

Bruteforce attacks against our Sphinx implementation

Given the following abstract model of the SPHINX protocol:

Sphinx(seed) <--[get password]--> Client(secret) <--[login]--> Server(userdb)
    ^                                  ^                            ^
     \----------------------------> Attacker <---------------------/

None of the 3 parties are compromised

We know that simply bruteforcing the user's password at the Server is infeasible for the attacker, since the account password is independent and of high entropy.

Lacking any other information also makes any online bruteforce attacks involving the Sphinx storage infeasible, since the user ids under which the seeds are stored are practically unguessable.

The Sphinx storage is compromised and the attacker has access to the Sphinx seeds.

An attacker can only run online attacks against the Server to recover a single login password to the Server, thereby also recovering the master password. The following defenses can make such an attack more difficult:

a) using unique master passwords for each account - which is unreasonable.
b) using a few master passwords, one for less valuable, and a few for high value accounts.
c) using a memory-hard password hashing function on the Client, which the attacker also has to use - slowing down the attack.
d) rate-limiting on the Server.
e) account lock-down after a certain threshold of failed logins.

Of these defenses, a) and b) are up to the user to implement, c) is implemented in our library using argon2i, and d) and e) are up to the Server to implement.

It is worth noting that in our Sphinx implementation the userids for the Sphinx seeds are derived from a client secret. Thus an attacker having access to the Sphinx seeds but none of the client secrets has no way of knowing which seed belongs to which user/server account, making the online queries shots in the dark.

The Client secret is available to the attacker

This can happen for example by leaking your client secret while scanning it as a QR code.

Using a leaked client secret an attacker can enumerate the username/host combinations known by a sphinx server. This attack is online-bruteforce only, though a dictionary can significantly aid it.

Having recovered an ID allows an attacker to mount an online bruteforce attack against the master password. This attack requires the attacker to first do an online query to the Sphinx server then using the derived password in an online query against the Server to check if the derived password is correct, thus revealing the master password.

The only obvious defense against this attack is ratelimiting and (b)locking bruteforce attackers in the enumeration phase and the master password recovery phase.

The Service user db is leaked

Lacking the Client secret makes any online bruteforce attacks involving the Sphinx storage infeasible, since the user ids under which the seeds are stored are practically unguessable.

Both the Client secret and the Server user db is available to the attacker

To bruteforce the master password online, the attacker first - using the Client secret - finds an existing userid on the Sphinx storage belonging to a username/server pair. The attacker then queries the Sphinx storage online to derive a candidate password and validates the candidate online against the Server with the guessed username.

To defend against this case we can deploy rate-limiting on both the Sphinx storage and the Server.

The Server is compromised and the user db is available to the attacker

a) offline dictionary attacks against the Server password are infeasible, since the Server password is unique and of high entropy.

b) Online attacks against the master password are possible. They proceed like the online attack in the case where both the Client secret and the Server user db are compromised, except that no online verification against the Server is needed, which makes this attack slightly easier than in that case.

To protect against case b) the attacker can be slowed down by

  • using a memory-hard password hashing function in the protocol, which in our implementation is argon2i.
  • deploying a rate-limiter at the Sphinx storage.

Both the Sphinx Storage and a Client secret are available to the attacker

In this case the attacker can bruteforce the userids using the client secret to figure out which seed belongs to which username at which server. Having found a seed belonging to the client secret enables an attacker to mount an offline bruteforce attack against the master password of this account. The attacker simply calculates the OPRF directly without the blinding (hash(master || hash2curve(master)*seed)), derives the client signing key from it and the client secret, and checks whether the resulting client signing pubkey is the same as the one stored with the seed.
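Under these assumptions, the attack loop can be sketched as follows. This is a toy model, not the real implementation: a multiplicative group modulo a prime stands in for ristretto255, and plain blake2b stands in for both the protocol hash (argon2i in our implementation) and the Ed25519 key derivation; all names are illustrative.

```python
import hashlib

# Toy sketch of the offline attack, NOT the real implementation: a
# multiplicative group mod P stands in for ristretto255, and blake2b
# stands in for both the protocol hash and the argon2i hardening.
P = 2**255 - 19          # hypothetical toy modulus, for illustration only
G = 5                    # toy generator

def hash2curve(pwd: bytes) -> int:
    # toy stand-in for hashing a password to a group element
    e = int.from_bytes(hashlib.blake2b(pwd).digest(), 'big') % (P - 1)
    return pow(G, e, P)

def unblinded_oprf(pwd: bytes, seed: int) -> bytes:
    # rwd = hash(pwd || hash2curve(pwd)*seed), computed without blinding
    elem = pow(hash2curve(pwd), seed, P)
    return hashlib.blake2b(pwd + elem.to_bytes(32, 'big')).digest()

def derive_pubkey(rwd: bytes, client_secret: bytes) -> bytes:
    # stand-in for deriving the client signing pubkey from rwd and the secret
    return hashlib.blake2b(rwd + client_secret, digest_size=32).digest()

def offline_bruteforce(stored_pubkey, seed, client_secret, dictionary):
    # test each candidate master password directly against the stored pubkey
    for guess in dictionary:
        if derive_pubkey(unblinded_oprf(guess, seed), client_secret) == stored_pubkey:
            return guess
    return None
```

Note that the only per-guess cost the attacker pays here is local computation; in the real protocol that cost is dominated by the memory-hard argon2i step.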

Both the Sphinx Storage and the Server's user db are available to the attacker

The attacker does not know which sphinx seeds contribute to which passwords in the Server user db. This means the attacker has to run an offline bruteforce attack in which each seed must be bruteforced against all the target accounts from the server user database. Although this can be parallelized, the attacker is slowed down by the memory-hard password hashing function used in our implementation.

Lucky jackpot: The Sphinx seeds, the Client secret and the Server user db are available to the attacker

Using the Client secret and the usernames in the Server user db the attacker can trivially find out which Sphinx seed belongs to which Server user db account. The attacker can then recover the master password used for this specific account by running a targeted offline bruteforce attack. Having recovered the master password, the attacker can offline-bruteforce the other username/server combinations of the Sphinx seeds that share the same master password and Client secret, and thus recover all Server username/password/hostname combinations that share the same master password.

Rate-limiting the Sphinx Storage

In the section “Bruteforce attacks against our Sphinx implementation” we identified three cases in which the Sphinx storage is not available to the attacker; in these cases an online bruteforce attack can be further slowed down by deploying rate-limiting.

IP address based rate-limiting

IP address-based rate-limiting is a common measure. It is supported at the kernel level, and there are also application-level solutions (e.g. haproxy). However the problem with IP-based rate-limiting is that it does not protect against botnets with many different IP addresses, and if the server to be protected is also reachable via IPv6, attackers can simply exploit a /64 address space or even more. On the other hand, IP-based rate-limiting is computationally very cheap and can be done without changing the sphinx protocol; it can serve as a simple defense-in-depth measure.

UserID based rate-limiting

Another approach could be to rate-limit access to userids. The problem with querying the Sphinx password store is that the password store has no knowledge of whether the user's password input is correct or not. Thus we cannot limit only failed attempts. This also means we need to store state about the number of accesses within a time-window and exponentially increase or decay a rate-limit.
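A minimal sketch of such stateful, per-userid rate-limiting, assuming hypothetical parameters DECAY (seconds per difficulty step), THRESHOLD (accesses tolerated per window) and MAX_LEVEL; these names and values are illustrative, not the shipped implementation.

```python
import time

# Hedged sketch of per-userid difficulty tracking; DECAY, THRESHOLD and
# MAX_LEVEL are hypothetical stand-ins for implementation parameters.
DECAY, THRESHOLD, MAX_LEVEL = 1800, 10, 32

class RateLimitContext:
    def __init__(self):
        self.level = 0        # current puzzle difficulty level
        self.count = 0        # requests seen inside the current window
        self.last = 0.0       # timestamp of the last request

    def on_request(self, now=None):
        now = now if now is not None else time.time()
        idle = now - self.last
        if idle >= DECAY:
            # decrease difficulty by one for each full DECAY period idle
            self.level = max(0, self.level - int(idle // DECAY))
            self.count = 0
        else:
            self.count += 1
            if self.count > THRESHOLD:
                # too many requests inside the window: raise the difficulty
                self.count = 0
                self.level = min(MAX_LEVEL, self.level + 1)
        self.last = now
        return self.level

contexts = {}   # userid -> RateLimitContext, one per stored record

def difficulty_for(userid: bytes, now=None) -> int:
    ctx = contexts.setdefault(userid, RateLimitContext())
    return ctx.on_request(now)
```

Since the store cannot distinguish correct from incorrect inputs, the difficulty rises for legitimate bursts too; the decay term is what lets honest users recover over time.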

Proof-of-Work client puzzles

Another approach could be to require the client to solve a small puzzle before the Sphinx server processes any requests.

It must be noted that - in the case of an online bruteforce attack - the client must already compute one elliptic curve scalar multiplication before the request, and one scalar multiplication, one modular scalar inversion and one argon2i password hash after receiving the response from the sphinx server, while the Sphinx server only needs to do one scalar multiplication; thus the server load is already smaller than the client load.

An important aspect is that the cost to verify the client puzzle must be negligible, while solving the puzzle must be hard. The Equihash protocol[1] seems to be a suitable candidate, since it can be tuned to various difficulties and also provides memory hardness.

[1] https://eprint.iacr.org/2015/946

Equihash client puzzles can be applied to requests based on IP addresses or userids, with a dynamic difficulty based on the number of accesses within a certain time-window.

An open question remains whether to prohibit offline precomputation of equihash puzzles or not. Precomputation could be prevented by the Sphinx storage providing a nonce to a rate-limited client. The drawback is that this nonce needs to be preserved by the Sphinx storage for the duration of the connection, which adds one extra round-trip to the protocol and ties up one worker process, possibly leading to quick resource exhaustion. However it would allow the server to abort any request where the puzzle is not solved in time. An alternative is the non-interactive approach, where the puzzle seed is the session transcript - consisting of the userid and the blinded password - together with a fresh timestamp; this approach requires neither extra round-trips nor state at the Sphinx storage, but it allows an attacker to precompute puzzles.
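The non-interactive variant can be sketched as follows, using a simple blake2b partial-preimage puzzle as a stand-in for Equihash (which additionally provides memory hardness): the puzzle seed is the transcript plus a fresh timestamp, so the server keeps no state, at the cost of permitting precomputation. The function names and the `bits` difficulty parameter are illustrative.

```python
import hashlib, struct, time

# Sketch of a non-interactive client puzzle; blake2b partial preimages
# stand in for Equihash, which would additionally give memory hardness.
def make_seed(userid: bytes, blinded_pwd: bytes, now=None) -> bytes:
    # seed = transcript (userid || blinded password) || fresh timestamp
    ts = struct.pack('>I', int(now if now is not None else time.time()))
    return userid + blinded_pwd + ts

def solve(seed: bytes, bits: int) -> bytes:
    # find a nonce so that blake2b(seed || nonce) has `bits` leading zero bits
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        n = struct.pack('>Q', nonce)
        h = hashlib.blake2b(seed + n, digest_size=32).digest()
        if int.from_bytes(h, 'big') < target:
            return n
        nonce += 1

def verify(seed: bytes, nonce: bytes, bits: int) -> bool:
    # verification is a single hash, negligible compared to solving
    h = hashlib.blake2b(seed + nonce, digest_size=32).digest()
    return int.from_bytes(h, 'big') < (1 << (256 - bits))
```

The server only re-derives the seed from the request and the embedded timestamp and calls verify; freshness of the timestamp is what bounds how far ahead an attacker's precomputation can reach.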

Equihash puzzle-wrapped Sphinx

The current implementation wraps our Extended Sphinx protocol in the rate-limiting protocol in the following way.

  1. The server has three configuration settings that affect the speed at which the rate-limiting gets more difficult or easier:
    • rl_decay: decrease ratelimiting difficulty for each full rl_decay seconds passed without any requests coming in.
    • rl_threshold: increase difficulty after rl_threshold attempts if not decaying
    • rl_gracetime: when checking freshness of puzzle solution, allow this extra gracetime in addition to the hardness max solution time

    The server also has a private puzzle key, with which it signs puzzles using a keyed blake2b hash.

  2. All operations - except create, which makes no sense to bruteforce - are wrapped in the rate-limiting protocol.
  3. A client prepares their Extended Sphinx request and, if it is not a create operation, prepends it with a 0x5a byte, which requests the server to respond with a challenge. This is sent to the sphinx oracle. (Note that create requests do not have a 0x5a prefix and are handled directly.)
  4. The server, recognizing the 0x5a prefix as a rate-limiting puzzle request, checks whether the userid in the extended sphinx request already has a rate-limiting context available, and either loads it or creates one with the easiest possible difficulty. A corrupted context is automatically set to the most difficult hardness.
  5. If a correct context was loaded the hardness is either decayed or (slowly) increased.

    If the previous rate-limiting request was recorded more than rl_decay seconds ago, the difficulty is decreased by one for each full rl_decay period that has passed since the last request.

    If the last recorded rate-limiting request was less than rl_decay seconds ago, we increase a counter in the context; if this counter exceeds rl_threshold, we reset the counter and increase the difficulty of the puzzle by one level.

  6. Based on the context difficulty level the puzzle is created as follows:

    The equihash parameters n and k (both unsigned 8-bit integers), chosen according to the context difficulty level, are concatenated with a 32-bit timestamp. The original request followed by this concatenation is then signed with the server's puzzle key using a keyed blake2b hash. The hash is appended to the concatenation, forming the challenge.

challenge = n || k || timestamp
sig = blake2b(key, request || challenge)
challenge = challenge || sig
  7. The challenge is sent to the client and the socket is closed.
  8. The client solves the equihash puzzle for the n and k parameters from the challenge, using the challenge concatenated with the original request as the seed.
  9. The client opens a new connection to the server (the previous connection was closed by the server at the end of step 7) and sends the following message:
'\xa5' || challenge || request || solution
  10. The server, recognizing the 0xa5 prefix, first reads the challenge and the original request. The signature over the request and challenge is verified; the server aborts if this does not succeed.
  11. The server verifies that the timestamp in the challenge is not older than a difficulty-dependent timeout plus the configured rl_gracetime. These timeouts were measured as the average time to solve the challenge on a Raspberry Pi 1 - except for the puzzles requiring more than 256MB of RAM, whose timeouts are extrapolated from the measurements that fit into this memory. If the timestamp is older than the timeout plus the gracetime, the server aborts.
  12. The server then reads the solution from the network and verifies it; if the verification fails, the server aborts.
  13. The server hands the original request over to the extended sphinx protocol handler.
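The challenge construction and the server-side signature and freshness checks above can be sketched as follows. GRACETIME and the per-difficulty TIMEOUT table are hypothetical stand-ins for rl_gracetime and the measured solution timeouts, and the Equihash solution verification itself is elided.

```python
import hashlib, struct, time

# Hedged sketch of the challenge layout (n || k || timestamp || sig) and
# the server-side checks; parameter values are illustrative only.
GRACETIME = 10                    # stand-in for rl_gracetime, in seconds
TIMEOUT = {0: 1, 1: 2, 2: 4}      # hypothetical per-difficulty timeouts (s)

def make_challenge(key: bytes, request: bytes, n: int, k: int, now=None) -> bytes:
    ts = int(now if now is not None else time.time())
    challenge = struct.pack('>BBI', n, k, ts)        # n || k || timestamp
    # keyed blake2b over request || challenge acts as the puzzle signature
    sig = hashlib.blake2b(request + challenge, key=key).digest()
    return challenge + sig                           # challenge || sig

def check_challenge(key: bytes, request: bytes, blob: bytes, level: int, now=None) -> bool:
    challenge, sig = blob[:6], blob[6:]
    # 1) verify the keyed-hash signature over request || challenge
    if hashlib.blake2b(request + challenge, key=key).digest() != sig:
        return False
    # 2) verify freshness: timestamp within timeout + gracetime
    n, k, ts = struct.unpack('>BBI', challenge)
    now = int(now if now is not None else time.time())
    return now - ts <= TIMEOUT[level] + GRACETIME
```

Because the challenge carries its own keyed-hash signature, the server does not need to remember issued challenges between the two connections; the timestamp check alone bounds replay.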