About this documentation
This documentation is intended to give an overview of how the matrix-authentication-service
(MAS) works, both from an admin perspective and from a developer perspective.
MAS is an OAuth 2.0 and OpenID Provider server for Matrix. It has been created to support the migration of Matrix to an OpenID Connect (OIDC) based authentication layer as per MSC3861.
The documentation itself is built using mdBook. A hosted version is available at https://element-hq.github.io/matrix-authentication-service/.
How the documentation is organized
This documentation has four main sections:

- The installation guide will guide you through the process of setting up the `matrix-authentication-service` on your own infrastructure.
- The topics section goes into more detail about how the service works, such as the policy engine and how authorization sessions are managed.
- The reference documentation covers configuration options, the Admin API, the scopes supported by the service, and the command line interface.
- The developer documentation is intended for people who want to contribute to the project.
Planning the installation
This part of the documentation goes through installing the service, the important parts of the configuration file, and how to run the service.
Before going through the installation, it is important to understand the different components of an OIDC-native Matrix homeserver, and how they interact with each other. The authentication service is meant to complement the homeserver, replacing its internal authentication mechanism.
Making a homeserver deployment OIDC-native radically shifts the authentication model: the homeserver is no longer responsible for managing user accounts and sessions. The authentication service becomes the source of truth for user accounts and access tokens, and the homeserver only verifies the validity of the tokens it receives through the service.
At the time of writing, the authentication service is meant to be run on a standalone domain name (e.g. `auth.example.com`), and the homeserver on another (e.g. `matrix.example.com`).
The authentication service's domain will be user-facing as part of the authentication flow.
When a client initiates an authentication flow, it will discover the authentication service through the deployment's `.well-known/matrix/client` endpoint.
This file will refer to an `issuer`, which is the canonical name of the authentication service instance.
From that issuer, the client will discover the rest of the endpoints by calling the `[issuer]/.well-known/openid-configuration` endpoint.
By default, the `issuer` will match the root domain where the service is deployed (e.g. `https://auth.example.com/`), but it can be configured to be different.
An example setup could look like this:

- The deployment domain is `example.com`, so Matrix IDs look like `@user:example.com`
- The issuer chosen is `https://example.com/`
- The homeserver is deployed on `matrix.example.com`
- The authentication service is deployed on `auth.example.com`
- Calling `https://example.com/.well-known/matrix/client` returns the following JSON:

  {
    "m.homeserver": {
      "base_url": "https://matrix.example.com"
    },
    "org.matrix.msc2965.authentication": {
      "issuer": "https://example.com/",
      "account": "https://auth.example.com/account"
    }
  }
- Calling `https://example.com/.well-known/openid-configuration` returns a JSON document similar to the following:

  {
    "issuer": "https://example.com/",
    "authorization_endpoint": "https://auth.example.com/authorize",
    "token_endpoint": "https://auth.example.com/oauth2/token",
    "jwks_uri": "https://auth.example.com/oauth2/keys.json",
    "registration_endpoint": "https://auth.example.com/oauth2/registration",
    "//": "..."
  }
With the installation planned, it is time to go through the installation and configuration process. The first section focuses on installing the service.
Installation
Pre-built binaries
Pre-built binaries can be found attached to each release, for Linux on both `x86_64` and `aarch64` architectures.
Each archive contains:
- the `mas-cli` binary
- assets needed for running the service, including:
  - `share/assets/`: the built frontend assets
  - `share/manifest.json`: the manifest for the frontend assets
  - `share/policy.wasm`: the built OPA policies
  - `share/templates/`: the default templates
  - `share/translations/`: the default translations
The location of all these assets can be overridden in the configuration file.
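As an illustrative sketch, overriding those locations could look like the following, using the option names referenced in the "Running the service" section further down (paths are placeholders, and exact keys may differ by version; check the configuration file reference):

```yaml
# Illustrative paths; adjust to wherever the archive was extracted
templates:
  path: /path/to/mas/share/templates
  assets_manifest: /path/to/mas/share/manifest.json
policy:
  path: /path/to/mas/share/policy.wasm
```

The frontend assets path is set on the `assets` resource of the HTTP listener, as shown in the configuration file reference.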
Example shell commands to download and extract the `mas-cli` binary:
ARCH=x86_64 # or aarch64
OS=linux
VERSION=latest # or a specific version, like "v0.1.0"
# URL to the right archive
# Note: for a pinned version, GitHub serves release assets at
# releases/download/${VERSION}/…; the path below works as-is for "latest"
URL="https://github.com/element-hq/matrix-authentication-service/releases/${VERSION}/download/mas-cli-${ARCH}-${OS}.tar.gz"
# Create a directory and extract the archive in it
mkdir -p /path/to/mas
curl -sL "$URL" | tar xzC /path/to/mas
# This should display the help message
/path/to/mas/mas-cli --help
Using the Docker image
A pre-built Docker image is available here: `ghcr.io/element-hq/matrix-authentication-service:latest`

The `latest` tag is built using the latest release.
The `main` tag is built from the `main` branch, and each commit on the `main` branch is also tagged with a stable `sha-<commit sha>` tag.
The image can also be built from the source:
- Get the source:

  git clone https://github.com/element-hq/matrix-authentication-service.git
  cd matrix-authentication-service

- Build the image:

  docker build -t mas .
Building from the source
Building from the source requires:
- The latest stable Rust toolchain
- Node.js (version 18 or later) and npm
- The Open Policy Agent binary (or, alternatively, Docker)
- Get the source:

  git clone https://github.com/element-hq/matrix-authentication-service.git
  cd matrix-authentication-service

- Build the frontend:

  cd frontend
  npm ci
  npm run build
  cd ..

  This will produce a `frontend/dist` directory containing the built frontend assets. This folder, along with the `frontend/dist/manifest.json` file, can be relocated, as long as the configuration file is updated accordingly.

- Build the Open Policy Agent policies:

  cd policies
  make
  cd ..

  OR, if you don't have `opa` installed and want to build through the OPA Docker image:

  cd policies
  make DOCKER=1
  cd ..

  This will produce a `policies/policy.wasm` file containing the built OPA policies. This file can be relocated, as long as the configuration file is updated accordingly.

- Compile the CLI:

  cargo build --release

- Grab the built binary:

  cp ./target/release/mas-cli ~/.local/bin # Copy the binary somewhere in $PATH
  mas-cli --help # Should display the help message
Next steps
The service needs some configuration to work, including randomly generated private keys and secrets. Follow the configuration guide to configure the service.
General configuration
Initial configuration generation
The service needs a few unique secrets and keys to work. It mainly includes:
- the various signing keys referenced in the `secrets.keys` section
- the encryption key used to encrypt fields in the database and cookies, set in the `secrets.encryption` section
- a shared secret between the service and the homeserver, set in the `matrix.secret` section
Although it is possible to generate these secrets manually, it is strongly recommended to use the `config generate` command to generate a configuration file with unique secrets and keys.
mas-cli config generate > config.yaml
If you're using the Docker container, the `mas-cli` command can be invoked with `docker run`:
docker run ghcr.io/element-hq/matrix-authentication-service config generate > config.yaml
This applies to all of the `mas-cli` commands in this document.
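For commands that read the configuration file, the file needs to be shared with the container; a sketch (mount path illustrative):

```sh
# Mount the local config.yaml into the container and check it
docker run --rm \
  -v "$(pwd)/config.yaml:/config.yaml:ro" \
  ghcr.io/element-hq/matrix-authentication-service \
  config check --config=/config.yaml
```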
Note: The generated configuration file is very extensive, and contains the default values for all the configuration options. This will be made easier to read in the future, but in the meantime, it is recommended to strip untouched options from the configuration file.
Using and inspecting the configuration file
When using the `mas-cli`, multiple configuration files can be loaded, with the following rules:

- If the `--config` option is specified, possibly multiple times, load the files at the specified paths, relative to the current working directory
- If not, load the files specified in the `MAS_CONFIG` environment variable, if set, separated by `:` and relative to the current working directory
- If not, load the file at `config.yaml` in the current working directory
The validity of the configuration file can be checked using the `config check` command:
# This will read both the `first.yaml` and `second.yaml` files
mas-cli config check --config=first.yaml --config=second.yaml
# This will also read both the `first.yaml` and `second.yaml` files
MAS_CONFIG=first.yaml:second.yaml mas-cli config check
# This will only read the `config.yaml` file
mas-cli config check
To help understand what the resulting configuration looks like after merging all the configuration files, the `config dump` command can be used:
mas-cli config dump
Configuration schema
The configuration file is validated against a JSON schema, which can be found here. Many tools in text editors can use this schema to provide autocompletion and validation.
Syncing the configuration file with the database
Some sections of the configuration file need to be synced to the database every time the configuration file is updated.
This includes the `clients` and `upstream_oauth2` sections.
The configuration is synced by default on startup, and can be manually synced using the `config sync` command.
By default, this will only add new clients and upstream OAuth providers and update existing ones, but will not remove entries that were removed from the configuration file.
To do so, use the `--prune` option:
mas-cli config sync --prune
Next step
After generating the configuration file, the next step is to set up a database.
Database configuration
The service uses a PostgreSQL database to store all of its state.
Although it may be possible to run with earlier versions, it is recommended to use PostgreSQL 13 or later.
The connection to the database is configured in the `database` section of the configuration file.
A warning about database pooling software
MAS must not be connected to a database pooler (such as pgBouncer or pgCat) when it is configured in transaction pooling mode. This is because MAS uses advisory locks, which are not compatible with transaction pooling.
You should instead configure such poolers in session pooling mode.
Set up a database
You will need to create a dedicated PostgreSQL database for the service. The database can run on the same server as the service, or on a dedicated host. The recommended setup for this database is to create a dedicated role and database for the service.
Assuming your PostgreSQL database user is called `postgres`, first authenticate as the database user with:
su - postgres
# Or, if your system uses sudo to get administrative rights
sudo -u postgres bash
Then, create a PostgreSQL user and a database with:
# this will prompt for a password for the new user
createuser --pwprompt mas_user
createdb --owner=mas_user mas
The above will create a user called `mas_user` with a password of your choice, and a database called `mas` owned by the `mas_user` user.
Service configuration
Once the database is created, the service needs to be configured to connect to it.
Edit the `database` section of the configuration file to match the database just created:
database:
# Full connection string as per
# https://www.postgresql.org/docs/13/libpq-connect.html#id-1.7.3.8.3.6
uri: postgres://<user>:<password>@<host>/<database>
# -- OR --
# Separate parameters
host: <host>
port: 5432
username: <user>
password: <password>
database: <database>
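For example, with the `mas_user` role and `mas` database created in the previous step, the section could look like the following sketch (password illustrative):

```yaml
database:
  uri: postgres://mas_user:changeme@localhost/mas
```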
Database migrations
The service manages the database schema with embedded migrations.
Those migrations are run automatically when the service starts, but it is also possible to run them manually.
This is done using the `database migrate` command:
mas-cli database migrate
Next steps
Once the database is up, the remaining steps are to:
- Set up the connection to the homeserver (recommended)
- Set up email sending (optional)
- Configure a reverse proxy (optional)
- Run the service
Homeserver configuration
The `matrix-authentication-service` is designed to be run alongside a Matrix homeserver.
It currently only supports Synapse, through the experimental OAuth delegation feature.
The authentication service needs to be able to call the Synapse admin API to provision users through a shared secret, and Synapse needs to be able to call the service to verify access tokens using the OAuth 2.0 token introspection endpoint.
Provision a client for the Homeserver to use
In the `clients` section of the configuration file, add a new client with the following properties:

- `client_id`: a unique identifier for the client. It must be a valid ULID, and it happens that `0000000000000000000SYNAPSE` is a valid ULID.
- `client_auth_method`: set to `client_secret_basic`. Other methods are possible, but this is the easiest to set up.
- `client_secret`: a shared secret used for the homeserver to authenticate
clients:
- client_id: 0000000000000000000SYNAPSE
client_auth_method: client_secret_basic
client_secret: "SomeRandomSecret"
Don't forget to sync the configuration file with the database after adding the client, using the `config sync` command.
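For example:

```sh
mas-cli config sync
```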
Configure the connection to the homeserver
In the `matrix` section of the configuration file, add the following properties:

- `homeserver`: corresponds to the `server_name` in the Synapse configuration file
- `secret`: a shared secret the service will use to call the homeserver admin API
- `endpoint`: the URL on which the homeserver is accessible from the service
matrix:
homeserver: localhost:8008
secret: "AnotherRandomSecret"
endpoint: "http://localhost:8008"
Configure the homeserver to delegate authentication to the service
Set up the delegated authentication feature in the `experimental_features` section of the Synapse configuration:
experimental_features:
msc3861:
enabled: true
# Synapse will call `{issuer}/.well-known/openid-configuration` to get the OIDC configuration
issuer: http://localhost:8080/
# Matches the `client_id` in the auth service config
client_id: 0000000000000000000SYNAPSE
# Matches the `client_auth_method` in the auth service config
client_auth_method: client_secret_basic
# Matches the `client_secret` in the auth service config
client_secret: "SomeRandomSecret"
# Matches the `matrix.secret` in the auth service config
admin_token: "AnotherRandomSecret"
# URL to advertise to clients where users can self-manage their account
account_management_url: "http://localhost:8080/account"
Set up the compatibility layer
The service exposes a compatibility layer to allow legacy clients to authenticate using the service. This works by exposing a few Matrix endpoints that should be proxied to the service.
The following Matrix Client-Server API endpoints need to be handled by the authentication service:

- `/_matrix/client/*/login`
- `/_matrix/client/*/logout`
- `/_matrix/client/*/refresh`
See the reverse proxy configuration guide for more information.
Configuring a reverse proxy
Although the service can be exposed directly to the internet, including handling the TLS termination, many deployments will want to run a reverse proxy in front of the service.
In those configurations, the service should be configured to listen on `localhost` or on a Unix domain socket.
Example configuration
http:
public_base: https://auth.example.com/
listeners:
- name: web
resources:
- name: discovery
- name: human
- name: oauth
- name: compat
- name: graphql
- name: assets
binds:
# Bind on a local port
- host: localhost
port: 8080
# OR bind on a Unix domain socket
#- socket: /var/run/mas.sock
# OR bind on a systemd socket
#- fd: 0
# kind: tcp # or unix
# Optional: use the PROXY protocol
#proxy_protocol: true
Base nginx configuration
A basic configuration for `nginx`, which proxies traffic to the service, would look like this:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name auth.example.com;
ssl_certificate path/to/fullchain.pem;
ssl_certificate_key path/to/privkey.pem;
location / {
proxy_http_version 1.1;
proxy_pass http://localhost:8080;
# OR via the Unix domain socket
#proxy_pass http://unix:/var/run/mas.sock;
# Forward the client IP address
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# or, using the PROXY protocol
#proxy_protocol on;
}
}
Compatibility layer
For the compatibility layer, the following endpoints need to be proxied to the service:
- `/_matrix/client/*/login`
- `/_matrix/client/*/logout`
- `/_matrix/client/*/refresh`
For example, a nginx configuration could look like:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name matrix.example.com;
# Forward to the auth service
location ~ ^/_matrix/client/(.*)/(login|logout|refresh) {
proxy_http_version 1.1;
proxy_pass http://localhost:8080;
# OR via the Unix domain socket
#proxy_pass http://unix:/var/run/mas.sock;
# Forward the client IP address
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# or, using the PROXY protocol
#proxy_protocol on;
}
# Forward to Synapse
# as per https://element-hq.github.io/synapse/latest/reverse_proxy.html#nginx
location ~ ^(/_matrix|/_synapse/client) {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
client_max_body_size 50M;
proxy_http_version 1.1;
}
}
Preserve the client IP
For rate-limiting and logging purposes, MAS needs to know the client IP address, which can be lost when using a reverse proxy. There are two ways to preserve the client IP address:
`X-Forwarded-For` header

MAS can infer the client IP address from the `X-Forwarded-For` header.
It will trust the value of this header only if the request comes from a trusted reverse proxy.
The range of IPs that can be trusted is configured using the `trusted_proxies` configuration option, which defaults to the private IP ranges:
http:
trusted_proxies:
- 192.168.0.0/16
- 172.16.0.0/12
    - 10.0.0.0/8
    - 127.0.0.0/8
- fd00::/8
- ::1/128
With nginx, this can be achieved by setting the `proxy_set_header` directive to `X-Forwarded-For $proxy_add_x_forwarded_for`.
Proxy protocol
MAS supports the PROXY protocol to preserve the client IP address.
To enable it, set the `proxy_protocol` option on the listener:
http:
listeners:
- name: web
resources:
- name: discovery
- name: human
- name: oauth
- name: compat
- name: graphql
- name: assets
binds:
- address: "[::]:8080"
proxy_protocol: true
With nginx, this can be achieved by setting the `proxy_protocol` directive to `on` in the `location` block.
Serve assets directly
To avoid unnecessary round-trips, the assets can be served directly by nginx, and the `assets` resource can then be removed from the service configuration.
http:
listeners:
- name: web
resources:
- name: discovery
- name: human
- name: oauth
- name: compat
- name: graphql
# MAS doesn't need to serve the assets anymore
#- name: assets
binds:
- address: "[::]:8080"
proxy_protocol: true
Make sure the assets directory served by nginx is up to date.
server {
# --- SNIP ---
location / {
# --- SNIP ---
}
# Make nginx serve the assets directly
location /assets/ {
    alias /path/to/share/assets/;
# Serve pre-compressed assets
gzip_static on;
# With the ngx_brotli module installed
# https://github.com/google/ngx_brotli
#brotli_static on;
# Cache assets for a year
expires 365d;
}
}
.well-known configuration
A `.well-known/matrix/client` file needs to be served to allow clients to discover the authentication service.
If no `.well-known/matrix/client` file is currently served, then this will need to be enabled.
If the homeserver is Synapse and is already serving this file, then the correct values will be included once the homeserver is configured to use MAS.
If the `.well-known` is hosted elsewhere, then `org.matrix.msc2965.authentication` entries need to be included, similar to the following:
{
"m.homeserver": {
"base_url": "https://matrix.example.com"
},
"org.matrix.msc2965.authentication": {
"issuer": "https://example.com/",
"account": "https://auth.example.com/account"
}
}
For more context on what the correct values are, see here.
Configure an upstream SSO provider
The authentication service supports using an upstream OpenID Connect provider to authenticate its users. Multiple providers can be configured, and can be used in conjunction with the local password database authentication.
Any OIDC compliant provider should work with the service as long as it supports the authorization code flow.
Note that the service does not support other SSO protocols such as SAML, and there is no plan to support them in the future. A deployment which requires SAML or LDAP-based authentication should use a service like Dex to bridge between the SAML provider and the authentication service.
General configuration
Configuration of upstream providers is done in the `upstream_oauth2` section of the configuration file, which has a `providers` list.
Additions and changes to this section are synced with the database on startup.
Removals need to be applied using the `mas-cli config sync --prune` command.
An exhaustive list of all the parameters is available in the configuration file reference.
The general configuration usually goes as follows:
- determine a unique `id` for the provider, which will be used as a stable identifier between the configuration file and the database. This `id` must be a ULID, and can be generated using online tools like https://www.ulidtools.com
- create an OAuth 2.0/OIDC client on the provider's side, using the following parameters:
  - `redirect_uri`: `https://<auth-service-domain>/upstream/callback/<id>`
  - `response_type`: `code`
  - `response_mode`: `query`
  - `grant_type`: `authorization_code`
- fill the `upstream_oauth2` section of the configuration file with the following parameters (a minimal entry is sketched after this list):
  - `providers`:
    - `id`: the previously generated ULID
    - `client_id`: the client ID of the OAuth 2.0/OIDC client given by the provider
    - `client_secret`: the client secret of the OAuth 2.0/OIDC client given by the provider
    - `issuer`: the issuer URL of the provider
    - `scope`: the scope to request from the provider. `openid` is usually required, and `profile` and `email` are recommended to import a few user attributes.
- set up user attributes mapping to automatically fill the user profile with data from the provider. See the user attributes mapping section for more details.
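Putting those parameters together, a minimal provider entry could look like the following sketch (the ULID, issuer URL and credentials are placeholders):

```yaml
upstream_oauth2:
  providers:
    - id: 01ARZ3NDEKTSV4RRFFQ69G5FAV # a generated ULID
      issuer: "https://provider.example.com/"
      client_id: "<client-id>"
      client_secret: "<client-secret>"
      scope: "openid profile email"
```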
User attributes mapping
The authentication service supports importing the following user attributes from the provider:
- The localpart/username (e.g. `localpart` in `@localpart:example.com`)
- The display name
- An email address
For each of those attributes, administrators can configure a mapping using the claims provided by the upstream provider. They can also configure what should be done for each of those attributes, which can be one of:

- `ignore`: ignore the attribute, and let the user fill it manually
- `suggest`: suggest the attribute to the user, but let them opt out of importing it
- `force`: automatically import the attribute, but don't fail if it is not provided by the provider
- `require`: automatically import the attribute, and fail if it is not provided by the provider
A Jinja2 template is used as the mapping for each attribute. The template currently has one `user` variable, which is an object with the claims obtained through the `id_token` given by the provider.
The following default templates are used:
- `localpart`: `{{ user.preferred_username }}`
- `displayname`: `{{ user.name }}`
- `email`: `{{ user.email }}`
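As a sketch, a `claims_imports` block combining these actions with the default templates could look like this (it goes under a provider entry; the choice of actions is illustrative):

```yaml
claims_imports:
  localpart:
    action: require
    template: "{{ user.preferred_username }}"
  displayname:
    action: suggest
    template: "{{ user.name }}"
  email:
    action: suggest
    template: "{{ user.email }}"
```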
Multiple providers behaviour
Multiple authentication methods can be configured at the same time, in which case the authentication service will let the user choose which one to use.
This is true if both the local password database and an upstream provider are configured, or if multiple upstream providers are configured.
In such cases, the `human_name` parameter of the provider configuration is used to display a human-readable name for the provider, and the `brand_name` parameter is used to show a logo for well-known providers.
If there is only one upstream provider configured and the local password database is disabled (`passwords.enabled` is set to `false`), the authentication service will automatically trigger an authorization flow with this provider, as in the sketch below.
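For example, disabling the local password database so that the single configured provider is triggered automatically:

```yaml
passwords:
  enabled: false
```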
Sample configurations
This section contains sample configurations for popular OIDC providers.
Apple
Sign-in with Apple uses a special, non-standard way of authenticating clients, which requires a special configuration.
upstream_oauth2:
providers:
    - id: 01JAYS74TCG3BTWKADN5Q4518C
      client_id: "<Service ID>" # TO BE FILLED
scope: "openid name email"
response_mode: "form_post"
token_endpoint_auth_method: "sign_in_with_apple"
sign_in_with_apple:
private_key: |
# Content of the PEM-encoded private key file, TO BE FILLED
team_id: "<Team ID>" # TO BE FILLED
key_id: "<Key ID>" # TO BE FILLED
claims_imports:
localpart:
action: ignore
displayname:
action: suggest
# SiWA passes down the user info as query parameters in the callback
# which is available in the extra_callback_parameters variable
template: |
{%- set user = extra_callback_parameters["user"] | from_json -%}
{{- user.name.firstName }} {{ user.name.lastName -}}
email:
action: suggest
Authelia
These instructions assume that you have already enabled the OIDC provider support in Authelia.
Add a client for MAS to Authelia's configuration.yaml
(see the Authelia OIDC documentation for full details):
identity_providers:
oidc:
clients:
- client_id: "<client-id>" # TO BE FILLED
client_name: Matrix
client_secret: "<client-secret>" # TO BE FILLED
public: false
redirect_uris:
- https://<mas-fqdn>/upstream/callback/<id>
scopes:
- openid
- groups
- profile
- email
grant_types:
- 'refresh_token'
- 'authorization_code'
response_types:
- code
Authentication service configuration:
upstream_oauth2:
providers:
- id: <id>
human_name: Authelia
issuer: "https://<authelia-fqdn>" # TO BE FILLED W/O ANY TRAILING SLASHES
client_id: "<client-id>" # TO BE FILLED
client_secret: "<client-secret>" # TO BE FILLED
token_endpoint_auth_method: client_secret_basic
scope: "openid profile email"
discovery_mode: insecure
claims_imports:
localpart:
action: require
template: "{{ user.preferred_username }}"
displayname:
action: suggest
template: "{{ user.name }}"
email:
action: suggest
template: "{{ user.email }}"
set_email_verification: always
Authentik
Authentik is an open-source IdP solution.
- Create a provider in Authentik, with type OAuth2/OpenID.
- The parameters are:
- Client Type: Confidential
- Redirect URIs:
https://<auth-service-domain>/upstream/callback/<id>
- Create an application for the authentication service in Authentik and link it to the provider.
- Note the slug of your application, Client ID and Client Secret.
Authentication service configuration:
upstream_oauth2:
providers:
- id: 01HFRQFT5QFMJFGF01P7JAV2ME
human_name: Authentik
issuer: "https://<authentik-domain>/application/o/<app-slug>/" # TO BE FILLED
client_id: "<client-id>" # TO BE FILLED
client_secret: "<client-secret>" # TO BE FILLED
scope: "openid profile email"
claims_imports:
localpart:
action: require
template: "{{ user.preferred_username }}"
displayname:
action: suggest
template: "{{ user.name }}"
email:
action: suggest
template: "{{ user.email }}"
set_email_verification: always
Facebook

- You will need a Facebook developer account. You can register for one here.
- On the apps page of the developer console, click "Create App", and choose "Allow people to log in with their Facebook account".
- Once the app is created, add "Facebook Login" and choose "Web". You don't need to go through the whole form here.
- In the left-hand menu, open "Use cases" > "Authentication and account creation" > "Customize" > "Settings"
- Add `https://<auth-service-domain>/upstream/callback/<id>` as an OAuth Redirect URL.
- In the left-hand menu, open "App settings" > "Basic". Here you can copy the "App ID" and "App Secret" for use below.
Authentication service configuration:
upstream_oauth2:
providers:
- id: "01HFS3WM7KSWCEQVJTN0V9X1W6"
issuer: "https://www.facebook.com"
human_name: "Facebook"
brand_name: "facebook"
discovery_mode: disabled
pkce_method: always
authorization_endpoint: "https://facebook.com/v11.0/dialog/oauth/"
token_endpoint: "https://graph.facebook.com/v11.0/oauth/access_token"
jwks_uri: "https://www.facebook.com/.well-known/oauth/openid/jwks/"
token_endpoint_auth_method: "client_secret_post"
client_id: "<app-id>" # TO BE FILLED
client_secret: "<app-secret>" # TO BE FILLED
scope: "openid"
claims_imports:
localpart:
action: ignore
displayname:
action: suggest
template: "{{ user.name }}"
email:
action: suggest
template: "{{ user.email }}"
set_email_verification: always
GitLab
- Create a new application.
- Add the `openid` scope. Optionally add the `profile` and `email` scopes if you want to import the user's name and email.
- Add this Callback URL: `https://<auth-service-domain>/upstream/callback/<id>`
Authentication service configuration:
upstream_oauth2:
providers:
- id: "01HFS67GJ145HCM9ZASYS9DC3J"
issuer: "https://gitlab.com"
human_name: "GitLab"
brand_name: "gitlab"
token_endpoint_auth_method: "client_secret_post"
client_id: "<client-id>" # TO BE FILLED
client_secret: "<client-secret>" # TO BE FILLED
scope: "openid profile email"
claims_imports:
displayname:
action: suggest
template: "{{ user.name }}"
localpart:
action: ignore
email:
action: suggest
template: "{{ user.email }}"
Google

- Set up a project in the Google API Console (see documentation)
- Add an "OAuth Client ID" for a Web Application under "Credentials"
- Add the following "Authorized redirect URI": `https://<auth-service-domain>/upstream/callback/<id>`
Authentication service configuration:
upstream_oauth2:
providers:
- id: 01HFS6S2SVAR7Y7QYMZJ53ZAGZ
human_name: Google
brand_name: "google"
issuer: "https://accounts.google.com"
client_id: "<client-id>" # TO BE FILLED
client_secret: "<client-secret>" # TO BE FILLED
scope: "openid profile email"
claims_imports:
localpart:
action: ignore
displayname:
action: suggest
template: "{{ user.name }}"
email:
action: suggest
template: "{{ user.email }}"
Keycloak
Follow the Getting Started Guide to install Keycloak and set up a realm.
- Click `Clients` in the sidebar and click `Create`
- Fill in the fields as below:

  | Field | Value |
  |---|---|
  | Client ID | `matrix-authentication-service` |
  | Client Protocol | `openid-connect` |

- Click `Save`
- Fill in the fields as below:

  | Field | Value |
  |---|---|
  | Client ID | `matrix-authentication-service` |
  | Enabled | `On` |
  | Client Protocol | `openid-connect` |
  | Access Type | `confidential` |
  | Valid Redirect URIs | `https://<auth-service-domain>/upstream/callback/<id>` |

- Click `Save`
- On the Credentials tab, update the fields:

  | Field | Value |
  |---|---|
  | Client Authenticator | `Client ID and Secret` |

- Click `Regenerate Secret`
- Copy the Secret
upstream_oauth2:
providers:
- id: "01H8PKNWKKRPCBW4YGH1RWV279"
issuer: "https://<keycloak>/realms/<realm>" # TO BE FILLED
token_endpoint_auth_method: client_secret_basic
client_id: "matrix-authentication-service"
client_secret: "<client-secret>" # TO BE FILLED
scope: "openid profile email"
claims_imports:
localpart:
action: require
template: "{{ user.preferred_username }}"
displayname:
action: suggest
template: "{{ user.name }}"
email:
action: suggest
template: "{{ user.email }}"
set_email_verification: always
Microsoft Azure Active Directory
Azure AD can act as an OpenID Connect Provider.
Register a new application under App registrations in the Azure AD management console.
The `RedirectURI` for your application should point to your authentication service instance: `https://<auth-service-domain>/upstream/callback/<id>`, where `<id>` is the same as in the config file.
Go to Certificates & secrets and register a new client secret. Make note of your Directory (tenant) ID as it will be used in the Azure links.
Authentication service configuration:
upstream_oauth2:
providers:
- id: "01HFRPWGR6BG9SAGAKDTQHG2R2"
human_name: Microsoft Azure AD
issuer: "https://login.microsoftonline.com/<tenant-id>/v2.0" # TO BE FILLED
client_id: "<client-id>" # TO BE FILLED
client_secret: "<client-secret>" # TO BE FILLED
scope: "openid profile email"
claims_imports:
localpart:
action: require
template: "{{ (user.preferred_username | split('@'))[0] }}"
displayname:
action: suggest
template: "{{ user.name }}"
email:
action: suggest
template: "{{ user.email }}"
set_email_verification: always
Rauthy
- Click `Clients` in the Rauthy Admin sidebar and click `Add new Client`
- Fill in the fields as below:

  | Field | Value |
  |---|---|
  | Client ID | `matrix-authentication-service` |
  | Client Name | `matrix-authentication-service` |
  | Redirect URI | `https://<auth-service-domain>/upstream/callback/<id>` |

- Set the client to be `Confidential`.
- Click `Save`
- Select the client you just created from the clients list.
- Enable the `authorization_code` and `refresh_token` grant types.
- Set the allowed scopes to `openid`, `profile`, and `email`.
- Set both Access Algorithm and ID Algorithm to `RS256`.
- Set the PKCE challenge method to `S256`.
- Click `Save`
- Copy the `Client ID` from the `Config` tab and the `Client Secret` from the `Secret` tab.
Authentication service configuration:
upstream_oauth2:
providers:
- id: "01JFFHK7HJF70YSYF753GEWVRP"
human_name: Rauthy
issuer: "https://<rauthy>/auth/v1" # TO BE FILLED
client_id: "<client-id>" # TO BE FILLED
client_secret: "<client-secret>" # TO BE FILLED
scope: "openid profile email"
claims_imports:
localpart:
action: ignore
displayname:
action: suggest
template: "{{ user.given_name }}"
email:
action: suggest
template: "{{ user.email }}"
To use a Rauthy-supported Ephemeral Client, use this JSON document:
{
"client_id": "https://path.to.this.json",
"redirect_uris": [
"https://your-app.com/callback"
],
"grant_types": [
"authorization_code",
"refresh_token"
],
"access_token_signed_response_alg": "RS256",
"id_token_signed_response_alg": "RS256"
}
Running the service
To fully function, the service needs to run two main components:
- An HTTP server
- A background worker
By default, the `mas-cli server` command will start both components.
It is possible to only run the HTTP server by setting the `--no-worker` option, and to run a background worker with the `mas-cli worker` command.
Both components are stateless, and can be scaled horizontally by running multiple instances of each.
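For example, to run the two components as separate processes:

```sh
# First process: HTTP server only
mas-cli server --no-worker

# Second process: the background worker
mas-cli worker
```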
Runtime requirements
Other than the binary, the service needs a few files to run:
- The templates, referenced by the `templates.path` configuration option
- The compiled policy, referenced by the `policy.path` configuration option
- The frontend assets, referenced by the `path` option of the `assets` resource in the `http.listeners` configuration section
- The frontend manifest file, referenced by the `templates.assets_manifest` configuration option
Be sure to check the installation instructions for more information on how to get these files, and make sure the configuration file is updated accordingly.
If you are using the docker image, everything is already included in the image at the right place, so in most cases you don't need to do anything.
If you are using the pre-built binaries, those files are shipped alongside them in the `share` directory.
The default configuration will look for them in the current working directory, meaning that you don't have to adjust the paths, as long as you run the service from the parent directory of the `share` directory.
Configure the HTTP server
The service can be configured to have multiple HTTP listeners, serving different resources.
See the `http.listeners` configuration section for more information.
The service needs to be aware of the public URL it is served on, regardless of the HTTP listeners configuration.
This is done using the `http.public_base` configuration option.
By default, the OIDC issuer advertised by the `/.well-known/openid-configuration` endpoint will be the same as the `public_base` URL, but it can be configured to be different.
Tweak the remaining configuration
A few configuration sections might still require some tweaking, including:
- `telemetry`: to set up metrics, tracing and Sentry crash reporting
- `email`: to set up email sending
- `passwords`: to enable/disable password authentication
- `account`: to configure which account management features are enabled
- `upstream_oauth2`: to configure upstream OAuth providers
Run the service
Once the configuration is done, the service can be started with the `mas-cli server` command:
mas-cli server
It is advised to run the service as a non-root user, using a tool like `systemd` to manage the service lifecycle.
Troubleshoot common issues
Once the service is running, it is possible to check its configuration using the `mas-cli doctor` command.
This should help diagnose common issues with the service configuration and deployment.
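For example:

```sh
mas-cli doctor
```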
Migrating an existing homeserver
One of the design goals of MAS has been to allow it to be used to migrate an existing homeserver to an OIDC-based architecture, specifically without requiring users to re-authenticate and while keeping non-OIDC clients working.
Features that are provided to support this include:
- Ability to import existing password hashes from Synapse
- Ability to import existing sessions and devices
- Ability to import existing access tokens linked to devices (i.e. not including short-lived admin puppeted access tokens)
- Ability to import existing upstream IdP subject ID mappings
- Provides a compatibility layer for legacy Matrix authentication
There will be tools to help with the migration process itself, but these aren't quite ready yet.
Preparing for the migration
The migration is non-trivial, so it is important to read through this guide, understand the steps involved, and make a plan before starting.
Get syn2mas
The easiest way to get `syn2mas` is through `npm`:
npm install -g @vector-im/syn2mas
Run the migration advisor
You can use the advisor mode of the `syn2mas` tool to identify extra configuration steps or issues with the configuration of the homeserver.
syn2mas --command=advisor --synapseConfigFile=homeserver.yaml
This will output `WARN` entries for any identified actions and `ERROR` entries for any issues that will prevent the migration from working.
Install and configure MAS alongside your existing homeserver
Follow the instructions in the installation guide to install MAS alongside your existing homeserver.
Local passwords
Synapse uses bcrypt as its password hashing scheme while MAS defaults to using the newer argon2id. You will have to configure the version 1 scheme as bcrypt for migrated passwords to work. It is also recommended that you keep argon2id as version 2 so that once users log in, their hashes will be updated to the newer recommended scheme.
Example passwords configuration:
passwords:
enabled: true
schemes:
- version: 1
algorithm: bcrypt
- version: 2
algorithm: argon2id
Map any upstream SSO providers
If you are using an upstream SSO provider, you will need to provision the upstream provider in MAS manually.
Each upstream provider will need to be given as an `--upstreamProviderMapping` command line option to the import tool.
Prepare the MAS database
Once the database is created, it still needs to have its schema created and synced with the configuration. This can be done with the following command:
mas-cli config sync
Do a dry-run of the import to test
syn2mas --command migrate --synapseConfigFile homeserver.yaml --masConfigFile config.yaml --dryRun
If no errors are reported then you can proceed to the next step.
Doing the migration
Having done the preparation, you can now proceed with the actual migration. Note that this will require downtime for the homeserver and is not easily reversible.
Backup your data
As with any migration, it is important to backup your data before proceeding.
Shutdown the homeserver
This is to ensure that no new sessions are created whilst the migration is in progress.
Configure the homeserver
Follow the instructions in the homeserver configuration guide to configure the homeserver to use MAS.
Do the import
Run `syn2mas` in non-dry-run mode:
syn2mas --command migrate --synapseConfigFile homeserver.yaml --masConfigFile config.yaml --dryRun false
Start up the homeserver
Start up the homeserver again with the new configuration.
Update or serve the .well-known
The `.well-known/matrix/client` file needs to be served as described here.
Policy engine
A set of actions are controlled by a generic policy engine. A decision of the policy engine is deterministically made based on three components:
- The policy itself
- A static configuration
- The action to be performed
The policy is an Open Policy Agent (OPA) policy compiled into WebAssembly. Matrix Authentication Service ships with a default policy which should be sufficient for most deployments. It can be replaced with a custom policy if needed, which can be useful to implement custom authorization logic without recompiling the service.
Actions
The policy engine mainly restricts three operations:
- User attributes, which includes user registration, user profile updates, and user password changes.
- Client registration, when an OAuth 2.0 dynamic client registration is requested.
- Authorization requests, when a client requests an access token.
Policies are only evaluated in user-facing contexts, and not in administrative contexts. As such, they usually can be bypassed through the admin API or the CLI if needed.
User attributes
The policy is evaluated in three different scenarios:
- `register.rego`: evaluated during user registration, either with password credentials or with an upstream OAuth 2.0 provider. This calls the `email.rego` and `password.rego` policies as well.
- `email.rego`: evaluated when a user adds a new email address to their account.
- `password.rego`: evaluated when a user changes their password.
Client registration
The policy (`client_registration.rego`) is evaluated when a client sends its metadata through the OAuth 2.0 dynamic client registration API.
By default, it enforces a set of strict rules to make sure clients provide enough information about themselves, with coherent URLs.
This is useful in production environments, but can be relaxed in development environments.
Authorization requests
The policy (`authorization_grant.rego`) is evaluated when a client requests an access token.
This only covers OAuth 2.0 sessions, not compatibility sessions.
It is evaluated for the authorization code grant, the client credentials grant and the device authorization grant.
This is probably the most interesting policy, as it defines which scope can be granted to which user and which client.
On evaluation, three main entities are available:
- details about the grant, such as the type of grant and the requested scopes
- the client making the request
- the user with their attributes (only for the authorization code grant and the device authorization grant)
The policy evaluation cannot modify the grant, only allow or deny it. Therefore the client must know in advance which scope they want to request.
This is an important concept to understand: what access a token has is stored in the session itself, therefore access to privileged scopes is only based on policy evaluation, not on user attributes.
If we take the Synapse admin API access as an example, the fact that an access token has admin API access doesn't depend on attributes on the user directly. Instead, it is during the creation of the session that:
- the client asks for the corresponding scope (e.g. `urn:synapse:admin:*`)
- the policy engine decides whether to grant it or not

The default policy shipped with the service does gate access to this scope based on a user attribute (`can_request_admin`), but this is not a requirement.
It does make reasoning about admin access more complicated compared to a simple boolean flag on the user like what Synapse does, but it also allows for more complex authorization logic. This is especially important as in the future it will make it possible to implement a more granular role-based access control system to fit more complex use cases.
To understand the authorization process and how sessions are created, refer to the authorization and sessions section.
Authorization and sessions
The main job of the authentication service is to grant access to resources to clients, and to let resources know who is accessing them. In less abstract terms, this means that the service is responsible for issuing access tokens and letting the homeserver (and other services) introspect those access tokens.
How access tokens work
In MAS, the access token is an opaque string, with metadata associated with it on the service side. An access token has:
- a subject, which is the user the token is issued for
- a list of scopes
- a client for which the token is issued
- a timeframe for which the token is valid
On a single token, metadata is immutable: it doesn't change over time. One exception is the validity of the token: the service may revoke a token before its expiration date.
A typical client will get a short-lived access token (valid 5 minutes) along with a refresh token. The refresh token can then be used to get a new access token without the user having to re-authenticate.
How Synapse behaves
When an incoming request is made to Synapse, it will introspect the access token through the Matrix Authentication Service. This is using a standard OAuth 2.0 introspection request (RFC 7662).
Out of this request, Synapse will care about the following:
- the `active` field, which tells whether the token is valid or not
- the `sub` field, which tells which user the token is issued for. This is an opaque string, and Synapse saves the mapping between the Matrix user ID and the subject of the token in its own database
- in case Synapse doesn't know the presented subject, it will look at the `username` field, which it will use as the localpart for the user as a fallback
- the `scope` field, which tells which scopes are granted to the token. More specifically, it will look for the following scopes:
  - `urn:matrix:org.matrix.msc2967.client:api:*`, which grants broad access to the whole Matrix C-S API
  - `urn:matrix:org.matrix.msc2967.client:device:AABBCC`, which encodes the Matrix device ID used by the client (`AABBCC` here)
  - `urn:synapse:admin:*`, which grants access to the Synapse admin API
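As an illustration, such an introspection request could look like the following sketch; the `/oauth2/introspect` path is an assumption based on the other OAuth 2.0 endpoints shown earlier, and the credentials are those of the client provisioned for the homeserver:

```sh
# Sketch of an RFC 7662 token introspection call (endpoint path assumed)
curl \
  -u "0000000000000000000SYNAPSE:SomeRandomSecret" \
  -d "token=<the-access-token>" \
  https://auth.example.com/oauth2/introspect
# A valid token yields a JSON body with the fields described above,
# e.g. "active", "sub", "username" and "scope"
```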
It's important to understand that when Synapse delegates authentication to MAS, Synapse no longer manages many user attributes, including the user's admin, locked, and deactivated status.
Compatibility sessions
In addition to OAuth 2.0 sessions, which we'll go into more detail about later, MAS also supports the legacy `/_matrix/client/v3/login` API.
This exists as a compatibility layer for clients that don't yet support OAuth 2.0, but it has some restrictions compared to the way those sessions behaved in Synapse.
When a client presents a compatibility access token to Synapse, MAS will make it look to Synapse as if the token had the following scopes:

- `urn:matrix:org.matrix.msc2967.client:api:*`
- `urn:matrix:org.matrix.msc2967.client:device:<device ID>`

This corresponds to the broad access to the Matrix C-S API and the device ID of the client, as one would expect from the legacy login API.
One important missing scope is `urn:synapse:admin:*`, which means that the client won't have access to the Synapse admin API.
This is the case even if the user has the `can_request_admin` attribute set to `true`, and this is by design: the legacy login API doesn't have a way to request specific scopes, and we don't want to grant admin access to all clients that have a compatibility session.
This was the case in the past with Synapse, as the admin status was set on the user itself, but this is no longer the case with MAS.
OAuth 2.0 sessions
Modern clients are expected to use OAuth 2.0 to authenticate with the homeserver. In OAuth 2.0/OIDC, there are multiple ways, called grants, to start an OAuth 2.0 session.
An OAuth 2.0 session has three important properties:
- the client, which is the application accessing the resource
- the user, which is the user for which the client is accessing the resource
- a set of scopes, which are the permissions granted to the client
There are two main ways to create a client in MAS:
- through the OAuth 2.0 Dynamic Client Registration Protocol (RFC 7591)
- statically defined in the configuration file
Authorized as a user or authorized as a client
OAuth 2.0 has an interesting concept where a session can be authorized not just as a user, but also as a client. This means an OAuth 2.0 session can be created without a user, and only with a client. It is useful for automated machine-to-machine communication, and is often referred to as "service accounts".
Synapse doesn't yet support this concept, and as such requesting any Synapse API, even the admin API, requires a user attached to the session.
This isn't the case with MAS' GraphQL API, which can be accessed with a client-only session: the API can be requested by a session which has the `urn:mas:graphql:*` and `urn:mas:admin` scopes without being backed by a user.
Supported authorization grants
MAS supports a few different authorization grants for OAuth 2.0 sessions. Whilst this section won't go into the technical details of how those grants work, it's important to understand what they are and what they are used for.
| Grant type | Entity | User interaction | Matrix C-S API | Synapse Admin API | MAS Admin API | MAS Internal GraphQL API |
|---|---|---|---|---|---|---|
| Authorization code | User | Same device | Yes | Yes | Yes | Yes |
| Device authorization | User | Other device | Yes | Yes | Yes | Yes |
| Client credentials | Client | None | No | No¹ | Yes | Yes |

¹ The Synapse admin API doesn't strictly require a user, but Synapse doesn't support client-only sessions yet. In the future, it will be possible to leverage the client credentials grant to access the Synapse admin API.
Authorization code grant
The authorization code grant (RFC 6749 section 4.1) is used to interactively log in the user on the same device as the client. This is the most common grant for most Matrix clients and is targeted at human end users.
The general idea is that the client (after registering itself) crafts an authorization URL that the user will visit in their web browser. The authentication service does whatever it needs to do to authenticate the user, and once the user is authenticated and consented to the access request, the service redirects the user back to the client with an authorization code. The client then exchanges this authorization code for an access token and a refresh token.
This grant is not meant for automation: it requires user interaction on the same device as where the client lives.
Device authorization grant
The device authorization grant (RFC 8628) is similar to the authorization code grant, but separates the user interaction from where the client lives.
A classic example of this grant is when a client is on a TV or a game console, where the user wouldn't want to enter their credentials on the device itself. Instead, the user is shown a code on the device, which they then enter on a different device (like a phone or a computer) to authenticate.
For Matrix, it has two main use cases:
- for CLI tools (or other constrained clients) which can't open a web browser or can't catch a redirect
- for a "login from another existing device" feature, like the "login via QR code" described in MSC4108
This grant isn't meant for automation either, as it still requires user interaction.
Client credentials grant
The client credentials grant (RFC 6749 section 4.4) is a bit special, as it lets a client authenticate as itself, without a user.
This has no meaning yet in the Matrix C-S API, but is useful for other APIs like the MAS GraphQL API.
It may also be used in the future as a foundation for a new Application Service API, replacing the current `hs_token`/`as_token` mechanism.
This works by presenting the client credentials to get back an access token. The simplest type of client credentials is a client ID and client secret pair, but MAS also supports client authentication with a JWT (RFC 7523), which is a robust way to authenticate clients without a shared secret.
Admin API
MAS provides a REST-like API for administrators to manage the service. This API is intended to build tools on top of MAS, and is only available to administrators.
Note: This Admin API is now the correct way for external tools to interact with MAS. External access to the Internal GraphQL API is deprecated and will be removed in a future release.
Enabling the API
The API isn't exposed by default, and must be added to either a public or a private HTTP listener.
It is considered safe to expose the API to the public, as access to it is gated by the `urn:mas:admin` scope.
To enable the API, tweak the `http.listeners` configuration section to add the `adminapi` resource:
http:
listeners:
- name: web
resources:
# Other public resources
- name: discovery
# …
- name: adminapi
binds:
- address: "[::]:8080"
# or to a separate, internal listener:
- name: internal
resources:
# Other internal resources
- name: health
- name: prometheus
# …
- name: adminapi
binds:
- host: localhost
port: 8081
Reference documentation
The API is documented using the OpenAPI specification. The API schema is available here. This schema can be viewed in tools like Swagger UI, available here.
If the admin API is enabled, MAS will also serve the specification at `/api/spec.json`, with a Swagger UI available at `/api/doc/`.
Authentication
All requests to the admin API are gated using access tokens obtained through OAuth 2.0 grants.
They must have the `urn:mas:admin` scope.
User-interactive tools
If the intent is to build admin tools where the administrator logs in themselves, interactive grants like the authorization code grant or the device authorization grant should be used.
In this case, whether the user can request admin access or not is defined by the `can_request_admin` attribute of the user.
To try it out in Swagger UI, a client can be defined statically in the configuration file like this:
clients:
- client_id: 01J44Q10GR4AMTFZEEF936DTCM
# For the authorization_code grant, Swagger UI uses the client_secret_post authentication method
client_auth_method: client_secret_post
client_secret: wie9oh2EekeeDeithei9Eipaeh2sohte
redirect_uris:
# The Swagger UI callback in the hosted documentation
- https://element-hq.github.io/matrix-authentication-service/api/oauth2-redirect.html
# The Swagger UI callback hosted by the service
- https://mas.example.com/api/doc/oauth2-redirect
Then, in Swagger UI, click on the "Authorize" button.
In the modal, enter the client ID and client secret in the `authorizationCode` section, select the `urn:mas:admin` scope, and click on the "Authorize" button.
Automated tools
If the intent is to build tools that are not meant to be used by humans, the client credentials grant should be used.
In this case, the client must be listed in the `policy.data.admin_clients` configuration option:
policy:
data:
admin_clients:
- 01J44QC8BCY7FCFM7WGHQGKMTJ
To try it out in Swagger UI, a client can be defined statically in the configuration file like this:
clients:
- client_id: 01J44QC8BCY7FCFM7WGHQGKMTJ
# For the client_credentials grant, Swagger UI uses the client_secret_basic authentication method
client_auth_method: client_secret_basic
client_secret: eequie6Oth4Ip2InahT5zuQu8OuPohLi
Then, in Swagger UI, click on the "Authorize" button.
In the modal, enter the client ID and client secret in the `clientCredentials` section, select the `urn:mas:admin` scope, and click on the "Authorize" button.
General API shape
The API takes inspiration from the JSON API specification for its request and response shapes.
Single resource
When querying a single resource, the response is generally shaped like this:
{
"data": {
"type": "type-of-the-resource",
"id": "unique-id-for-the-resource",
"attributes": {
"some-attribute": "some-value"
},
"links": {
"self": "/api/admin/v1/type-of-the-resource/unique-id-for-the-resource"
}
},
"links": {
"self": "/api/admin/v1/type-of-the-resource/unique-id-for-the-resource"
}
}
List of resources
When querying a list of resources, the response is generally shaped like this:
{
"meta": {
"count": 42
},
"data": [
{
"type": "type-of-the-resource",
"id": "unique-id-for-the-resource",
"attributes": {
"some-attribute": "some-value"
},
"links": {
"self": "/api/admin/v1/type-of-the-resource/unique-id-for-the-resource"
}
},
{ "...": "..." },
{ "...": "..." }
],
"links": {
"self": "/api/admin/v1/type-of-the-resource?page[first]=10&page[after]=some-id",
"first": "/api/admin/v1/type-of-the-resource?page[first]=10",
"last": "/api/admin/v1/type-of-the-resource?page[last]=10",
"next": "/api/admin/v1/type-of-the-resource?page[first]=10&page[after]=some-id",
"prev": "/api/admin/v1/type-of-the-resource?page[last]=10&page[before]=some-id"
}
}
The `meta` object will have the total number of items in it, and the `links` object contains the links to the next and previous pages, if any.
Pagination is cursor-based, where the ID of items is used as the cursor.
Resources can be paginated forwards using the `page[after]` and `page[first]` parameters, and backwards using the `page[before]` and `page[last]` parameters.
Error responses
Error responses will use a 4xx or 5xx status code, with the following shape:
{
"errors": [
{
"title": "Error title"
}
]
}
Well-known error codes are not yet specified.
Example
With the following configuration:
clients:
- client_id: 01J44RKQYM4G3TNVANTMTDYTX6
client_auth_method: client_secret_basic
client_secret: phoo8ahneir3ohY2eigh4xuu6Oodaewi
policy:
data:
admin_clients:
- 01J44RKQYM4G3TNVANTMTDYTX6
A `curl` example to list the users that are not locked and have the `can_request_admin` flag set to `true`:
CLIENT_ID=01J44RKQYM4G3TNVANTMTDYTX6
CLIENT_SECRET=phoo8ahneir3ohY2eigh4xuu6Oodaewi
# Get an access token
# (command substitution is used so that the variable is set in the current
# shell; piping into `read` only works in shells like zsh)
ACCESS_TOKEN="$(curl \
-u "$CLIENT_ID:$CLIENT_SECRET" \
-d "grant_type=client_credentials&scope=urn:mas:admin" \
https://mas.example.com/oauth2/token \
| jq -r '.access_token')"
# List users (The -g flag prevents curl from interpreting the brackets in the URL)
curl \
-g \
-H "Authorization: Bearer $ACCESS_TOKEN" \
'https://mas.example.com/api/admin/v1/users?filter[can_request_admin]=true&filter[status]=active&page[first]=100' \
| jq
Sample output
{
"meta": {
"count": 2
},
"data": [
{
"type": "user",
"id": "01J2KDPHTZYW3TAT1SKVAD63SQ",
"attributes": {
"username": "kilgore-trout",
"created_at": "2024-07-12T12:11:46.911578Z",
"locked_at": null,
"can_request_admin": true
},
"links": {
"self": "/api/admin/v1/users/01J2KDPHTZYW3TAT1SKVAD63SQ"
}
},
{
"type": "user",
"id": "01J3G5W8MRMBJ93ZYEGX2BN6NK",
"attributes": {
"username": "quentin",
"created_at": "2024-07-23T16:13:04.024378Z",
"locked_at": null,
"can_request_admin": true
},
"links": {
"self": "/api/admin/v1/users/01J3G5W8MRMBJ93ZYEGX2BN6NK"
}
}
],
"links": {
"self": "/api/admin/v1/users?filter[can_request_admin]=true&filter[status]=active&page[first]=100",
"first": "/api/admin/v1/users?filter[can_request_admin]=true&filter[status]=active&page[first]=100",
"last": "/api/admin/v1/users?filter[can_request_admin]=true&filter[status]=active&page[last]=100"
}
}
Configuration file reference
http
Controls the web server.
http:
# Public URL base used when building absolute public URLs
public_base: https://auth.example.com/
# OIDC issuer advertised by the service. Defaults to `public_base`
issuer: https://example.com/
# List of HTTP listeners, see below
listeners:
# ...
http.listeners
Each listener can serve multiple resources, and listen on multiple TCP ports or UNIX sockets.
http:
listeners:
# The name of the listener, used in logs and metrics
- name: web
# List of resources to serve
resources:
# Serves the .well-known/openid-configuration document
- name: discovery
# Serves the human-facing pages, such as the login page
- name: human
# Serves the OAuth 2.0/OIDC endpoints
- name: oauth
# Serves the Matrix C-S API compatibility endpoints
- name: compat
# Serve the GraphQL API used by the frontend,
# and optionally the GraphQL playground
- name: graphql
playground: true
# Serve the given folder on the /assets/ path
- name: assets
path: ./share/assets/
# Serve the admin API on the /api/admin/v1/ path. Disabled by default
#- name: adminapi
# List of addresses and ports to listen to
binds:
# First option: listen to the given address
- address: "[::]:8080"
# Second option: listen on the given host and port combination
- host: localhost
port: 8081
# Third option: listen on the given UNIX socket
- socket: /tmp/mas.sock
# Fourth option: grab an already open file descriptor given by the parent process
# This is useful when using systemd socket activation
- fd: 1
# Kind of socket that was passed, defaults to tcp
kind: tcp # or unix
# Whether to enable the PROXY protocol on the listener
proxy_protocol: false
# If set, makes the listener use TLS with the provided certificate and key
tls:
#certificate: <inline PEM>
certificate_file: /path/to/cert.pem
#key: <inline PEM>
key_file: /path/to/key.pem
#password: <password to decrypt the key>
#password_file: /path/to/password.txt
The following additional resources are available, although it is recommended to serve them on a separate listener, not exposed to the public internet:
- `name: prometheus`: serves a Prometheus-compatible metrics endpoint on `/metrics`, if the Prometheus exporter is enabled in `telemetry.metrics.exporter`.
- `name: health`: serves the health check endpoint on `/health`.
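For example, a separate listener for these resources could look like this; the listener name and port are illustrative, and the bind should not be reachable from the public internet:

```yaml
http:
  listeners:
    # Internal-only listener for monitoring endpoints
    - name: internal
      resources:
        - name: prometheus
        - name: health
      binds:
        - host: localhost
          port: 9090
```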
database
Configure how to connect to the PostgreSQL database.
MAS must not be connected through a database pooler (such as pgBouncer or pgCat) when the pooler is configured in transaction pooling mode. See the relevant section of the database page for more information.
database:
# Full connection string as per
# https://www.postgresql.org/docs/13/libpq-connect.html#id-1.7.3.8.3.6
uri: postgresql://user:password@hostname:5432/database?sslmode=require
# -- OR --
# Separate parameters
host: hostname
port: 5432
#socket:
username: user
password: password
database: database
# Whether to use SSL to connect to the database
ssl_mode: require # or disable, prefer, verify-ca, verify-full
#ssl_ca: # PEM-encoded certificate
ssl_ca_file: /path/to/ca.pem # Path to the root certificate file
# Client certificate to present to the server when SSL is enabled
#ssl_certificate: # PEM-encoded certificate
ssl_certificate_file: /path/to/cert.pem # Path to the certificate file
#ssl_key: # PEM-encoded key
ssl_key_file: /path/to/key.pem # Path to the key file
# Additional parameters for the connection pool
min_connections: 0
max_connections: 10
connect_timeout: 30
idle_timeout: 600
max_lifetime: 1800
matrix
Settings related to the connection to the Matrix homeserver
matrix:
# The homeserver name, as per the `server_name` in the Synapse configuration file
homeserver: example.com
# Shared secret used to authenticate the service to the homeserver
# This must be of high entropy, because leaking this secret would allow anyone to perform admin actions on the homeserver
secret: "SomeRandomSecret"
# URL to which the homeserver is accessible from the service
endpoint: "http://localhost:8008"
templates
Allows loading custom templates
templates:
# From where to load the templates
# This is relative to the current working directory, *not* the config file
path: /to/templates
# Path to the frontend assets manifest file
assets_manifest: /to/manifest.json
# From where to load the translation files
# Default in Docker distribution: `/usr/local/share/mas-cli/translations/`
# Default in pre-built binaries: `./share/translations/`
# Default in locally-built binaries: `./translations/`
translations_path: /to/translations
clients
List of OAuth 2.0/OIDC clients and their keys/secrets. Each `client_id` must be a ULID.
clients:
# Confidential client
- client_id: 000000000000000000000FIRST
client_auth_method: client_secret_post
client_secret: secret
# List of authorized redirect URIs
redirect_uris:
- http://localhost:1234/callback
# Public client
- client_id: 00000000000000000000SEC0ND
client_auth_method: none
Note: any additions or modifications in this list are synced with the database on server startup. Removed entries are only removed with the `config sync --prune` command.
secrets
Signing and encryption secrets
secrets:
# Encryption secret (used for encrypting cookies and database fields)
# This must be a 32-byte long hex-encoded key
encryption: c7e42fb8baba8f228b2e169fdf4c8216dffd5d33ad18bafd8b928c09ca46c718
# Signing keys
keys:
# It needs at least an RSA key to work properly
- kid: "ahM2bien"
key: |
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAuf28zPUp574jDRdX6uN0d7niZCIUpACFo+Po/13FuIGsrpze
yMX6CYWVPalgXW9FCrhxL+4toJRy5npjkgsLFsknL5/zXbWKFgt69cMwsWJ9Ra57
bonSlI7SoCuHhtw7j+sAlHAlqTOCAVz6P039Y/AGvO6xbC7f+9XftWlbbDcjKFcb
pQilkN9qtkdEH7TLayMAFOsgNvBlwF9+oj9w5PIk3veRTdBXI4GlHjhhzqGZKiRp
oP9HnycHHveyT+C33vuhQso5a3wcUNuvDVOixSqR4kvSt4UVWNK/KmEQmlWU1/m9
ClIwrs8Q79q0xkGaSa0iuG60nvm7tZez9TFkxwIDAQABAoIBAHA5YkppQ7fJSm0D
wNDCHeyABNJWng23IuwZAOXVNxB1bjSOAv8yNgS4zaw/Hx5BnW8yi1lYZb+W0x2u
i5X7g91j0nkyEi5g88kJdFAGTsM5ok0BUwkHsEBjTUPIACanjGjya48lfBP0OGWK
LJU2Acbjda1aeUPFpPDXw/w6bieEthQwroq3DHCMnk6i9bsxgIOXeN04ij9XBmsH
KPCP2hAUnZSlx5febYfHK7/W95aJp22qa//eHS8cKQZCJ0+dQuZwLhlGosTFqLUm
qhPlt/b1EvPPY0cq5rtUc2W31L0YayVEHVOQx1fQIkH2VIUNbAS+bfVy+o6WCRk6
s1XDhsECgYEA30tykVTN5LncY4eQIww2mW8v1j1EG6ngVShN3GuBTuXXaEOB8Duc
yT7yJt1ZhmaJwMk4agmZ1/f/ZXBtfLREGVzVvuwqRZ+LHbqIyhi0wQJA0aezPote
uTQnFn+IveHGtpQNDYGL/UgkexuCxbc2HOZG51JpunCK0TdtVfO/9OUCgYEA1TuS
2WAXzNudRG3xd/4OgtkLD9AvfSvyjw2LkwqCMb3A5UEqw7vubk/xgnRvqrAgJRWo
jndgRrRnikHCavDHBO0GAO/kzrFRfw+e+r4jcLl0Yadke8ndCc7VTnx4wQCrMi5H
7HEeRwaZONoj5PAPyA5X+N/gT0NNDA7KoQT45DsCgYBt+QWa6A5jaNpPNpPZfwlg
9e60cAYcLcUri6cVOOk9h1tYoW7cdy+XueWfGIMf+1460Z90MfhP8ncZaY6yzUGA
0EUBO+Tx10q3wIfgKNzU9hwgZZyU4CUtx668mOEqy4iHoVDwZu4gNyiobPsyDzKa
dxtSkDc8OHNV6RtzKpJOtQKBgFoRGcwbnLH5KYqX7eDDPRnj15pMU2LJx2DJVeU8
ERY1kl7Dke6vWNzbg6WYzPoJ/unrJhFXNyFmXj213QsSvN3FyD1pFvp/R28mB/7d
hVa93vzImdb3wxe7d7n5NYBAag9+IP8sIJ/bl6i9619uTxwvgtUqqzKPuOGY9dnh
oce1AoGBAKZyZc/NVgqV2KgAnnYlcwNn7sRSkM8dcq0/gBMNuSZkfZSuEd4wwUzR
iFlYp23O2nHWggTkzimuBPtD7Kq4jBey3ZkyGye+sAdmnKkOjNILNbpIZlT6gK3z
fBaFmJGRJinKA+BJeH79WFpYN6SBZ/c3s5BusAbEU7kE5eInyazP
-----END RSA PRIVATE KEY-----
- kid: "iv1aShae"
key: |
-----BEGIN EC PRIVATE KEY-----
MHQCAQEEIE8yeUh111Npqu2e5wXxjC/GA5lbGe0j0KVXqZP12vqioAcGBSuBBAAK
oUQDQgAESKfUtKaLqCfhK+p3z870W59yOYvd+kjGWe+tK16SmWzZJbRCgdHakHE5
MC6tJRnvedsYoKTrYoDv/XZIBI9zlA==
-----END EC PRIVATE KEY-----
secrets.keys
The service can use a number of key types for signing. The following key types are supported:
- RSA
- ECDSA with the P-256 (`prime256v1`) curve
- ECDSA with the P-384 (`secp384r1`) curve
- ECDSA with the K-256 (`secp256k1`) curve
Each entry must have a unique (and arbitrary) `kid`, plus the key itself. The key can either be specified inline (with the `key` property), or loaded from a file (with the `key_file` property).
The following key formats are supported:
- PKCS#1 PEM or DER-encoded RSA private key
- PKCS#8 PEM or DER-encoded RSA or ECDSA private key, encrypted or not
- SEC1 PEM or DER-encoded ECDSA private key
For PKCS#8 encoded keys, the `password` or `password_file` properties can be used to decrypt the key.
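`mas-cli config generate` will generate a suitable set of keys for you (see the command line tool reference below). If you prefer to generate keys by hand, something like the following should produce keys in the supported formats, assuming the OpenSSL CLI is available:

```sh
# RSA private key (PKCS#1 or PKCS#8 PEM, depending on your OpenSSL
# version; both formats are supported)
openssl genrsa -out rsa.pem 2048

# SEC1 PEM-encoded ECDSA private key on the P-256 curve
openssl ecparam -genkey -name prime256v1 -noout -out ec.pem
```

The resulting files can then be referenced with the `key_file` property.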
passwords
Settings related to the local password database
passwords:
# Whether to enable the password database.
# If disabled, users will only be able to log in using upstream OIDC providers
enabled: true
# Minimum complexity required for passwords, estimated by the zxcvbn algorithm
# Must be between 0 and 4, default is 3
# See https://github.com/dropbox/zxcvbn#usage for more information
minimum_complexity: 3
# List of password hashing schemes being used
# /!\ Only change this if you know what you're doing
# TODO: document this section better
schemes:
- version: 1
algorithm: argon2id
account
Configuration related to account management
account:
# Whether users are allowed to change their email addresses.
#
# Defaults to `true`.
email_change_allowed: true
# Whether users are allowed to change their display names
#
# Defaults to `true`.
# This should be in sync with the policy in the homeserver configuration.
displayname_change_allowed: true
# Whether to enable self-service password registration
#
# Defaults to `false`.
# This has no effect if password login is disabled.
password_registration_enabled: false
# Whether users are allowed to change their passwords
#
# Defaults to `true`.
# This has no effect if password login is disabled.
password_change_allowed: true
# Whether email-based password recovery is enabled
#
# Defaults to `false`.
# This has no effect if password login is disabled.
password_recovery_enabled: false
captcha
Settings related to CAPTCHA protection
captcha:
# Which service to use for CAPTCHA protection. Set to `null` (or `~`) to disable CAPTCHA protection
service: ~
# Use Google reCAPTCHA v2
#service: recaptcha_v2
#site_key: "6LeIxAcTAAAAAJcZVRqyHh71UMIEGNQ_MXjiZKhI"
#secret_key: "6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe"
# Use Cloudflare Turnstile
#service: cloudflare_turnstile
#site_key: "1x00000000000000000000AA"
#secret_key: "1x0000000000000000000000000000000AA"
# Use hCaptcha
#service: hcaptcha
#site_key: "10000000-ffff-ffff-ffff-000000000001"
#secret_key: "0x0000000000000000000000000000000000000000"
policy
Policy settings
policy:
# Path to the WASM module
# Default in Docker distribution: `/usr/local/share/mas-cli/policy.wasm`
# Default in pre-built binaries: `./share/policy.wasm`
# Default in locally-built binaries: `./policies/policy.wasm`
wasm_module: ./policies/policy.wasm
# Entrypoint to use when evaluating client registrations
client_registration_entrypoint: client_registration/violation
# Entrypoint to use when evaluating user registrations
register_entrypoint: register/violation
# Entrypoint to use when evaluating authorization grants
authorization_grant_entrypoint: authorization_grant/violation
# Entrypoint to use when changing password
password_entrypoint: password/violation
# Entrypoint to use when adding an email address
email_entrypoint: email/violation
# This data is being passed to the policy
data:
# Users which are allowed to ask for admin access. If possible, use the
# can_request_admin flag on users instead.
admin_users:
- person1
- person2
# Client IDs which are allowed to ask for admin access with a
# client_credentials grant
admin_clients:
- 01H8PKNWKKRPCBW4YGH1RWV279
- 01HWQCPA5KF10FNCETY9402WGF
# Dynamic Client Registration
client_registration:
# don't require URIs to be on the same host. default: false
allow_host_mismatch: false
# allow non-SSL and localhost URIs. default: false
allow_insecure_uris: false
# don't require clients to provide a client_uri. default: false
allow_missing_client_uri: false
# Restrict emails on registration to a specific domain
# Items in this array are evaluated as a glob
allowed_domains:
- "*.example.com"
# Ban specific domains from registration
banned_domains:
- "*.banned.example.com"
rate_limiting
Settings for limiting the rate of user actions to prevent abuse.
Each rate limiter consists of two options:
- `burst`: a base amount of how many actions are allowed in one go.
- `per_second`: how many units of the allowance replenish per second.
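For example, with `burst: 3` and `per_second: 0.0008` (the account recovery limits shown below), a client can perform 3 attempts back to back, after which one additional attempt is allowed roughly every 1250 seconds (1 / 0.0008, i.e. about 21 minutes).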
rate_limiting:
# Limits how many account recovery attempts are allowed.
# These limits can protect against e-mail spam.
#
# Note: these limits also apply to recovery e-mail re-sends.
account_recovery:
# Controls how many account recovery attempts are permitted
# based on source IP address.
per_ip:
burst: 3
per_second: 0.0008
# Controls how many account recovery attempts are permitted
# based on the e-mail address that is being used for recovery.
per_address:
burst: 3
per_second: 0.0002
# Limits how many login attempts are allowed.
#
# Note: this limit also applies to password checks when a user attempts to
# change their own password.
login:
# Controls how many login attempts are permitted
# based on source IP address.
# This can protect against brute force login attempts.
per_ip:
burst: 3
per_second: 0.05
# Controls how many login attempts are permitted
# based on the account that is being attempted to be logged into.
# This can protect against a distributed brute force attack
# but should be set high enough to prevent someone's account being
# casually locked out.
per_account:
burst: 1800
per_second: 0.5
# Limits how many registration attempts are allowed,
# based on source IP address.
# This limit can protect against e-mail spam and against people registering too many accounts.
registration:
burst: 3
per_second: 0.0008
telemetry
Settings related to metrics and traces
telemetry:
tracing:
# List of propagators to use for extracting and injecting trace contexts
propagators:
# Propagate according to the W3C Trace Context specification
- tracecontext
# Propagate according to the W3C Baggage specification
- baggage
# Propagate trace context with Jaeger compatible headers
- jaeger
# The default: don't export traces
exporter: none
# Export traces to an OTLP-compatible endpoint
#exporter: otlp
#endpoint: https://localhost:4318
metrics:
# The default: don't export metrics
exporter: none
# Export metrics to an OTLP-compatible endpoint
#exporter: otlp
#endpoint: https://localhost:4317
# Export metrics by exposing a Prometheus endpoint
# This requires mounting the `prometheus` resource to an HTTP listener
#exporter: prometheus
sentry:
# DSN to use for sending errors and crashes to Sentry
dsn: https://public@host:port/1
email
Settings related to sending emails
email:
from: '"The almighty auth service" <auth@example.com>'
reply_to: '"No reply" <no-reply@example.com>'
# Default transport: don't send any emails
transport: blackhole
# Send emails using SMTP
#transport: smtp
#mode: plain | tls | starttls
#hostname: localhost
#port: 587
#username: username
#password: password
# Send emails by calling a local sendmail binary
#transport: sendmail
#command: /usr/sbin/sendmail
# Send emails through the AWS SESv2 API
# This uses the AWS SDK, so the usual AWS environment variables are supported
#transport: aws_ses
upstream_oauth2
Settings related to upstream OAuth 2.0/OIDC providers.
Additions and modifications within this section are synced with the database on server startup.
Removed entries are only removed with the config sync --prune
command.
upstream_oauth2.providers
A list of upstream OAuth 2.0/OIDC providers to use to authenticate users.
Sample configurations for popular providers can be found in the upstream provider setup guide.
upstream_oauth2:
providers:
- # A unique identifier for the provider
# Must be a valid ULID
id: 01HFVBY12TMNTYTBV8W921M5FA
# The issuer URL, which will be used to discover the provider's configuration.
# If discovery is enabled, this *must* exactly match the `issuer` field
# advertised in `<issuer>/.well-known/openid-configuration`.
issuer: https://example.com/
# A human-readable name for the provider,
# which will be displayed on the login page
#human_name: Example
# A brand identifier for the provider, which will be used to display a logo
# on the login page. Values supported by the default template are:
# - `apple`
# - `google`
# - `facebook`
# - `github`
# - `gitlab`
# - `twitter`
#brand_name: google
# The client ID to use to authenticate to the provider
client_id: mas-fb3f0c09c4c23de4
# The client secret to use to authenticate to the provider
# This is only used by the `client_secret_post`, `client_secret_basic`
# and `client_secret_jwk` authentication methods
#client_secret: f4f6bb68a0269264877e9cb23b1856ab
# Which authentication method to use to authenticate to the provider
# Supported methods are:
# - `none`
# - `client_secret_basic`
# - `client_secret_post`
# - `client_secret_jwt`
# - `private_key_jwt` (using the keys defined in the `secrets.keys` section)
token_endpoint_auth_method: client_secret_post
# Which signing algorithm to use to sign the authentication request when using
# the `private_key_jwt` or the `client_secret_jwt` authentication methods
#token_endpoint_auth_signing_alg: RS256
# The scopes to request from the provider
# In most cases, it should always include `openid` scope
scope: "openid email profile"
# How the provider configuration and endpoints should be discovered
# Possible values are:
# - `oidc`: discover the provider through OIDC discovery,
# with strict metadata validation (default)
# - `insecure`: discover through OIDC discovery, but skip metadata validation
# - `disabled`: don't discover the provider and use the endpoints below
#discovery_mode: oidc
# Whether PKCE should be used during the authorization code flow.
# Possible values are:
# - `auto`: use PKCE if the provider supports it (default)
# Determined through discovery, and disabled if discovery is disabled
# - `always`: always use PKCE (with the S256 method)
# - `never`: never use PKCE
#pkce_method: auto
# The provider authorization endpoint
# This takes precedence over the discovery mechanism
#authorization_endpoint: https://example.com/oauth2/authorize
# The provider token endpoint
# This takes precedence over the discovery mechanism
#token_endpoint: https://example.com/oauth2/token
# The provider JWKS URI
# This takes precedence over the discovery mechanism
#jwks_uri: https://example.com/oauth2/keys
# How user attributes should be mapped
#
# Most of those attributes have two main properties:
# - `action`: what to do with the attribute. Possible values are:
# - `ignore`: ignore the attribute
# - `suggest`: suggest the attribute to the user, but let them opt out
# - `force`: always import the attribute, and don't fail if it's missing
# - `require`: always import the attribute, and fail if it's missing
# - `template`: a Jinja2 template used to generate the value. In this template,
# the `user` variable is available, which contains the user's attributes
# retrieved from the `id_token` given by the upstream provider.
#
# Each attribute has a default template which follows the well-known OIDC claims.
#
claims_imports:
# The subject is an internal identifier used to link the
# user's provider identity to local accounts.
# By default it uses the `sub` claim as per the OIDC spec,
# which should fit most use cases.
subject:
#template: "{{ user.sub }}"
# The localpart is the local part of the user's Matrix ID.
# For example, on the `example.com` server, if the localpart is `alice`,
# the user's Matrix ID will be `@alice:example.com`.
localpart:
#action: force
#template: "{{ user.preferred_username }}"
# The display name is the user's display name.
displayname:
#action: suggest
#template: "{{ user.name }}"
# An email address to import.
email:
#action: suggest
#template: "{{ user.email }}"
# Whether the email address must be marked as verified.
# Possible values are:
# - `import`: mark the email address as verified if the upstream provider
# has marked it as verified, using the `email_verified` claim.
# This is the default.
# - `always`: mark the email address as verified
# - `never`: mark the email address as not verified
#set_email_verification: import
experimental
Settings that may change or be removed in future versions. Some of them live in this section because they don't have a stable place in the configuration yet.
experimental:
# Time-to-live of OAuth 2.0 access tokens in seconds. Defaults to 300, 5 minutes.
#access_token_ttl: 300
# Time-to-live of compatibility access tokens in seconds, when refresh tokens are supported. Defaults to 300, 5 minutes.
#compat_token_ttl: 300
OAuth 2.0 scopes
The default policy shipped with MAS supports the following scopes:
- `openid`
- `email`
- `urn:matrix:org.matrix.msc2967.client:api:*`
- `urn:matrix:org.matrix.msc2967.client:device:[device id]`
- `urn:matrix:org.matrix.msc2967.client:guest`
- `urn:synapse:admin:*`
- `urn:mas:admin`
- `urn:mas:graphql:*`
OpenID Connect scopes
MAS supports the following standard OpenID Connect scopes, as defined in OpenID Connect Core 1.0:
openid
The `openid` scope is a special scope that indicates that the client is requesting an OpenID Connect `id_token`.
The userinfo endpoint as described by the same specification requires this scope to be present in the request.
The default policy allows any client and any user to request this scope.
email
Requires the `openid` scope to be present in the request.
It adds the user's email address to the `id_token` and to the claims returned by the userinfo endpoint.
The default policy allows any client and any user to request this scope.
Matrix-related scopes
Those scopes are specific to the Matrix protocol and are part of MSC2967.
urn:matrix:org.matrix.msc2967.client:api:*
This scope grants access to the full Matrix client-server API.
The default policy allows any client and any user to request this scope.
urn:matrix:org.matrix.msc2967.client:device:[device id]
This scope sets the device ID of the session, where `[device id]` is the device ID of the session.
Currently, MAS only allows the following characters in the device ID: `a-z`, `A-Z`, `0-9` and `-`.
It also needs to be at least 10 characters long.
There can only be one device ID in the scope list of a session.
The default policy allows any client and any user to request this scope.
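As an illustration, a client asking for full client-server API access on a specific device might request a scope value like this (the device ID here is a made-up, 10-character placeholder):

```
openid urn:matrix:org.matrix.msc2967.client:api:* urn:matrix:org.matrix.msc2967.client:device:AAbbCCdd01
```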
urn:matrix:org.matrix.msc2967.client:guest
This scope grants access to a restricted set of endpoints that are available to guest users.
It is mutually exclusive with the `urn:matrix:org.matrix.msc2967.client:api:*` scope.
Note that MAS doesn't yet implement any special semantic around guest users, but this scope is reserved for future use.
The default policy allows any client and any user to request this scope.
Synapse-specific scopes
MAS also supports one Synapse-specific scope, which isn't formally defined in any specification.
urn:synapse:admin:*
This scope grants access to the Synapse admin API.
Because of how Synapse works for now, this scope by itself isn't sufficient to access the admin API.
A session wanting to access the admin API also needs to have the `urn:matrix:org.matrix.msc2967.client:api:*` scope.
The default policy doesn't allow everyone to request this scope. It allows:
- users with the `can_request_admin` attribute set to `true` in the database
- users listed in the `policy.data.admin_users` configuration option
MAS-specific scopes
MAS also has a few scopes that are specific to the MAS implementation.
urn:mas:admin
This scope grants full access to the MAS Admin API.
The default policy doesn't allow everyone to request this scope. It allows:
- for the "authorization code" and "device authorization" grants:
- users with the
can_request_admin
attribute set totrue
in the database - users listed in the
policy.data.admin_users
configuration option
- users with the
- for the "client credentials" grant:
- clients that are listed in the
policy.data.admin_clients
configuration option
- clients that are listed in the
urn:mas:graphql:*
This scope grants access to the whole MAS Internal GraphQL API.
The permissions the session has on the API are determined by the entity that the session is authorized as.
When authorized as a user (and without the `urn:mas:admin` scope), this will usually allow querying and mutating the user's own data.
The default policy allows any client and any user to request this scope.
However, as noted in the Internal GraphQL API documentation, access to the Internal GraphQL API from outside of MAS itself is deprecated in favour of the Admin API.
Command line tool
The command line interface provides subcommands that help with running the service.
Logging
The overall log level of the CLI can be changed via the `RUST_LOG` environment variable.
The default log level is `info`.
Valid levels from least to most verbose are `error`, `warn`, `info`, `debug` and `trace`.
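For example, to run the server with more verbose logging:

```sh
RUST_LOG=debug mas-cli server
```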
Global flags
--config
Sets the configuration file to load. It can be repeated multiple times to merge multiple files together; an example follows the usage listing below.
Usage: mas-cli [OPTIONS] [COMMAND]
Commands:
config Configuration-related commands
database Manage the database
server Runs the web server
worker Run the worker
manage Manage the instance
templates Templates-related commands
doctor Run diagnostics on the deployment
help Print this message or the help of the given subcommand(s)
Options:
-c, --config <CONFIG> Path to the configuration file
-h, --help Print help
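For example, two configuration files can be merged like this (the file names are illustrative):

```sh
mas-cli server --config base.yaml --config overrides.yaml
```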
config
Helps to deal with the configuration
config check
Check the validity of configuration files.
$ mas-cli config check --config=config.yaml
INFO mas_cli::config: Configuration file looks good path=["config.yaml"]
config dump
Dump the merged configuration tree.
$ mas-cli config dump --config=first.yaml --config=second.yaml
---
clients:
# ...
config generate
Generate a sample configuration file.
It generates random signing keys (`.secrets.keys`) and the cookie encryption secret (`.secrets.encryption`).
$ mas-cli config generate > config.yaml
INFO generate: mas_config::oauth2: Generating keys...
INFO generate:rsa: mas_config::oauth2: Done generating RSA key
INFO generate:ecdsa: mas_config::oauth2: Done generating ECDSA key
config sync [--prune] [--dry-run]
Synchronize the configuration with the database.
This will synchronize the `clients` and `upstream_oauth2` sections of the configuration with the database.
By default, it does not delete clients and upstream providers that are no longer in the configuration. Use the `--prune` option to do so.
The `--dry-run` option will log the changes that would be made, without actually making them.
$ mas-cli config sync --prune --config=config.yaml
INFO cli.config.sync: Syncing providers and clients defined in config to database prune=true dry_run=false
INFO cli.config.sync: Updating provider provider.id=01H3FDH2XZJS8ADKRGWM84PZTY
INFO cli.config.sync: Adding provider provider.id=01H3FDH2XZJS8ADKRGWM84PZTF
INFO cli.config.sync: Deleting client client.id=01GFWRB9MYE0QYK60NZP2YF905
INFO cli.config.sync: Updating client client.id=01GFWRB9MYE0QYK60NZP2YF904
database
Run database-related operations
database migrate
Run the pending database migrations
$ mas-cli database migrate
manage
Includes admin-related subcommands.
manage verify-email <username> <email>
Mark a user email address as verified
server
Runs the authentication service.
$ mas-cli server
INFO mas_cli::server: Starting task scheduler
INFO mas_core::templates: Loading builtin templates
INFO mas_cli::server: Listening on http://0.0.0.0:8080
templates
templates check
Check the validity of the templates loaded by the config. It compiles the templates and then renders them with different contexts.
$ mas-cli templates check
INFO mas_core::templates: Loading templates from filesystem path=./templates/**/*.{html,txt}
INFO mas_core::templates::check: Rendering template name="login.html" context={"csrf_token":"fake_csrf_token","form":{"fields_errors":{},"form_errors":[],"has_errors":false}}
INFO mas_core::templates::check: Rendering template name="register.html" context={"__UNUSED":null,"csrf_token":"fake_csrf_token"}
INFO mas_core::templates::check: Rendering template name="index.html" context={"csrf_token":"fake_csrf_token","current_session":{"active":true,"created_at":"2021-09-24T13:26:52.962135085Z","id":1,"last_authd_at":"2021-09-24T13:26:52.962135316Z","user_id":2,"username":"john"},"discovery_url":"https://example.com/.well-known/openid-configuration"}
...
doctor
Run diagnostics on the live deployment. This tool should help diagnose common issues with the service configuration and deployment.
When running this tool, make sure it runs from the same point-of-view as the service, with the same configuration file and environment variables.
$ mas-cli doctor
Contributing
This document aims to get you started with contributing to the Matrix Authentication Service!
1. Who can contribute to MAS?
We ask that everybody who contributes to this project signs off their contributions, as explained below.
Everyone is welcome to contribute code to matrix.org projects, provided that they are willing to license their contributions under the same license as the project itself. We follow a simple 'inbound=outbound' model for contributions: the act of submitting an 'inbound' contribution means that the contributor agrees to license the code under the same terms as the project's overall 'outbound' license - in our case, this is almost always Apache Software License v2 (see LICENSE).
In order to have a concrete record that your contribution is intentional and you agree to license it under the same terms as the project's license, we've adopted the same lightweight approach used by the Linux Kernel, Docker, and many other projects: the Developer Certificate of Origin (DCO). This is a simple declaration that you wrote the contribution or otherwise have the right to contribute it to Matrix:
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
If you agree to this for your contribution, then all that's needed is to include the line in your commit or pull request comment:
Signed-off-by: Your Name <your@email.example.org>
Git allows you to add this signoff automatically when using the `-s` flag to `git commit`, which uses the name and email set in your `user.name` and `user.email` git configs.
2. What do I need?
To get MAS running locally from source you will need (as suggested by the build steps below):
- a Rust toolchain with `cargo`
- Node.js and `npm`, to build the frontend
- Open Policy Agent (or Docker), to build the policies
- a PostgreSQL database
3. Get the source
- Clone this repository
4. Build and run MAS
- Build the frontend
cd frontend
npm ci
npm run build
cd ..
- Build the Open Policy Agent policies
cd policies
make
# OR, if you don't have `opa` installed and want to build through the OPA docker image
make DOCKER=1
cd ..
- Generate the sample config via
cargo run -- config generate > config.yaml
- Run a PostgreSQL database locally
docker run -p 5432:5432 -e 'POSTGRES_USER=postgres' -e 'POSTGRES_PASSWORD=postgres' -e 'POSTGRES_DATABASE=postgres' postgres
- Update the database URI in `config.yaml` to `postgresql://postgres:postgres@localhost/postgres`
- Run the database migrations via
cargo run -- database migrate
- Run the server via
cargo run -- server -c config.yaml
- Go to http://localhost:8080/
5. Learn about MAS
You can learn about the architecture and database of MAS here.
Architecture
The service is meant to be easily embeddable, with only a dependency on a database. It is also meant to stay lightweight in terms of resource usage and easily scalable horizontally.
Scope and goals
The Matrix Authentication Service has been created to support the migration of Matrix to an OpenID Connect (OIDC) based architecture as per MSC3861.
It is not intended to be a general purpose Identity Provider (IdP) and instead focuses on the specific needs of Matrix.
Furthermore, it is only intended that it would speak OIDC for authentication and not other protocols. Instead, if you want to connect to an upstream SAML, CAS or LDAP backend then you need to pair MAS with a separate service (such as Dex or Keycloak) which does that translation for you.
Whilst it only supports use with Synapse today, we hope that other homeservers will become supported in future.
If you need some other feature that MAS doesn't support (such as TOTP or WebAuthn), then you should consider pairing MAS with another IdP that does support the features you need.
Workspace and crate split
The whole repository is a Cargo Workspace that includes multiple crates under the `/crates` directory.
This includes:
- `mas-cli`: Command line utility, main entry point
- `mas-config`: Configuration parsing and loading
- `mas-data-model`: Models of objects that live in the database, regardless of the storage backend
- `mas-email`: High-level email sending abstraction
- `mas-handlers`: Main HTTP application logic
- `mas-iana`: Auto-generated enums from IANA registries
- `mas-iana-codegen`: Code generator for the `mas-iana` crate
- `mas-jose`: JWT/JWS/JWE/JWK abstraction
- `mas-static-files`: Frontend static files (CSS/JS). Includes some frontend tooling
- `mas-storage`: Abstraction of the storage backends
- `mas-storage-pg`: Storage backend implementation for a PostgreSQL database
- `mas-tasks`: Asynchronous task runner and scheduler
- `oauth2-types`: Useful structures and types to deal with OAuth 2.0/OpenID Connect endpoints. This might end up published as a standalone library as it can be useful in other contexts.
Important crates
The project makes use of a few important crates.
Async runtime: tokio
Tokio is the async runtime used by the project. The choice of runtime does not have much impact on most of the code.
It has an impact when:
- spawning asynchronous work (as in "not awaiting on it immediately")
- running CPU-intensive tasks. They should be run in a blocking context using `tokio::task::spawn_blocking`. This includes password hashing and other crypto operations.
- dealing with shared memory, e.g. mutexes, rwlocks, etc.
Logging: tracing
Logging is handled through the `tracing` crate.
It provides a way to emit structured log messages at various levels.
use tracing::{debug, info};

info!("Logging some things");
debug!(user = "john", "Structured stuff");
`tracing` also provides ways to create spans to better understand where a logging message comes from.
In the future, it will help build OpenTelemetry-compatible distributed traces to help with debugging.
`tracing` is becoming the standard way to log things in Rust.
By itself it will do nothing unless a subscriber is installed to, for example, log the events to the console.
The CLI installs `tracing-subscriber` on startup to log to the console.
It looks for a `RUST_LOG` environment variable to determine which events should be logged.
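A minimal sketch of what this looks like in practice; the span name and field are illustrative, and the `tracing-subscriber` `fmt` subscriber is installed the way an application typically does on startup:

```rust
use tracing::{info, info_span};

fn main() {
    // Install a subscriber that prints events to the console
    tracing_subscriber::fmt::init();

    // Events emitted while the guard is alive are attached to this span
    let span = info_span!("handle_request", user = "john");
    let _guard = span.enter();

    info!("Processing request");
}
```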
Error management: `thiserror` / `anyhow`
`thiserror` helps define custom error types.
This is especially useful for errors that should be handled in a specific way, while being able to augment underlying errors with additional context.
`anyhow` helps deal with chains of errors.
It allows for quickly adding additional context around an error while it is being propagated.
Both crates work well together and complement each other.
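A minimal sketch of how the two crates complement each other; the error type and functions are illustrative, not actual MAS code:

```rust
use anyhow::Context;
use thiserror::Error;

// `thiserror` derives std::error::Error for a custom error type
#[derive(Debug, Error)]
enum LoadError {
    #[error("could not read file")]
    Io(#[from] std::io::Error),
    #[error("file is not valid UTF-8")]
    Encoding(#[from] std::string::FromUtf8Error),
}

fn load(path: &str) -> Result<String, LoadError> {
    let bytes = std::fs::read(path)?;
    Ok(String::from_utf8(bytes)?)
}

fn main() -> anyhow::Result<()> {
    // `anyhow` adds context to the error while it propagates
    let config = load("config.yaml").context("failed to load configuration")?;
    println!("loaded {} bytes of configuration", config.len());
    Ok(())
}
```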
Database interactions: sqlx
Interactions with the database are done through `sqlx`, an async, pure-Rust SQL library with compile-time checking of queries.
It also handles schema migrations.
Templates: tera
Tera was chosen as the template engine for its simplicity, as well as for its ability to load templates at runtime. The builtin templates are embedded in the final binary through some macro magic.
The downside of Tera compared to compile-time template engines is the possibility of runtime crashes. This can however be somewhat mitigated with unit tests.
Crates from RustCrypto
The RustCrypto team offer high quality, independent crates for dealing with cryptography. The whole project is highly modular and APIs are coherent between crates.
Database
Interactions with the database go through `sqlx`.
It provides async database operations with connection pooling, migrations support and compile-time checking of queries through macros.
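A minimal sketch of a compile-time checked query, assuming a hypothetical `users` table with a `BIGINT id` and a `TEXT username` column; checking requires either a live database via `DATABASE_URL` or prepared offline data, as described further down:

```rust
use sqlx::PgPool;

// The query string is verified against the database schema at compile time
async fn find_user_id(pool: &PgPool, username: &str) -> Result<Option<i64>, sqlx::Error> {
    let row = sqlx::query!("SELECT id FROM users WHERE username = $1", username)
        .fetch_optional(pool)
        .await?;
    Ok(row.map(|r| r.id))
}
```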
Writing database interactions
All database interactions are done through repository traits. Each repository trait usually manages one type of data, defined in the `mas-data-model` crate.
Defining a new data type and associated repository looks like this:
- Define new structs in the `mas-data-model` crate
- Define the repository trait in the `mas-storage` crate
- Make that repository trait available via the `RepositoryAccess` trait in the `mas-storage` crate
- Set up the database schema by writing a migration file in the `mas-storage-pg` crate
- Implement the new repository trait in the `mas-storage-pg` crate
- Write tests for the PostgreSQL implementation in the `mas-storage-pg` crate
Some of those steps are documented in more detail in the `mas-storage` and `mas-storage-pg` crates.
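As a rough sketch (not the actual MAS API), a repository trait for a hypothetical `Widget` data type could look like this:

```rust
use async_trait::async_trait;
use ulid::Ulid;

// Hypothetical data type that would live in the mas-data-model crate
pub struct Widget {
    pub id: Ulid,
    pub name: String,
}

// Hypothetical repository trait that would live in the mas-storage crate,
// with a PostgreSQL implementation in the mas-storage-pg crate
#[async_trait]
pub trait WidgetRepository: Send + Sync {
    type Error;

    /// Look up a widget by its ID, returning `None` if it doesn't exist
    async fn lookup(&mut self, id: Ulid) -> Result<Option<Widget>, Self::Error>;

    /// Persist a new widget
    async fn add(&mut self, widget: Widget) -> Result<(), Self::Error>;
}
```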
Compile-time check of queries
To be able to check queries, `sqlx` has to introspect the live database.
Usually it does so by having the database available at compile time, but to avoid that we're using the `offline` feature of `sqlx`, which saves the introspection information as a flat file in the repository.
Preparing this flat file is done through `sqlx-cli`, and should be done every time the database schema or the queries change.
# Install the CLI
cargo install sqlx-cli --no-default-features --features postgres
cd crates/storage-pg/ # Must be in the mas-storage-pg crate folder
export DATABASE_URL=postgresql:///matrix_auth
cargo sqlx prepare
Migrations
Migration files live in the `migrations` folder in the `mas-storage-pg` crate.
cd crates/storage-pg/ # Again, in the mas-storage-pg crate folder
export DATABASE_URL=postgresql:///matrix_auth
cargo sqlx migrate run # Run pending migrations
cargo sqlx migrate add [description] # Add new migration files
Note that migrations are embedded in the final binary and can be run from the service CLI tool.
Internal GraphQL API
Note: This API used to be the way for external tools to interact with MAS. However, external usage is now deprecated in favour of the REST based Admin API. External access to this API will be removed in a future release.
MAS uses an internal GraphQL API which is used by the self-service user interface (usually accessible on `/account/`), for users to manage their own account.
The endpoint for this API can be discovered through the OpenID Connect discovery document, under the `org.matrix.matrix-authentication-service.graphql_endpoint` key, though it is usually hosted at `https://<mas-host>/graphql`.
GraphQL uses a self-describing schema, which means that the API can be explored in tools like the GraphQL Playground.
If enabled, MAS hosts an instance of the playground at `https://<mas-host>/graphql/playground`.
Authorization
There are two ways to authorize a request to the GraphQL API:
- if you are requesting from the self-service user interface (or the MAS-hosted GraphQL Playground), it will use the session cookies to authorize as the current user. This mode only allows the user to access their own data, and will never provide admin access.
- else you will need to provide an OAuth 2.0 access token in the `Authorization` header, with the `Bearer` scheme.

The access token must have the `urn:mas:graphql:*` scope to be able to access the GraphQL API.
With only this scope, the session will be authorized as the user who owns the access token, and will only be able to access their own data.
To get full access to the GraphQL API, the access token must have the `urn:mas:admin` scope in addition to the `urn:mas:graphql:*` scope.
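For example, a request authorized with a bearer token could look like this; `__typename` is a standard GraphQL meta-field, so the query works without knowing anything about the schema:

```sh
curl \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ __typename }"}' \
  https://mas.example.com/graphql
```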
About Application Services login
Encrypted Application Services/Bridges currently leverage the `m.login.application_service` login type to create devices for users.
This API is not available in the Matrix Authentication Service.
We're working on a solution to support this use case, but in the meantime, this means encrypted bridges will not work with the Matrix Authentication Service. A workaround is to disable E2EE support in your bridge setup.