This way it will be possible to write CSS and JavaScript code that will
only apply to specific tenants.
Note that CSS customization is still limited because it isn't possible
to use different SCSS variables per tenant.
We forgot to do so in commit d827768c0. In order to avoid the same
mistake in the future, we're extracting a method to get these
attributes. We're also adding tests, since we didn't have any tests to
check that the `dir` attribute was properly set.
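A hypothetical shape of such a helper (the names, the direction check and the tenant class are assumptions, not the actual CONSUL code), tying together the `dir` attribute and the per-tenant CSS/JavaScript hook mentioned above:

```ruby
# Return the attributes for the <html> tag, including a per-tenant class
# that tenant-specific CSS and JavaScript can hook into.
def html_attributes
  {
    lang: I18n.locale,
    dir: rtl_locale?(I18n.locale) ? "rtl" : "ltr", # rtl_locale? is hypothetical
    class: ("tenant-#{Tenant.current_schema}" unless Tenant.default?)
  }.compact
end
```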
While this is not strictly necessary, it makes it easier to move the data of one tenant to a different server or to remove it.
Note we're using subfolders inside the `tenants` subfolder. If we simply used subfolders named after the schemas, schema names that happen to be language codes like `es`, `en`, `it`, ... would conflict with the default subfolders used by Active Storage.
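One possible way to get that layout (a sketch only; `Tenant.default?` and `Tenant.current_schema` are assumed helpers, and this is not necessarily how CONSUL implements it) is to prefix the blob keys of non-default tenants:

```ruby
# Prefix the Active Storage key for non-default tenants, so every stored
# file carries a "tenants/<schema>/" segment in its key and on disk.
module TenantBlobKey
  def key
    self[:key] ||=
      if Tenant.default?
        self.class.generate_unique_secure_token
      else
        "tenants/#{Tenant.current_schema}/#{self.class.generate_unique_secure_token}"
      end
  end
end

ActiveStorage::Blob.prepend(TenantBlobKey)
```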
Note we aren't allowing tenants to be deleted, because doing so would delete all of their data, which makes it a very dangerous action. We might need to add a warning when creating a tenant, indicating that the tenant cannot be destroyed. We could also add an action to delete a tenant which forces the admin to type the name of the tenant before deleting it and shows a big warning about the danger of this operation.
For now, we're letting administrators of the "main" (default) tenant create other tenants. However, we're only allowing tenants to be managed when the multitenancy configuration option is enabled. This way the interface won't get in the way on single-tenant applications.
We've thought about creating a new role to manage tenants or a new URL
out of the admin area. We aren't doing so for simplicity purposes and
because we want to keep CONSUL working the same way it has for
single-tenant installations, but we might change it in the future.
There's also the fact that by default we create one user with a known password, and if by default we also created a new role and a new user to handle tenants, the chances of people forgetting to change the password of one of these users would increase dramatically, particularly if they aren't using multitenancy.
We had some of the logic in the ApplicationMailer. Since we're going to
use this logic in more places, we're moving it to the Tenant model,
which is the model handling everything related to hosts.
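A rough sketch of the kind of method that ends up there (the `default?` and `current_schema` helpers are assumptions; the real code may differ):

```ruby
class Tenant < ApplicationRecord
  # Build the host for the current tenant: the default tenant uses the host
  # from default_url_options as-is; other tenants get their schema as a
  # subdomain of it.
  def self.current_host
    base_host = ActionMailer::Base.default_url_options[:host]

    default? ? base_host : "#{current_schema}.#{base_host}"
  end
end
```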
While we ping some search engines (currently, only Google) when generating the sitemap files, we weren't telling search engines that read the `robots.txt` file where to find the sitemap. Now we're doing so, using the right sitemap file for the right tenant.
Note that the `sitemap:refresh` task only pings search engines at the end, so it only does so for the `Sitemap.default_host` defined last. That's why we're using the `sitemap:refresh:no_ping` task instead and pinging search engines after creating each sitemap.
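Something along these lines (the wrapping task is illustrative, not the exact code):

```ruby
# Generate each tenant's sitemap without pinging, then ping search engines
# for that tenant, instead of relying on the single ping that
# sitemap:refresh performs at the very end.
namespace :sitemap do
  task refresh_all_tenants: :environment do
    Tenant.run_on_each do
      Rake::Task["sitemap:refresh:no_ping"].execute

      SitemapGenerator::Sitemap.ping_search_engines
    end
  end
end
```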
Note we're pinging search engines in staging and preproduction
environments. I'm leaving it that way because that's what we've done
until now, but I wonder whether we should only do so on production.
Since we're creating a new method to get the current url_options, we're
also using it in the dev_seeds.
Testing that the sitemap is valid (which we do in the following test) also checks that the sitemap has been generated. The test will also fail with different errors depending on whether no file was generated or the generated file is invalid.
Some tasks don't have to run on every tenant. The task to calculate the TSV column only needs to run for records which were present before we added the TSV column, and that can't happen in any tenant because we added the TSV column before adding the tenants table. Similarly, the migration needed for existing polls isn't necessary because there weren't any tenants before we allowed setting the starting/ending time of polls.
We aren't adding any tests for these tasks because tests for rake tasks
are slow and tests creating tenants are also slow, making the
combination of the two even slower, particularly if we add tests for
every single task we're changing. We're adding tests for the
`.run_on_each` method instead.
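For reference, a simplified version of what such a method can look like (assuming a `schema` column on the tenants table, `"public"` as the default schema, and apartment's `Apartment::Tenant.switch`; the real implementation may differ):

```ruby
class Tenant < ApplicationRecord
  # Run the given block once for the default schema and once for every
  # tenant schema, switching the apartment tenant each time.
  def self.run_on_each(&block)
    ["public", *pluck(:schema)].each do |schema|
      Apartment::Tenant.switch(schema, &block)
    end
  end
end
```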
The `budgets:email:selected` and `budgets:email:unselected` tasks are supposed to be run manually because they only make sense at a specific point during the life of a budget.
However, they would only run on the default tenant, and it was
impossible to run them on a different tenant.
So we're adding an argument to these rake tasks to accept the name of the tenant whose users we want to send emails to.
We were using `Budget.last`, but the last budget might not be published
yet.
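A hedged sketch of the resulting task, where the `published` scope and the `email_selected` call are illustrative placeholders rather than the exact CONSUL code:

```ruby
namespace :budgets do
  namespace :email do
    desc "Sends emails to authors of selected investments"
    task :selected, [:tenant] => :environment do |_, args|
      # Run inside the requested tenant's schema; fall back to the default one.
      Apartment::Tenant.switch(args[:tenant] || "public") do
        # Use the last *published* budget instead of Budget.last, which might
        # not be published yet.
        Budget.published.last&.email_selected
      end
    end
  end
end
```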
I must admit I don't know whether these tasks are useful, but I'm not removing them because I can't be sure that removing them wouldn't harm any CONSUL installations.
Until now, running `db:dev_seed` created development data for the default tenant, but it was impossible to create this data for other tenants.
Now the tenant can be provided as a parameter.
Note that, in order to be able to execute this task twice while running
the tests, we need to use `load` instead of `require_relative`, since
`require_relative` doesn't load the file again if it's already loaded.
Also note that having two optional parameters in a rake task results in
a cumbersome syntax to execute it. To avoid this, we're simply removing
the `print_log` parameter, which was used mainly for the test
environment. Now we use a different logic to get the same result.
From now on it won't be possible to pass the option to avoid the log in the development environment. I don't know of any developer who's ever used this option, though, and the same effect can always be achieved with `> /dev/null`.
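The resulting task looks roughly like this (a sketch; the exact parameter handling and seeds path may differ):

```ruby
namespace :db do
  desc "Creates development seeds, optionally for a given tenant"
  task :dev_seed, [:tenant] => :environment do |_, args|
    Apartment::Tenant.switch(args[:tenant] || "public") do
      # `load` (unlike require_relative) executes the file every time, which
      # lets us run this task more than once in the same process (e.g. tests).
      load Rails.root.join("db", "dev_seeds.rb")
    end
  end
end
```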
The subdomain elevator we were using, which is included in apartment, didn't work on hosts that already include a subdomain (like demo.consul.dev, for instance). In those cases, we had to manually add the subdomain to the list of excluded subdomains. Since these subdomains are different for different CONSUL installations, this meant each installation had to customize the code. Furthermore, existing installations using subdomains would stop working.
So we're using a custom method to find the current tenant, based on the
host defined in `default_url_options`.
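One way to implement that idea with apartment's generic elevator (a sketch; the class and method names are illustrative and the actual CONSUL resolution code may differ):

```ruby
# Resolve the tenant from the request host by comparing it against the host
# configured in default_url_options, so hosts that already contain a
# subdomain (like demo.consul.dev) keep working without an exclusion list.
class TenantElevator < Apartment::Elevators::Generic
  def parse_tenant_name(request)
    default_host = ActionMailer::Base.default_url_options[:host]

    if request.host == default_host
      nil # nil means "use the default tenant"
    else
      request.host.delete_suffix(".#{default_host}")
    end
  end
end
```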
In order to avoid any side-effects on single-tenant applications, we're
adding a new configuration option to enable multitenancy.
We're enabling two ways to handle this configuration option:
a) Change the application_custom.rb file, which is under version control
b) Change the secrets.yml file, which is not under version control
This way people preferring to handle configuration options through version control can do so, while people who prefer handling configuration options through the secrets.yml file can do so as well.
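For instance, enabling it through version control could look like this (assuming the application class is `Consul::Application` and that the option name is `multitenancy`); the secrets.yml alternative would be an equivalent `multitenancy: true` entry for the corresponding environment:

```ruby
# config/application_custom.rb
module Consul
  class Application < Rails::Application
    # Enable support for multiple tenants (option name assumed).
    config.multitenancy = true
  end
end
```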
We're also disabling the super-annoying warnings about there being no tenants, which we got every time we ran migrations on single-tenant applications. These messages will only be enabled when the multitenancy feature is enabled too. For this reason, we're also disabling the multitenancy feature in the development environment by default.
Note we're using the `:HOST` regular expression since subdomains can contain the same characters as domains do. This isn't 100% precise, though, since subdomains have a maximum length of 63 characters, but it's good enough for our purposes.
This way all tenants will be able to access them instead of just the
default one.
The apartment gem recommends using a rake task instead of a migration,
but that's a solution which is primarily meant for new installations.
Migrations are easier to execute on existing installations.
However, since this migration doesn't affect the `schema.rb` file, we
still need to make sure the shared schema is created in tasks which do
not execute migrations, like `db:schema:load` or `db:test:prepare`, just
like the apartment gem recommends. That's why we're enhancing these
tasks so they execute this migration.
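Conceptually, the enhancement is something like this (simplified; the real code runs the actual migration rather than a raw statement, and the `shared_extensions` name follows apartment's documentation):

```ruby
# After tasks that load the schema without running migrations, make sure the
# shared extensions schema exists so the tenants can find the extensions.
%w[db:schema:load db:test:prepare].each do |task_name|
  Rake::Task[task_name].enhance do
    ActiveRecord::Base.connection.execute("CREATE SCHEMA IF NOT EXISTS shared_extensions")
  end
end
```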
Note that there might be cases where the database user isn't a superuser (as is usually the case in production environments), meaning commands
to create, alter or drop extensions will fail. There's also the case
where users don't have permissions to create schemas, which is needed in
order to create the shared extensions schema and the schemas used by the
tenants. For these reasons, we're minimizing the number of commands, and
so we only alter or create extensions when it is really necessary.
When users don't have permission, we don't run the commands; instead, we show a warning with the steps needed to run the migration manually.
This is only necessary on installations which are going to use
multitenancy; single-tenant applications upgrading don't need to run
this migration, and that's why we aren't raising exceptions when we
can't run it.
For new installations, we'll change the CONSUL installer so extensions
are automatically created in the shared schema.
Also note the plpgsql extension is not handled here. This is a special extension which must be installed in the pg_catalog schema, which is always in the search path and so is shared by all tenants.
Finally, we also need to change the `database.yml` file so the shared extensions schema is included in the search path while running migrations or model tests, since otherwise the extensions we enable wouldn't be available there; we're also adding a rake task for existing installations. Quoting the
apartment documentation:
> your database.yml file must mimic what you've set for your default and
> persistent schemas in Apartment. When you run migrations with Rails,
> it won't know about the extensions schema because Apartment isn't
> injected into the default connection, it's done on a per-request
> basis.
They were added in RuboCop 1.24.0.
Even though we were already applying FileRead everywhere, this is something we've manually fixed in the past. Another reason to add it is that these rules are deeply related.
This rule was added in RuboCop 1.18.0, but we didn't add it back then.
Since we're applying it most of the time, we might as well be consistent
and apply it everywhere.
The scripts crashed when the `data` folder wasn't present, which is the
common situation in development environments or production environments
not using Capistrano, since this folder isn't under version control.
The `reload` method added to the max_votes validation is needed because the author reaches this point with unsaved changes caused by the `switch_locale` around_action, which modifies the current user record; the lock method raises an exception when trying to lock a record with unsaved changes, so we need to save or discard those changes first.
In case we receive consecutive requests, we lock the poll author record until the first request's transaction ends, so the author's answer count is up to date when subsequent requests run their validations.
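A simplified sketch of the pattern (the association names and the error message are assumptions, not the exact CONSUL validation):

```ruby
validate :max_votes

def max_votes
  return if author.blank?

  # Reload to discard the unsaved changes left by `switch_locale`; locking a
  # record with unsaved changes raises an error. The row lock is held until
  # the surrounding transaction ends, so concurrent requests count answers
  # against up-to-date data.
  author.reload.lock!

  if author.poll_answers.where(question: question).count >= question.max_votes
    errors.add(:answer, "You have already answered this question")
  end
end
```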
The Google response contains an `email_verified` field instead of a
`verified_email` field, and so we weren't treating verified Google
accounts as verified.
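An illustrative check (the accessor paths into the OmniAuth hash are assumptions, not the exact CONSUL code):

```ruby
# Treat the account as verified if either key is present, since Google
# reports `email_verified` while other providers use `verified_email`.
def oauth_email_verified?(auth)
  auth.info["verified_email"] || auth.extra.dig("raw_info", "email_verified")
end
```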
We were duplicating the asset host and the URL host in all environments,
but we can make it so the asset host uses the URL host unless we
specifically set it.
Note that, inside the `ApplicationMailer`, the `root_url` method already
uses `default_url_options` to generate the URL.
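In an environment file the idea is roughly the following (a sketch, not the literal change; it assumes `default_url_options` has already been set above):

```ruby
# Derive the asset host from the already-configured URL host instead of
# duplicating the value, unless an asset host was explicitly configured.
config.action_mailer.asset_host ||= "https://#{config.action_mailer.default_url_options[:host]}"
```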
In the rare case of CONSUL installations that have changed the asset host, this change has no effect, since they'll get a conflict in the environment files when upgrading and they'll choose to use their own asset host.
We've been using the `url` Setting for a long time, but since then we've added a few references to `root_url` to this file, so we're now making it consistent. We're also removing a now-unnecessary condition.
We were using `Setting["url"]` to verify the content belonged to the
application URL, but we can use `root_url` instead.
Note this means we need to include the port when filling in forms in the tests, since in tests URL helpers like `polymorphic_url` don't include the port, but a port is automatically added when actually making the request.
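The check itself becomes something like this (a sketch with an illustrative method name, placed somewhere with access to URL helpers):

```ruby
# Verify the given URL belongs to this application by comparing it with
# root_url instead of Setting["url"].
def app_url?(url)
  url.to_s.start_with?(root_url)
end
```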
This task was "temporarily" removed in commit 7b6619528. Since that was
done three and a half years ago, right after the dashboard was
introduced, I think it's time to make this "temporary" measure a bit
more permanent ;).
By using the Rails `button_to` helper (which generates a form) and adapting the response to `html` and `js` formats, the feature works whether or not JavaScript is enabled.
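The controller side follows the usual Rails pattern (the action, record and attribute names here are illustrative, not the actual CONSUL code):

```ruby
def toggle
  @record.update!(featured: !@record.featured)

  respond_to do |format|
    # Without JavaScript, the form submission falls back to a full redirect.
    format.html { redirect_back(fallback_location: root_path) }
    # With JavaScript, a .js.erb view updates the button in place.
    format.js
  end
end
```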