The subdomain elevator we were using, which is included in apartment,
didn't work on hosts already including a subdomain
(demo.consul.dev, for instance). In those cases, we had to manually add
the subdomain to the list of excluded subdomains. Since these subdomains
are different for each CONSUL installation, every installation had to
customize the code. Furthermore, existing installations using subdomains
would stop working.
So we're using a custom method to find the current tenant, based on the
host defined in `default_url_options`.
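Roughly, the idea could look like this as a custom Apartment elevator
(class and method names here are illustrative, assuming the default host
comes from the Action Mailer `default_url_options`; the actual CONSUL
code may differ):
```
class TenantByDefaultHost < Apartment::Elevators::Generic
  # Returns the tenant name for a request, or nil for the default tenant
  def parse_tenant_name(request)
    default_host = ActionMailer::Base.default_url_options[:host]
    return nil unless request.host.end_with?(".#{default_host}")

    # For example, "demo.consul.dev" with a default host of "consul.dev"
    # resolves to the "demo" tenant
    request.host.delete_suffix(".#{default_host}").presence
  end
end
```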
In order to avoid any side effects on single-tenant applications, we're
adding a new configuration option to enable multitenancy.
We're enabling two ways to handle this configuration option:
a) Change the application_custom.rb file, which is under version control
b) Change the secrets.yml file, which is not under version control
This way, people who prefer to handle configuration options through
version control can do so, while people who prefer handling them through
the secrets.yml file can do so as well.
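As a sketch, assuming the option is a boolean named `multitenancy` (the
actual name may differ), option a) could look like this, with option b)
setting the same key in `secrets.yml` instead:
```
# config/application_custom.rb (under version control)
module Consul
  class Application < Rails::Application
    # Hypothetical flag name; enables the multitenancy feature
    config.multitenancy = true
  end
end
```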
We're also disabling the super-annoying warnings about there being no
tenants, which we got every time we ran migrations on single-tenant
applications. These messages will now only appear when the multitenancy
feature is enabled. For this reason, we're also disabling the
multitenancy feature in the development environment by default.
Note we're using the `:HOST` regular expression since subdomains can
contain the same characters as domains do. This isn't 100% precise,
though, since subdomains have a maximum length of 63 characters, but it's
good enough for our purposes.
This way all tenants will be able to access them instead of just the
default one.
The apartment gem recommends using a rake task instead of a migration,
but that's a solution which is primarily meant for new installations.
Migrations are easier to execute on existing installations.
However, since this migration doesn't affect the `schema.rb` file, we
still need to make sure the shared schema is created in tasks which do
not execute migrations, like `db:schema:load` or `db:test:prepare`, just
like the apartment gem recommends. That's why we're enhancing these
tasks so they execute this migration.
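A hedged sketch of that enhancement (the real task runs the whole
migration rather than just creating the schema, and the names here are
illustrative):
```
# lib/tasks/multitenancy.rake
namespace :multitenancy do
  desc "Create the schema where shared extensions live"
  task create_shared_extensions_schema: :environment do
    ActiveRecord::Base.connection.execute(
      "CREATE SCHEMA IF NOT EXISTS shared_extensions"
    )
  end
end

# Run it before tasks that load the schema without running migrations
%w[db:schema:load db:test:prepare].each do |name|
  Rake::Task[name].enhance(["multitenancy:create_shared_extensions_schema"])
end
```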
Note that there might be cases where the database user isn't a superuser
(as is usually the case in production environments), meaning commands
to create, alter or drop extensions will fail. There's also the case
where users don't have permissions to create schemas, which is needed in
order to create the shared extensions schema and the schemas used by the
tenants. For these reasons, we're minimizing the number of commands, and
so we only alter or create extensions when it is really necessary.
When users don't have permission, we don't run the commands; instead, we
show a warning with the steps needed to run the migration manually.
This is only necessary on installations which are going to use
multitenancy; single-tenant applications upgrading don't need to run
this migration, and that's why we aren't raising exceptions when we
can't run it.
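A hedged sketch of that approach in the migration (extension names and
wording are illustrative; the real migration also checks whether each
change is actually needed before running it):
```
class CreateSharedExtensionsSchema < ActiveRecord::Migration[6.0]
  # Run outside a transaction so a failed statement doesn't abort the rest
  disable_ddl_transaction!

  def up
    execute "CREATE SCHEMA IF NOT EXISTS shared_extensions"
    execute "ALTER EXTENSION pg_trgm SET SCHEMA shared_extensions"
    execute "ALTER EXTENSION unaccent SET SCHEMA shared_extensions"
  rescue ActiveRecord::StatementInvalid => e
    warn "Could not set up the shared extensions schema (#{e.cause&.class}). " \
         "If you plan to enable multitenancy, ask a database superuser to " \
         "run the statements above manually."
  end

  def down
  end
end
```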
For new installations, we'll change the CONSUL installer so extensions
are automatically created in the shared schema.
Also note the plpgsql extension is not handled here. This is a special
extension which must be installed on the pg_catalog schema, which is
always in the search path and so is shared by all tenants.
Finally, we also need to change the `database.yml` file so shared
extensions are searched for while running migrations or model tests,
since otherwise none of the extensions we enable would be found during
migrations; we're also adding a rake task for existing installations.
Quoting the apartment documentation:
> your database.yml file must mimic what you've set for your default and
> persistent schemas in Apartment. When you run migrations with Rails,
> it won't know about the extensions schema because Apartment isn't
> injected into the default connection, it's done on a per-request
> basis.
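On the Apartment side, that configuration amounts to something like this
(a sketch; the schema name must match what `database.yml` lists in its
`schema_search_path`, e.g. "public,shared_extensions"):
```
# config/initializers/apartment.rb
Apartment.configure do |config|
  # Schemas that stay in the search path for every tenant
  config.persistent_schemas = ["shared_extensions"]
end
```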
They were added in RuboCop 1.24.0.
Even though we were already complying with FileRead everywhere, it's
something we've had to fix manually in the past. Another reason to add it
is that these rules are deeply related.
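Assuming the rules in question are Style/FileRead and Style/FileWrite
(both introduced in RuboCop 1.24.0), this is roughly what they enforce:
```
content = File.open("config/secrets.yml") { |file| file.read } # flagged by Style/FileRead
content = File.read("config/secrets.yml")                      # preferred

File.open("log/audit.log", "w") { |file| file.write(content) } # flagged by Style/FileWrite
File.write("log/audit.log", content)                           # preferred
```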
The scripts crashed when the `data` folder wasn't present, which is the
common situation in development environments or production environments
not using Capistrano, since this folder isn't under version control.
The `reload` method added to the max_votes validation is needed because
the author record gets here with unsaved changes caused by the
around_action `switch_locale`, which modifies the current user record;
the lock method raises an exception when trying to lock a record with
unsaved changes, requiring us to save or discard those changes first.
In case we receive consecutive requests, we lock the poll author record
until the first request's transaction ends, so the author's answer count
is up to date when subsequent requests run their validations.
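A hedged sketch of the resulting validation (model and association names
are illustrative, not the exact CONSUL code):
```
def validate_max_votes_per_author
  # reload discards the unsaved changes left on the author by the
  # switch_locale around_action; lock! raises on records with unsaved changes
  author.reload.with_lock do
    # Concurrent requests wait here until this transaction ends, so the
    # count of answers already given by this author is up to date
    if author.poll_answers.where(question: poll.questions).count >= max_votes
      errors.add(:base, :max_votes_reached)
    end
  end
end
```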
The Google response contains an `email_verified` field instead of a
`verified_email` field, and so we weren't treating verified Google
accounts as verified.
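A hedged sketch of the kind of check involved (the exact helper and the
list of fields in CONSUL may differ):
```
def verified_oauth_email?(auth)
  raw_info = auth.extra&.raw_info || {}
  # Google (OpenID Connect) exposes "email_verified"; other providers
  # expose "verified_email" or "verified" instead
  raw_info["email_verified"] || raw_info["verified_email"] || raw_info["verified"]
end
```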
In the dev seeds, we were using `Setting["url"]/proposals`, but we can
use `proposals_url` instead, similar to what we do in the rest of the
application.
We can do a similar thing in the sitemap. This way the sitemap will also
work on installations that haven't manually set the "url" setting.
Since we aren't using `Setting["url"]` anywhere anymore, we're removing
it.
This setting was mainly redundant, since we already had the
`server_name` in the secrets. Furthermore, `server_name` is
automatically configured when running the installer, while
`Setting["url"]` had to be manually set in the admin section after the
application was installed.
Note we're using the `ActionMailer::Base` settings to generate URLs. It
sounds a bit strange, but it's a standard way Rails provides to generate
URLs outside the context of a request.
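The pattern looks roughly like this (a sketch, not CONSUL's exact code):
```
class SitemapUrls
  include Rails.application.routes.url_helpers

  # Reuse the host Action Mailer is already configured with
  def default_url_options
    ActionMailer::Base.default_url_options
  end

  def proposals
    proposals_url # e.g. "https://demo.consul.dev/proposals"
  end
end
```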
Note that the `create` action doesn't create an image but updates an
answer instead. We're removing the references to `:create` in the
abilities since it isn't used.
In the future we might change the form to add an image to an answer,
since it's been broken for ages: it shows all the attached images.
Adding, modifying, and/or deleting questions for an already started
poll is far from democratic and can lead to unwanted side effects like
missing votes in the results or stats.
So, from now on, modifying questions will only be possible if the poll
has not started yet.
We need to update a couple of tests because they create a poll with a
timestamp that includes nanoseconds, while the form to edit the poll's
time doesn't send the nanoseconds, so the time was detected as a change.
We were already saving it as a time, but we didn't offer an interface to
select the time due to lack of decent browser support for this field
back when this feature was added.
However, nowadays all major browsers support this field type and, at the
time of writing, at least 86.5% of the browsers support it [1]. This
percentage could be much higher, since support in 11.25% of the browsers
is unknown.
Note we still need to support the case where this field type isn't
supported, so we offer a fallback and, on the server side, we don't
assume we're always getting a time. We're also doing a strange hack where
we set the field type to text before changing its value; otherwise old
Firefox browsers crashed.
Also note that, until now, we were storing end dates in the database as
a date with 00:00 as its time, but we were considering the poll to be
open until 23:59 that day. So, in order to keep backwards compatibility,
we're adding a task to update the dates of existing polls so we get the
same behavior we had until now.
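A hedged sketch of what that task could look like (the actual task name
and scope may differ):
```
namespace :polls do
  desc "Move poll end times from 00:00 to the end of that day"
  task set_ends_at_to_end_of_day: :environment do
    Poll.find_each do |poll|
      # Polls stored with a 00:00 end time were considered open until 23:59
      poll.update_column(:ends_at, poll.ends_at.end_of_day) if poll.ends_at
    end
  end
end
```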
This also means budget polls are now created so they end at the
beginning of the day when the balloting phase ends. This is consistent
with the dates we display in the budget phases table.
Finally, there's one test where we're using `beginning_of_minute` when
creating a poll. That's because Chrome provides an interface to enter a
time in a `%H:%M` format when the "seconds" value of the provided time
is zero. However, when the "seconds" value isn't zero, Chrome provides
an interface to enter a time in a `%H:%M:%S` format. Since Capybara
doesn't enter the seconds when using `fill_in` with a time, the test
failed when Capybara tried to enter a time in the `%H:%M` format when
Chrome expected a time in the `%H:%M:%S` format.
To solve this last point, an alternative would be to manually provide
the format when using `fill_in` so it includes the seconds.
[1] https://caniuse.com/mdn-html_elements_input_type_datetime-local
This was the only place where we used the `? <= field` format; we use
the `field >= ?` format everywhere else.
We're also adding a variable to make it easier to understand that the
value is the same in both cases and to make the line shorter ;).
The :search_booths and :search_officers actions in the admin polls
controller were moved to other controllers in commits 20e31133a
(:search_booths) and 19ec7f93b (:search_officers).
This then allows us to remove the code that references these actions in
the controller and in the administrator abilities.
It's now used by default to handle image variants. We were getting a
warning:
DEPRECATION WARNING: Generating image variants will require the
image_processing gem in Rails 6.1. Please add `gem 'image_processing',
'~> 1.2'` to your Gemfile.
Note `mini_magick` is required in order to use the `analyze` method [1].
Since we use it in our image (and site customization image) validations,
we're still keeping the explicit dependency in our Gemfile.
[1] https://guides.rubyonrails.org/v6.0/active_storage_overview.html#analyzing-files
All the code in the `bin/` and `config/` folders has been generated by
running `rake app:update`. The only exception is the code in
`config/application.rb`, where we've excluded the engines Rails 6.0
added, since we don't use them.
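In practice that means requiring the frameworks individually instead of
`rails/all`, roughly like this (the exact list CONSUL skips may differ):
```
# config/application.rb
require "rails"
# Require the frameworks we use individually instead of "rails/all"
require "active_model/railtie"
require "active_job/railtie"
require "active_record/railtie"
require "active_storage/engine"
require "action_controller/railtie"
require "action_mailer/railtie"
require "action_view/railtie"
require "sprockets/railtie"
require "rails/test_unit/railtie"
# Not required: engines added by Rails 6.0, such as "action_mailbox/engine"
# and "action_text/engine"
```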
There are a few changes in Active Storage which aren't compatible with
the code we were using until now.
Since the method to assign an attachment in ActiveStorage has changed
and is incompatible with the hack we used to allow assigning `nil`
attachments, and since ActiveStorage now supports assigning `nil`
attachments, we're removing the mentioned hack. This makes the
HasAttachment module redundant, so we're removing it.
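Assigning `nil` now simply marks the attachment for removal on save, so
no custom code is needed; something like this (model and attachment names
are illustrative) just works:
```
image.attachment = nil # previously needed the HasAttachment hack
image.save!            # the previous attachment is removed on save
```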
Another change in ActiveStorage is that files are no longer saved before
saving the `ActiveStorage::Attachment` record. This means we need to
manually upload the file when using direct uploads. We also have to
change the width and height validations we used for images; however,
doing so results in very complex code, and we currently have to write
that code for both images and site customization images.
So, for now, we're just uploading the file before checking its
dimensions. Not ideal, though. We might use active_storage_validations
in the future to fix this issue (when they support a proc/lambda, as
mentioned in commit 600f5c35e).
We also need to update a couple of tests due to a small change in
response headers. Now the content disposition returns something like:
```
attachment; filename="budget_investments.csv"; filename*=UTF-8''budget_investments.csv
```
So we're updating the regular expression we use to check the filename.
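For instance, an expectation along these lines matches both header
formats (a sketch; the actual specs may check this differently):
```
expect(response.headers["Content-Disposition"])
  .to match(/filename="budget_investments\.csv"/)
```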
Finally, Rails 6.0.1 changed the way the host is set in integration
tests [1] and so both `Capybara.app_host` and `Capybara.default_host`
were ignored when generating URLs in the relationable examples. The only
way I've found to make it work is to explicitly assign the host to the
integration session. Rails 6.1 will change this setup again, so maybe
then we can remove this hack.
[1] https://github.com/rails/rails/pull/36283/commits/fe00711e9
We introduced this bug in commit 55d339572, since we didn't take hidden
records into consideration.
I've tried to use `update_column` to simplify the code, but got a syntax
error `unnamed portal parameter` and didn't find how to fix it.
When we added this setting in commit 9979b5399, we disabled it by
default so it would be compatible with existing installations.
Since then, we've released version 1.4, which adds the settings to
existing databases. That means we can now enable it by default and
existing installations won't be affected.
The current CONSUL GraphQL API has two problems.
1) It uses some unnecessarily complicated magic to automatically create
the GraphQL types and queries using an `api.yml` file. This approach
is over-engineered, complex and has no benefits. It just makes the code
harder to understand for people who are not familiar with the
project (like me, lol).
2) It uses a deprecated DSL [1] that is soon going to be removed from
`graphql-ruby` completely. We are already seeing deprecation warnings
because of this (see References).
There was one problem: I wanted to create the API so that it is fully
backwards compatible with the old one, but the old one uses field names
directly derived from the Ruby code, which results in snake_case field
names - not the GraphQL convention. When using the graphql-ruby
class-based syntax, fields are automatically created in camelCase, which
breaks backwards compatibility.
So I've added deprecated snake_case field names to keep it
backwards-compatible.
[1] https://graphql-ruby.org/schema/class_based_api.html
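A hedged sketch of how a type can expose both spellings with the
class-based syntax (the field shown here is illustrative):
```
module Types
  class ProposalType < Types::BaseObject
    # camelCase field, following GraphQL conventions
    field :cached_votes_up, Integer, null: true
    # snake_case alias kept for backwards compatibility
    field :cached_votes_up, Integer, null: true, camelize: false,
          deprecation_reason: "Deprecated in favor of cachedVotesUp"
  end
end
```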
On the Configuration settings page, three settings appeared without a
description:
* Comments Summary: No description.
* Related Content: No description.
* Tags: No description.
These settings are related to the AI / Machine learning feature. They
should only appear on the AI / Machine learning settings page when the
feature is enabled.