Some tasks don't have to run on every tenant. The task to calculate the
TSV is only done for records which were present before we added the TSV
column, and no tenant can contain such records because we added the TSV
column before adding the tenants table. Similarly, the migration needed
for existing polls isn't necessary because there weren't any tenants
before we allowed setting the starting/ending times of polls.
We aren't adding any tests for these tasks because tests for rake tasks
are slow and tests creating tenants are also slow, making the
combination of the two even slower, particularly if we add tests for
every single task we're changing. We're adding tests for the
`.run_on_each` method instead.
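As a point of reference, here's a minimal sketch of what a
`run_on_each` helper might look like in an Apartment-based setup (the
schema list and method location are assumptions, not the exact CONSUL
implementation):

```
# Sketch: run a block once for the default schema and once per tenant schema
def self.run_on_each(&block)
  ["public", *Tenant.pluck(:schema)].each do |schema|
    Apartment::Tenant.switch(schema, &block)
  end
end
```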
The `budgets:email:selected` and `budgets:email:unselected` tasks are
supposed to be run manually because they only make sense at a specific
point during the life of a budget.
However, they would only run on the default tenant, and it was
impossible to run them on a different tenant.
So we're introducing an argument in the rake task accepting the name of
the tenant whose users we want to send emails to.
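The shape of the change is roughly this (the task body and the default
tenant name are assumptions):

```
namespace :budgets do
  namespace :email do
    desc "Sends emails to authors of selected investments"
    task :selected, [:tenant] => :environment do |_task, args|
      Apartment::Tenant.switch(args[:tenant] || "public") do
        # ... send the emails to users of this tenant ...
      end
    end
  end
end
```

It can then be invoked with, for example,
`rake "budgets:email:selected[mytenant]"`.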
We were using `Budget.last`, but the last budget might not be published
yet.
I must admit I don't know whether these tasks are useful, but I'm not
removing them because I can't be sure removing them wouldn't break any
CONSUL installations.
Until now, running `db:dev_seed` created development data for the
default tenant but it was impossible to create this data for other
tenants.
Now the tenant can be provided as a parameter.
Note that, in order to be able to execute this task twice while running
the tests, we need to use `load` instead of `require_relative`, since
`require_relative` doesn't load the file again if it's already loaded.
Also note that having two optional parameters in a rake task results in
a cumbersome syntax to execute it. To avoid this, we're simply removing
the `print_log` parameter, which was used mainly for the test
environment. Now we use a different logic to get the same result.
From now on it won't be possible to pass an option to suppress the log
in the development environment. I don't know of any developer who's
ever used this option, though, and the same effect can always be
achieved with `> /dev/null`.
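Putting it together, the resulting task looks roughly like this (the
file path and argument name are assumptions):

```
namespace :db do
  desc "Creates development data"
  task :dev_seed, [:tenant] => :environment do |_task, args|
    Apartment::Tenant.switch(args[:tenant] || "public") do
      # Unlike `require_relative`, `load` executes the file every time,
      # so the task can run more than once in the same process
      load Rails.root.join("db", "dev_seeds.rb")
    end
  end
end
```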
This way all tenants will be able to access these extensions instead of
just the default tenant.
The apartment gem recommends using a rake task instead of a migration,
but that's a solution which is primarily meant for new installations.
Migrations are easier to execute on existing installations.
However, since this migration doesn't affect the `schema.rb` file, we
still need to make sure the shared schema is created in tasks which do
not execute migrations, like `db:schema:load` or `db:test:prepare`, just
like the apartment gem recommends. That's why we're enhancing these
tasks so they execute this migration.
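The enhancement follows the pattern from the apartment README; a sketch
(the name of the task creating the shared schema is an assumption):

```
# Make tasks that load the schema without running migrations
# also set up the shared extensions schema
["db:schema:load", "db:test:prepare"].each do |task_name|
  Rake::Task[task_name].enhance do
    Rake::Task["db:load_shared_extensions"].invoke
  end
end
```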
Note that there might be cases where the database user isn't a superuser
(as is usually the case in production environments), meaning commands
to create, alter or drop extensions will fail. There's also the case
where users don't have permission to create schemas, which is needed in
order to create the shared extensions schema and the schemas used by the
tenants. For these reasons, we're minimizing the number of commands, so
we only alter or create extensions when it's really necessary.
When users don't have permission, we don't run the commands; instead, we
show a warning with the steps needed to run the migration manually.
This is only necessary on installations which are going to use
multitenancy; single-tenant applications upgrading don't need to run
this migration, and that's why we aren't raising exceptions when we
can't run it.
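The guarding logic looks roughly like this (the helper and its check are
hypothetical):

```
# Sketch: run privileged statements only when really needed, and warn
# instead of raising when the database user lacks permissions
def move_extension_to_shared_schema(extension)
  return if extension_in_shared_schema?(extension) # hypothetical check

  execute "ALTER EXTENSION #{extension} SET SCHEMA shared_extensions"
rescue ActiveRecord::StatementInvalid
  warn "Insufficient permissions. Run manually as a superuser:\n" \
       "ALTER EXTENSION #{extension} SET SCHEMA shared_extensions;"
end
```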
For new installations, we'll change the CONSUL installer so extensions
are automatically created in the shared schema.
Also note the plpgsql extension is not handled here. This is a special
extension which must be installed in the `pg_catalog` schema, which is
always in the search path and so is shared by all tenants.
Finally, we also need to change the `database.yml` file so the shared
extensions schema is in the search path while running migrations or
model tests, since Apartment doesn't set the search path in those
contexts; we're also adding a rake task for existing installations.
Quoting the apartment documentation:
> your database.yml file must mimic what you've set for your default and
> persistent schemas in Apartment. When you run migrations with Rails,
> it won't know about the extensions schema because Apartment isn't
> injected into the default connection, it's done on a per-request
> basis.
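In practice, that means adding the shared schema to the search path in
`database.yml`, along these lines:

```
development:
  adapter: postgresql
  schema_search_path: "public,shared_extensions"
```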
This task was added in commit 2b9b78e38. According to the notes in the
pull request introducing the change:
> If you have had [map validations] in production, you might want to run
> the rake task votes:reset_vote_counter to reset the cached votes
> counter. Otherwise, until there is another vote for a proposal, the
> counter will not be updated, and there might be some votes which are
> not yet displayed
In short, this was a task that was introduced for a specific release and
had to be run manually. So we can remove it now.
This task was "temporarily" removed in commit 7b6619528. Since that was
done three and a half years ago, right after the dashboard was
introduced, I think it's time to make this "temporary" measure a bit
more permanent ;).
We were already saving it as a time, but we didn't offer an interface to
select the time due to lack of decent browser support for this field
back when this feature was added.
However, nowadays all major browsers support this field type and, at the
time of writing, at least 86.5% of the browsers support it [1]. This
percentage could be much higher, since support in 11.25% of the browsers
is unknown.
Note we still need to handle browsers where this field type isn't
supported, so we offer a fallback, and on the server side we don't
assume we're always getting a time. We're also using a strange hack
where we set the field type to text before changing its value;
otherwise, old Firefox versions crashed.
Also note that, until now, we were storing end dates in the database as
a date with 00:00 as its time, but we were considering the poll to be
open until 23:59 that day. So, in order to keep backwards compatibility,
we're adding a task to update the dates of existing polls so we get the
same behavior we had until now.
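A sketch of that update (the scope is an assumption; `ends_at` is the
existing column):

```
# Polls stored with a 00:00 end time were treated as open until 23:59,
# so move their end time to the end of that same day
Poll.where("ends_at::time = '00:00'").find_each do |poll|
  poll.update_column(:ends_at, poll.ends_at.end_of_day)
end
```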
This also means budget polls are now created so they end at the
beginning of the day when the balloting phase ends. This is consistent
with the dates we display in the budget phases table.
Finally, there's one test where we're using `beginning_of_minute` when
creating a poll. That's because Chrome provides an interface to enter a
time in a `%H:%M` format when the "seconds" value of the provided time
is zero. However, when the "seconds" value isn't zero, Chrome provides
an interface to enter a time in a `%H:%M:%S` format. Since Capybara
doesn't enter the seconds when using `fill_in` with a time, the test
failed when Capybara tried to enter a time in the `%H:%M` format when
Chrome expected a time in the `%H:%M:%S` format.
To solve this last point, an alternative would be to manually provide
the format when using `fill_in` so it includes the seconds.
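For instance, something along these lines (the field label is an
assumption):

```
# Keeping seconds at zero makes Chrome render the %H:%M interface
create(:poll, ends_at: 1.week.from_now.beginning_of_minute)

# The alternative: format the value explicitly so it always includes seconds
fill_in "Ends at", with: 1.week.from_now.strftime("%d/%m/%Y %H:%M:%S")
```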
[1] https://caniuse.com/mdn-html_elements_input_type_datetime-local
This fixes a few issues we've had for years.
First, when attaching an image and then sending a form with validation
errors, the image preview would not be rendered when the form was
displayed once again. Now it's rendered as expected.
Second, when attaching an image, removing it, and attaching a new
one, browsers were displaying the image preview of the first one. That's
because Paperclip generated the same URL from both files (as they both
had the same hash data and prefix). Browsers usually cache images and
render the cached image when getting the same URL.
Since now we're storing each image in a different Blob, the images have
different URLs and so the preview of the second one is correctly
displayed.
Finally, when users downloaded a document, they were getting files with
a very long hexadecimal hash as filename. Now they get the original
filename.
In commit 5a4921a1a we replaced `URI.parse` with `URI.open` due to some
issues during our tests with S3.
However, there are some security issues with `URI.open` [1], since it
might allow some users to execute code on the server.
So we're calling `open` on the result of `URI.parse` instead.
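The change is essentially this (the variable name is hypothetical):

```
file = URI.open(remote_url)        # flagged by RuboCop's Security/Open cop
file = URI.parse(remote_url).open  # parse first, then open the URI object
```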
[1] https://docs.rubocop.org/rubocop/cops_security.html#securityopen
This column wasn't used in any released CONSUL version, since it was
only used during development. For the same reason, the task to migrate
the information in the `link` column to the `links` table isn't needed
either.
It was never used and its data is based on Madrid's requirements for
successful proposals. It was written during the development of the
dashboard, probably for testing purposes.
There could be inconsistencies in the database, and an attachment might
have a `record_id` pointing to a record which no longer exists. We were
getting an exception in this case.
We were getting a duplicate prepared statement error when trying to
execute more than one test using this task. So we're checking whether
the prepared statement exists before defining it.
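PostgreSQL exposes the statements prepared in the current session
through the `pg_prepared_statements` view, so the check looks roughly
like this (statement name and query are hypothetical):

```
name = "fetch_blob"
already_prepared = ActiveRecord::Base.connection.select_value(
  "SELECT COUNT(*) FROM pg_prepared_statements WHERE name = '#{name}'"
).to_i.positive?

unless already_prepared
  ActiveRecord::Base.connection.execute(
    "PREPARE #{name}(bigint) AS SELECT * FROM active_storage_blobs WHERE id = $1"
  )
end
```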
Just like we do for new records, we add the `storage_` prefix so we can
use both Active Storage and Paperclip at the same time.
Now the migration actually works, at least for basic cases.
This model is added by the devise-security gem, but we don't use it. We
were getting an error while migrating:
```
ActiveRecord::StatementInvalid: PG::UndefinedTable: ERROR: relation "old_passwords" does not exist
LINE 8: WHERE a.attrelid = '"old_passwords"'::regclas...
                           ^
SELECT a.attname, format_type(a.atttypid, a.atttypmod),
       pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod,
       c.collname, col_description(a.attrelid, a.attnum) AS comment
FROM pg_attribute a
LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
LEFT JOIN pg_type t ON a.atttypid = t.oid
LEFT JOIN pg_collation c ON a.attcollation = c.oid AND a.attcollation <> t.typcollation
WHERE a.attrelid = '"old_passwords"'::regclass
AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum

lib/tasks/active_storage.rake:27:in `block (4 levels) in <top (required)>'
lib/tasks/active_storage.rake:26:in `each'
lib/tasks/active_storage.rake:26:in `block (3 levels) in <top (required)>'
lib/tasks/active_storage.rake:25:in `block (2 levels) in <top (required)>'

Caused by:
PG::UndefinedTable: ERROR: relation "old_passwords" does not exist
```
We can use the `ActiveStorage::Blob` class to find where the file is
supposed to be stored instead of manually setting the path. This is also
more robust because it works with Active Storage configurations which
don't store files in the default folder.
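With the disk service, for example, the location can be obtained like
this (the `key` variable is an assumption):

```
blob = ActiveStorage::Blob.find_by!(key: key)
# The disk service knows where it stores files, even with a custom root
path = ActiveStorage::Blob.service.path_for(blob.key)
```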
We're not adding the rule because it would enforce the current line
length limit of 110 characters per line. We still haven't decided
whether we'll keep that limit or make lines shorter so they're easier to
read, particularly when vertically splitting the editor window.
So, for now, I'm applying the rule to lines which are about 90
characters long.
Since version 2.0 introduced many breaking changes, we're upgrading to
it first.
The changes were made by installing the rubocop-faker gem and running:
```
rubocop \
  --require rubocop-faker \
  --only Faker/DeprecatedArguments \
  --auto-correct
```
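As an example of what this cop corrects:

```
Faker::Lorem.sentence(3)              # positional argument, deprecated in Faker 2.0
Faker::Lorem.sentence(word_count: 3)  # keyword argument, as auto-corrected
```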
Our previous system to delete cached attachments didn't work for
documents because the `custom_hash_data` differs between files created
from an original upload and files created from cached attachments.
When creating a document attachment, the name of the file is taken into
account to calculate the hash. Let's say the original file name is
"logo.pdf", and the generated hash is "123456". The cached attachment
will be "123456.pdf", so the generated hash using the cached attachment
will be something different, like "28af3". So the file that will be
removed will be "28af3.pdf", and not "123456.pdf", which will still be
present.
Furthermore, there are times when users choose a file and then close
the browser or go to a different page. In those cases, we weren't
deleting the cached attachments either.
So we're adding a rake task to delete these files once a day. This way
we can simplify the logic we were using to destroy cached attachments.
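A sketch of such a task (the directory and age threshold are
assumptions):

```
namespace :attachments do
  desc "Removes cached attachment files older than one day"
  task clean_cache: :environment do
    cache_dir = Rails.root.join("public", "system", "cached_attachments")

    Dir.glob(cache_dir.join("*").to_s).each do |file|
      File.delete(file) if File.mtime(file) < 24.hours.ago
    end
  end
end
```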
Note there's a related bug in documents: when editing a record (for
example, a proposal), if the title of the document changes, its hash
changes, and so it becomes impossible to generate a link to that
document. Changing the way this hash is generated is not an option
because it would break links to existing files. We'll try to fix it when
moving to Active Storage.
People who have already upgraded to version 1.3.0 don't need to execute
them again.
We're not deleting the tasks yet in case some people would like to
upgrade from version 1.2.0 to version 1.3.1. In this case, they'll have
to execute the tasks manually.
Note we're making the validation rule dynamic so it's affected by the
way we stub the constant in the tests to emulate data created in old
applications.
Co-Authored-By: Javi Martín <javim@elretirao.net>
These tasks are not needed for new installations, and in existing
installations they've already been executed when upgrading to version
1.1.
One of them also raises a warning in Rails 5.2:
```
DEPRECATION WARNING: Dangerous query method (method whose arguments are
used as raw SQL) called with non-attribute argument(s): "MIN(id) as id".
Non-attribute arguments will be disallowed in Rails 6.0. This method
should not be called with user-provided values, such as request
parameters or model attributes. Known-safe values can be passed by
wrapping them in Arel.sql()
```
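For reference, the warning refers to the following pattern, where
`Model` is a placeholder:

```
# Wrapping the literal in Arel.sql marks it as known-safe
Model.select(Arel.sql("MIN(id) as id"))
```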
While this is not a secret and in theory should live in a file under
version control, the CONSUL installer currently disables delayed jobs by
default. This meant we were keeping two versions of the delayed jobs
configuration file, and some existing configurations have their settings
defined in a file in Capistrano's `shared` folder.
So we're moving existing settings to the secrets file.
Existing installations with their configuration settings in Capistrano's
`shared` folder need this migration.
Note we can't just use `YAML.load` because we'd lose the anchors defined
in the file. So we have to parse the file the hard way.
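One way to do this "hard way" parsing is through Psych's low-level API,
which keeps anchors and aliases in the node tree (this sketch and the
file path are assumptions):

```
require "psych"

# Parse to an AST instead of plain Ruby objects; anchors survive here
tree = Psych.parse_file("config/secrets.yml")

# ... move the settings around in the node tree ...

# Emitting the tree writes the anchors back out
File.write("config/secrets.yml", tree.to_yaml)
```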