Note that, while we're no longer including them as part of the
`execute_release_2.2.0_tasks` task, we're keeping the tasks to remove
duplicate poll voters and poll options just in case there are some
unexpected issues when adding a unique database index while upgrading to
version 2.3.0. We'll remove them in version 2.4.0.
There are many possible ways to implement this feature:
* Adding a custom middleware
* Using rack-attack with a blocklist
* Using route constraints
We're choosing to use a controller concern with a redirect because that's
how we already handle unauthorized CanCanCan exceptions.
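A minimal sketch of what such a concern could look like; the module name,
the check, and the redirect target are illustrative assumptions, not the
actual code:
```
# Hypothetical concern that redirects requests we don't want to serve,
# mirroring how unauthorized CanCanCan exceptions are handled.
module BlocksRestrictedAccess
  extend ActiveSupport::Concern

  included do
    before_action :redirect_restricted_access
  end

  private

    # `restricted_access?` is a placeholder for the actual condition.
    def redirect_restricted_access
      redirect_to root_path if restricted_access?
    end
end
```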
Back in commit c984e666f, we reorganized the code related to the GraphQL
API, but we didn't reorganize the tests.
So we're doing it now, since we're going to fix a potential issue and
add some tests for it.
These tests were always passing because they were stubbing the response
of the same method they were testing. For example, we were testing the
result of `Comment.public_for_api` and stubbing it at the same time.
So we're now stubbing the result of the associations; for example, in
order to test `Comment.public_for_api`, we're stubbing the response of
`Debate.public_for_api`. Now the tests fail if, for instance, the
implementation of `Comment.public_for_api` returns all comments.
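As an illustration, a hedged sketch of the new approach; the factories and
spec structure are assumptions, not a copy of the real tests:
```
describe "Comment.public_for_api" do
  let(:debate) { create(:debate) }
  let!(:comment) { create(:comment, commentable: debate) }

  it "only returns comments on content that is public for the API" do
    # Stub the association's scope instead of the method under test, so the
    # expectation fails if the implementation returns all comments.
    allow(Debate).to receive(:public_for_api).and_return(Debate.where(id: debate))

    expect(Comment.public_for_api).to match_array [comment]
  end
end
```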
- added 2 new types
- modified the models to get data through GraphQL
- modified the corresponding spec
- also testing that hidden comments do not show up
- modified comments specs because now it returns comments on budget
  investments
Note: to avoid confusion, "answer" will mean a row in the poll_answers
table and "choice" will mean whatever is in the "answer" column of that
table (I'm applying the same convention in the code of the task).
In order to make this task perform reasonably on installations with
millions of votes, we're using `update_all` to update all the answers
with the same choice at once. In order to do that, we first need to
check the existing choices and the possible option_ids for those
choices.
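A hedged sketch of the approach; the association and column names are
assumptions:
```
# For every question, map each existing choice (the text stored in the
# "answer" column) to its option, then update all matching rows at once.
Poll::Question.find_each do |question|
  options_by_title = question.question_options.index_by(&:title)

  question.answers.distinct.pluck(:answer).each do |choice|
    option = options_by_title[choice]

    question.answers.where(answer: choice).update_all(option_id: option.id) if option
  end
end
```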
Note that, in order for this task to work, we need to remove the
duplicate answers first. Otherwise, we will run into a RecordNotUnique
exception when trying to add the same option_id to two duplicate
answers.
So we're making this task depend on the one that removes duplicate
answers. That means we no longer need to specify the task to remove
duplicate answers in the release tasks; it will automatically be
executed when running the task to add an option_id.
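The dependency could be expressed roughly like this; the task names are
illustrative, not the actual ones in the repository:
```
namespace :polls do
  desc "Assign an option_id to existing poll answers"
  task add_option_id_to_answers: [:environment, "polls:remove_duplicate_answers"] do
    # ... update answers grouped by choice, as sketched above ...
  end
end
```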
Note that, since poll answers belong to a user and not to a voter, we
aren't doing anything regarding poll answers. This is a separate topic
that might be dealt with in a separate pull request.
Also note that, since there are no records belonging to poll voters, and
poll voters don't use `acts_as_paranoia` and don't have any callbacks on
destroy, it doesn't really matter whether we call `destroy!` or
`delete`. We're using `delete` so there are no unintended side-effects
that might affect voters with the same `user_id` and `poll_id` on
Consul Democracy installations customizing this behavior.
We use caching in our application in two different ways:
1. Rails.cache.fetch in models/controllers/libraries
2. Fragment caching in the views
When using Rails.cache.fetch, we usually set an expiration date or
provide a precise way to invalidate it. If the code changes and the
information stored in the cache is different from what the new code
would return, it's usually not a big deal because the cache will expire
in an hour or a day. Until commit a4461a1a5, the statistics were an
exception to this rule, but that's no longer the case. The actual
exception to this rule is the i18n translations, but the code caching
them is very simple and hasn't changed in more than three years (since
it was first written).
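For reference, a typical `Rails.cache.fetch` call with an expiration; the
key and the block here are just illustrative:
```
Rails.cache.fetch("budgets/#{budget.id}/stats", expires_in: 1.day) do
  calculate_stats(budget) # expensive calculation; stale for at most a day
end
```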
When using fragment caching, on the other hand, Rails automatically
invalidates the cache when the associated _view code_ changes. That is,
if a view contains cache, the view renders a partial, and the partial
changes, the cache is correctly invalidated. The cache isn't invalidated
when the code in helpers, models or controllers changes, though, which
the Rails team considers a compromise solution.
However, we've been moving view partials (and even views) to components,
and the cache isn't invalidated when a component changes (it doesn't
matter whether the component Ruby file or the component ERB file
changes). That means that the cache will keep rendering the HTML
generated by the old code.
So, now, we're clearing the cache when upgrading to a new version of
Consul Democracy, as part of the release tasks. That way, institutions
upgrading to a new version don't have to worry about possible issues
with the cache due to the new code not being executed.
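A sketch of what such a release task could look like; the namespace and
task name are assumptions:
```
namespace :releases do
  desc "Clear the cache so fragments generated by old code aren't reused"
  task clear_cache: :environment do
    Rails.cache.clear
  end
end
```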
I was thinking of adding it to a Capistrano task, but that would mean
that every time people deploy new code, even if it's a hotfix that
doesn't affect the cache at all, the cache would be cleared, which could
be inconvenient. Doing it in a release, which usually changes thousands
of lines of code, sounds like a good compromise.
Note that, for everything to work consistently, we need to make sure
that the default locale is one of the available locales.
Also note that we aren't overwriting the `#save` method set by
globalize. I didn't feel too comfortable changing a monkey-patch which
ideally shouldn't be there in the first place, and I haven't found a
case where `Globalize.locale` is `nil` (since it defaults to
`I18n.locale`, which should never be `nil`), so using
`I18n.default_locale` probably doesn't affect us.
We were defining one builder in the `app/lib/` folder and another one
inside a helper module.
So now we're grouping them together. This way we follow the "one class
per file" convention we apply most of the time. And, by
extracting the `TranslatableFormBuilder` class to its own file, it'll be
easier to add tests for it.
Note that, for consistency, we're renaming the
`TranslationsFieldsBuilder` class so it ends in `FormBuilder`.
Back in commit 383909e16, we said:
> Even if this class looks very simple now, we're trying a few things
> related to these stats. Having a class for it makes future changes
> easier and, if there weren't any future changes, at least it makes
> current experiments easier.
Since there haven't been any changes in the last 5 years and we've found
cases where using the GeozoneStats class results in slightly worse
performance, we're removing this class. The code is now a bit easier to
read, and is consistent with the way we calculate participants by age.
This reverts commit 383909e16.
According to the README [1]:
> To mask previously collected IPs, use:
> Ahoy::Visit.find_each do |visit|
> visit.update_column :ip, Ahoy.mask_ip(visit.ip)
> end
We're adapting that code to our setup, since we use the `Visit` model
instead of the `Ahoy::Visit` model.
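The adapted code is essentially the README snippet with the model swapped:
```
# Mask previously collected IPs using our Visit model.
Visit.find_each do |visit|
  visit.update_column :ip, Ahoy.mask_ip(visit.ip)
end
```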
[1] https://github.com/ankane/ahoy/blob/v5.0.2/README.md#ip-masking
We moved this file to `app/lib/` in commit cb477149c so it would be in
the autoload_paths. However, this class is loaded by ActiveStorage, with
the following method:
```
def resolve(class_name)
require "active_storage/service/#{class_name.to_s.underscore}_service"
ActiveStorage::Service.const_get(:"#{class_name.camelize}Service")
rescue LoadError
raise "Missing service adapter for #{class_name.inspect}"
end
```
So this file needs to be in the $LOAD_PATH, or else ActiveStorage won't
be able to load it when we disable the `add_autoload_paths_to_load_path`
option, which is the default in Rails 7.1 [1].
Moving it to the `lib` folder solves the issue; as mentioned in the
guide to upgrade to Rails 7.1 [2]:
> The lib directory is not affected by this flag, it is added to
> $LOAD_PATH always.
However, we were also referencing this class in the `Tenant` model,
which would have required loading it somehow. So, instead of directly
referencing the class, we're using `respond_to?` in the Tenant model.
We're changing the test so it fails when the code calls
`is_a?(ActiveStorage::Service::TenantDiskService)`. We need to change
the active storage configurations in the test because, otherwise, the
moment `ActiveStorage::Blob` is loaded, the `TenantDiskService` class is
also loaded, meaning the test will pass when using `is_a?`.
Note that, since this class isn't in the autoload paths anymore, we need
to add a `require` in the tests. We could add an initializer to require
it; we're not doing it in order to be consistent with what ActiveStorage
does: it only loads the service that's going to be used in the current
Rails environment. If somebody changed their production environment to
use, for example, S3, and we added an initializer to require the
TenantDiskService, we would still load the TenantDiskService even
though it isn't going to be used.
[1] https://guides.rubyonrails.org/v7.1/configuring.html#config-add-autoload-paths-to-load-path
[2] https://guides.rubyonrails.org/v7.1/upgrading_ruby_on_rails.html#autoloaded-paths-are-no-longer-in-$load-path
This way we simplify the code a little bit and we create a method unique
to the `TenantDiskService` class, which can be used to check whether
we're using this class without using `is_a?` or similar.
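A hedged sketch of the pattern, assuming the service builds on the stock
DiskService; the predicate name is an assumption:
```
require "active_storage/service/disk_service"

class ActiveStorage::Service::TenantDiskService < ActiveStorage::Service::DiskService
  # A method only this service responds to, so callers can detect it with
  # respond_to? instead of referencing the class constant.
  def tenant_aware?
    true
  end
end

# For example, in the Tenant model:
# ActiveStorage::Blob.service.respond_to?(:tenant_aware?)
```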
We used to open these links in new tabs, but accidentally stopped doing
so in commit 75a28fafc.
While, in general, automatically opening a link in a new tab/window is a
bad idea, the exception comes when people are filling in a form and
there are links to pages containing information that will help them
fill it in.
There are two main advantages to this approach. First, it makes it less
likely that people will accidentally lose the information they were
filling in. And, second, having both the form and a help page open at
the same time can make it easier to fill in the form.
However, opening these links in new tabs also has disadvantages, like
taking control away from people or making it harder to navigate through
pages when using a mobile phone.
So this is a compromise solution.
This rule was added in rubocop-rspec 2.9.0.
We were using `be_nil` 50% of the time, and `be nil` the rest of the
time. No strong preference for either one, but IMHO we don't lose
anything by being consistent.
Note we're excluding a few files:
* Configuration files that weren't generated by us
* Migration files that weren't generated by us
* The Gemfile, since it includes an important comment that must be on
the same line as the gem declaration
* The Budget::Stats class, since the heading statistics are a mess and
having shorter lines would require a lot of refactoring
For the HashAlignment rule, we're using the default `key` style (keys
are aligned and values aren't) instead of the `table` style (both keys
and values are aligned) because, even if we used both in the
application, we used the `key` style a lot more. Furthermore, the
`table` style looks strange in places with both very long and very
short keys, and sometimes we weren't even consistent with it, aligning
some keys while leaving others unaligned.
Ideally we could align hashes to "either key or table", so developers
could decide on a case-by-case basis whether keeping the symmetry of
the code is worth it, but Rubocop doesn't offer this option.
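For reference, the two styles look like this:
```
# `key` style: keys aligned, each value right after its key.
key_style = {
  id: 1,
  description: "short and long keys mixed"
}

# `table` style: keys and values aligned in columns.
table_style = {
  id:          1,
  description: "short and long keys mixed"
}
```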
We were already applying these rules in most cases.
Note we aren't enabling the `MultilineArrayLineBreaks` rule because
we've got places with many elements where it isn't clear whether
having one element per line would make the code more readable.
Since IRB has improved its support for multiline input, the main
argument towards using a trailing dot no longer affects most people.
It still affects me, though, since I use Pry :), but I agree
leading dots are more readable, so I'm enabling the rule anyway.
* Add Tables option to Redcarpet in Legislation draft
* Allow table tags in Admin Legislation Sanitizer
* Add Test to render markdown tables in Legislation drafts
* Add Test for Admin Legislation Sanitizer
We include tests for image, table and h1 to h6 tags, and additional tests to strengthen the allowed and disallowed parameters
* Add Table from markdown test in System and Factories
* Add test to render tables for admin user
* Remove comment line about Redcarpet options
* Edit custom css for legislation draft table to make it responsive
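The tables option mentioned in the first bullet might look roughly like
this sketch; the renderer setup is illustrative, not the exact code in
this change:
```
renderer = Redcarpet::Render::HTML.new
markdown = Redcarpet::Markdown.new(renderer, tables: true)

markdown.render("| Phase | Status |\n|-------|--------|\n| Draft | Open   |")
# => "<table>...</table>" (the sanitizer then needs to allow table tags)
```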
We detected that, for the :pt-BR, :zh-CN and :zh-TW locales, the
translate button was being displayed, but when requesting the
translation, the remote translation validation failed due to:
```
validates :locale, inclusion: { in: ->(_) {
  RemoteTranslations::Microsoft::AvailableLocales.available_locales }}
```
That available_locales method did not include these three locales in
the format used by the application.
To solve this problem, the API response is mapped to return all
locales in the format expected by the application.
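A heavily hedged sketch of the idea; the exact codes returned by the API,
the mapping, and the method names are assumptions, not the actual code:
```
# Map the language codes returned by the Microsoft API to the locale
# format the application uses, so locales like pt-BR, zh-CN and zh-TW
# pass the inclusion validation.
LOCALE_MAP = { "pt" => "pt-BR", "zh-Hans" => "zh-CN", "zh-Hant" => "zh-TW" }.freeze

def available_locales
  # remote_api_locales is a placeholder for the call fetching the codes
  # from the API.
  remote_api_locales.map { |code| LOCALE_MAP.fetch(code, code) }
end
```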
Add remote translation model test to ensure that a remote translation
is valid when its locale is pt-BR.
Co-Authored-By: Javi Martín <35156+javierm@users.noreply.github.com>
TranslatorText isn't compatible with Ruby 3, so we need to use a
different gem.
The 'translator-text' gem returned an array of one or more objects.
With the 'bing_translator' gem, the response is an array of one or
more translated texts.
We remove the concept of a response object from the code. We also
remove the "create_response" method from the specs, since it's no
longer necessary to emulate that object; we can simply use arrays of
texts to emulate the new response.
We've noticed the following warning while testing the upgrade to
Ruby 3.0:
warning: File.exists? is deprecated; use File.exist? instead
We're adding a Rubocop rule so we don't call the deprecated method
in the future.
On my machine, seeding a tenant takes about one second, so skipping this
action when it isn't necessary makes tests creating tenants faster
(although creating a tenant still takes about 3-4 seconds on my
machine).
We had some of the logic in the ApplicationMailer. Since we're going to
use this logic in more places, we're moving it to the Tenant model,
which is the model handling everything related to hosts.
Note that the `sitemap:refresh` task only pings search engines at the
end, so it only does so for the `Sitemap.default_host` defined last. So
we're using the `sitemap:refresh:no_ping` task instead and pinging
search engines after creating each sitemap.
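A simplified sketch of the approach; the tenant iteration and the
url_options helper are assumptions:
```
Tenant.pluck(:schema).each do |schema|
  Tenant.switch(schema) do
    # Hypothetical helper returning the host for the current tenant.
    SitemapGenerator::Sitemap.default_host = Tenant.current_url_options[:host]

    # Generate the sitemap without pinging, then ping once it exists.
    Rake::Task["sitemap:refresh:no_ping"].execute
    SitemapGenerator::Sitemap.ping_search_engines
  end
end
```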
Note we're pinging search engines in staging and preproduction
environments. I'm leaving it that way because that's what we've done
until now, but I wonder whether we should only do so on production.
Since we're creating a new method to get the current url_options, we're
also using it in the dev_seeds.
Testing that the sitemap is valid (which the following test does) also
checks that the sitemap has been generated. The test will fail with
different errors depending on whether no file was generated or the
generated file is invalid.
The `budgets:email:selected` and `budgets:email:unselected` tasks are
supposed to be run manually because they only make sense at a specific
point during the life of a budget.
However, they would only run on the default tenant, and it was
impossible to run them on a different tenant.
So we're introducing an argument in the rake task accepting the name of
the tenant whose users we want to send emails to.
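A hedged sketch of the task signature; the task body and the "public"
default are assumptions:
```
namespace :budgets do
  namespace :email do
    desc "Send emails about selected investments, optionally for a tenant"
    task :selected, [:tenant] => :environment do |_, args|
      # "public" is Apartment's default schema name.
      Tenant.switch(args[:tenant] || "public") do
        # ... notify authors of selected investments in the last published budget ...
      end
    end
  end
end
```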
We were using `Budget.last`, but the last budget might not be published
yet.
I must admit I don't know whether these tasks are useful, but I'm not
removing them because I can't be sure that doing so wouldn't harm any
CONSUL installations.
Until now, running `db:dev_seed` created development data for the
default tenant but it was impossible to create this data for other
tenants.
Now the tenant can be provided as a parameter.
Note that, in order to be able to execute this task twice while running
the tests, we need to use `load` instead of `require_relative`, since
`require_relative` doesn't load the file again if it's already loaded.
Also note that having two optional parameters in a rake task results in
a cumbersome syntax to execute it. To avoid this, we're simply removing
the `print_log` parameter, which was used mainly for the test
environment. Now we use a different logic to get the same result.
From now on it won't be possible to pass an option to avoid the log in
the development environment. I don't know of any developer who's ever
used this option, though, and the same result can always be achieved
with `> /dev/null`.
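A hedged sketch of the resulting task; the seeds path and the "public"
default are assumptions:
```
namespace :db do
  desc "Create development seeds, optionally for a given tenant"
  task :dev_seed, [:tenant] => :environment do |_, args|
    Tenant.switch(args[:tenant] || "public") do
      # `load` (unlike require_relative) runs the file again even if it was
      # already loaded, which lets the task run twice in the test suite.
      load Rails.root.join("db", "dev_seeds.rb")
    end
  end
end
```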
This task was "temporarily" removed in commit 7b6619528. Since that was
done three and a half years ago, right after the dashboard was
introduced, I think it's time to make this "temporary" measure a bit
more permanent ;).
We were already saving it as a time, but we didn't offer an interface to
select the time due to lack of decent browser support for this field
back when this feature was added.
However, nowadays all major browsers support this field type and, at the
time of writing, at least 86.5% of the browsers support it [1]. This
percentage could be much higher, since support in 11.25% of the browsers
is unknown.
Note we still need to handle the case where this field type isn't
supported, so we offer a fallback, and on the server side we don't
assume we're always getting a time. We're also doing a strange hack
where we set the field type to text before changing its value;
otherwise, old Firefox browsers crashed.
Also note that, until now, we were storing end dates in the database as
a date with 00:00 as its time, but we were considering the poll to be
open until 23:59 that day. So, in order to keep backwards compatibility,
we're adding a task to update the dates of existing polls so we get the
same behavior we had until now.
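A hedged sketch of what that task does; the column name is an assumption:
```
# Move end dates stored as 00:00 to the end of that day, so existing polls
# remain open exactly as long as they did before this change.
Poll.find_each do |poll|
  poll.update_column(:ends_at, poll.ends_at.end_of_day)
end
```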
This also means budget polls are now created so they end at the
beginning of the day when the balloting phase ends. This is consistent
with the dates we display in the budget phases table.
Finally, there's one test where we're using `beginning_of_minute` when
creating a poll. That's because Chrome provides an interface to enter a
time in `%H:%M` format when the "seconds" value of the provided time is
zero, but switches to a `%H:%M:%S` interface when it isn't. Since
Capybara doesn't enter the seconds when using `fill_in` with a time,
the test failed whenever Chrome expected a time in the `%H:%M:%S`
format.
To solve this last point, an alternative would be to manually provide
the format when using `fill_in` so it includes the seconds.
[1] https://caniuse.com/mdn-html_elements_input_type_datetime-local