Just like we do with the rest of the phases.
The reason we're making this change right now is that we were
getting an accessibility error on processes with no result publication
date:
```
link-name: Links must have discernible text (serious)
https://dequeuniversity.com/rules/axe/4.10/link-name?application=axeAPI
The following 1 node violate this rule:
Selector: p:nth-child(6) > a
HTML: <a href="/legislation/processes/39/result_publication">
<strong></strong>
</a>
Fix all of the following:
- Element is in tab order and does not have accessible text
Fix any of the following:
- Element does not have text that is visible to screen readers
- aria-label attribute does not exist or is empty
- aria-labelledby attribute does not exist, references elements that
do not exist or references elements that are empty
- Element has no title attribute
```
These tests were testing model methods, but were inside files for system
tests.
We're moving them now because these tests use the `open_last_email`
method, and we're looking for places where using this method might
result in a flaky test when used inside a system test.
According to the GeoJSON specification [1]:
> * A linear ring is a closed LineString with four or more positions.
> * The first and last positions are equivalent, and they MUST contain
> identical values; their representation SHOULD also be identical.
> (...)
> * For type "Polygon", the "coordinates" member MUST be an array of
> linear ring coordinate arrays.
Note that, for simplicity, right now we aren't checking whether the
coordinates are defined counterclockwise for exterior rings and
clockwise for interior rings, which is what the specification expects.
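As a rough illustration of the two rules quoted above (the method name
is ours, not the actual validator), a check for a single ring could look
like this:
```
# A linear ring needs at least four positions and must be closed,
# i.e. its first and last positions contain identical values.
def valid_linear_ring?(ring)
  ring.is_a?(Array) && ring.size >= 4 && ring.first == ring.last
end
```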
[1] https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.6
We're reworking the format validation to correctly interpret feature
collection, feature, and geometry, according to RFC 7946 [1].
Since Leaflet interprets the GeoJSON format, we're rendering the GeoJSON as
a layer instead of as a set of points. For that, we're normalizing the
GeoJSON to make sure it contains either a Feature or a
FeatureCollection. We're also adding the Leaflet images to the assets
path so the markers used for point geometries are rendered correctly.
Note we no longer allow a GeoJSON containing a geometry but not a
defined type. Since there might be invalid GeoJSON in existing Consul
Democracy databases, we're normalizing these existing geometry objects
to be part of a feature object.
We're also wrapping the outline points in a FeatureCollection object
because most large GIS systems (e.g. ArcGIS, QGIS) export GeoJSON as
a complete FeatureCollection.
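A minimal sketch of the kind of normalization described above, assuming
the GeoJSON has already been parsed into a hash (the method name is
illustrative):
```
def normalize_geojson(parsed)
  case parsed["type"]
  when "FeatureCollection", "Feature"
    parsed
  else
    # Wrap a bare geometry (e.g. a Point or Polygon) in a Feature object
    { "type" => "Feature", "geometry" => parsed, "properties" => {} }
  end
end
```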
[1] https://datatracker.ietf.org/doc/html/rfc7946
Co-authored-by: Javi Martín <javim@elretirao.net>
The default `date_select` used in fields presents an accessibility
issue, because it generates three select controls but only one label.
That means that there are two controls without a label.
So we're using a date field instead. This type of field is supported by
about 99% of the browsers, and we've already got JavaScript code
converting this field to a jQuery UI datepicker in case the browser
doesn't support date fields.
Note that, since we no longer need to parse the three date fields into
one, we can simplify the code in both the models and the tests.
Another slight improvement is that, previously, we couldn't restrict the
month and day controls in order to set the minimum date, so the maximum
selectable date was always the 31st of December of the year set by the
minimum age setting. As seen in the component test, now that we use only
one field, we can set a specific date as the maximum one.
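For illustration (the attribute name and limits below are our own
examples, not the real ones), a single date field carries both bounds in
one labelled control:
```
f.date_field :date_of_birth,
             min: Date.new(1900, 1, 1),
             max: 16.years.ago.to_date
```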
After the previous commit, we were using `double` in many places but
were only using `instance_double` in one file. So, for consistency,
we're using `double` in this file as well.
This is consistent with what we usually do. Also, we're applying the same
criteria mentioned in commit 72704d776:
> We're also making these actions idempotent, so sending many requests
> to the same action will get the same result, which wasn't the case
> with the `toggle` action. Although it's a low probability case, the
> `toggle` action could result in [selecting an investment] when trying
> to [deselect] it if someone else has [deselected] it between the
> time the page loaded and the time the admin clicked on the
> "[Selected]" button.
We were checking it in the view, meaning that it was possible to toggle
the selection by sending a custom request even when the investment
wasn't feasible.
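A hedged sketch of what the resulting idempotent actions might look
like, with the check moved out of the view (`feasible?` and the action
names are assumptions based on the description above):
```
# Repeating either request leaves the investment in the same state.
def select
  investment.update!(selected: true) if investment.feasible?
end

def deselect
  investment.update!(selected: false) if investment.feasible?
end
```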
Since the PR "Do not use third-party cookies in embedded videos #5548", the logic from
"embed_videos_helper" was extracted to the "embedded_video_component" and the
"videoable" model concern.
However, during this refactor, the "regex" method, which uses record.class:: to handle
video embeds, was left inaccessible for Legislation Proposals.
This commit fixes the issue by including the concern in the Legislation Proposal model.
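The fix boils down to something like this (a sketch; the concern name
comes from the text above, and the exact class body may differ):
```
class Legislation::Proposal < ApplicationRecord
  include Videoable
  # ...
end
```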
- added 2 new types
- modified the models to get data through GraphQL
- modified the corresponding spec
- also testing that hidden comments do not show up
- modified comments specs because now comments on budget investments
  are returned
It was a bit strange to leave the end date blank and have a message
associated with the start date, so we're using presence validations
instead.
For the range validation, we're using the comparison validator included
in Rails 7.0.
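For example (the attribute names here are our own, not necessarily the
real ones), presence plus the Rails 7.0 comparison validator could look
like:
```
validates :start_date, presence: true
validates :end_date, presence: true,
                     comparison: { greater_than_or_equal_to: :start_date }
```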
Until now, we've stored the text of the answer somebody replied to. The
idea was to handle the scenarios where the user votes for an option but
then that option is deleted and restored, or the texts of the options
are accidentally edited and so the option "Yes" is now "No" and vice
versa.
However, since commit 3a6e99cb8, options can no longer be edited once
the poll starts, so there's no risk of the option changing once somebody
has voted.
This means we can now store the ID of the option that has been voted
for. That'll also help us deal with a bug introduced in 673ec075e, since
answers in different locales are not counted as the same answer. Note we
aren't dealing with this bug right now.
We're still keeping (and storing) the answer as well. There are two
reasons for that.
First, we might add an "open answer" type of question in the future and
use this column for it.
Second, we've still got logic depending on the answer, and we need to be
careful when changing it because there are existing installations where
the answer is present but the option_id is not.
Note that we're using `dependent: :nullify`. The reasoning is that, since
we're storing both the option_id and the answer text, we can still use
the answer text when removing the option. In practice, this won't matter
much, though, since we've got a validation rule that makes it impossible
to destroy options once the poll has started.
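A sketch of the associations this implies (the declarations are
illustrative; the actual code may differ):
```
class Poll::Question::Option < ApplicationRecord
  has_many :answers, class_name: "Poll::Answer",
                     foreign_key: :option_id,
                     dependent: :nullify
end

class Poll::Answer < ApplicationRecord
  # Optional, since existing records may have an answer text but no option
  belongs_to :option, class_name: "Poll::Question::Option", optional: true
end
```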
Also note we're still allowing duplicate records when the option is nil.
We need to do that until we've removed every duplicate record in the
database.
Note that, when taking votes from an erased user, since poll answers
don't belong to poll voters, we were not migrating them in the
`take_votes_from` method (and we aren't migrating them now either).
We were checking we didn't have more votes than allowed in the case of
questions with multiple answers, but we weren't checking it in the case
of questions with a single answer. This made it possible to create more
than one answer to the same question. This could happen because the
method `find_or_initialize_user_answer` might initialize two answers in
different threads, due to a race condition.
Having a class named `Poll::Question::Answer` and another class named
`Poll::Answer` was so confusing that no developer working on the project
has ever been capable of remembering which is which for more than a few
seconds.
Furthermore, we're planning to add open answers to polls, and we might
add a reference from the `poll_answers` table to the
`poll_question_answers` table to properly differentiate between open
answers and closed answers. Having yet another thing named answer would
be more than what our brains can handle (we know it because we did this
once in a prototype).
So we're renaming `Poll::Question::Answer` to `Poll::Question::Option`.
Hopefully that'll make it easier to remember. The name is also (more or
less) consistent with the `Legislation::QuestionOption` class, which is
similar.
We aren't changing the table or column names for now in order to avoid
possible issues when upgrading (old code running with the new database
tables/columns after running the migrations but before deployment has
finished, for instance). We might do it in the future.
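In practice, keeping the legacy names while renaming the class can be as
simple as the following sketch:
```
class Poll::Question::Option < ApplicationRecord
  # Keep the old table name until we decide to rename it in a migration.
  self.table_name = "poll_question_answers"
end
```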
I've tried not to change the internationalization keys either so
existing translations would still be valid. However, since we have to
change the keys in `activerecord.yml` so methods like
`human_attribute_name` keep working, I'm also changing them in places
where similar keys were used (like `poll_question_answer` or
`poll/question/answer`).
Note that it isn't clear whether we should use `option` or
`question_option` in some cases. In order to keep things simple, we're
using `option` where we were using `answer` and `question_option` where
we were using `question_answer`.
Also note we're adding tests for the admin menu component, since at
first I forgot to change the `answers` reference there and all tests
passed.
This way we can simplify the view, particularly the form. However, we're
still adding some complexity to the form so inputs are inside labels and
so the collection is easier to style with CSS.
Note that, for everything to work consistently, we need to make sure
that the default locale is one of the available locales.
Also note that we aren't overwriting the `#save` method set by
globalize. I didn't feel too comfortable changing a monkey-patch which
ideally shouldn't be there in the first place, and I haven't found a
case where `Globalize.locale` is `nil` (since it defaults to
`I18n.locale`, which should never be `nil`), so using
`I18n.default_locale` probably doesn't affect us.
Note that, currently, we take these settings from the database but we
don't provide a way to edit them through the admin interface, so the
locales must be manually introduced through a Rails console.
While we did consider using a comma-separated list, we're using spaces
in order to be consistent with the way we store the allowed content
types settings.
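For illustration, reading that space-separated setting might look like
this (a sketch; the real implementation of `Setting.enabled_locales`
isn't shown here):
```
def self.enabled_locales
  Setting["locales.enabled"].to_s.split.map(&:to_sym)
end
```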
The `enabled_locales` nomenclature, which contrasts with
`available_locales`, is probably subconsciously based on similar
patterns like the one Nginx uses to enable sites.
Note that we aren't using `Setting.enabled_locales` in the globalize
initializer when setting the fallbacks. This means the following test
(which we could add to the shared globalizable examples) would fail:
```
it "Falls back to an enabled locale if the fallback is not enabled" do
Setting["locales.default"] = "en"
Setting["locales.enabled"] = "fr en"
allow(I18n.fallbacks).to receive(:[]).and_return([:fr, :es])
Globalize.set_fallbacks_to_all_available_locales
I18n.with_locale(:fr) do
expect(record.send(attribute)).to eq "In English"
end
end
```
The reason is that the code making this test pass could be:
```
def Globalize.set_fallbacks_to_all_available_locales
  Globalize.fallbacks = I18n.available_locales.index_with do |locale|
    ((I18n.fallbacks[locale] & Setting.enabled_locales) + Setting.enabled_locales).uniq
  end
end
```
However, this would make it impossible to run `rake db:migrate` on new
applications because the initializer would try to load the `Setting`
model but the `settings` table wouldn't exist at that point.
Besides, this is a really rare case that IMHO we don't need to support.
For this scenario, an installation would have to enable a locale, create
records with contents in that locale, then disable that locale and have
that locale as a fallback for a language where content for that record
wasn't created. If that happened, it would be solved by creating content
for that record in every enabled language.
When calculating the stats on, say, January 1st 2024, and using an age
group of, say, 20 to 24 years, we were considering that everyone born
between 2000 and 2004 was between 20 and 24 years old. This wasn't
accurate, since most people born in 2004 would be 19 years old at that
point, and most people born in 1999 would be 24.
The age of the participants should refer to the time when the voting
took place, and not the time when the budget was created.
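A sketch of what an age scope based on a reference date could look like
(the scope and column names are illustrative):
```
scope :between_ages, ->(minimum, maximum, reference_date) {
  oldest_birth_date = reference_date - (maximum + 1).years + 1.day
  youngest_birth_date = reference_date - minimum.years

  where(date_of_birth: oldest_birth_date..youngest_birth_date)
}
```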
Note that now the tests pass even when the budget is created in a year
but the balloting phase takes place in a different year. By default,
each budget phase lasts for one month, so when we create a budget in a
test, its balloting phase takes place a few months after the date when
the budget is created. Since the `between_ages` scope uses the date of
the balloting phase as a reference, and the test was using the date when
the budget was created as a reference, the test failed depending on
whether those dates took place in the same year or not.
In commit e51e03446, we started using the same code to show stats in the
public area and in the admin area. However, in doing so we introduced a
bug. Stats in the public area are only shown after a certain part of the
process has finished, meaning the stats appearing on the page never
change (in theory), so it's perfectly fine to cache them. In the admin
area, however, stats can be accessed while the process is still ongoing,
so caching the stats leads to the wrong results being displayed.
We've thought about expiring the cache when new supports or ballot lines
are added; however, that means the methods calculating the stats for the
supporting phase would expire when supports are added/removed but the
methods calculating the stats for the voting phase would expire when
ballot lines are added/removed. It gets even more complex because the
`headings` method calculates stats for both the supporting and the
voting phases.
So, since loading stats in the admin section is fast even without the
cache because they only load very basic statistics, we're taking the
simple approach of disabling the cache in this case, so everything works
the same way it did before commit e51e03446.
Co-authored-by: Javi Martín <javim@elretirao.net>
When we first started caching the stats, generating them was a process
that took several minutes, so we never expired the cache.
However, there have been cases where we ran into issues where the stats
shown on the screen were outdated. That's why we introduced a task to
manually expire the cache.
But now, generating the stats only takes a few seconds, so we can
automatically expire them every day, remove all the logic needed to
manually expire them, and get rid of most of the issues related to the
cache being outdated.
We're expiring them every day because it's the same thing we were doing
for the public stats (which we removed in commit 631b48f58); only now
we're using `expires_at:` to set the expiration time, in order to
simplify the code.
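A minimal sketch of the resulting caching call, assuming
`Rails.cache.fetch` (the key and the `calculate_stats` method are
illustrative):
```
def stats
  Rails.cache.fetch("#{budget.cache_key}/stats",
                    expires_at: Date.tomorrow.beginning_of_day) do
    calculate_stats
  end
end
```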
Note that, in the test, we're using `travel_to(time)` so the test passes
even when it starts an instant before midnight. We aren't using
`:with_frozen_time` because, in similar cases (although not in this
case, but I'm not sure whether that's intentional), `travel_to` shows
this error:
> Calling `travel_to` with a block, when we have previously already made
> a call to `travel_to`, can lead to confusing time stubbing.
Debugging shows that the bottleneck in the stats calculation is the
number of times we're querying the users table using the same array of
IDs in the `where` condition but each time combined with other
conditions.
So we're inserting the results of querying the users table with the
array of IDs in a temporary table and using this temporary table for the
other calculations. When querying this temporary table, there's no need
to filter for IDs anymore.
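A hedged sketch of the approach (the table name, `participant_ids`, and
the surrounding code are assumptions):
```
ActiveRecord::Base.transaction do
  connection = ActiveRecord::Base.connection

  connection.execute(
    "CREATE TEMPORARY TABLE participants_tmp AS " \
    "SELECT * FROM users WHERE id IN (#{participant_ids.join(",")})"
  )

  # Every other stats query now reads from participants_tmp with no ID filter.

  connection.execute("DROP TABLE participants_tmp")
end
```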
For budget stats, the `generate` method is now about 10-20 times faster
for a budget with 20,000 participants. For budgets with only a few dozen
participants, there's no significant difference in performance.
I thought about modifying the `participants` method and using the
temporary table there. The problem, however, is that in this case it
isn't clear when to drop the temporary table, and we could end up with
thousands of temporary tables in the database if we don't do it right.
Creating and dropping the temporary table in the same transaction, on
the other hand, guarantees that won't be the case.
Note there's no risk of duplicate tables: since they're created and
dropped inside a transaction, we can always use the same table name
for the same resource. We're adding a test that fails with a
`PG::DuplicateTable: ERROR: relation "participants__1"` error if we
don't use a transaction.
We were calculating the age stats based on the age of the users who
participated... at the moment when we were calculating the stats. That
means that, if 20 years ago, 1000 people who were 16 years old
participated, they would be shown as being 36 years old in the stats.
Instead, we want to show the stats at the time when the process took
place, so we're implementing a `participation_date` method.
Note that, for polls, we could actually use the `age` column in the
`poll_voters` table. However, doing so would be harder, would only work
for polls but not for budgets, and it wouldn't be statistically very
relevant, since the stats are shown by age groups, and only a small
percentage of people would change their age group (and only to the
nearest one) between the time they participate and the time the process
ends.
We might use the `poll_voters` table in the future, though, since we
have a similar issue with geozones and genders, and using the
information in `poll_voters` would solve it as well (only for polls,
though).
Also note that we're using the `ends_at` dates because some people may
be too young to vote when a process starts but old enough to vote when
the process ends.
Finally, note that we might need to change the way we calculate the
participation date for a budget, since some budgets might not enable
every phase. We're not sure how stats work in that scenario (even before
these changes).
We were already testing it as part of the statisticable concern, but
it's easier to understand when we have specific tests for it as well.
Note that we're using `eq` instead of `be` because it's what we use most
often in the tests. For consistency, we're also changing the test in
budget stats so it uses `eq`.
We were always displaying the event names in English.
Note we're changing the `user_supported_budgets` key because it didn't
make much sense; the investments are supported, and not the budgets.
We're also adding "created" to most of the event names in order to make
the texts more explicit, since not all the events refer to created data.
Note we're delegating the `t` method because i18n-tasks doesn't detect
code like `ApplicationController.helpers.t` and so reports we aren't
using the `admin.stats.graph` translations.
We were tracking some events with Ahoy, but in an inconsistent way. For
example, we were tracking when a debate was created, but (probably
accidentally) we were only tracking proposals when they were created
from the management section. For budget investments and their supports,
we weren't using Ahoy events but checking their database tables instead.
And we were only using Ahoy events for the charts; for the other stats,
we were using the real data.
While we could actually fix these issues and start tracking events
correctly, existing production data would remain broken because we
didn't track a certain event when it happened. And, besides, why should
we bother, for instance, to track when a debate is created, when we can
instead access that information in the debates table?
There are probably some features relating tracked events to their
visits, but we weren't using them, and we were storing more user data
than we needed to.
So we're removing the event tracking, allowing us to simplify the code and
make it more consistent. We aren't removing the `ahoy_events` table in
case existing Consul Democracy installations use it, but we'll remove it
after releasing version 2.2.0 and adding a warning in the release notes.
This change fixes the proposal created chart, since we were only
tracking proposals created in the management section, and opens the
possibility to add more charts in the future using data we didn't track
with Ahoy.
Also note the "Level 2 user Graph" test wasn't testing the graph, so
we're changing it in order to test it. We're also moving it next to the
other graphs test and, since we were tracking the event when we were
confirming the phone, we're renaming to "Level 3 users".
Finally, note that, since we were tracking events when something was
created, we're including the `with_hidden` scope. This is also
consistent with the other stats shown in the admin section as well as
the public stats.
We moved this file to `app/lib/` in commit cb477149c so it would be in
the autoload_paths. However, this class is loaded by ActiveStorage, with
the following method:
```
def resolve(class_name)
  require "active_storage/service/#{class_name.to_s.underscore}_service"
  ActiveStorage::Service.const_get(:"#{class_name.camelize}Service")
rescue LoadError
  raise "Missing service adapter for #{class_name.inspect}"
end
```
So this file needs to be in the $LOAD_PATH, or else ActiveStorage won't
be able to load it when we disable the `add_autoload_paths_to_load_path`
option, which is the default in Rails 7.1 [1].
Moving it to the `lib` folder solves the issue; as mentioned in the
guide to upgrade to Rails 7.1 [2]:
> The lib directory is not affected by this flag, it is added to
> $LOAD_PATH always.
However, we were also referencing this class in the `Tenant` model,
meaning we needed to autoload it as well somehow. So, instead of
directly referencing this class, we're using `respond_to?` in the Tenant
model.
We're changing the test so it fails when the code calls
`is_a?(ActiveStorage::Service::TenantDiskService)`. We need to change
the active storage configurations in the test because, otherwise, the
moment `ActiveStorage::Blob` is loaded, the `TenantDiskService` class is
also loaded, meaning the test will pass when using `is_a?`.
Note that, since this class isn't in the autoload paths anymore, we need
to add a `require` in the tests. We could add an initializer to require
it; we're not doing it in order to be consistent with what ActiveStorage
does: it only loads the service that's going to be used in the current
Rails environment. If somebody changed their production environment in
order to use, for example, S3, and we added an initializer to require
the TenantDiskService, we would still load the TenantDiskService even if
it isn't going to be used.
[1] https://guides.rubyonrails.org/v7.1/configuring.html#config-add-autoload-paths-to-load-path
[2] https://guides.rubyonrails.org/v7.1/upgrading_ruby_on_rails.html#autoloaded-paths-are-no-longer-in-$load-path
The config.file_watcher option still exists but it's no longer included
in the default environment file. Since we don't use it, we're removing
it.
The config.assets.debug option is no longer true by default [1],
so it isn't included anymore.
The config.active_support.deprecation option is now omitted on
production in favor of config.active_support.report_deprecations, which
is false by default. I think it's OK to keep it this way, since we check
deprecations in the development and test environments but never on
production environments.
As mentioned in the Rails upgrade guide, sprockets-rails is no longer a
Rails dependency and we need to explicitly include it in our Gemfile.
The behavior of queries trying to find an invalid enum value has changed
[2], so we're updating the tests accordingly.
The `favicon_link_tag` method has removed the deprecated `shortcut`
link type [3], so we're updating the tests accordingly.
The method `raw_filter` in ActiveSupport callbacks has been renamed to
`filter` [4], so we're updating the code accordingly.
[1] https://github.com/rails/rails/commit/adec7e7ba87e3
[2] https://github.com/rails/rails/commit/b68f0954
[3] Pull request 43850 in https://github.com/rails/rails
[4] Pull request 41598 in https://github.com/rails/rails
Byebug hasn't been maintained for years, and it isn't fully compatible
with Zeitwerk [1]. On the other hand, Ruby includes the debug gem since
version 3.1.0. We tried to start using it after commit e74eff217, but
couldn't do so because our CI was hanging forever in a test related to
machine learning, with the message:
> DEBUGGER: Attaching after process X fork to child process Y
(Note this message appeared with debug 1.6.3 but not with the version
we're currently using.)
So we're changing the debug gem fork mode in the test so it doesn't hang
anymore when running our CI. We tried to change the test so it wouldn't
call `Process.fork`, but this required changing the code, and since
there are no tests checking machine learning behavior with real scripts,
we aren't sure whether these scripts would keep working after changing
the code.
[1] Issue 564 in https://github.com/deivid-rodriguez/byebug
Note that the `budget` parameter was added to the `delete_path` method
so it works in the tests; on production, it worked because this
component is only rendered on pages which already have the `budget`
parameter.
Co-authored-by: Javi Martín <javim@elretirao.net>
Note that we keep the :created_at order as a complement to the new
:order field. We do this so that current installations won't notice any
change in the sorting of their cards when upgrading: the default "order"
value will always be 1, so cards will continue to be sorted by
"created_at".
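The resulting ordering amounts to something like this (the scope name is
illustrative):
```
scope :sorted, -> { order(:order, :created_at) }
```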
Before this change, two important things depended on the format of each
key: where to render it in the administration panel and which kind of
interface to use for each setting. Following this strategy led us to
very complex code that is very difficult to maintain or modify. So we no
longer want to depend on the setting key structure to decide how or
where to render each setting.
With this commit, we get rid of the key format-based rules. Now we
render each setting by explicitly passing it the type and the tab where
it belongs.
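As a hypothetical illustration (the partial and the values below are not
the real ones), rendering now passes the type and tab explicitly instead
of deriving them from the key:
```
# Hypothetical call; the actual partial/component and options may differ.
render "admin/settings/setting", setting: setting, type: :boolean, tab: :configuration
```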