According to the GeoJSON specification [1]:
> * A linear ring is a closed LineString with four or more positions.
> * The first and last positions are equivalent, and they MUST contain
> identical values; their representation SHOULD also be identical.
> (...)
> * For type "Polygon", the "coordinates" member MUST be an array of
> linear ring coordinate arrays.
Note that, for simplicity, right now we aren't checking whether the
coordinates are defined counterclockwise for exterior rings and
clockwise for interior rings, which is what the specification expects.
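As a rough illustration, the check boils down to something like this
(method names here are made up, not the actual implementation):

```
def valid_linear_ring?(ring)
  # A linear ring: an array of four or more positions whose first and
  # last positions are equal. Winding order is intentionally not checked.
  ring.is_a?(Array) && ring.size >= 4 && ring.first == ring.last
end

def valid_polygon_coordinates?(coordinates)
  # A Polygon's "coordinates" member: an array of linear rings
  coordinates.is_a?(Array) && coordinates.all? { |ring| valid_linear_ring?(ring) }
end
```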
[1] https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.6
We're reworking the format validation to correctly interpret feature
collections, features, and geometries, according to RFC 7946 [1].
Since Leaflet interprets GeoJSON format, we're rendering the GeoJSON as
a layer instead of as a set of points. For that, we're normalizing the
GeoJSON to make sure it contains either a Feature or a
FeatureCollection. We're also adding the Leaflet images to the assets
path so the markers used for point geometries are rendered correctly.
Note we no longer allow a GeoJSON containing a geometry but not a
defined type. Since there might be invalid GeoJSON in existing Consul
Democracy databases, we're normalizing these existing geometry objects
to be part of a feature object.
We're also wrapping the outline points in a FeatureCollection object
because most large GIS systems (e.g. ArcGIS or QGIS) export GeoJSON as a
complete FeatureCollection.
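A rough sketch of the normalization mentioned above (method name and
exact structure are assumptions):

```
GEOMETRY_TYPES = %w[Point MultiPoint LineString MultiLineString
                    Polygon MultiPolygon GeometryCollection]

def normalize_geojson(json)
  case json["type"]
  when "Feature", "FeatureCollection"
    json
  when *GEOMETRY_TYPES
    # Wrap a bare geometry in a feature so Leaflet can render it as a layer
    { "type" => "Feature", "geometry" => json, "properties" => {} }
  end
end
```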
[1] https://datatracker.ietf.org/doc/html/rfc7946
Co-authored-by: Javi Martín <javim@elretirao.net>
This method was added in Rails 7.0 and makes the code slightly more
readable.
The downside is that it generates two queries instead of one, so it
might generate some confusion when debugging SQL queries. Its impact on
performance is probably negligible.
This method was introduced in Rails 7.0, and thanks to it we can
simplify the code that gets the translations in order.
We tried to use this method to simplify the `Randomizable` concern as
well. However, we found out that, when ordering tens of thousands of
records, the query could take several minutes, so we aren't using it in
this case. Using it for translation fallbacks is OK, since there's a
good chance we're never going to have tens of thousands of available
locales.
Note that automated security tools reported a false positive related to
SQL Injection due to the way we used `LEFT JOIN`, so now we get one less
false positive in these reports.
These cases aren't covered by the `Rails/WhereRange` rubocop rule, but
IMHO using ranges makes them more consistent. Besides, they generate SQL
that's closer to what Rails usually generates. For example,
`Poll.where("starts_at <= :time and ends_at >= :time", time:
Time.current)` generates:
```
SELECT "polls"."id", (...) WHERE "polls"."hidden_at" IS NULL AND
(starts_at <= '2024-07-(...)' and ends_at >= '2024-07-(...)')
```
And `Poll.where(starts_at: ..Time.current, ends_at: Time.current..)`
generates:
```
SELECT "polls"."id", (...) WHERE "polls"."hidden_at" IS NULL AND
"polls"."starts_at" <= '2024-07-(...)' AND "polls"."ends_at" >=
'2024-07-(...)'
```
Note that the `not_archived` scope in proposals slightly changes, since
we were using `>` and now we use the equivalent of `>=`. However, since
the `created_at` field is a time, this will only mean that a proposal
will be archived about one microsecond later.
For consistency, we're also changing the `archived` scope, so a proposal
is never archived and not archived at the same time (not even for a
microsecond).
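As an illustration, the updated scopes could look roughly like this (the
`archival_date` helper is an assumption, not the actual code):

```
# created_at >= cutoff
scope :not_archived, -> { where(created_at: archival_date..) }
# created_at < cutoff, so the two scopes never overlap
scope :archived, -> { where(created_at: ...archival_date) }
```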
We were using the same code, and the same regular expressions, in two
places. To do so, we were including a helper inside a model, which is
something we don't usually do.
Note that, for everything to work consistently, we need to make sure
that the default locale is one of the available locales.
Also note that we aren't overwriting the `#save` method set by
globalize. I didn't feel too comfortable changing a monkey-patch which
ideally shouldn't be there in the first place. Besides, I haven't found
a case where `Globalize.locale` is `nil` (since it defaults to
`I18n.locale`, which should never be `nil`), so using
`I18n.default_locale` probably doesn't affect us.
In commit e51e03446, we started using the same code to show stats in the
public area and in the admin area. However, in doing so we introduced a
bug. Stats in the public area are only shown after a certain part of the
process has finished, meaning the stats appearing on the page never
change (in theory), so it's perfectly fine to cache them. In the admin
area, on the other hand, stats can be accessed while the process is
still ongoing, so caching them leads to wrong results being displayed.
We've thought about expiring the cache when new supports or ballot lines
are added; however, that means the cache for the methods calculating the
stats for the supporting phase would have to expire when supports are
added/removed, while the cache for the methods calculating the stats for
the voting phase would have to expire when ballot lines are
added/removed. It gets even more complex because the `headings` method
calculates stats for both the supporting and the voting phases.
So, since loading stats in the admin section is fast even without the
cache (it only loads very basic statistics), we're taking the simple
approach of disabling the cache in this case, so everything works the
same way it did before commit e51e03446.
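A minimal sketch of the approach, assuming the stats are fetched through
a helper like this one (names are hypothetical):

```
def stats_cache(*scope, &block)
  # In the admin area the cache is disabled, so we calculate the stats
  # on every request there
  return yield unless cache_enabled?

  Rails.cache.fetch("stats/#{cache_key}/#{scope.join('/')}", &block)
end
```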
Co-authored-by: Javi Martín <javim@elretirao.net>
When we first started caching the stats, generating them was a process
that took several minutes, so we never expired the cache.
However, there have been cases where we ran into issues with the stats
shown on the screen being outdated. That's why we introduced a task to
manually expire the cache.
But now, generating the stats only takes a few seconds, so we can
automatically expire them every day, remove all the logic needed to
manually expire them, and get rid of most of the issues related to the
cache being outdated.
We're expiring them every day because that's what we were already doing
for public stats (which we removed in commit 631b48f58); the only
difference is that we're using `expires_at:` to set the expiration time,
in order to simplify the code.
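Roughly, the cached block now looks something like this (the key, the
`calculate_stats` call, and the exact expiration time are illustrative):

```
Rails.cache.fetch("stats/#{cache_key}", expires_at: Date.tomorrow.beginning_of_day) do
  calculate_stats
end
```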
Note that, in the test, we're using `travel_to(time)` so the test passes
even when it starts an instant before midnight. We aren't using
`:with_frozen_time` because, in similar cases (although not in this one;
I'm not sure whether that's intentional), `travel_to` shows this error:
> Calling `travel_to` with a block, when we have previously already made
> a call to `travel_to`, can lead to confusing time stubbing.
Since now generating stats (assuming the results aren't in the cache)
only takes a few seconds even when there are a hundred thousand
participants, as opposed to the several minutes it took to generate them
when we introduced the Cron job, we can simply generate the stats during
the first request to the stats page.
Note that, in order to avoid creating a temporary table when the stats
are cached, we're making sure we only create this table when we need to.
Otherwise, we could spend up to 1 second on every request to the stats
page creating a table that isn't going to be used.
Also note we're using an instance variable to check whether we've
already created the table; I tried to use `table_exists?`, but it didn't
work. I wonder whether `table_exists?` doesn't detect temporary tables.
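The guard looks roughly like this (method and variable names are
assumptions; `create_and_fill_participants_table` stands for the omitted
table creation step):

```
def participants_table_name
  unless @participants_table_created
    @participants_table_created = true
    # Only reached when the stats aren't already cached
    create_and_fill_participants_table
  end

  "participants_#{resource.id}"
end
```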
Since we're doing many queries to get stats for each age group and each
geozone, testing shows these indices make stats calculation about 25%
faster on processes with 100,000 participants.
Debugging shows that the bottleneck in the stats calculation is the
number of times we're querying the users table using the same array of
IDs in the `where` condition but each time combined with other
conditions.
So we're inserting the results of querying the users table with the
array of IDs in a temporary table and using this temporary table for the
other calculations. When querying this temporary table, there's no need
to filter for IDs anymore.
For budget stats, the `generate` method is now about 10-20 times faster
for a budget with 20,000 participants. For budgets with only a few dozen
participants, there's no significant difference in performance.
I thought about modifying the `participants` method and using the
temporary table there. The problem, however, is that in this case it
isn't clear when to drop the temporary table, and we could end up with
thousands of temporary tables in the database if we don't do it right.
Creating and dropping the temporary table in the same transaction, on
the other hand, guarantees that won't be the case.
Note there's no risk of duplicate tables, even though we always use the
same table name for the same resource, since the tables are created and
dropped inside a transaction. We're adding a test that fails with a
`PG::DuplicateTable: ERROR: relation "participants__1"` error if we
don't use a transaction.
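A rough sketch of the pattern (table name, query, and helper methods are
assumptions, not the actual code):

```
ActiveRecord::Base.transaction do
  conn = ActiveRecord::Base.connection
  # Copy the filtered users once instead of repeating the IDs filter in every query
  conn.create_table table_name, temporary: true, as: participants_query.to_sql

  stats = calculate_stats_from(table_name)
  conn.drop_table table_name
  stats
end
```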
Back in commit 383909e16, we said:
> Even if this class looks very simple now, we're trying a few things
> related to these stats. Having a class for it makes future changes
> easier and, if there weren't any future changes, at least it makes
> current experiments easier.
Since there haven't been any changes in the last 5 years and we've found
cases where using the GeozoneStats class results in slightly worse
performance, we're removing this class. The code is now a bit easier to
read, and is consistent with the way we calculate participants by age.
This reverts commit 383909e16.
We were calculating the age stats based on the age of the users who
participated... at the moment when we calculate the stats. That means
that, if 1000 people who were 16 years old participated 20 years ago,
they would be shown as being 36 years old in the stats.
Instead, we want to show the stats at the time when the process took
place, so we're implementing a `participation_date` method.
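A minimal sketch of the idea (method and column names are assumptions;
the choice of `ends_at` is explained below):

```
def participation_date
  resource.ends_at
end

def age_on_participation_date(user)
  # Years completed at the end of the process, not at the time we run the stats
  date = participation_date.to_date
  age = date.year - user.date_of_birth.year
  date < user.date_of_birth.to_date + age.years ? age - 1 : age
end
```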
Note that, for polls, we could actually use the `age` column in the
`poll_voters` table. However, doing so would be harder, would only work
for polls but not for budgets, and it wouldn't be statistically very
relevant, since the stats are shown by age groups, and only a small
percentage of people would change their age group (and only to the
nearest one) between the time they participate and the time the process
ends.
We might use the `poll_voters` table in the future, though, since we
have a similar issue with geozones and genders, and using the
information in `poll_voters` would solve it as well (only for polls,
though).
Also note that we're using the `ends_at` dates because some people might
be too young to vote when a process starts but old enough to vote when
the process ends.
Finally, note that we might need to change the way we calculate the
participation date for a budget, since some budgets might not have every
phase enabled. I'm not sure how stats work in that scenario (even before
these changes).
The config.file_watcher option still exists, but it's no longer included
in the default environment file. Since we don't use it, we're removing
it.
The config.assets.debug option is no longer true by default [1], so it
isn't included anymore.
The config.active_support.deprecation option is now omitted in
production in favor of config.active_support.report_deprecations, which
is false by default. I think it's OK to keep it this way, since we check
deprecations in the development and test environments but never in
production.
As mentioned in the Rails upgrade guide, sprockets-rails is no longer a
Rails dependency, so we need to explicitly include it in our Gemfile.
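In other words, something along these lines in the Gemfile:

```
gem "sprockets-rails"
```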
The behavior of queries trying to find an invalid enum value has changed
[2], so we're updating the tests accordingly.
The `favicon_link_tag` method has removed the deprecated `shortcut`
link type [3], so we're updating the tests accordingly.
The method `raw_filter` in ActiveSupport callbacks has been renamed to
`filter` [4], so we're updating the code accordingly.
[1] https://github.com/rails/rails/commit/adec7e7ba87e3
[2] https://github.com/rails/rails/commit/b68f0954
[3] https://github.com/rails/rails/pull/43850
[4] https://github.com/rails/rails/pull/41598
So now we know where to use the `where.missing` method, which was
introduced in Rails 6.1.
Note this rule didn't detect all cases where the new method can be used.
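For reference, this is the kind of rewrite the rule suggests (the model
and association are just illustrative):

```
# Before
Proposal.left_joins(:image).where(images: { id: nil })

# After
Proposal.where.missing(:image)
```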
This rule was added in rubocop 1.44.0. It's useful to avoid accidental
`unless !condition` clauses.
Note we aren't replacing `unless zero?` with `if nonzero?` because we
never use `nonzero?`; using it would read just like `if !zero?`.
Replacing `unless any?` with `if none?` is only consistent if we also replace
`unless present?` with `if blank?`, so we're also adding this case. For
consistency, we're also replacing `unless blank?` with `if present?`.
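For example (not actual project code):

```
# Before
show_empty_message unless items.any?
show_title unless text.blank?

# After
show_empty_message if items.none?
show_title if text.present?
```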
We're also simplifying code dealing with `> 0` conditions in order to
make the code (hopefully) easier to understand.
Also for consistency, we're enabling the `Style/InverseMethods` rule,
which follows a similar idea.
We originally added the `cached_votes_up > 0` condition in commit
4ce95e273 because back then `cached_votes_up` was used in the
denominator. That's no longer the case, and it doesn't make sense to
mark a debate with 1 vote and 10 flags as conflictive while not doing so
when the debate has no votes and 1000 flags.
We're fixing the bug right now because we're about to change the
affected line in order to apply a new rubocop rule.
Note we're excluding a few files:
* Configuration files that weren't generated by us
* Migration files that weren't generated by us
* The Gemfile, since it includes an important comment that must be on
the same line as the gem declaration
* The Budget::Stats class, since the heading statistics are a mess and
having shorter lines would require a lot of refactoring
For the HashAlignment rule, we're using the default `key` style (keys
are aligned and values aren't) instead of the `table` style (both keys
and values are aligned) because, even if we used both in the
application, we used the `key` style a lot more. Furthermore, the
`table` style looks strange in places where there are both very long and
very short keys and sometimes we weren't even consistent with the
`table` style, aligning some keys without aligning other keys.
Ideally we could align hashes to "either key or table", so developers
can decide whether keeping the symmetry of the code is worth it on a
case-by-case basis, but Rubocop doesn't allow this option.
We were already applying these rules in most cases.
Note we aren't enabling the `MultilineArrayLineBreaks` rule because
we've got places with many elements where it isn't clear whether having
one element per line would make the code more readable.
This method performs much better when we need to load many globalized
model translations, and it returns the best fallback translation for the
current language.
Note that in the budgets wizard test we now create a district with no
associated geozone, so the text "all city" will appear in the districts
table too, meaning we can't use `within "section", text: "All city" do`
anymore since it would result in an ambiguous match.
Co-Authored-By: Julian Herrero <microweb10@gmail.com>
Co-Authored-By: Javi Martín <javim@elretirao.net>
It was added in rubocop-performance 1.13.0. We were already applying it
in most places.
We aren't adding it for performance reasons but in order to make the
code more consistent.
They were added in Rubocop 1.24.0.
Even though we were already applying FileRead everywhere, this is
something we've had to fix manually in the past. Another reason to add
it is that these rules are deeply related.
The `reload` call added to the max_votes validation is needed because
the author gets here with unsaved changes due to the `switch_locale`
around_action, which modifies the current user record. Because of that,
the lock method raises an exception when trying to lock the record,
requiring us to save or discard those changes first.
All the code in the `bin/` and the `config/` folders has been generated
running `rake app:update`. The only exception is the code in
`config/application.rb` where we've excluded the engines that Rails 6.0
has added, since we don't use them.
There are a few changes in Active Storage which aren't compatible with
the code we were using until now.
Since the method to assign an attachment in ActiveStorage has changed
and is incompatible with the hack we used to allow assigning `nil`
attachments, and since ActiveStorage now supports assigning `nil`
attachments, we're removing the mentioned hack. This makes the
HasAttachment module redundant, so we're removing it.
Another change in ActiveStorage is that files are no longer saved before
saving the `ActiveStorage::Attachment` record. This means we need to
manually upload the file when using direct uploads. We also have to
change the width and height validations we used for images; however,
doing so results in very complex code, and we'd currently have to write
that code for both images and site customization images.
So, for now, we're just uploading the file before checking its
dimensions. Not ideal, though. We might use active_storage_validations
in the future to fix this issue (when they support a proc/lambda, as
mentioned in commit 600f5c35e).
We also need to update a couple of tests due to a small change in
response headers. Now the content disposition returns something like:
```
attachment; filename="budget_investments.csv"; filename*=UTF-8''budget_investments.csv
```
So we're updating the regular expression we use to check the filename.
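The updated expectation looks roughly like this (the actual regular
expression in the suite may differ):

```
expect(response.headers["Content-Disposition"]).to match(/filename="budget_investments\.csv"/)
```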
Finally, Rails 6.0.1 changed the way the host is set in integration
tests [1] and so both `Capybara.app_host` and `Capybara.default_host`
were ignored when generating URLs in the relationable examples. The only
way I've found to make it work is to explicitly assign the host to the
integration session. Rails 6.1 will change this setup again, so maybe
then we can remove this hack.
[1] https://github.com/rails/rails/pull/36283/commits/fe00711e9
We introduced this bug in commit 55d339572, since we didn't take hidden
records into consideration.
I've tried to use `update_column` to simplify the code, but got an
`unnamed portal parameter` syntax error and couldn't figure out how to
fix it.
The current Consul GraphQL API has two problems.
1) It uses some unnecessarily complicated magic to automatically create
the GraphQL types and queries using an `api.yml` file. This approach
is over-engineered and complex, and has no benefits; it just makes
the code harder to understand for people who aren't familiar with
the project (like me).
2) It uses a deprecated DSL [1] that is soon going to be removed from
`graphql-ruby` completely. We are already seeing deprecation warnings
because of this (see References).
There was one problem: I wanted to create the API so that it's fully
backwards compatible with the old one, but the old one uses field names
which are directly derived from the Ruby code, resulting in snake_case
field names - not the GraphQL way. When using the graphql-ruby
class-based syntax, fields are automatically created in camelCase, which
breaks backwards compatibility.
So I've added deprecated snake_case field names to keep it
backwards-compatible.
[1] https://graphql-ruby.org/schema/class_based_api.html
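A simplified illustration of the approach (the type and field are made
up, not the actual schema):

```
class Types::ProposalType < Types::BaseObject
  # Exposed as "cachedVotesUp", the conventional GraphQL casing
  field :cached_votes_up, Integer, null: true

  # Kept so queries from the old snake_case schema still work
  field :cached_votes_up, Integer, null: true, camelize: false,
        deprecation_reason: "Use cachedVotesUp instead"
end
```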
There are CONSUL installations where the validations CONSUL offers by
default don't make sense because they use different business logic.
Removing these validations in a custom model was hard, and that's
why in many cases modifying the original CONSUL models was an easier
solution.
Since modifying the original CONSUL models makes the code harder to
maintain, we're now providing a way to easily skip validations in a
custom model. For example, in order to skip the price presence
validation in the Budget::Heading model, we could write a model in
`app/models/custom/budget/heading.rb`:
```
require_dependency Rails.root.join("app", "models", "budget", "heading").to_s

class Budget::Heading
  skip_validation :price, :presence
end
```
In order to skip validation on translatable attributes (defined with
`validates_translation`), we have to use the
`skip_translation_validation` method; for example, to skip the proposal
title presence validation:
```
require_dependency Rails.root.join("app", "models", "proposal").to_s

class Proposal
  skip_translation_validation :title, :presence
end
```
Co-Authored-By: taitus <sebastia.roig@gmail.com>
We were using this hack in order to allow `File.new` attachments in test
files. However, we can use the `fixture_file_upload` helper
instead.
Just like with `file_fixture`, this helper method doesn't work in
fixtures, so in that case we're using `Rack::Test::UploadedFile`
instead.
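Illustrative before/after (file names and content types are just
examples):

```
# Before
image.attachment = File.new("spec/fixtures/files/clippy.jpg")

# After, in specs
image.attachment = fixture_file_upload("clippy.jpg", "image/jpg")

# In fixtures, where the helper isn't available
image.attachment = Rack::Test::UploadedFile.new("spec/fixtures/files/clippy.jpg", "image/jpg")
```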
We were using custom rules because of some issues with Paperclip. These
rules work fine, but since we're already using the file_validators gem,
we might as well simplify the code a little bit.