While this is potentially very dangerous because assigning the ID does
not increase the ID sequence, it's safe to do so in tests where we
assign the ID to every record created on a certain table.
Even so, I'd consider it a bad practice which must be used with care. In
this case I'm using it because we look for IDs in the response, and
most tests in this file use literals to compare the response.
This change makes it possible to remove unused variables while keeping
the test readable.
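A minimal sketch of the pattern (the factory and the `response_ids` helper
are made up for illustration):

    create(:proposal, id: 101)
    create(:proposal, id: 102)

    # `response_ids` stands in for however the spec reads IDs from the response;
    # assigning the IDs lets the expectation use literals instead of variables.
    expect(response_ids).to eq [101, 102]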
Since we're obtaining titles and usernames in the response, it's easier
to compare them to titles and usernames we manually set.
Furthermore, this way we avoid many useless assignments.
We're using `eq` and `match_array` in most places, but there were a few
places where we were still checking that each element was included in
the array. This is a bit dangerous, because the array could have
duplicate elements, and we wouldn't detect them with `include`.
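A small illustration with made-up values:

    tag_names = ["health", "health", "environment"]

    # Both of these pass, even though "health" is duplicated:
    expect(tag_names).to include "health"
    expect(tag_names).to include "environment"

    # This one fails, so the duplicate is detected:
    expect(tag_names).to match_array ["health", "environment"]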
While there are other variables in these tests, they're not part of the
test setup, so they can be removed while keeping the code easy to read.
Settings are stored in the database, and so any changes to the settings
done during the tests are automatically rolled back between one test and
the next one.
There were also a few places where we weren't using an `after` block but
changing the setting at the end of the test.
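So a test can simply change a setting inline, as in this sketch (the
setting key and the selector are made up):

    scenario "Map is hidden when the feature is disabled" do
      Setting["feature.map"] = nil

      visit proposals_path

      expect(page).not_to have_css "#map"
    end

No `after` block restoring the previous value is needed; the rollback
between tests takes care of it.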
There was a rare case we found on some Travis builds where the system
time suddenly moved back a couple of tenths of a second, meaning
`Time.current - resource.created_at` returned a negative number, which
led to a division by zero.
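As a sketch of the failure mode and one possible guard (not necessarily
the actual fix):

    seconds_alive = Time.current - resource.created_at  # was about -0.2 on those builds

    # Clamping to a small positive value keeps the elapsed time from ever
    # reaching zero or going negative when the clock drifts backwards.
    seconds_alive = [seconds_alive, 0.001].max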
The tests in `spec/lib/graphql_spec.rb` failed sometimes because
creating a record with a tag list of ["health"] when both the "health"
and "Health" tags exist might assign either one of them. These tests
usually pass because we create two records and, just by chance, one
record usually gets one tag and the other record gets the other tag.
However, the test was written as if we expected the first record to get
the first tag and the second record to get the second tag, while very
often the tests passed only because the first record got the second tag
and the second record got the first tag. And when both records got the
same tag, the tests failed.
So I've changed these tests so tags are assigned directly and, since we
want to test the `tag_list` method, I've also added some tests to the
Tag model, which reflect the current behaviour: a random tag is assigned
when several tags with the same case-insensitive name exist.
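A sketch of the kind of spec added to the Tag model (the factories and
exact wording are assumptions):

    it "assigns one of the existing tags when names only differ in case" do
      create(:tag, name: "health")
      create(:tag, name: "Health")

      debate = create(:debate, tag_list: "health")

      expect(debate.tags.count).to eq 1
      expect(["health", "Health"]).to include debate.tags.first.name
    end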
Another option to assign the right tag to the record we're creating
would be to add `ActsAsTaggableOn.strict_case_match = true` to an
initializer. However, that would also create new tags on the database
when we accidentally assign a tag like "hEaLth" (like in the test we add
in this commit). Ideally we would have a strict case match for existing
tags and a non-strict case match for new tags, but I haven't found a way
to do it.
Regroup the DocumentParser specs; until now they were duplicated in
local_census_spec and census_api_spec.
Move the duplicate private method "dni?(document_type)" from
LocalCensus and CensusAPI to DocumentParser.
The new RemoteCensusAPI can receive :date_of_birth and :postal_code and
use them in the request to the endpoint whenever they have been
configured in the remote census configuration:
- Setting["remote_census.request.date_of_birth"]
- Setting["remote_census.request.postal_code"]
Add the new params to the CensusCaller 'call' method.
Create a new RemoteCensusAPI with the same functionality as
the old CensusAPI.
In the new RemoteCensusAPI, both the request and the processing of the
response are done according to the parameters defined on the
"Remote Census Configuration" page.
As with the old CensusAPI, we only consider document_type and
document_number in the API request:
* request(document_type, document_number)
On "Remote Census Configuration" we allow to define request
structure on Setting["remote_census.request.structure"]
The information text of this field on "Remote Census Configuration" is:
> The "static" values of this request should be filled.
> Values related to Document Type, Document Number, Date of Birth and
Postal Code should be nil value.
An Example with the expected value for this field:
"{ request:
{
codigo_institucion: 1,
codigo_portal: 1,
codigo_usuario: 1,
documento: nil,
tipo_documento: nil,
codigo_idioma: '102',
nivel: '3'
}
}"
Where 'codigo_institucion', 'codigo_portal', 'codigo_usuario' and
'nivel' are "static" and filled in with their expected static values.
On the other hand, 'documento' and 'tipo_documento' are the fields
related to 'document_type' and 'document_number', and they are filled
in with a nil value.
In the 'request' method we fill in those nil values with the correct
argument values. We can fill them in correctly because on
"Remote Census Configuration" we allow defining the request path for
'document_type' and 'document_number':
Setting["remote_census.request.document_type"]
Setting["remote_census.request.document_number"]
An Example with the expected values:
Setting["remote_census.request.document_type"] = "request.tipo_documento"
Setting["remote_census.request.document_number"] = "request.documento"
With this information, the 'fill_in(structure, path_value, value)'
method can update the structure with the correct argument values
('document_type', 'document_number', 'date_of_birth' and 'postal_code').
An Example of fill_in(structure, path_value, value) where:
structure = "{ request:
{
codigo_institucion: 1,
codigo_portal: 1,
codigo_usuario: 1,
documento: nil,
tipo_documento: nil,
codigo_idioma: '102',
nivel: '3'
}
}"
path_value = "request.documento"
value = "12345678X"
The expected result is:
{ request:
{
codigo_institucion: 1,
codigo_portal: 1,
codigo_usuario: 1,
documento: "12345678X",
tipo_documento: nil,
codigo_idioma: '102',
nivel: '3'
}
}
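A minimal sketch of how such a fill_in could be implemented (an
illustration only, not the actual code):

    # structure:  the hash parsed from Setting["remote_census.request.structure"]
    # path_value: a dot-separated path like "request.documento"
    # value:      the argument to place at that path
    def fill_in(structure, path_value, value)
      return structure if path_value.blank?

      *parents, leaf = path_value.split(".").map(&:to_sym)
      parents.reduce(structure) { |node, key| node[key] }[leaf] = value
      structure
    end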
This fixes a validation error being raised in these specs, as we were
missing some attributes that are required. Using the factory trait, the
missing attributes are correctly filled in.
This module is used in a model callback and is in charge of:
- extracting the resources associated with RemoteTranslation and preparing
their fields to be sent to the MicrosoftTranslateClient through DelayedJobs
- receiving the response from the MicrosoftTranslateClient and updating the
resource with its translations
Co-authored-by: javierm <elretirao@gmail.com>
- Connect to the remote translation service and translate an array of strings
- Create the SentencesParser module with text management methods:
  - Detect split position method: when the text requested for translation is too large, we need to split it into smaller parts so we can translate it. This method searches for the first valid point (a dot or whitespace) at which to split the text, so we get a response without cutting a word in half (see the sketch below).
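A possible sketch of that search (the method name, limit handling and
search direction are assumptions):

    # Look for the last dot or whitespace that still fits within the limit,
    # so the text is split between words instead of in the middle of one.
    def detect_split_position(text, limit)
      return text.size if text.size <= limit

      text[0...limit].rindex(/[.\s]/) || limit
    end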
The Proposal, Debate and Comment "globalize_accessors" class methods
were loaded before the application's available locales were initialized,
because of the GraphQL initializer. This caused unexpected translation
errors in any translatable classes declared in the GraphQL API
definition (api.yml).
Doing the GraphQL initialization after the application initialization
should solve this issue.
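A sketch of the kind of change meant here (the file contents are
illustrative; `build_api_types` is a hypothetical stand-in for whatever
loads api.yml):

    # config/initializers/graphql.rb
    Rails.application.config.after_initialize do
      # Runs once the application (and its available locales) is fully
      # initialized, so globalize_accessors sees the final locale list.
      GraphQL::APIBuilder.build_api_types
    end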
The `type: :feature` is automatically detected by RSpec because these
tests are inside the `spec/features` folder. Using `feature` re-adds a
`type: :feature` to these files, which will result in a conflict when we
upgrade to Rails 5.1's system tests.
Because of this change, we also need to change `background` to `before`
or else these tests will fail.
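For example (an illustrative spec, not an actual file):

    # Before
    feature "Debates" do
      background { login_as(create(:user)) }

      scenario "Index" do
        visit debates_path
      end
    end

    # After
    describe "Debates" do
      before { login_as(create(:user)) }

      scenario "Index" do
        visit debates_path
      end
    end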
Even if this class looks very simple now, we're trying a few things
related to these stats. Having a class for it makes future changes
easier and, if there weren't any future changes, at least it makes
current experiments easier.
Note we keep the method `participants_by_geozone` returning a hash
because we're caching the stats, and storing GeozoneStats objects would
need a lot more memory and we would get an error.
After running the rake task that moves external_url to description for Proposals and Legislation proposals, this spec is failing because the external_url field doesn't exist anymore.
This spec is causing some problems related to duplicate records, due to the way rake tasks are loaded.
It will soon become part of the seeds anyway. Removing for now.
These tasks dealt with data migrations or stats generations which were
done only once, so we don't need them anymore.
New CONSUL installations don't need these tasks, and existing CONSUL
installations will execute them when upgrading one release at a time.