Since the :post_started_at and :post_ended_at fields are of type Date, we check
with Date.current and not with Time.current.
This change was prompted by some test suites failing
(https://github.com/consul/consul/runs/2170798218?check_suite_focus=true).
The code we had made the banners available a few hours earlier or later
than they should have been, depending on the application's time zone.
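A rough sketch of the idea (the scope name and model are illustrative, not the actual CONSUL code):

```
class Banner < ApplicationRecord
  # Comparing the Date columns against Date.current keeps the result stable
  # across time zones; Time.current would shift the boundaries by a few hours.
  def self.in_post_period
    where("post_started_at <= :today AND post_ended_at >= :today", today: Date.current)
  end
end
```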
Many management actions only make sense if a user has been selected
beforehand.
We updated the :check_verified_user method so it can check actions that need
a selected user, avoiding exceptions when no user has been selected.
We need this check because :only_verified_user is not restrictive enough:
the :managed_user method it relies on initializes a new record
(find_or_initialize_by) when it doesn't find a user. As a consequence, when
"skip_verification" is set to true, this non-persisted user is treated as "verified".
These changes affect the actions of the Account, Budgets and Proposals
controllers when no user is selected.
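A minimal sketch of the kind of check being described (method bodies, parameters and paths are assumptions, not the real controller code):

```
def managed_user
  # find_or_initialize_by returns an unsaved record when no user matches,
  # so with "skip_verification" enabled it would pass as a "verified" user.
  @managed_user ||= User.find_or_initialize_by(document_number: params[:document_number])
end

def check_verified_user(alert_message)
  # Require an actually selected (persisted) user before running the action.
  redirect_to management_root_path, alert: alert_message unless managed_user.persisted?
end
```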
We prepare the file so it can include specs that do not need a logged-in
user. We also took the opportunity to skip executing this line in specs
where it was not necessary.
That way we make sure the request is finished before the test finishes.
We were getting a failure in GitHub Actions because an unrelated test
executed after this one had its locale set to Spanish (just for one
test, though), and one possible explanation could be that a previous
request which changed I18n.locale was still being executed.
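For instance, a scenario could end with an explicit expectation (the texts here are made up) so Capybara waits for the last request to complete:

```
scenario "Changes the locale" do
  visit root_path

  select "Español", from: "Language"

  # Without this expectation, the test could finish while the request
  # changing I18n.locale is still being processed.
  expect(page).to have_content "Propuestas"
end
```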
Sometimes a test gets stuck and we have to wait until it times out
in order to check which files were being tested at the time.
The default timeout is six hours. I'm reducing it to one hour which
should still be plenty of time even on repositories with no knapsack
token.
GitHub Actions is failing to finish sometimes. Usually that happens due
to concurrency issues when the process running the test accesses the
database after the process running the browser has started.
Since these files were the ones being tested when we ran into this issue,
they are the ones we are fixing right now, although there are probably
other places where we might hit the same issue in the future.
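As an illustration of the pattern (the factory and path are just examples):

```
scenario "Shows the proposal", :js do
  # Database access happens here, before the first `visit` starts the browser
  # and the process running it.
  proposal = create(:proposal)

  visit proposal_path(proposal)

  expect(page).to have_content proposal.title
end
```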
We've got quite a messy hack to sign in managers: they need to visit a
specific URL (management root path).
That means tests signing in managers start the browser to sign them in,
which might cause issues if we set up the database after that.
JavaScript is used by about 98% of web users, so by testing with it
disabled, we're only testing that the application works for a very small
fraction of users.
We proceeded this way in the past because CONSUL started out on Rails 4.2,
truncating the database between JavaScript tests with Database Cleaner,
which made these tests terribly slow.
When we upgraded to Rails 5.1 and introduced system tests, we started
using database transactions in JavaScript tests, making these tests much
faster. So now we can use JavaScript tests everywhere without critically
slowing down our test suite.
The test sometimes fails on my machine. I thought it was because
`open_last_email` didn't wait for the request generated by clicking the
"Registrarse" button to finish. Even if that might be the case, adding
an extra expectation showed the test was really failing because
invisible captcha reported the form was filled in too fast.
I wonder whether invisible captcha is working properly or there are
times where it doesn't start its timer when it should.
Since this might be a bug in the application, I'm not changing the test
just to make it pass.
Content like lowercase letters styled with `text-transform: uppercase`,
spaces after elements with `display: block`, or text like "You're on page:"
is not seen that way by users with a browser supporting CSS.
So we're now testing what most users actually experience.
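For example (the texts are assumptions, and we're assuming "You're on page:" is visually hidden via CSS), with a driver that renders CSS the expectations match what users actually see:

```
# The rendered text includes text-transform, so we assert the uppercase version.
expect(page).to have_css "h2", exact_text: "PARTICIPATORY BUDGETS"

# Text hidden with CSS is not part of the visible text most users experience.
expect(page).not_to have_content "You're on page:"
```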
This menu requires JavaScript to open/close subnavigation menus, so
we're now testing the way users with a browser supporting JavaScript
(98%-99% of the users) deal with the menu.
IMHO opening new windows is a usability issue which has been known for
twenty years since it takes control away from the user and breaks the
"back button", but for now we're keeping the same behavior as we already
had, while slightly increasing the complexity of the tests (which is a
good indicator of a usability issue).
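For reference, this is roughly what those tests look like with Capybara's window helpers (the link and content texts are assumptions):

```
scenario "Opens the link in a new window", :js do
  visit root_path

  # Opening a new window forces the test to track and switch windows,
  # which is the extra complexity mentioned above.
  new_window = window_opened_by { click_link "Terms of use" }

  within_window(new_window) do
    expect(page).to have_content "Terms of use"
  end
end
```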
Before clicking the "Edit phase" link, there's already a "Name" field
present in the page (the name of the budget).
With the rack driver, there's no problem since the `fill_in` action
waits until the page is loaded.
However, the test becomes flaky if we use a driver supporting JavaScript,
since clicking the "Edit phase" link causes an AJAX request and the
`fill_in` action might be executed before the AJAX request is finished.
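One way to avoid the race (the selector here is an assumption) is to wait for something that only appears once the AJAX response has been rendered:

```
click_link "Edit phase"

# Wait until the phase form rendered by the AJAX response is on the page,
# so `fill_in` targets the phase's "Name" field and not the budget's.
expect(page).to have_css "#phase-form"

fill_in "Name", with: "Updated phase name"
```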
Using `have_selector` makes the test more robust since it matches all
the following HTML:
* <div data-track-event-category=verification>
* <div data-track-event-category='verification'>
* <div data-track-event-category="verification">
Our former expectation only matched the first one.
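That is, roughly (the exact expectations in the suite may differ):

```
# Matches the attribute regardless of how the generated HTML quotes it:
expect(page).to have_selector "[data-track-event-category='verification']"

# A raw string comparison like this one only matches one quoting style:
expect(page.body).to include 'data-track-event-category="verification"'
```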
Using separate tests to check every link on the page made the tests
slower. We were also adding a useless initial request on tests which
started by visiting a different URL.
This useless initial request meant in some tests the browser was started
before using factories to create data. Accessing the database in the
test after the browser starts might cause concurrency issues in
JavaScript tests.
The test was creating a new "upload.images" setting instead of using
"uploads.images". It only passed because it wasn't using JavaScript; it was
still configuring the wrong setting.
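Roughly speaking (the concrete setting key is just an example):

```
Setting["upload.images.max_size"] = 1   # typo: creates a brand-new setting nobody reads
Setting["uploads.images.max_size"] = 1  # updates the setting the application actually uses
```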
IMHO testing the navigation once is enough. In the rest of the tests we
can access the page directly and make the tests faster by reducing the
number of requests.
Testing the validation on the server side doesn't work with chromedriver
because HTML validations result in the browser not even submitting the
form, so we're adding a `:no_js` tag to indicate we deliberately choose
to use the Rack driver.
The following test, checking client-side validation, passes on my machine
but fails on GitHub Actions:
```
scenario "Validates price formats on the client side", :js do
  investment.update!(visible_to_valuators: true)
  visit edit_valuation_budget_budget_investment_path(budget, investment)
  fill_in "Price (€)", with: "12345,98"
  click_button "Save changes"
  validation_message = find_field("Price (€)").native.attribute("validationMessage")
  expect(validation_message).to be_present
end
```
There seems to be a bug in the dashboard when calculating
`accumulat_supports` because in some cases `vote_weight` is `nil`.
Besides, these tests check the database after the browser has started,
instead of checking things from a user's point of view.
Since both the tests and the code are wrong, and I'm not familiar enough
with the code in the dashboard section, for now I'm leaving things as
they were.
There's a usability issue in certain cases in browsers: when the
milestones table is at the bottom of the screen and it fills the screen
width completely, hovering over the link to delete a milestone makes the
"Delete milestone" tooltip cause a horizontal scrollbar. The scrollbar
makes it impossible for users to click the link.
We should probably fix this usability issue; for now, I'm keeping the
test the way it was.
The user experience with JavaScript enabled is actually very bad;
there's a usability issue here because it's impossible to change an
answer once a "radio button" is selected, which goes against the
standard practice on basically any HTML form.
Issue 4123 already mentions this problem. Until we fix it, we're
disabling JavaScript in these tests.
These fields are hidden from users with a browser supporting CSS and are
only there to fool bots, so we're testing the case of an attack by bots
using browsers with no CSS support.
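A sketch of that kind of test (the field, button and message texts are assumptions), using the Rack driver so the CSS hiding the honeypot is ignored:

```
scenario "Rejects bots filling in the honeypot field" do
  visit new_user_registration_path

  # A CSS-capable browser never shows this field; a naive bot fills it in anyway.
  fill_in "If you are human, ignore this field", with: "I'm a bot"
  click_button "Register"

  expect(page).not_to have_content "Thank you for registering"
end
```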