Sometimes a test gets stuck and we have to wait until it times out
in order to check which files were being tested at the time.
The default timeout is six hours. I'm reducing it to one hour, which
should still be plenty of time even on repositories with no knapsack
token.
Builds on GitHub Actions sometimes fail to finish. Usually that happens
due to concurrency issues when the process running the test accesses the
database after the process running the browser has started.
Since these files were the ones being tested when we ran into this
issue, they're the ones we're fixing right now, although there are
probably other places where the same issue might appear in the future.
We've got quite a messy hack to sign in managers: they need to visit a
specific URL (management root path).
That means tests signing in managers start the browser to sign them in,
which might cause issues if we set up the database after that.
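As a rough sketch of the ordering we need (the factory and the sign-in
helper names here are illustrative, not necessarily the exact ones in
the suite):

```
scenario "Manager creates a budget investment", :js do
  # Touch the database first, while only the test process is running
  budget = create(:budget)

  # Signing in a manager starts the browser, since the helper visits the
  # management root path, so it has to come after the factories above
  login_as_manager

  # From this point on, the test only interacts with the browser
end
```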
JavaScript is used by about 98% of web users, so by testing without it
enabled, we're only testing that the application works for a very small
number of users.
We proceeded this way in the past because CONSUL started out on Rails
4.2, truncating the database between JavaScript tests with Database
Cleaner, which made these tests terribly slow.
When we upgraded to Rails 5.1 and introduced system tests, we started
using database transactions in JavaScript tests, making these tests much
faster. So now we can use JavaScript tests everywhere without critically
slowing down our test suite.
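For context, this is roughly the relevant configuration; a minimal
sketch assuming a standard rspec-rails setup, not a copy of our
rails_helper.rb:

```
RSpec.configure do |config|
  # Rails 5.1+ system tests share the database connection between the
  # test and the app server, so data created inside a transaction is
  # visible to the browser and is simply rolled back after each example
  config.use_transactional_fixtures = true
end
```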
The test is failing on my machine sometimes. I thought it was because
`open_last_email` didn't wait for the request generated by clicking the
"Registrarse" button to finish. Even if that might be the case, adding
an extra expectation showed the test was really failing because
invisible captcha reported the form was filled in too fast.
I wonder whether invisible captcha is working properly or there are
times where it doesn't start its timer when it should.
Since this might be a bug in the application, I'm not changing the test
just to make it pass.
Content like lowercase letters with `text-transform: uppercase`, spaces
after elements with `display: block`, or texts like "You're on page:" is
not seen that way by users with a browser supporting CSS.
So we're testing what most users actually experience.
This menu requires JavaScript to open/close subnavigation menus, so
we're now testing the way users with a browser supporting JavaScript
(98%-99% of the users) deal with the menu.
IMHO opening new windows is a usability issue which has been known for
twenty years, since it takes control away from the user and breaks the
"back button", but for now we're keeping the same behavior as we already
had, while slightly increasing the complexity of the tests (which is a
good indicator of a usability issue).
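For reference, this is more or less the extra ceremony a new window adds
to a Capybara test (the link text and path are illustrative):

```
scenario "Opens the help page in a new window", :js do
  visit root_path

  # Capture the window opened by the click and explicitly switch to it
  help_window = window_opened_by { click_link "Help" }

  within_window(help_window) do
    expect(page).to have_content "Help"
  end
end
```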
Before clicking the "Edit phase" link, there's already a "Name" field
present in the page (the name of the budget).
With the Rack driver, there's no problem since the `fill_in` action
waits until the page is loaded.
However, the test will be flaky if we use a driver supporting
JavaScript, since clicking the "Edit phase" link will cause an AJAX
request and the `fill_in` action might be executed before the AJAX
request is finished.
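A sketch of the pattern that avoids the race; the selector is
illustrative, the point is waiting for something that only appears once
the AJAX response has been rendered:

```
click_link "Edit phase"

# Wait for the phase form loaded via AJAX before filling anything in;
# otherwise `fill_in "Name"` could run too early or hit the budget's own
# "Name" field that was already on the page
expect(page).to have_css "#phase-form"

fill_in "Name", with: "Reviewing projects"
```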
Using `have_selector` makes the test more robust since it matches all
the following HTML:
* <div data-track-event-category=verification>
* <div data-track-event-category='verification'>
* <div data-track-event-category="verification">
Our former expectation only matched the first one.
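The new expectation queries the parsed DOM instead of the raw HTML, so
it looks something like this:

```
# Matches the attribute value however the HTML source quotes it
expect(page).to have_selector "[data-track-event-category='verification']"
```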
Using separate tests to check every link on the page made the tests
slower. We were also adding a useless initial request on tests which
started by visiting a different URL.
This useless initial request meant that in some tests the browser was
started before using factories to create data. Accessing the database in
the test after the browser starts might cause concurrency issues in
JavaScript tests.
The test was creating a new "upload.images" setting instead of using
"uploads.images". It passed, despite configuring the wrong setting,
because it wasn't using JavaScript.
IMHO testing the navigation once is enough. In the rest of the tests we
can access the page directly and make the tests faster by reducing the
number of requests.
Testing the validation on the server side doesn't work with ChromeDriver
because HTML validations result in the browser not even submitting the
form, so we're adding a `:no_js` tag to indicate we deliberately choose
to use the Rack driver.
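As an illustration (the scenario name, the filled-in value and the
expected message are made up), a server-side counterpart looks like
this:

```
scenario "Validates price formats on the server side", :no_js do
  visit edit_valuation_budget_budget_investment_path(budget, investment)

  fill_in "Price (€)", with: "Very expensive"
  click_button "Save changes"

  # With the Rack driver the form is actually submitted, so the error
  # message rendered by the server can be checked
  expect(page).to have_content "is not a number"
end
```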
The following test checking client side validation passes on my machine
but fails on GitHub Actions:
```
scenario "Validates price formats on the client side", :js do
  investment.update!(visible_to_valuators: true)

  visit edit_valuation_budget_budget_investment_path(budget, investment)

  fill_in "Price (€)", with: "12345,98"
  click_button "Save changes"

  validation_message = find_field("Price (€)").native.attribute("validationMessage")

  expect(validation_message).to be_present
end
```
There seems to be a bug in the dashboard when calculating
`accumulat_supports` because in some cases `vote_weight` is `nil`.
Besides, these tests check the database after the browser has started,
instead of checking things from a user's point of view.
Since both the tests and the code are wrong, and I'm not familiar enough
with the code in the dashboard section, for now I'm leaving things as
they were.
There's a usability issue in certain cases in browsers: when the
milestones table is at the bottom of the screen and it fills the screen
width completely, hovering over the link to delete a milestone makes the
"Delete milestone" tooltip cause a horizontal scrollbar. The scrollbar
makes it impossible for users to click the link.
We should probably fix this usability issue; for now, I'm keeping the
test the way it was.
The user experience with JavaScript enabled is actually very bad;
there's a usability issue here because it's impossible to change an
answer once a "radio button" is selected, which goes against the
standard practice on basically any HTML form.
Issue 4123 already mentions this problem. Until we fix it, we're
disabling JavaScript in these tests.
These fields are hidden for users with a browser supporting CSS and are
only there to fool bots, so we're testing the case of an attack by bots
using browsers with no CSS support.
The filter "Mark as viewed" doesn't work properly, so here's a case
where the test would fail with JavaScript not because the test is wrong,
but due to a bug.
For now we're keeping the test as it was, but eventually we'll have to
fix the bug.
ChromeDriver only supports characters in the BMP (Basic Multilingual
Plane). One test fails with ChromeDriver because it was entering emojis.
I'm using kanji characters instead, although I must admit I'm not sure
why such an unusual login was used in the first place.
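The replacement is along these lines (the field label and the value are
illustrative):

```
# Kanji keep the login multibyte and non-ASCII without leaving the BMP,
# so ChromeDriver accepts it
fill_in "Username", with: "見知らぬ人"
```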
The method `formatted_heading_price` depends on the current locale. When
we make a request to `visit budgets_path locale: :es`, the request
modifies `I18n.locale` as well.
However, if we use JavaScript tests, the process running the test is
different than the process handling the request, and so the change in
`I18n.locale` does not affect the test.
Checking against the actual value we expect makes the test work with and
without JavaScript.
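A sketch of the resulting expectation (the amount is illustrative):

```
visit budgets_path locale: :es

# Hardcoding the text users actually see works with both drivers; calling
# formatted_heading_price from the test would format the price using the
# test process's locale, which a JavaScript driver never switches to :es
expect(page).to have_content "1.000.000 €"
```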
We're improving the readability of the tests we're about to modify.
Using human texts makes tests easier to read and guarantees we're
testing from the user's point of view. For instance, if we write
`fill_in banner_target_url`, the test will pass even if the field has no
label associated to it. However, `fill_in "Link"` makes sure there's a
field with an associated label.
Using `have_selector`, Capybara might detect `<a>` tags which are not
links because they don't have an `href` attribute. Besides, with
`have_selector`, Capybara only detects visible text, which means it won't
detect links which are icons with tooltips.
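Something along these lines avoids both problems (the link title is
illustrative):

```
# `have_link` only matches `<a>` tags with an href, and its locator also
# matches the title attribute, so icon-only links with tooltips are found
expect(page).to have_link "Delete milestone"
```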
Using `have_current_path`, Capybara waits until the condition is true,
while using `include` the expectation is evaluated immediately and so
tests might fail when using a driver supporting JavaScript.
Besides, using `have_current_path` the error message is more readable
when the test fails.
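A minimal sketch of the two styles (the path helper is illustrative):

```
# Retries until the browser has finished navigating and, on failure,
# reports the expected and the actual path
expect(page).to have_current_path budget_path(budget)

# Evaluated once, immediately, so with a JavaScript driver it can run
# before navigation has finished
expect(current_path).to include "budgets"
```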
When using tests with a driver supporting JavaScript, there might be
concurrency issues if both the process running the test and the process
running the browser access the database once the browser has been
started.
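The rule of thumb we follow, sketched with illustrative factories and
routes:

```
scenario "Shows investments", :js do
  # All database writes happen here, before the browser starts...
  budget = create(:budget)
  create(:budget_investment, budget: budget, title: "Rebuild the park")

  # ...and the first `visit` is what boots the browser and the app server
  visit budget_investments_path(budget)

  expect(page).to have_content "Rebuild the park"
end
```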
In JavaScript tests, Rails URL methods don't include the port when
invoked from a test, but they do when invoked from the browser. This was
causing some tests to fail with Selenium.
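Roughly what happens (hosts and ports are illustrative); one way to
sidestep the mismatch is comparing paths instead of full URLs:

```
# The browser talks to an app server on a random port chosen by Capybara,
# e.g. "http://127.0.0.1:43211/budgets", while a URL helper called from
# the test process has no port:
budgets_url   # => "http://127.0.0.1/budgets"

# Comparing paths avoids depending on the port
expect(page).to have_current_path budgets_path
```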
Checking the `style` attribute fails in JavaScript tests because the
browser converts colors to the `rgb()` format.
So we're testing the generated HTML in a component test while
simplifying the system test.
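A rough sketch of the split, assuming a ViewComponent-style component
spec and an illustrative component, color and selector:

```
# Component spec: the generated HTML still contains the color exactly as
# the template wrote it, so the style attribute can be checked here
render_inline ProgressBarComponent.new(budget)

expect(page).to have_css "[style*='background-color: #ffb200']"
```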