The current tracking section had a few issues:
* When browsing as an admin, this section becomes useless since no
investments are shown
* When browsing investments in the admin section, you're suddenly
redirected to the tracking section, making navigation confusing
* One test related to the officing dashboard failed due to these changes
and had been commented out
* Several views and controller methods were copied from other sections,
leading to duplication and making the code harder to maintain
* Tracking routes were defined for proposals and legislation processes,
but in the tracking section only investments were shown
* There are probably many more issues, since these were detected after
only an hour of reviewing and testing the code
So we're removing this untested section before releasing version 1.1. We
might add it back afterwards.
Having exceptions is better than having silent bugs.
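A minimal illustration of the difference, using a generic Active Record
model rather than code from this PR:

```ruby
# `save` returns false when validations fail and execution carries on
# as if nothing happened: a silent bug.
record.save

# `save!` raises ActiveRecord::RecordInvalid on the same failure, so
# the problem surfaces immediately.
record.save!
```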
There are a few methods I've kept as they were.
The `RelatedContentScore#score_with_opposite` method is a bit peculiar:
it creates scores for both itself and the opposite related content,
which means the opposite related content will try to create the same
scores as well, so a bang method would raise on these expected
duplicates.
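A rough sketch of that shape, where association and attribute names are
assumptions rather than the actual implementation:

```ruby
def score_with_opposite
  save # the score for this related content

  # Score the opposite direction of the relation too. When the
  # opposite related content runs this same logic, this second score
  # already exists, so `create!` would raise on a duplicate that's
  # expected here; the plain `create` fails silently on purpose.
  RelatedContentScore.create(
    user: user,
    related_content: related_content.opposite_related_content
  )
end
```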
We've already got a test checking `Budget::Ballot#add_investment` when
creating a line fails ("Edge case voting a non-elegible investment").
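So the method can keep reporting the result instead of raising; a
sketch of the kind of code this is, not the exact implementation:

```ruby
def add_investment(investment)
  # An invalid line (e.g. a non-elegible investment) simply isn't
  # persisted, and the tested failure path reports it with `false`.
  lines.create(investment: investment).persisted?
end
```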
Finally, the method `User#send_oauth_confirmation_instructions` doesn't
update the record when the email address isn't already present, which
causes the test "Try to register with the email of an already existing
user, when an unconfirmed email was provided by oauth" to fail if we
raise an exception for an invalid user. That's because updating a
user's email doesn't update the database directly; a confirmation
email is sent instead.
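For reference, this is standard Devise `reconfirmable` behavior; an
illustrative snippet, not project code:

```ruby
user.update(email: "new@example.com")
user.reload

user.email             # => the old address, still unchanged in the DB
user.unconfirmed_email # => "new@example.com", pending confirmation
# A confirmation email goes out; the `email` column only changes once
# the user follows the confirmation link.
```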
There are also a few false positives, for classes which don't have bang
methods (like the GraphQL classes) and for code destroying attachments.
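For example, with hypothetical receivers just to show the shape of
these false positives:

```ruby
# Both calls are flagged by method name, although neither receiver is
# an Active Record model with a bang counterpart:
TypeCreator.create(field)   # hypothetical GraphQL helper, no create!
document.attachment.destroy # a Paperclip attachment, no destroy!
```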
For these reasons, I'm adding the rule with a "Refactor" severity,
meaning it's a rule we can break if necessary.
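In `.rubocop.yml` terms, that would look something like this (the exact
cop name here is an assumption):

```yaml
Rails/SaveBang:
  Enabled: true
  Severity: refactor
```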
These feature tests were taking too long; we can't run them for every
single model.
I'm taking the approach of using a different model for each test,
although in theory using just a few models covering every possible
scenario would be enough.
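A sketch of the idea, with hypothetical scenarios and factory names:
each test picks a different model, so together they still touch several
of them without multiplying the whole suite:

```ruby
describe "Documents" do
  scenario "Upload" do
    documentable = create(:proposal)
    # ...exercise uploading a document on a proposal...
  end

  scenario "Destroy" do
    documentable = create(:budget_investment)
    # ...exercise deleting a document on an investment...
  end
end
```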