Note that the `sitemap:refresh` task only pings search engines at the end, so it would only ping for the `Sitemap.default_host` defined last. Instead, we use the `sitemap:refresh:no_ping` task and ping search engines ourselves after creating each sitemap. Note that we're still pinging search engines in staging and preproduction environments; I'm leaving it that way because that's what we've done until now, but I wonder whether we should only do so in production. Since we're creating a new method to get the current url_options, we're also using it in the dev_seeds.
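As a rough sketch of the per-host approach (not the actual implementation; the `all_sites_url_options` helper is a hypothetical stand-in for the new url_options method), the idea is to build each host's sitemap and ping straight away, rather than letting `sitemap:refresh` ping once at the very end:

```ruby
# Minimal sketch, assuming the sitemap_generator gem plus a hypothetical
# helper returning the url_options (protocol, host, ...) for every site.
all_sites_url_options.each do |url_options|
  SitemapGenerator::Sitemap.default_host =
    "#{url_options[:protocol] || "http"}://#{url_options[:host]}"

  SitemapGenerator::Sitemap.create do
    add root_path
    # ... the rest of the URLs normally defined in config/sitemap.rb
  end

  # Ping right after this host's sitemap is written, instead of relying on
  # sitemap:refresh, which only pings once for the last default_host.
  SitemapGenerator::Sitemap.ping_search_engines
end
```

Since the url_options come from their own method, the dev_seeds can reuse it instead of duplicating the host configuration.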
# Use this file to easily define all of your cron jobs.
#
# It's helpful, but not entirely necessary to understand cron before proceeding.
# http://en.wikipedia.org/wiki/Cron

# Example:
#
# set :output, "/path/to/my/cron_log.log"
#
# every 2.hours do
#   command "/usr/bin/some_great_command"
#   runner "MyModel.some_method"
#   rake "some:great:rake:task"
# end
#
# every 4.days do
#   runner "AnotherModel.prune_old_records"
# end

# Learn more: http://github.com/javan/whenever

every 1.minute do
  command "date > ~/cron-test.txt"
end

every 1.day, at: "5:00 am" do
  rake "-s sitemap:refresh:no_ping"
end

every 2.hours do
  rake "-s stats:generate"
end

every 1.day, at: "1:00 am", roles: [:cron] do
  rake "files:remove_old_cached_attachments"
end

every 1.day, at: "3:00 am", roles: [:cron] do
  rake "votes:reset_hot_score"
end

every :reboot do
  command "cd #{@path} && bundle exec puma -C config/puma/#{@environment}.rb"
  # Number of workers must be kept in sync with capistrano's delayed_job_workers
  command "cd #{@path} && RAILS_ENV=#{@environment} bin/delayed_job -n 2 restart"
end
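For context, the `@path` and `@environment` interpolated above are whenever variables. A sketch of how they might be supplied from the Capistrano side, assuming the standard `whenever/capistrano` integration rather than this project's exact setup:

```ruby
# config/deploy.rb (sketch): these settings are passed to `whenever --set`
# on deploy and end up as @environment and @path inside the schedule file.
set :whenever_environment, -> { fetch(:rails_env, fetch(:stage)) }
set :whenever_roles,       -> { [:app, :cron] }
set :whenever_variables,   -> { "environment=#{fetch(:whenever_environment)}&path=#{current_path}" }
```

Including `:cron` in `whenever_roles` matters because some of the jobs above are restricted with `roles: [:cron]`, so they are only written to the crontab on servers holding that role.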