One piece of advice I often see published on forums like SQLServerCentral is to update statistics on your tables if you have strange performance issues or sudden changes in performance. Statistics are important for the query optimizer, and you should understand the basics of how they work.
However, there’s a problem with statistics: they get out of date. SQL Server will update statistics automatically, but it doesn’t do this constantly. By default, the update is triggered after roughly 20% of the table’s rows (plus 500) have been modified. If you have 1,000 rows, that means about 700 changed (or added) rows will trigger the update. If your table sees 50 changes every couple of days, you could wait close to a month between updates.
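You can check how stale a table’s statistics are, and how many modifications have accumulated since the last update, with the `sys.dm_db_stats_properties` function. A minimal sketch, assuming a table named `dbo.Orders` (a placeholder; substitute your own table):

```sql
-- Show each statistics object on the table, when it was last updated,
-- and how many row modifications have accumulated since then.
SELECT  s.name                  AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID(N'dbo.Orders');
```

Comparing `modification_counter` against the threshold for your table size tells you how far away the next automatic update is.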
If you have 1 million rows (think large, historical data here), those same 50 changes won’t trigger a statistics update for a very long time. Over 200,000 changes will be needed.
There’s a trace flag, 2371, that can help with the minimum needed to trigger a stats update (or you can handle this with your own jobs). With the flag enabled, the threshold is no longer a fixed 20%; it shrinks as the table grows, so large tables get their statistics updated far more often. Note that in SQL Server 2016 and later (compatibility level 130 and up), this dynamic threshold is the default behavior.
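Enabling the trace flag instance-wide is a one-liner. A sketch (only needed on versions before SQL Server 2016, where the dynamic threshold isn’t the default):

```sql
-- Enable trace flag 2371 globally (-1 = for the whole instance).
-- With it on, the auto-update threshold for a large table drops to
-- roughly SQRT(1000 * rows): ~31,623 changes for a 1,000,000-row
-- table, instead of the default 500 + 20% (200,500 changes).
DBCC TRACEON (2371, -1);
```

To make this survive a restart, add `-T2371` as a startup parameter for the SQL Server service instead.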
What you do really depends on your issues. If you see poor query performance, look for wildly incorrect row estimates in your query plans. If you find that your statistics aren’t being updated, or aren’t updated often enough, you might enable the trace flag or create your own job to update statistics manually.
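A manual update can be as simple as the following sketch (`dbo.Orders` is again a placeholder table name):

```sql
-- Update every statistics object on one table, reading all rows
-- rather than a sample:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or update all out-of-date statistics in the current database,
-- using the default sampling:
EXEC sys.sp_updatestats;
```

`FULLSCAN` gives the most accurate histogram but costs a full read of the table, so on very large tables a sampled update from a scheduled job is often the practical compromise.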
Note that if you are rebuilding indexes, you don’t need to also update statistics on those indexes: a rebuild updates them with the equivalent of a full scan. Be aware, though, that an index reorganize does not update statistics, and a rebuild doesn’t touch column statistics that aren’t tied to an index.