One of the things I saw demonstrated at my very first PDC conference in 1998 was the addition of Hierarchical Storage Management to Windows servers. This was in the Windows NT 4 era, and I was investigating ways we could handle a large number of fax and scanned images for my company. We eventually implemented a rewritable optical jukebox to manage our large collection of images, and it worked very well in keeping our costs under control.
I saw a great article recently on tiered storage that uses a database system as an example of how you can potentially improve performance and manage costs by using different types of storage. It's worth the read, and it walks through a fictional example in which SSDs hold frequently accessed data and SATA HDDs hold cold, infrequently accessed data.
As we collect and store more and more data, I think we will find that much of it is accessed very infrequently. If that is the case, then we ought to consider different types of storage that can meet the needs of that particular set of data while also managing costs. I have always struggled with my budgets for database servers, trying to balance CPU, RAM, and disk costs against one another. If using less expensive disk storage for cold data could buy me more RAM or CPU power, that's a trade-off I would often have made.
The trick with tiered storage is knowing your data and its access patterns. That means better understanding your queries and workload, which in turn requires better knowledge of how SQL Server works. You should learn to query DMVs, read performance metrics, and apply that knowledge to your own systems. Those skills might just help you improve performance in a very cost-effective way, an accomplishment worth bringing up in your annual review.
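As a starting point for understanding those I/O patterns, a query like the sketch below uses the sys.dm_io_virtual_file_stats DMV (joined to sys.master_files for the file names) to rank database files by cumulative read volume. The thresholds and tiering decisions are up to you; note also that these counters reset when the instance restarts, so look at them over a representative period.

```sql
-- Sketch: rank database files by cumulative read activity since the last
-- instance restart. Files near the top are candidates for the fast (SSD)
-- tier; files with little read traffic may be fine on cheaper SATA disks.
SELECT
    DB_NAME(vfs.database_id)         AS database_name,
    mf.name                          AS logical_file_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_bytes_read / 1048576  AS mb_read,
    vfs.io_stall_read_ms             -- time spent waiting on reads
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id     = vfs.file_id
ORDER BY vfs.num_of_bytes_read DESC;
```

A high io_stall_read_ms relative to num_of_reads on a busy file is a further hint that the file is sitting on storage slower than its workload deserves.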