I was reading this piece on scaling Dropbox, and something caught my eye. It’s a very interesting read, especially if you deal with scaling, and I’d encourage everyone who works with technology to read it. What really struck me, though, was the idea of running with extra load. In the piece, the author notes that they kept a process running on their systems that consumed memory and CPU. If they ever hit a system’s limits, they could stop the process, giving the application a little more horsepower.
That’s interesting. It’s a variation on a technique we used on our SQL Servers in the past: we kept a few 1GB files (in the days of 50GB disks) on each logical drive. If a drive somehow filled up, we could delete a file, giving us a little more space.
Steve, that’s silly. You’d still need the same amount of space, so why does this help? It helps because it buys you time. If a runaway process fills your log file, which fills the disk, the database stops. If you kill the process and then delete the file, you’ve got space to clear your log and keep the system running while you find out what went wrong. That’s the idea of artificial headroom: it gives you more time to respond in a crisis.
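The ballast-file side of this is simple enough to sketch. Here’s a minimal illustration in Python; the function names (`create_ballast`, `release_ballast`) and sizes are my own for this example, not anything from the piece. The key detail is writing real bytes rather than seeking to create a sparse file, since a sparse file wouldn’t actually hold the space you want to reclaim later:

```python
import os
import tempfile

ONE_GB = 1024 ** 3  # illustrative size, per the 1GB files mentioned above

def create_ballast(path: str, size: int = ONE_GB, chunk: int = 1 << 20) -> None:
    """Reserve disk space by writing real zero bytes in chunks.

    Writing actual data (not seeking past the end) matters: a sparse
    file would report the size but consume no space to give back.
    """
    block = b"\0" * chunk
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(block[:n])
            remaining -= n

def release_ballast(path: str) -> None:
    """In an emergency, delete the ballast file to free its space."""
    os.remove(path)

# Small-scale demo (1 MB) so the sketch runs anywhere
demo = os.path.join(tempfile.gettempdir(), "ballast_demo.tmp")
create_ballast(demo, size=1 << 20)
print(os.path.getsize(demo))
release_ballast(demo)
```

In practice you’d create one such file per logical drive at provisioning time and document loudly that deleting it is a break-glass action, not routine cleanup.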
I’m not sure how I’d want this to work on my SQL Servers. After all, any load I placed on them wouldn’t just occupy CPU; it would also affect the buffer pool, since the type of process I chose would influence what stays (or goes) in that bit of memory. However, the idea of deliberately limiting my system slightly, say by 5%, as it grows is interesting.
At the very least it might appease my users while I get a purchase order for more resources approved.
The Voice of the DBA Podcasts
We publish three versions of the podcast each day for you to enjoy.