The Standard Limitation

Is 64GB of RAM enough for a SQL Server instance? It was in the past for most servers that I’ve developed software on or administered. These days I know some people have 2TB of RAM in their big database servers, which makes 64GB seem paltry. After all, I’ve had 32GB in a laptop before. However, I know most of the SQL Server databases out there, in absolute numbers, are fairly small: low single-digit GB in size. I’d think 64GB is plenty for those.

The reason that number comes up is that it’s the limit for SQL Server’s Standard Edition (SE), and apparently, it’s my fault the number is set so low. Not just my fault, but the fault of all of you out there who keep buying SQL Server licenses. I’m not sure I agree with Brent’s verbiage, but I do agree with his conclusion: as long as SQL Server sells, and it’s selling well, why wouldn’t Microsoft push people to buy Enterprise Edition and pay more to use larger servers?

Personally, I think we should be charged by the scale of the system we use, rather than by the odd, limited feature/scale tiers that MS has. They could easily say that SQL Server is $2,000 per core and $500 per 4GB of RAM. They could play with the numbers and come up with something that might be cheaper for some and more expensive for others, but it would let us easily buy more capacity and pay more as we add hardware. That’s how the cloud works, and even how many of our virtualized systems work. Want to move from 4 cores and 16GB of RAM to 16 cores and 64GB of RAM? Flip some switches. Depending on the version of Windows, you might not even need to reboot.
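To make the arithmetic concrete, here’s a minimal sketch of that hypothetical pricing model. The $2,000-per-core and $500-per-4GB figures are the illustrative numbers from the paragraph above, not real Microsoft pricing, and `license_cost` is a made-up helper, not any vendor’s API:

```python
def license_cost(cores: int, ram_gb: int,
                 per_core: int = 2000, per_4gb: int = 500) -> int:
    """Hypothetical usage-based license price, scaled by cores and RAM."""
    # Charge per core, plus a fee for each 4GB block of RAM.
    return cores * per_core + (ram_gb // 4) * per_4gb

# Scaling up means paying more, not switching editions:
small = license_cost(cores=4, ram_gb=16)    # 4*2000 + 4*500  = $10,000
big   = license_cost(cores=16, ram_gb=64)   # 16*2000 + 16*500 = $40,000
print(small, big)
```

Under a model like this, the upgrade path is a price calculation rather than an edition boundary.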

I know some of you think SQL Server should cost a flat $5,000 and let us put it on any size hardware we like, but that’s certainly not going to happen in today’s world. We tend to value computing resources by scale, and I think that’s a reasonable way to value software.

Steve Jones

Video and Audio versions

Today’s podcast features music by Everyday Jones. No relation, but I stumbled on to them and really like the music. Support this great duo at www.everydayjones.com.

Follow Steve Jones on Twitter to find links and database-related items and announcements.
Windows Media Video (16.9MB) feed

MP4 iPod Video (20.4MB) feed

MP3 Audio (4.1MB) feed

Feeds are available at iTunes and Mevio

About way0utwest

Editor, SQLServerCentral
This entry was posted in Editorial.

10 Responses to The Standard Limitation

  1. Brent Ozar says:

    The “selling well” link is a little misleading. Revenue went up – because Microsoft changed the licensing model, switching to a core-based system and jacking up costs at the exact same time it became nearly impossible to buy a quad-core server anymore. All of my clients are paying more for SQL Server, but I wouldn’t read that as happy people who are deploying more servers. They’re just suddenly paying more.

  2. way0utwest says:

I hear you, Brent, except that it’s still selling. Maybe not in higher counts, just at a higher cost per install, but that might be what MS expects. If they get this 24-month release schedule going, they might only expect a 15% upgrade rate from previous installs. Perhaps that’s why the cost is up?

Hard to know, and I’m sure lots of your clients may not be happy (and may consider alternatives), which is fine. However, I think there are lots of SQL Server people, especially those who run smaller systems. In that sense, I think you, and your clients because of your skills, are outliers. The things you need/want are different from what a lot of us need. The SSC site is on SS2K8 and not likely to move anytime soon. We could still run fine on SS2K, with no benefit to upgrading.

    • Brent Ozar says:

      And if you were going to build a new system or product today that required a database at its core, would you use SQL Server?

Jeff Atwood, the founder of StackOverflow (one of the biggest web networks that relies on SQL Server), said no. When he started his new project, Discourse, he put PostgreSQL at its core. That’s a pretty big statement right there.

  3. way0utwest says:

Except again, I’d argue you and Jeff are outliers. Jeff not only expects to succeed, and succeed big; he’s got a track record to back that up. Most people have no idea, and would compromise, thinking their 4-core system will run for a long time, and it likely will.

Would I use SQL Server? I’d debate it. The costs for me to learn/develop/administer another platform aren’t zero. However, they aren’t high either. At a startup level, I’d have to question whether SQL Server was worth $8k for a 4-core system when the hardware was $2k and I’d use limited db features.

    We almost didn’t run SQLServerCentral on SQL Server. As bad a message as it might have been, we were concerned about cost. We seriously debated PostgreSQL and MySQL in 2002 and would have gone that way if not for generous discounts from MS.

I wish they still sold SS2K8 or R2. I know they are concerned about revenue, but that would really help them understand whether they are pushing the product in the right direction and attracting people, or not. Right now they operate with poor information, and as all platforms mature, they are likely to start losing more of the customers who have choices. The problem with the way they operate is that they might lose customers at Internet speed once one good alternative appears.

    Personally I’d seriously look at PostgreSQL today. It has lots of what I need, and if I needed SSIS or SSRS, I’d buy one license.

    • Brent Ozar says:

      That’s fair about the outliers point. But the more that I look around, the less outlier-ish it feels. Most blogs don’t run on SQL Server, most developers are looking elsewhere for new apps, and most managers I talk to are looking for a way out of their licensing costs.

      Like they say, diplomacy is the art of saying “Nice doggie” until you find a rock. I think there’s a lot of managers out there looking for rocks while they pay their licensing fees.

    • way0utwest says:

      Entirely possible people are looking for other choices.

That’s where maturity comes in. If PostgreSQL supported all the T-SQL syntax of SQL Server, even without functions and stored procs working the same, I think I’d switch my own company over in a heartbeat. For most projects/applications, you don’t need a lot of what MSSQL offers.

      However, you do need someone to administer them, and the more the platform is different and requires handholding, the more you hesitate to add another item.

It’s a good question, and there are certainly issues with pricing and licensing with SQL Server. However, like Oracle and DB2, they have a lot of momentum on their side, which is hard to overcome. I think SQL Server is dirt simple to administer compared to Oracle, but we have plenty of people struggling to learn how to do it well. That alone would concern me, as a manager, about moving people to PostgreSQL/MySQL and then having my people, or people I’d have to find, support it.

  4. Austin Zellner says:

I think this is the gap between the sales/purchasing process and the developer viewpoint. If you are rolling your own system, then you have the option of picking and choosing the system it will run on. But if you are buying a system from someone, then you are limited by what is available, and since SQL Server is relatively easy to “set up and go,” there is quite a bit of it deployed and leveraged. And with a wide base of DBAs (or people who can play them on TV) who can support SQL Server, it is a good choice for organizations that are more focused on keeping the lights on than on maximizing DB performance.

At the beginning of a system’s life, how much data will be used can be difficult to project, and to meet the demands of fitting into a budget or building a quote, a “feature/level” approach makes it possible to lock in a price and negotiate in a way that may be impossible up front with a usage model.

    • way0utwest says:

I’m not sure how the usage model fails here; I’d say the feature/level model fails worse. You can’t easily change tiers, especially downward if you’ve guessed wrong and bought EE. With usage, you migrate to new servers, larger or smaller, as needed.

    • Austin Zellner says:

I don’t disagree from a “rational” approach; usage is definitely the saner model over the long term. My point is purely about the initial sales approach: a feature/level model allows me to easily box in a price for you based on the features you are looking for, and allows you as the customer to go back to your management with a “this is our price” number. In my experience, a usage rate becomes problematic in the sales cycle, as management will be fearful of “exploding costs,” and technical staff on both sides of the fence find it difficult to commit to usage numbers without historical data specific to that particular customer or project.

I do think that as a generation of management grows up with cloud and usage-based pricing from other vendors, and as the market matures with tools to help predict usage (and agreements in place to allow special pricing/forgiveness for extreme spikes in usage), the market will move in that direction.

    • way0utwest says:

      Ah, I get you. Yes, sales are easier when you have a fixed price at the beginning.

I do think we need to evolve here, but we also need to look at this in terms of options for the future. With SE/EE, you can be looking at tens, or hundreds, of thousands of dollars to upgrade. You don’t have to start super small, but reasonably small. With usage-based pricing, not the cloud’s month-to-month model but pricing tied to adding CPUs/RAM, you can make a decision to grow: not an instant decision to add more this month, but one that takes a month or two, with incremental growth in hardware and licensing.

Comments are closed.