I’ve always wondered about this. When I create a stored procedure I do this:
CREATE PROCEDURE MyProc
  @param1 int
AS
BEGIN
  -- add code here
END;
As is often the case, I realize that I’ve made some mistake and need to change the code later. So I’ll do this:
ALTER PROCEDURE MyProc
  @param1 int
AS
BEGIN
  -- add better code here
END;
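As an aside, SQL Server 2016 SP1 and later soften this particular pain with CREATE OR ALTER, which works for procedures, functions, triggers, and views, though tellingly not for tables. A minimal sketch:

```sql
-- Runs whether or not MyProc already exists: creates it the first time,
-- alters it on every later run. Available in SQL Server 2016 SP1 and later.
CREATE OR ALTER PROCEDURE MyProc
  @param1 int
AS
BEGIN
  -- add code here
END;
```

That lets you keep one idempotent script per module, but the underlying split between create and alter is still there for tables.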
In both cases, I’ve repeated much of the code I wrote the first time, though hopefully with fewer of the bugs. If I create a function or a view, I do something similar. However, when I build a table, I do this:
CREATE TABLE MyTable ( MyInt int );
If I decide that’s not enough data storage, and it’s likely not, I would do this:
ALTER TABLE MyTable ADD MyChar varchar(50);
We’re used to this, but why? Why not something like this?
ALTER TABLE MyTable ( MyInt int , MyChar varchar(50) );
It’s almost as though DDL mixes the idea of code submission with architectural scaffolding. It’s inconsistent, and it’s the big reason why we can’t use comments in our table code like this:
ALTER TABLE MyTable
( MyInt int -- integer to store a pointer to this row, requires unique index for integrity
, MyChar varchar(50) -- random value of some data I need to store for this example.
);
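For what it’s worth, the closest thing we have today to column comments is extended properties. It’s verbose, and the comment lives in metadata rather than in the table definition itself, but it does travel with the object. A sketch using the built-in sp_addextendedproperty procedure (the MS_Description name is a convention that most tools recognize, not a requirement):

```sql
-- Attach a description to a column as an extended property.
-- SSMS and many documentation tools look for the name 'MS_Description'.
EXEC sp_addextendedproperty
  @name = N'MS_Description',
  @value = N'integer to store a pointer to this row, requires unique index for integrity',
  @level0type = N'SCHEMA', @level0name = N'dbo',
  @level1type = N'TABLE',  @level1name = N'MyTable',
  @level2type = N'COLUMN', @level2name = N'MyInt';
```

It works, but nobody would call it as natural as an inline comment next to the column.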
I don’t have any hopes that things will change, but it does make me wonder why SQL, which is often simple and highly versatile with a few consistent structures, would create this strange inconsistency.
The Voice of the DBA Podcast
The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled on to them and really like the music. Support this great duo at www.everydayjones.com.