
ALTER SCHEMA TO ADD PERMISSIONS

I’m sure some of you have wanted to do this:

ALTER SCHEMA Steve AUTHORIZATION Steve

You realize this doesn’t work, and you can’t grant the user Steve rights to his schema after it’s created. You can do this:

CREATE SCHEMA Steve AUTHORIZATION Steve

UPDATE: Someone pointed out this works after the fact:

ALTER AUTHORIZATION ON SCHEMA::Steve TO Steve

But ALTER SCHEMA itself won’t take an AUTHORIZATION clause. Strange and annoying. In my last post, I showed dropping and recreating the schema. That works well if you are beginning development, but not when you’re in the middle of it.

Let’s make this less confusing and see how we actually allow a developer to access a schema to create procedures (or other objects) when the schema exists.

First, let’s assume we want a developer, Steve, to be able to create procedures in the ETL schema. We have these conditions:

  • The ETL schema exists.
  • The ETL schema is owned by another developer.
  • The login and user, Steve, exist in this database with no permissions.

I want to now allow Steve to build the procedure ETL.MyProc.

Grant Permissions

The first thing I do is grant CREATE PROCEDURE permission to Steve.

CREATE LOGIN Steve WITH PASSWORD = 'Test';
GO
USE Sandbox
GO
CREATE USER Steve FOR LOGIN Steve
GO
GRANT CREATE PROCEDURE TO Steve;
GO

With this done, now let’s set up our schema.

CREATE SCHEMA ETL
GO

There are no default permissions, so the user Steve cannot create ETL.MyProc right now. How do we fix this?

The trick here is that I need to allow Steve to ALTER the schema. I can do this by using this statement.

GRANT ALTER ON SCHEMA::ETL TO Steve;
GO

I could do other things. I could grant CONTROL to Steve instead, but I might not want to do that. That gives Steve the ability to actually drop the schema, which probably isn’t what I want. It’s certainly not the least permissions needed to let the developer create objects in a schema.
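As a quick check, here’s a sketch using EXECUTE AS impersonation (the older SETUSER would work too) to confirm Steve can now build the procedure:

-- impersonate Steve and create a procedure in the ETL schema
EXECUTE AS USER = 'Steve';
GO
CREATE PROCEDURE ETL.MyProc
AS
SELECT 1;
RETURN
GO
REVERT;
GO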

Data Generator – Limiting Values

I needed to generate some data for development work on the SSC database. No, I’m not allowed to change code directly, but I was looking to send some changes to the development team, already done, and then hopefully just have them test and deploy them.

In my case, to avoid exposing any real data in case of issues, I downloaded the schema only to my laptop. Then I created a database with all the objects. One of my first areas of work was on the points and scoring systems, but to do that, I needed points data.

[image: SQL Data Generator]

I fired up my copy of Data Generator, let it detect the objects and pre-populate the fields, and clicked "Generate Data". That worked well, and I had a bunch of data in my system.

[screenshot: the generated points data]

My first area of work was to rewrite some procedures that perform calculations. I did that, ran a simple SUM, and got this:

Msg 220, Level 16, State 2, Line 3
Arithmetic overflow error for data type int

Not what I expected. I just generated some data and ran a sum. What could be the issue?

It turns out that the default settings for integer columns are shown here:

[screenshot: the default integer column settings]

That’s great if you want a random distribution, but it’s not so good in this case. The points values I want to store for each row should be from 1 to 7, and randomly distributed. I’d actually like them to be weighted towards 1 and 2, but for this project, it doesn’t matter.
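To see why the defaults bite, here’s a minimal repro with a hypothetical table (not the real SSC schema). SUM over an int column returns an int, so a handful of large random values overflows it:

-- two values near the int maximum
CREATE TABLE dbo.PointsDemo (Points int NOT NULL);
INSERT dbo.PointsDemo VALUES (2000000000), (2000000000);

-- SUM of an int column returns int, so this overflows
SELECT SUM(Points) FROM dbo.PointsDemo;

-- casting to bigint avoids the error, but I'd rather generate sensible data
SELECT SUM(CAST(Points AS bigint)) FROM dbo.PointsDemo;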

I decided to fix things by first deleting all the points data. Once this was done, I could then select the table on the left, and select the column.

[screenshot: selecting the table and column]

This changes the right panel to the specific settings for this column. I changed the values, as you can see here, to be more in line with my needs. Only values from 0 to 7 are included.

[screenshot: column settings limited to values 0 through 7]

I could actually use different settings for different columns. For example, for the PointsCategory column, I used these settings, from 1 to 1,000.

[screenshot: PointsCategory column settings, values 1 to 1,000]

With these new settings, I generated new data for this table, and then my aggregate calculations worked.

Data generation is a very handy thing to have, especially in development environments where you don’t want live data. In my case, while I think my systems are fairly safe, I’d hate to lose my laptop, with a copy of the SQLServerCentral database and a million emails that people might not want shared.

Small Experiments in Data

Amsterdam is trying to become a smart city, with technology and investments being made in ways that will (hopefully) improve the city living experience for residents. There are a lot of items mentioned in the article, but one of the main items that I think underpins the entire experiment is data.

The piece notes that people and organizations can contribute to public data, which is then accessible by anyone who wants to build software or applications. The idea that groups can add to, and consume, data is important. We need lots of data to better understand, and work with, our environment. We also need to ensure the data is widely accessible; otherwise we stifle the innovations that people may dream about.

Amsterdam is participating in the CitySDK, a project that tries to standardize APIs for cities. That’s important to avoid every city having to reinvent its own APIs. It also means that software applications written by talented developers in one place might be easily ported to other locations. Ultimately, we might find that not only do our cell phones easily work in all locations, but specific applications that we have (perhaps for something like buses or grocers) might also easily port to other places.

I also think this provides a lot of opportunity for those of us working in software. I can see lots of organizations having ideas for applications but lacking programming skills. There may be many opportunities in the future for developers to bring those ideas to life by working with the people who dream of new systems. Even if you donated your time, you could polish your development skills, sharpen your communication with non-technical people, and dramatically improve the impression your resume makes on potential employers.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Demo Setup–Attaching Databases with Powershell

I found another use for Powershell, one actually suggested by someone else: attaching specific SQL Server databases.

TL;DR: I have a script that detaches all user databases from a SQL Server instance and then reattaches certain ones. Full script at the end.

The Issue

We have a lot of demo databases on our demo VMs for Red Gate. Some specific databases are used to show things with different products, but it ends up with us having a few dozen databases on an instance of SQL Server.

That’s not the best way to show things to users, as they can get confused with so many databases. Specifically for us, we have a set of databases for one of our classes, a different set for a second class, and a third set for a third class. We do this because things need to be set in different stages for each class.

One of our sales engineers said it would be great if we could hide some databases when we didn’t need them. I immediately saw a use for Powershell here.

Approach

My approach to this problem was this:

  • detach all user databases
  • attach specific databases by specifying the name of the database, and the mdf/ldf/ndf file names.
  • use a batch file the user can double click on the desktop to run the Powershell script.

This seemed to make sense, and I started to tackle it on one of my machines. However, because I detached all my databases first, working on things suddenly became a pain. As a result, I set up a new VM and created dummy databases there. I worked on the attach piece first, and then the detach part.

Detaching User Databases

This was fairly simple, and I’ve written about it before. In this case, I merely cut and pasted this code into my script.

$srv = New-Object 'Microsoft.SqlServer.Management.SMO.Server' $instance  # $instance holds the instance name, e.g. "." for the local default

#detach all user databases
$dbnames = $srv.Databases.name

foreach ($dbn in $dbnames) {
    Write-Host $dbn
    if ($dbn -ne "master" -and $dbn -ne "model" -and $dbn -ne "msdb" -and $dbn -ne "tempdb") {
        $srv.DetachDatabase($dbn, $false)
    }
}

The first line is actually needed for both parts of the script, and we re-use that object later.

The script gets a handle to the Databases collection and then a list of all the names. We loop through the collection, and if we aren’t looking at one of the four system databases, we call the DetachDatabase method.

Note that this means I’m in control of the instance and I know I don’t have a distribution database or anything else that might break. For me, I can safely drop everything other than master/model/msdb/tempdb.

Attaching Databases

I had to search around for some example code. I guess I didn’t have to, but the docs from MS can be tricky to put together, so I searched and found a few examples. Specifically, I ran across this post that described how to attach a single database.

I decided to begin by building up the db name and paths to the files. I started by setting a variable to the path and database name.

$sqldatapath = "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\"

$dbn = "sandbox"

One of my databases is “Sandbox” and the path for all my database files is given as the default.

Next I build up the mdf/ldf file names. In my case, I don’t have anything other than single-mdf-file databases.

$mdffiles = $sqldatapath + $dbn + ".mdf"
write-host $mdffiles
$ldffiles = $sqldatapath + $dbn + "_Log.ldf"
write-host $ldffiles

With these, I can now tell what I’m doing. I write the paths out to the host, mostly so that if something breaks, the user can see where. We’re all technical, but it’s nice to know what’s broken.

These are the important bits, but now I need a place to store them. Once, near the top of the script, I create a new StringCollection object.

$dbfiles = New-Object System.Collections.Specialized.StringCollection

I’ll reuse this object for each database. In this object, I store the database file names. I use the .Add method to get them in here.

$dbfiles.Add($mdffiles)
$dbfiles.Add($ldffiles)
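One aside: StringCollection.Add returns the index of the item it just added, and Powershell echoes that number to the output. If the stray digits bother you, assign the result away:

# suppress the index that .Add() returns
$null = $dbfiles.Add($mdffiles)
$null = $dbfiles.Add($ldffiles)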

Now I have all my parameters. I can call the AttachDatabase method.

$srv.AttachDatabase($dbn, $dbfiles, "sa", "None")

The documentation says I need an owner, and for simplicity, I use “sa”. I can also specify attach options, but I don’t need any in this case.

This attaches my first database. However, I need to repeat this. I could build some loop and use some array, which is probably better, but for the sake of simplicity here, and preventing issues, I copy and paste this code multiple times. In my case, I have no more than 4 databases, for any environment, so I merely copy/paste this code and change the database name.

However, I don’t want to keep adding to my StringCollection each time. In between each set of databases I need to call, I add this:

$dbfiles.Clear()

Now I have a few simple scripts I can modify easily, and others can understand them.

The Batch File

The other thing I learned with the batch file is that it doesn’t have the same context as my editing session. I had to add a line at the beginning to load the SQLPS module for it to work.

Import-Module "sqlps" -DisableNameChecking

I also had to ensure the execution policy is set on each machine, but we tend to do that when we set up the machines.

Simplicity

This is the simple way. It’s really not the best way, and if these scripts change much, this is a problematic way of doing things. I really should have a loop with a list of databases in one place in the script. That way if I add or remove a database, I can easily do it.

That’s an improvement I’ll make.
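Here’s a sketch of that improvement, assuming the same file-name pattern and the $srv and $sqldatapath variables from earlier (the $demoDbs list name is my own):

# maintain the list of demo databases in one place
$demoDbs = @("Sandbox", "EncryptionPrimer")

foreach ($dbn in $demoDbs) {
    # build the file list for this database
    $dbfiles = New-Object System.Collections.Specialized.StringCollection
    $null = $dbfiles.Add($sqldatapath + $dbn + ".mdf")
    $null = $dbfiles.Add($sqldatapath + $dbn + "_Log.ldf")
    Write-Host "Attach " $dbn
    $srv.AttachDatabase($dbn, $dbfiles, "sa", "None")
}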

Let me also say that I have a pattern of database names and files. If I needed to handle different file locations and varying numbers of files, I think the copy-and-paste approach actually works better. Each section of the script can be edited easily, and separately, without worrying about complex logic.

I like simple.

Scripts

The batch script is this.

powershell c:\Utilities\attach_demodbs.ps1

I call the Powershell host and give a fully qualified path to the script.
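If you’d rather not set the execution policy machine-wide, the standard powershell.exe switches can scope things to a single run (a variation on the line above, not what our VMs actually use):

powershell -NoProfile -ExecutionPolicy Bypass -File c:\Utilities\attach_demodbs.ps1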

Here is one of my demo scripts, for two databases: sandbox and EncryptionPrimer:

<#

Attach Demo Databases

This script detaches all user databases and then attaches the following databases

Attaches
- Sandbox
- EncryptionPrimer

#>

Import-Module "sqlps" -DisableNameChecking

$srv = New-Object 'Microsoft.SqlServer.Management.SMO.Server' $instance  # $instance holds the instance name, e.g. "." for the local default

#detach all user databases
$dbnames = $srv.Databases.name

foreach ($dbn in $dbnames) {
    Write-Host $dbn
    if ($dbn -ne "master" -and $dbn -ne "model" -and $dbn -ne "msdb" -and $dbn -ne "tempdb") {
        $srv.DetachDatabase($dbn, $false)
    }
}

$dbfiles = New-Object System.Collections.Specialized.StringCollection

$sqldatapath = "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\"

$dbn = "sandbox"

write-host "Instance: " $srv.Name
write-host "Attach " $dbn

$mdffiles = $sqldatapath + $dbn + ".mdf"
write-host $mdffiles
$ldffiles = $sqldatapath + $dbn + "_Log.ldf"
write-host $ldffiles

$dbfiles.Add($mdffiles)
$dbfiles.Add($ldffiles)

$srv.AttachDatabase($dbn, $dbfiles, "sa", "None")

$dbfiles.Clear()

#attach EncryptionPrimer
$dbn = "EncryptionPrimer"

write-host "Instance: " $srv.Name
write-host "Attach " $dbn

$mdffiles = $sqldatapath + $dbn + ".mdf"
write-host "MDF: " $mdffiles
$ldffiles = $sqldatapath + $dbn + "_Log.ldf"
write-host "LDF: " $ldffiles

$dbfiles.Add($mdffiles)
$dbfiles.Add($ldffiles)

$srv.AttachDatabase($dbn, $dbfiles, "sa", "None")

$dbfiles.Clear()

Computational Biologist

I saw this job in a New York Times piece and was intrigued by the mention of a computational biologist at the Mt. Sinai Medical Center. This is a job that couldn’t have existed in the past, as throughout most of history we haven’t had computers that could do this kind of work. I suppose we could have had large numbers of people performing these tasks, but I’m not sure that would have been practical. I’m not even sure it would be feasible, as the amount of resulting data might have easily overwhelmed any attempt at organization and understanding by a few individuals.

What I found interesting in the piece is that the “team does not do the basic science. … His group works on the computational pipeline. They try to improve the affordability and effectiveness of cancer treatments through automation. Their goal is to help scientists and doctors tailor treatments to an individual’s situation in a way that should increase the likelihood of a successful outcome.”

This use of data science and analysis is tackling quite a challenge. The treatment of diverse humans is difficult, as each person can be slightly different in their biochemistry. What’s more, these diseases are still not well understood, which doesn’t allow for a simple pattern to be easily discerned from mounds of data. However, bringing more analysis to the problem could have huge rewards for people suffering in the world.

I have hope that in the future data will continue to solve more problems like this and make the world a better place.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.8MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Worst Comments

I was watching a presentation recently on refactoring C# code and was amazed by some of the comments that the speakers showed in the code. The example was a real application that had been obfuscated and simplified a bit for the talk. The comments, however, had only been changed when they might disclose a specific person or company. The speakers pointed out a few of those changes, but also noted that most of the comments were verbatim from the original code.

Comments like “Dave changed this from the old way” or “Bug 445: as per the operations group” were good examples of bad comments. These items don’t really help a developer understand the code. Comments in application code should add to the code itself, helping someone understand the reason for the code, not give an obscure reference or an obvious statement (“this code adds two balances together”).

I’m sure many of you have come across comments in code that evoked a wide range of emotions. You’ve been frustrated, annoyed, or something else, perhaps even by your own comments. With that in mind…

What are the worst comments you have found in code?

I hope you don’t have examples in your current application, but perhaps you do. Perhaps you have even committed your own code recently without really taking the time to accurately describe the change. Maybe you want to go look in your VCS and see what you’ve entered lately.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

Back

It’s amazing to think I’ve been gone 10 days, out of the country. SQL Bits was amazing, and it remains the best SQL Server conference I’ve seen. I haven’t been to them all, but it’s the best of those I attend.

Ireland was fantastic, with a very warm welcome for a day of training and then a talk at the Dublin SQL Server User Group. I got some really interesting questions to blog about and answer in the next few weeks.


I saw some neat SQL Server tips, tricks, ideas, and inspirations, which is always a good thing as it motivates me to do more and learn more as my career progresses.

And I got a new shirt:

[photo: the new speaker shirt]

An amazing gift from the crew at SQL Bits. My own speaker shirt. I was quite touched and I really appreciate the effort they went to. I know what I’ll be wearing next year.


A few people were jealous, since the "regular" speaker shirts were blue polos. I guess I stand out a bit from them. The Friday night party was fun, with the Captain making an appearance.


All in all a great trip.

Now back to work.

Allowing a User to Create Objects in a Schema

I was testing something the other day and realized this was a security area I didn’t completely understand. I decided to write a few posts to help me understand the issues.

I want to give a developer rights to create objects in a schema. In this case, I’ll stick with procedures, but the same thing applies to tables, views, etc. How do I allow someone to create objects in their own schema?

Let’s create a login and user:

CREATE LOGIN Steve WITH PASSWORD = 'AR3allyStr0ng!P@**Wo9d';
GO
USE Sandbox
GO
CREATE USER Steve FOR LOGIN Steve
GO

Now I have a user, and want them to be able to create this:

SETUSER 'Steve';

CREATE PROCEDURE Steve.MyProc
AS
    SELECT
            1;
RETURN

If the user does this, they get:

Msg 262, Level 14, State 18, Procedure MyProc, Line 3
CREATE PROCEDURE permission denied in database 'sandbox'.

That’s no good.

We can see from the error that we don’t have rights to create procedures. Let’s fix that. First, we change our context back, and then we grant permissions.

SETUSER
GO

GRANT CREATE PROCEDURE TO Steve;
GO

With this done, let’s now try creating the procedure again with the SETUSER statement and the CREATE PROC statement. We then get:

Msg 2760, Level 16, State 1, Procedure MyProc, Line 5
The specified schema name "Steve" either does not exist or you do not have permission to use it.

This wasn’t the case in SQL Server 2000, where schemas didn’t exist. Now we don’t have any implicit schema for our user. Let’s see if we can create anything.

CREATE PROCEDURE MyProc
AS
SELECT 1;
RETURN
GO

Returns this:

Msg 2760, Level 16, State 1, Procedure MyProc, Line 11
The specified schema name "dbo" either does not exist or you do not have permission to use it.

At this point Steve doesn’t have permissions to any schema. Let’s start by adding a new schema.

CREATE SCHEMA Steve
GO

Once this is done, can I now create a procedure?

SETUSER 'Steve';

CREATE PROCEDURE Steve.MyProc
AS
    SELECT
            1;
RETURN

I get this:

Msg 2760, Level 16, State 1, Procedure MyProc, Line 5
The specified schema name "Steve" either does not exist or you do not have permission to use it.

The same error as before. This makes perfect sense because although the schema exists, I don’t have permissions to use it.

That’s the default in SQL Server. You don’t get any permissions by default. You need to explicitly set them.

In this case, I want Steve to have control of the schema [Steve], so I really want the user, Steve, to own it. How do I do this?

The key is that I want to use the AUTHORIZATION clause with CREATE SCHEMA. I can’t use it with ALTER SCHEMA, only with CREATE SCHEMA, so I need to do this:

SETUSER
GO
DROP SCHEMA Steve;
GO
CREATE SCHEMA Steve AUTHORIZATION Steve;
GO
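As noted in the update to the post above, ALTER AUTHORIZATION offers a way to transfer ownership without dropping the schema:

-- transfer schema ownership without a drop/recreate
ALTER AUTHORIZATION ON SCHEMA::Steve TO Steve;
GO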

Once the schema is recreated with the right owner, I can let my user create procedures.

SETUSER 'Steve'
GO
CREATE PROCEDURE Steve.MyProc
AS
SELECT 1;
RETURN
GO

This works, and my developer can work in their own schema. Of course I need to ensure the developer has access to other objects, hopefully using a role of some sort that I’ve created for my application users.
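Here’s a sketch of that last piece; the role name AppDev is made up for illustration, and the ALTER ROLE … ADD MEMBER syntax needs SQL Server 2012 or later:

-- a role for application developers, with read access to dbo objects as an example
CREATE ROLE AppDev;
GO
GRANT SELECT ON SCHEMA::dbo TO AppDev;
GO
ALTER ROLE AppDev ADD MEMBER Steve;
GO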


For reference, here’s the full demo and cleanup script from my experiments:

-- check the current login
SELECT SUSER_NAME();

-- clean up schemas from earlier experiments
DROP SCHEMA Bob
DROP SCHEMA Steve

REVOKE CREATE SCHEMA FROM Steve

-- works: ownership can be set at creation
CREATE SCHEMA Steve AUTHORIZATION Steve

-- fails: ALTER SCHEMA does not accept an AUTHORIZATION clause
ALTER SCHEMA Steve AUTHORIZATION Steve

SETUSER 'Steve';
SELECT SUSER_NAME();

CREATE PROCEDURE Steve.MyProc
AS
    SELECT 1;
RETURN

CREATE PROCEDURE MyProc2
AS
    SELECT 1;
RETURN

SETUSER;
SELECT SUSER_NAME();

GRANT CREATE PROCEDURE TO Steve

SETUSER
DROP PROC Steve.MyProc;
DROP PROC Steve.MyProc2;
DROP SCHEMA Steve;

Altering a Column with NOT NULL

A short piece, as I recently ran into the need to alter a column to NOT NULL. I’ve rarely done this in the past, usually specifying NOT NULL when I create the table. In later changes, I’ve often been wary of disallowing NULLs, since I’ll always find an application, or worse, a business situation, where there is no good value available. However, that’s a separate discussion.

Altering the Column

Let’s say I have a column that is specified as NULL in a table, and I want to change that. I initially tried this:

ALTER TABLE Tags ALTER COLUMN Status NOT NULL;

However, I got a syntax error. For the life of me, I couldn’t understand why, so I looked up the syntax. If you look at the ALTER TABLE syntax, it shows that the ALTER COLUMN item needs the type included. While I am not changing the data type, to alter the column, I need to do:

ALTER TABLE Tags ALTER COLUMN Status tinyint NOT NULL;

Another inconsistency in SQL: usually we don’t provide a whole definition again, but here we must repeat the column definition even when changing only one of its settings.
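One related caveat, shown with a hypothetical version of the table: the ALTER fails if the column still holds NULLs, so backfill a value first.

-- hypothetical setup
CREATE TABLE dbo.Tags (TagId int NOT NULL, Status tinyint NULL);
INSERT dbo.Tags VALUES (1, 1), (2, NULL);

-- this fails while any NULL remains in the column:
-- ALTER TABLE dbo.Tags ALTER COLUMN Status tinyint NOT NULL;

-- backfill, then tighten the definition
UPDATE dbo.Tags SET Status = 0 WHERE Status IS NULL;
ALTER TABLE dbo.Tags ALTER COLUMN Status tinyint NOT NULL;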

Data Breach Danger

Recently a court in the US ruled that there was no imminent danger from a data breach at a Texas hospital. This is good news and bad news, and I’m a little torn about how I feel. On one hand, it’s good for us as data professionals that we aren’t necessarily liable for the immediate effects of lost data. While the losses aren’t always our fault, we would certainly feel pressure from management if companies faced immediate legal or financial penalties.

However, it’s bad news because I think data breaches do little other than cause harm to those whose information is lost. It can be incredibly hard to link a specific breach to a specific identity-theft incident, and I see this ruling as a way of allowing companies to escape liability for their poor security practices.

In reality, however, I have no solution to propose. As a data professional, I try to keep data safe, but it’s very, very difficult. One small hole in your technical infrastructure, or one careless employee, and you can lose a ton of data very quickly. I know the data isn’t lost so much as copied, but you have lost control of it.

We will face more and more security incidents, and as those tasked with protecting data, I’m not sure what we can or should do. However, I don’t think organizations can bear all the responsibility, nor can they bear none. How to balance the losses and issues is certainly something I hope we work out.

Soon.

Steve Jones