
Making Better Presentations–Tips and Tricks

I give a lot of presentations each year. I am constantly trying to improve my presentation skills to continue to get invitations to speak, to represent my employer well, and to ensure attendees are interested and learn a few things.

With that in mind, I’ve compiled a number of tips: some I’ve learned over the years, and some drawn from mistakes I’ve seen ruin otherwise interesting presentations.

I’ll be adding to this list over time and linking to the posts as they become visible.

Feel free to try one of them (please), all of them (if appropriate), or none of them (not recommended).

Feedback is welcome.

Testing is Your Best Investment

I signed up for the FlowCon 2014 conference in San Francisco this September. It’s a two-day event about software development and how we can do better. My job is starting to encompass more work in this area, and I’m excited to go see some of the ThoughtWorks developers talk about what they do well. Part of the reason I wanted to go came from watching a few videos from last year’s event, including this one from Randy Shoup.

There are some interesting things in the talk, but one thing really caught my eye at the 9:52 mark. Mr. Shoup made a statement that “tests help you go faster” and “the best investment you can make in your own code …[are] tests for your code.” Those statements are a short part of the talk, but they make a lot of sense to me.

It’s easy to ignore testing; after all, it’s hard to test for everything that can possibly happen, and writing tests can be extremely tedious. However, building tests just in time (JIT), as you find bugs or problems, can give you good code coverage, especially in the areas where developers are likely to make mistakes.

And as with all other software development skills, the more you work on writing tests, the easier and faster building them becomes.
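For database code, that kind of just-in-time test can be small. Here’s a minimal sketch using the tsqlt framework (which comes up again later in these posts); the function name and the expected behavior are hypothetical, invented purely for illustration.

-- A hypothetical regression test written the moment a bug is found:
-- dbo.TrimName was returning trailing spaces, so capture that case in a test.
EXEC tSQLt.NewTestClass 'BugTests';
GO
CREATE PROCEDURE BugTests.[test TrimName removes trailing spaces]
AS
BEGIN
    DECLARE @Actual NVARCHAR(50) = dbo.TrimName(N'Jones   ');
    EXEC tSQLt.AssertEqualsString @Expected = N'Jones', @Actual = @Actual;
END;
GO
EXEC tSQLt.Run 'BugTests';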

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.6 MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music. Support this great duo at www.everydayjones.com.

The Flash Database

I caught this piece on the impact of flash storage on database engine design this week. It’s interesting, as there’s been debate for years about whether SQL Server should alter its behavior if it detects SSDs being used for storage instead of spinning disks. It doesn’t, and perhaps that’s fine, though the article makes me think there are performance gains to be had if the behavior changed.

The article really looks at a few of the NoSQL products, though the design changes aren’t necessarily limited to those products. Two ideas in the piece particularly interested me: keeping indexes in memory with data on disk, and the realization that threading can be the bottleneck with SSDs. I’m not sure if Windows and/or SQL Server could use these ideas, but they are interesting.

I do wonder sometimes if a little more control over indexes would be helpful in SQL Server. Imagine if I could dedicate a large slice of memory strictly to non-clustered indexes and keep other data on SSDs. Would there be a way to tune SQL Server to run better for some workloads? Perhaps the algorithms that choose query plans would change if they knew a scan of an NCI could complete in a fraction of the time a seek on a CI takes. Maybe we’d be willing to perform more seeks on in-memory indexes before performing lookups on disk.
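There’s no way today to pin non-clustered indexes in memory, but you can at least see how much of the buffer pool each index currently occupies. A rough sketch of a common DMV query pattern, run in the database of interest:

-- How many megabytes of buffer pool each index in the current database uses
SELECT OBJECT_NAME(p.object_id) AS table_name,
       i.name AS index_name,
       i.type_desc,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
     ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
     ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
     OR (au.type = 2 AND au.container_id = p.partition_id)
JOIN sys.indexes AS i
     ON i.object_id = p.object_id
    AND i.index_id = p.index_id
WHERE bd.database_id = DB_ID()
  AND p.object_id > 100          -- skip system objects
GROUP BY p.object_id, i.name, i.type_desc
ORDER BY buffer_pool_mb DESC;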

The idea of more concurrent operations, requiring more threads, also seems like an area where both Windows Server and SQL Server could benefit from SSDs. If the systems changed their read and write algorithms and used many more threads with SSDs, could we get more throughput? Should our systems be more aware of how many controllers and paths might be on a system? I wonder, especially as some of this hardware becomes cheaper and cheaper. I could certainly see more organizations looking at lots of smaller SSDs, rather than a SAN, for the few servers that require high performance.

However, it’s not as though SQL Server isn’t trying to take advantage of technology changes. The In-Memory OLTP system and Buffer Pool Extensions in SQL Server 2014 are designed to take advantage of more memory and SSDs to dramatically improve performance. I don’t know what else might be coming in the next version of SQL Server, but I do hope that as new ideas emerge, SQL Server considers taking advantage of them.
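Buffer Pool Extensions, for example, takes only a couple of statements to try out. A minimal sketch, with a hypothetical file path and size that should point at an SSD volume:

-- Enable Buffer Pool Extensions (SQL Server 2014); path and size are hypothetical
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'S:\SSDCACHE\BufferPool.BPE', SIZE = 32 GB);

-- Check the current configuration
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;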

The ALS Ice Bucket Challenge

A few friends challenged me yesterday, including Grant Fritchey and Aaron Bertrand. Last night, when I got to the Denver SQL Server User Group meeting, Todd Kleinhans was ready to take the challenge and asked if I’d participate.

I was speaking, but happy to do it afterwards. Here we are:

I tagged Allen White, Erin Stellato, and Jes Borland.

Production Subsets

Continuous delivery practices recommend that developers never use production data. It’s too big, too cumbersome, and it slows the process too much. Developers should have enough data to determine if their solutions work as they build them. Testing environments should have enough to do some tuning, but unless you plan on full performance/load tests (which you should), you don’t need the full set of production data.

It’s an interesting idea, and overall I agree. A subset of data, hundreds of rows, can usually tell you if you’re writing code that works, provided you profile the code and look for inefficiencies. Note that profiling code doesn’t mean using Profiler. It means examining the resources used by your code in terms of CPU, I/O, memory, etc. There are tools to help you, and at some point in your development process, you should be using them.
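The simplest tools are already built into SQL Server. As a sketch, turning on I/O and time statistics around a query (a hypothetical one here) reports logical reads and CPU time in the Messages tab:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- the code being profiled; table and column names are hypothetical
SELECT c.CustomerID, SUM(o.Amount) AS TotalAmount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;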

However, it can be time-consuming and cumbersome to build small development data sets. There are lots of choices in how you might do this, and I thought this would make an interesting poll. For those of you who deal with development, whether that’s T-SQL, .NET, or something else, what do you think?

Should we have a subset of production data, a custom data set, or perhaps deal with complete production data?

Some of this depends on the size of your production data and, I hope, its contents. I would not want any PII, PCI, medical, or similar data in any development area. However, if that’s not the case, then what do you prefer?

Whether you have a custom data set or a subset of production, it can be cumbersome to keep it up to date. Your data may evolve over time, and there’s overhead in maintaining the scripts that produce the data you need. Perhaps that’s the cost of writing good software, but I’m curious how you feel.
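As a sketch, one low-tech version of such a script takes a deterministic slice of a production table and masks the sensitive column on the way in; the table and column names here are hypothetical:

-- Build a small, repeatable development copy of a production table
SELECT OrderID,
       CustomerID,
       OrderDate,
       Amount,
       N'masked@example.com' AS CustomerEmail   -- strip PII as the data is copied
INTO dbo.Orders_Dev
FROM dbo.Orders
WHERE OrderID % 100 = 0;   -- roughly a 1 percent slice, the same rows on every run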

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2 MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music. Support this great duo at www.everydayjones.com.

Bronze Age Development

I was watching a presentation on testing recently where the speaker noted that on one project he’d spent twice as much time testing as coding. That sounds like an outrageous amount of time, but my concern was tempered when he said the release to production produced no bugs. I’m sure some bugs surfaced later, but I have often seen that most of the bugs, especially the incredibly annoying ones, are typically discovered quickly.

I was reminded of that presentation when I saw this quote: “…the result was a two-year development process in which only about four months would be spent writing new code. Twice as long would be spent fixing that code.”

That’s a quote on the development of Visual Studio a few years back. I wonder if the “twice as long fixing” time would have been reduced with better testing efforts earlier in development. It’s hard to know since all evidence on the value of testing is based on disparate projects with different teams working at different levels of experience, but I’ve run into a few people that think more testing reduces overall development time.

The consultant who gave the presentation believes strongly in testing, not only at the application level, but also at the database level. This person has tried different levels of testing on different projects, and found that building and writing tests throughout development results in many fewer issues at release. Perhaps more telling is that when the person has performed less testing in later projects (because the clients declined to pay for it), there were more bugs in production.

I don’t know if the total time spent on building software is less with testing occurring early than with allowing clients and customers to test and report bugs. Certainly some of that might depend on how many bugs you fix and how many bugs people must cope with, but I do know that the fewer issues people find with your software, the more excited they are to ask you to write more code in the future.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2 MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music. Support this great duo at www.everydayjones.com.

Two in Two Days

It’s a busy week for me. I’ve got quite a few articles to review, feedback to write for Stairway Series authors, some PowerPoint decks to review and revise for later this month, and two User Group presentations.

This is on top of a busy first week of school in the household. I feel like I’m playing catch up all week.

Boulder

I’ll be at the Boulder SQL Server User Group tonight. My presentation will be on Unstructured Data in SQL Server, looking at Filestream and Filetable and how they can be set up and used.
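For anyone who can’t make it, the basic setup is short. A minimal sketch of enabling FILESTREAM and creating a FileTable, with hypothetical database, path, and directory names (the instance-level FILESTREAM setting in SQL Server Configuration Manager must also be enabled):

-- Allow FILESTREAM T-SQL and Win32 streaming access at the instance level
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
GO

-- Add a FILESTREAM filegroup and container to a hypothetical database
ALTER DATABASE DemoDB ADD FILEGROUP DemoFS CONTAINS FILESTREAM;
ALTER DATABASE DemoDB ADD FILE (NAME = DemoFSFile, FILENAME = 'D:\Data\DemoFS')
    TO FILEGROUP DemoFS;
ALTER DATABASE DemoDB SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'DemoDB');
GO

USE DemoDB;
GO
-- A FileTable exposes its rows as files and folders in a Windows share
CREATE TABLE dbo.Documents AS FILETABLE
    WITH (FILETABLE_DIRECTORY = N'Documents');
GO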

Fortunately I’ve done this before, and a little practice this week was enough to get me ready.

Hopefully I’ll see a few of you there as I haven’t been to Boulder in over a year.

Denver SQL

The Denver SQL Server User Group usually asks me to do a presentation or two each year, and this time I have a new one. They get to be my guinea pigs for the first delivery of this talk.

Get Testing with tsqlt, a preview of a talk I’ll be doing at SQL in the City, is on the agenda. I’ve been going over this one a few times this week, so hopefully it goes smoothly.

Updating tsqlt

I was looking to write a new test with the tsqlt framework recently. I wanted to isolate a stored procedure’s logic and planned on using the FakeFunction procedure available in tsqlt.

I wrote my test, using a template from the Pluralsight course on tsqlt and the documentation. I tried to execute the test and got a “tsqlt.fakefunction does not exist” error.

I was slightly confused at first, but checking my list of functions and stored procedures showed that I didn’t have the FakeFunction procedure available. It’s a relatively recent addition to tsqlt, so I needed an update.

After downloading the framework (a zip file), I opened it up to find this:

[Screenshot tsqlt_a: the contents of the tsqlt zip file]

There are a number of files, but tsqlt.class.sql is the important one. I double-clicked it in the zip and it opened in SSMS.

[Screenshot tsqlt_b: tsqlt.class.sql open in SSMS]

It’s a standard T-SQL script, albeit a long one. I executed it and it ran fine. My framework was updated to the latest version and I now had the function I needed.

[Screenshot tsqlt_c: the script executed and the framework updated]

Of course, I used my test to ensure this worked as expected and I was pleased to see it work well.
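In case it helps, here’s a rough sketch of the kind of test I mean, with hypothetical object names: FakeFunction swaps the real function for a predictable stand-in so the test isolates the procedure’s own logic.

EXEC tSQLt.NewTestClass 'PricingTests';
GO
-- A stand-in with the same signature as the real function, returning a fixed rate
CREATE FUNCTION PricingTests.Fake_GetTaxRate (@State CHAR(2))
RETURNS DECIMAL(5, 4)
AS
BEGIN
    RETURN 0.05;
END;
GO
CREATE PROCEDURE PricingTests.[test CalculateTotal applies the tax rate]
AS
BEGIN
    -- Replace the real function with the fake for the duration of this test
    EXEC tSQLt.FakeFunction @FunctionName = 'dbo.GetTaxRate',
                            @FakeFunctionName = 'PricingTests.Fake_GetTaxRate';

    DECLARE @Total DECIMAL(10, 2);
    EXEC dbo.CalculateTotal @OrderAmount = 100, @State = 'CO', @Total = @Total OUTPUT;

    EXEC tSQLt.AssertEquals @Expected = 105.00, @Actual = @Total;
END;
GO
EXEC tSQLt.Run 'PricingTests';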

Yet Another Attack Vector

There’s a new movie that just came out in August. It looks funny, and I’m planning on going to see Let’s Be Cops. I know it’s a movie, not reality, but it concerns me, even with a ruling on warrantless searches of digital devices. I’m sure you think an arrest wouldn’t breach your digital security, but how much of a stretch is it for someone to impersonate a police officer (it happens), pull over an executive or engineer, and “search” their cell phone? It’s especially plausible if most of us assume that the police have the right to look in our devices (they don’t).

What concerns me is that this is another attack vector into our lives, and potentially into our companies and organizations. We store more and more information, and more access, in our digital devices. We use VPNs, and even authentication tokens, but we often store those on our devices because we can’t memorize everything. If someone has control of one of our devices, they potentially have access to anything we do.

How hard would it be for someone to access our mail, or some resource through our work VPN? How quickly could determined attackers perform some malicious activity, or worse, copy information that we’d never be aware was lost? It’s not likely, and perhaps it’s far-fetched, but it seems criminals are becoming more and more creative all the time.

I worry about our data, but more importantly, I worry about the rights and privacy of our digital information. I hope we update our expectations and rights to meet the challenges of our digital future.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2 MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music. Support this great duo at www.everydayjones.com.

SQL in the City 2014

It’s official: we have a couple of 2014 SQL in the City events scheduled, covering two countries this fall. Both events are packed into a short time (less than two weeks), so Grant and I have quite a bit of travel ahead of us.
 
Once again we’ve divided up the sessions into administration for the DBAs and development topics for the developers, but feel free to cross from one to the other. Our theme is “Ship often, ship safe” and we hope to show you how to build better software, faster. Our goal is to help you get your enhancements and patches into production so your customers can make better use of their applications.
 
London is first, on Oct 24 at the Grange St. Paul’s Hotel. The agenda is set, with Grant and me delivering a few sessions and a host of Red Gate developers, along with some Friends of Red Gate, presenting on a variety of topics. We also have labs you can drop into during the day to gain practical knowledge on how to solve some of your SQL Server problems.
 
A little over a week later, on November 3, 2014, SQL in the City returns to Seattle with an all-day event the Monday before the PASS Summit. We have a similar agenda, though a few different speakers, at McCaw Hall at Seattle Center.
 
We do hope you’ll join us at one of the events, and get a day of training on SQL Server, the Red Gate way. We’ll also have a short happy hour afterwards, and Grant and I would love to share a toast with you. Feel free to stop and chat with either of us at any time we’re not presenting.
