This is a series of posts on the PASS Summit and SQL Saturdays. I’m outlining some thoughts here, sometimes for the first time, but on topics that I think make better events. These are opinions and thoughts, not mandates or demands. I’ve written on choosing speakers and debuting content.
Update: added a comment from another set of feedback
Tl;dr: We need to review and examine comments made on abstracts as a community and a professional organization. Ultimately the program committee has a tough job, but I think we can better prepare people to both critically review abstracts and choose great content that makes the Summit an incredible event.
As the PASS Summit selections were released in June, a few of the speakers posted their submissions and the comments they received from the program committee. I think this is great, and I’d really encourage every speaker to do this. Whether you were selected or not, this is good data to better understand how the committee works and how sessions get evaluated.
This will change year to year, as committee members change. It’s good to understand the viewpoints this year, along with the results. Debate and discussion are good here, and will help shape the views of new volunteers in the future, so I’d encourage it.
In fact, if speakers want to publish on their blogs and send links, I’ll compile a list. Here’s what I’ve seen so far:
Not a big list. I don’t have anything because I didn’t submit this year.
But I’m getting distracted. What I wanted to talk about is…
How Should We Choose Content?
It’s always interesting to hear various people involved in the process at events talk about this. For the most part it comes across like the strategy many business people employ. They think a little, make a guess, and hope it works well.
If anyone tells you they know what people want to see, then go look at the rooms at the event and see how well the audience matches up with the size of the room. There are plenty of mistakes that showcase themselves with too few, or too many, people in a room.
And how could they know? When the schedule is picked, no one really has much of an idea of who is actually attending. Most of the registrations will come as the event gets closer. In addition, we’re all fickle and subject to the changing requirements and demands of our jobs. We might think Power BI is a joke (or incredibly necessary) right now, but our view could change 180 degrees by October.
There isn’t a perfect system. And I’m not saying that I think the program committee did a bad job. On the contrary, it’s a hard, thankless job, and someone will always complain about the process. My intention isn’t to do that, but rather discuss what is good feedback and what isn’t.
I hope the volunteers next year are better prepared and have some idea of how attendees and speakers view their comments. I also hope we publish better guidelines, both for reviewers to better choose sessions and for attendees to leave useful comments. I’d like to know what attendees actually want to see, or what they think makes a good session.
Lastly, for reviewers, you need to pick sessions for the attendees. I know many of you are attendees, but there are many, many more attendees that you need to consider. It’s easy to get trapped into picking what you like, which may not make a great conference schedule.
The reason I think this is important is that none of us necessarily has great skills as a critic, which is what reviewers do here. Many of us learned about how to examine writing in school and provide an analysis. Those skills wane, and should be both practiced and critiqued.
Comments on Comments
You can read the various posts above and look at the comments, but I have a few thoughts on what items were added to the abstracts.
Note, this section is long, and I’ve mixed up comments from the various people that published items above.
Comments I Like
“Goals align w/ abstract description. Demonstration of Brent Ozar tools is a must for all SQL Administrators. Topic: Interesting for attendees and would gain an audience. Title reflects content.Interesting session for those seeking performance solutions. Objective: Level and prerequisites match goals. Material matches subject and could be presented in 75 minutes.” – Excellent feedback and detailed. I like the disclosure and notes about how the abstract/goals/title relate.
“The outline seems well developed. The outline seems to clearly describe the contents of the presentation. The topic and goals should be compelling to attendees. The topic appears to be timely, new and relevant. There appears to be a reasonable amount of live demonstrations in relation to the topic being presented. The topic and goals appear to deliver an appropriate amount of material for the time allotted.” – Again, good feedback with detail that shows the reviewer thought through this and attempted to clarify what they saw.
“Whilst I get why the title might have chartaphobia in it – it still didn’t sound right. It’s not an actual word. The abstract is good, it does tell me that I may have charts but they’re “hideous and disgusting”. This didn’t sound nice and might turn some audience members away. The topic of Power BI is a good topic, the delivery just needs some work for a 100 level – explain earlier in the abstract how Power BI will make the charts better or exist at all.” – I like this feedback. It’s detailed with pointers to why the reviewer drew conclusions.
“Well written outline with clear goals and an outline that lets the attendee know exactly what will be achieved from the session. The topic is one that I think appeals to any data professional who has …” – This is good feedback. The goals match the abstract and inform the attendee. Why is the topic good? There’s more, but I left it off.
“Basic and 300 don’t coincide. Need to decide is it an advanced topic or not.” – This is a good comment that explains why this abstract needs work.
“I like the Shakespearean twist to the abstract, but I feel like it is really lacking in content to let an attendee know what they would really be coming into (outside of learning about whether to get certified or not).” – A good description of what is wrong. I agree. There isn’t quite enough to entice someone and let them know what’s coming. It’s not bad, but could be better.
Comments I Don’t Like
“…should be 200 level – the title sounds like a deep dive, not entry level.” – OK, so not entry level, but deep dive is 200? Levels are somewhat silly, but clearly we have a typo (meant 300/400) or a big mismatch in what 200 is. Maybe we need better guidelines on what 100/200/300/400/500 are?
“detailed but not compelling” – I really dislike very general comments like this. Why isn’t this compelling? Is it that you don’t like the topic? You know it already? I’ve made comments like this (for abstracts and in VCS), and this is decidedly unhelpful. As a reviewer, stop and think about what you don’t like and express that. If you can’t, you need to work on that skill.
“Personally, I feel this is more suited to an SQL Saturday” – What? I’ve given quite a few sessions at SQL Saturdays that I’ve also delivered at the PASS Summit. Same for sessions I’ve seen. Who thinks we do better or worse sessions at one event or the other?
“Also session has no indicated real examples!” – I thought this was funny because the abstract mentions volunteering, speaking, blogging, and organizing events. What “real” examples are needed?
“Abstract is a bit rough but good enough to capture attention, describe the topic, and provides reasons why someone should attend.” – I like the detail, but what does “rough” mean? I have my impression, but I’d rather see this articulated as “difficult to read and understand the content” or “the formatting is offputting.” Note, this is why we should have some re-review and perhaps copy edit of sessions accepted. At least let the author fix things.
“Very focused topic – great for a lightning talk. Very good prerequisites and goals. The abstract is entertaining yet clear.” – Good comments, but not great. What does “good” mean? The pre-reqs should match the goals, is that what happened?
A two-fer here – two comments on the same abstract:
- Well written abstract, but it doesn’t appear appealing with only 25% demo and no real examples
- This seems like the right length to really kick start learning R.
OK, is “length” the amount of material or something else? I don’t like that word, but we have two reviewers with what seem to be opposite views. I tend to agree with the first one: there need to be examples, and for this topic 25% seems low. But I think we should debate when comments are vastly different, and someone should perhaps think about more training, or about declining a repeat volunteer request.
- Sounds like it could be a good session but the abstract seemed all over the place. Hard to follow.
- abstract seems to ramble. grammatical anomalies. punctuation misuse.
- Fantastic topic, and extremely well-constructed abstract!
- Well written abstract, sold me in easily. Seems to be on level. Clear goals.
- Nice abstract with clear goals and outlines.
- Abstract OK. Learning goals a bit roughly defined – could be more precise in this narrow topic.
- Good abstract. Could be an interesting session to attendees.
These comments tell me that the reviewers aren’t on the same page. I am unclear how people could view the same abstract so differently. The group of people that wrote these comments needs a root cause analysis of what happened here. Or this needs to be a case study for next year’s reviewers.
Note, I’ll say that I think the abstract did get off track and a little unfocused in explaining what will be covered. A few grammar mistakes I forgive, but it also needs some tightening with fewer words and more focus on what’s covered.
“Only issue is the second sentence reads a little oddly.” – I agree with this one, but if you’re going to make this comment, expand. Why? Help someone better understand this tangled mess of a language we call English.
“Abstract targeted at Mgr/Team Lead. Unfortunately this is not the typical audience at PASS Summit.” – What’s the typical audience? Do we not look to have some atypical or niche sessions? This is a place where I wish reviewers would remember that not everyone goes to every session. There will be 15 or so sessions at any given time. Niche topics are OK.
“level too low” – For a 100 jumpstart session with no pre-reqs. Again, why?
“If there is a prerequisite, the session level should be 200 (instead of 100).” – OK, if this is a guideline for speakers, fine, but reviewers don’t get to make things up. I couldn’t find anything about this on the PASS site. The rest of the comment did ask for more details, which I do like.
“Abstract is ok, with a few wording and grammar choices that could be improved/changed to make reading easier” – I’m torn here. I think writing and grammar are important, and if the speaker makes mistakes here, they may do so in their PowerPoint and distract from the presentation. However, this process is also competitive and time sensitive, so I’d overlook minor issues and perhaps even have the speaker (or committee) correct them for the event.
“Old topic for BIML. Need to add some new features.” – Stop this. What you learned isn’t what others know. We have people at all levels, so don’t program for your level. At worst, there should be guidelines about how many topics at a level for each area. Without that, don’t make these comments. This person, in my opinion, is a poor reviewer and critic.
“Demo percentage seems low for such a topic but, overall, looks like a good session” – Do we have guidelines for how much demo per topic or type of session? If not, then DO NOT use this as your evaluation criteria. If we do, where is it?
“topic is interesting but not really “hot” or “latest”.” – The rest of the comment is good, but here I’m confused. What does hot or latest matter? Are we trying to be hot or latest with all sessions? If not, and I’d say not, then why the comment?
“goals not compelling” – Again, why not? What makes something compelling? Or at least describe what isn’t compelling to you, or for the audience.
“can this really be covered in 10 minutes?” – First, who are you to question the speaker’s ability here? Especially if you don’t know whether this is Joe Developer or Stacia Misner. Don’t make these comments if you haven’t seen the session. If you have, disclose that. If you think there is too much content for the time, note that, but I think this is too subjective a comment. Your (the reviewer’s) learning or teaching ability is not being evaluated here.
“Abstract: not compelling attendees. Topic: goals are very low, I don’t think this session is interesting” – I realize that reviewers might be rushed, but this is poor. “Not compelling attendees”? What does that mean? It’s a comment that doesn’t help, and by the way, I disagree. The second part of this also isn’t detailed. What is too low? I’d disagree there as well. The goals list things that are specific, and certainly not what I’d call too basic. The last item shows the reviewer’s personal taste, which should be weighted low. This is about what some percentage of the attendees will want to see. Not all of them, just some of them.
“This is more related to dba track rather than prodev. Also is survival really career development? Many would say that working 15 years as a lone dba could equate to failure in some peoples eye’s and I would struggle to want to see this session based upon info provided.” – I’m surprised this got through the review process from Lance and Mindy. I know they read a lot of comments, but this one is unprofessional and rude. The first part and the last partial sentence are fair opinions, though I think this is easily a pro-dev topic. The second sentence shows how narrowly many reviewers see the world. I think the abstract was on balancing workload and life, which is professional development. My opinion here. However, noting that someone is a failure for working the same job for 15 years is ridiculous. Being the lone DBA is a company decision, not an individual one, and apart from the grammar errors, this comment shows a person that is quite myopic about what a successful career looks like. I wouldn’t have this person back reviewing abstracts.
“Dinking the abstract rating for identifying information.” – I. Hate. This.
Honestly, I think this is a jealousy or pet peeve for a few people over, well, I’ll say it, Brent Ozar’s marketing. He does it well, better than most people and this comment/attitude is crap. Go look at all the marketing for all conferences. They mention the speaker’s names. Speakers attract attendees. If we don’t care about the speakers, why do we have this page? If you want a speaker independent review, then blank out the names, but don’t ding abstracts for this.
This is a subjective process. I am glad at least three people review the abstracts because any one person might not like the topic/speaker, have a bad day, not be interested in the topic, etc.
For the record, I’m no better and have to constantly remind myself as I choose content and edit articles that it’s not my opinion. I have to think more broadly and put myself in the position of the beginner, the intermediate person, the expert DBA that’s getting into R, the person that wants to delve deeply into a topic.
Really, I want this to be discussed and debated on a few levels. What guidelines should reviewers use, and what feedback helps speakers? I would like to see a better program that helps build a better event overall. Again, we all could use better skills in evaluating sessions, so make this more public and let’s ensure those that volunteer have better guidelines on how to examine abstracts.
Lastly, I think judging based on levels is silly. We can’t agree on this, so if you think the level is too high or low, that can be adjusted. Make a comment that it should be level XX and let speakers edit this before the schedule is released. The same goes for minor spelling, grammar, or phrasing issues. Let’s remember we have a lot of non-English speakers, and correct some of these items rather than just saying a native English speaker would deliver a better session.
Let’s remember, writing != speaking.