Priority Ranking versus Mean Scoring versus Likert Scales in ARS voting

Recently, a financial client submitted polling questions to me for an upcoming session. Among the questions they wanted to pose to the audience was one regarding insolvency resolution. They wanted attendees to priority-rank a list of three components from most to least important to their framework. Under this scenario, I convinced my client to use an alternative to priority ranking.

Priority ranking allows you to rank up to ten selected alternatives. For example, a list of five criteria can be offered and the audience can be asked to rank all five using a predetermined point system. That point system assigns values ranging from most to least important/useful/necessary and so on. There’s also a lesser-considered and lesser-used feature within priority ranking: ranking fewer alternatives than those listed.

Limiting the selections also eliminates the tedious mystery of guessing which option(s) apply to each participant. In other words, why make the audience rank 10 criteria if only three or six apply to them? That is also why the ‘Other’ answer option is very useful in identifying your audience. And if the percentage responding with ‘Other’ is significant, you can address those participants and ask which options they find useful that weren’t listed.

I suggest using priority ranking only when listing more than five answer options. To that end, ask yourself whether each of those additional answer options, beyond five, is truly useful or necessary for the results or data you’re seeking, and whether it is relevant to your presentation. Then decide how many of those options you would like the audience to prioritize.

Your next step is determining your point system. I’ve found this feature is too often neglected by clients, presenters, planners and the like. Their focus is on engaging the audience through feedback. However, this neglect also affects the value and accuracy of the data, which in turn reflects on the quality of the presentation itself.

For example, you want participants to rank a question and you offer, say, 10 criteria from which to choose. Now ask yourself:

- From those 10, how many do you let the participants rank? All of them? Their top five? Their top three?

- Once that’s established, what point system do you use? 10-9-8-7-6-5-4-3-2-1? 10-7-5-3-1? 10-7-2?

Below is an example of the difference between using preset and custom point systems. The question asks participants to rank their top five. The data reflects the identical number of keypads responding and the ranking order of each keypad’s five selections.

Notice the difference in results between the preset and custom point systems. By stretching your point values, you better identify which options are of greater priority.
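To make the difference concrete, here is a minimal sketch of the two tallies. The vote data, option labels, and weight lists below are invented for illustration; they are not the actual results from the example above.

```python
# Hypothetical comparison of a preset vs. a custom ("stretched") point
# system for a top-five priority ranking. All data here is invented.
from collections import defaultdict

# Each inner list is one keypad's top-five picks, most to least important.
votes = [
    ["A", "B", "C", "D", "E"],
    ["B", "A", "C", "E", "D"],
    ["A", "C", "B", "D", "F"],
]

def tally(votes, weights):
    """Sum weighted points for each option across all keypads.

    weights[0] is awarded to a keypad's first choice, weights[1] to its
    second choice, and so on."""
    totals = defaultdict(int)
    for ranking in votes:
        for option, points in zip(ranking, weights):
            totals[option] += points
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

preset = tally(votes, [5, 4, 3, 2, 1])    # evenly spaced points
custom = tally(votes, [10, 7, 5, 3, 1])   # stretched points

print(preset)
print(custom)
```

With the evenly spaced preset, A leads B by only 2 points; with the stretched custom weights, the same votes put A ahead by 5, making the true front-runner easier to see.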

Now, let’s play Devil’s Advocate: what if your attendees can’t select one criterion that they find most important and, instead, have TWO criteria that are equally important to them? How can you apply a descending point system that truly reflects their opinion? After all, we have preferences in life, both personally and professionally. But there are instances where we can’t prioritize one criterion over another. It can’t be forced.

ADVICE: Review the wording of your polling question. If the question leaves room to question its validity, it shouldn’t be asked. I will often assist a client, or suggest, that we go back to the drawing keyboard.

Back to my client scenario.

They had only three criteria to consider as options. I asked, and they agreed, that all three criteria applied to everyone in the audience. I then asked, and they also agreed, that a preset ranking system would narrow the quality of the results. Enter the solution: mean score ranking.

Instead of priority ranking the three criteria on one slide, each criterion was voted on individually on a scale of 1 to 10. The additional time needed to poll three criteria individually, versus one, was actually minimal. Combining three or more options into one considerably thought-provoking priority-ranking question can be more time consuming than asking three less thought-provoking questions. Polling individually allows your audience to focus on one component at a time and vote on its merits.
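The mechanics of mean-score ranking are simple enough to sketch. The criterion names and vote values below are assumptions made up for illustration, not the client's actual data.

```python
# Minimal sketch of mean-score ranking: each criterion is polled
# individually on a 1-10 scale, then criteria are ordered by average
# score. All vote values here are invented.
criteria_votes = {
    "Criterion A": [8, 9, 7, 10, 8],
    "Criterion B": [9, 9, 8, 9, 10],
    "Criterion C": [9, 10, 8, 9, 9],
}

def mean_scores(votes_by_criterion):
    """Return (criterion, mean) pairs from highest to lowest mean."""
    means = {name: sum(v) / len(v) for name, v in votes_by_criterion.items()}
    return sorted(means.items(), key=lambda kv: -kv[1])

for name, score in mean_scores(criteria_votes):
    print(f"{name}: {score:.2f}")
```

Note that two criteria can legitimately tie under this method, which is exactly the situation a forced descending point system cannot express.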

The results were addressed and used to propel the direction of the panel discussion. It also allowed the panel to address Criteria B and C in greater depth, based on their scores and their nearly identical priority to the audience.

If the presentation doesn’t involve a Q & A with the presenter, a Likert scale of answer options can achieve a similar effect. You can use a range from ‘Very Important’ to ‘Not Important At All,’ from ‘Strongly Agree’ to ‘Strongly Disagree,’ and so on. With this scale, you can add options such as ‘Does not apply to me’ or ‘I Don’t Know’ to increase participation and better identify your audience.
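One practical wrinkle with the opt-out options is keeping them in the participation count while leaving them out of any weighted average. A sketch, with assumed labels and invented responses:

```python
# Hedged sketch: tallying Likert responses so that 'Does not apply to
# me' and 'I Don't Know' count toward participation but are excluded
# from the weighted average. Labels and response data are invented.
from collections import Counter

SCALE = {
    "Very Important": 4,
    "Somewhat Important": 3,
    "Not Very Important": 2,
    "Not Important At All": 1,
}

responses = [
    "Very Important", "Somewhat Important", "Very Important",
    "Does not apply to me", "Not Very Important", "I Don't Know",
]

counts = Counter(responses)                     # every keypad counted
scored = [SCALE[r] for r in responses if r in SCALE]
participation = len(responses)
avg = sum(scored) / len(scored)                 # opt-outs excluded

print(counts)
print(f"participation: {participation}, weighted average: {avg:.2f}")
```

This way the opt-out options do their job of raising participation without skewing the scale's average.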


THE ARS HAT

Why me?

I’ve often heard from clients that ARS ‘has to work,’ that it will ‘make or break’ their presentation or event. That’s why I’m there: to provide them that assurance. But I’m also curious as to why they express or reiterate those concerns.

To those in the AV industry who provide and perform the functions of audience response technology and may think I’m being critical of them: I’m certainly not. When I’m onsite, they are the same people I work alongside. I value their assistance. Their ‘primary’ functions are often necessary to help me follow through on the assurance I provide our clients.

I also know that, behind the scenes, they are focused on other important components of our client’s event (sound, projection, video, cabling, microphones, etc.), at which they are more proficient than I am and toward which they have to devote their time and energy in similar preparation. When I’m not requested at an event, they perform the function of an ARS operator in a secondary or tertiary role, at best, behind their other onsite AV tasks.

At my most recent events, I’ve had three different AV techs tell me:

“I’m glad I don’t have to do that.”

“I’m so glad you’re here.”

“How did you do that?”

Why me? That’s why. I wear the ARS Hat.

How to merge two ARS questions into one

Conditional branching is a seldom-used component within a presentation. When it is used, it is often misused and mismanaged and, as a result, time consuming. More than anything, conditional branching is ignored and underutilized; most presenters either fear it or simply don’t know how best to apply it within a presentation. Changing sequence within your presentation is something we’ll address in due time.

Often, a presenter pretends to engage an audience by asking for a show of hands on questions that carry little relevance to the audience or value to the presenter. After all, if the question were of value, it would yield honest and more abundant feedback; a presenter would know better than to settle for a show of hands.

A presenter asks a general closed-ended question to start. They follow that with subsequent, scripted, more detailed closed-ended questions. This faux attempt at engagement, when combined with weak content, is a recipe for disengagement and can snip the connection between the presenter and the audience.

The point is, the presenter is not interested enough in how many people respond, because they are following a script and are ‘going down that path’ regardless. Again, we’ll discuss a presenter’s more genuine interest in getting feedback another day.

When asking a Yes/No or other closed-ended question, it is common to ask a follow-up question of those who answered Yes. But when a presenter proceeds further with a narrowing line of questioning, they reach fewer and fewer audience members while dismissing more and more.

When using audience response technology, the common mistake is to ask the audience those same narrowing questions. The first question separates those who it may or may not apply to…

…and then a follow-up question is asked ‘solely’ of those who it did apply to.


The solution is to merge ‘No’ with those specific ‘Yes’ follow-up options into one polling slide. This engages and sustains the audience collectively.

In the example below, you can also expand the ‘No’ and ‘Yes’ options to better identify their reasoning or thoughts behind their selection.

Though I lack abundant proof of my theory, I truly believe that more people will answer ‘No’ when it is clustered with various ‘Yes’ options than when it is paired with a lone ‘Yes.’ The reason is simple curiosity: those ‘No’ responders will more likely want to know how they compare to their fellow attendees’ various ‘Yes’ results. And if you can add a simple ‘Why?’ component to your answers, congratulations! By capturing more interest in that singular moment, you’re likely to capture more feedback. And then there’s the obvious second benefit: the time saved by polling the entire audience with just one question.
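Here is a sketch of what a merged poll looks like in practice. The option wording and keypad counts are hypothetical, invented purely to show the shape of the single combined question:

```python
# Sketch of merging a Yes/No question with its Yes-only follow-up (and
# a 'Why?' expansion of 'No') into one poll for the whole audience.
# Option labels and tallies below are invented.
merged_options = [
    "No - not applicable to me",
    "No - considered it, decided against",
    "Yes - within the last year",
    "Yes - one to five years ago",
    "Yes - more than five years ago",
]

# Raw keypad counts per option from one combined polling slide.
raw_counts = {
    "No - not applicable to me": 22,
    "No - considered it, decided against": 14,
    "Yes - within the last year": 31,
    "Yes - one to five years ago": 18,
    "Yes - more than five years ago": 9,
}

total = sum(raw_counts.values())
no_share = sum(v for k, v in raw_counts.items() if k.startswith("No")) / total
yes_share = 1 - no_share

print(f"{total} responses; No: {no_share:.0%}, Yes: {yes_share:.0%}")
```

One slide still yields the original Yes/No split (by collapsing the prefixes), plus the follow-up detail, without ever dismissing the ‘No’ voters.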


Some will argue the presenter’s intent is only to focus on the ‘Yes’ people. However, that’s taking a risk in any presentation where there is a presumed outcome. And that is where, again, the larger scope of conditional branching within a presentation can really help. But for now, consider this a baby step toward maintaining your audience’s attention.

WARM AND CAPABLE: Using ARS to gauge Presidential candidates.

Chris Malone of Fidelum Partners explained how social psychologists have long identified two specific personality traits that largely and best characterize individuals: warmth and competence.

Malone, co-author of the book The Human Brand, spoke last month at ORX (Operating Room Excellence) 2015 at the Marriott Riverside in San Antonio. The seventh annual conference was hosted by Herrin Publishing of Malvern, Pennsylvania. Herrin is the publisher of Outpatient Surgery magazine.

Malone linked warmth and competence with their resulting emotions and behaviors. He then asked the audience to gauge two individuals based on those perceptions.

Taking advantage of the political climate, the two subjects were Hillary Clinton and Donald Trump. Using their audience response cards, 185 participants voted on a scale of 1 to 7 for each candidate based on the candidate’s perceived warmth and competence. On the scale, 1 meant ‘Does not describe at all’ and 7 meant ‘Describes extremely well.’ The results were, to say the least, remarkable.

Neither scored well. Clinton was perceived as being more warm and trustworthy than Trump. However, Trump was viewed as more competent and capable than Clinton. In the end, when averaging their two scores, they were nearly identical.

- Trump's two traits carried a mean score of 3.0736

- Clinton's two traits carried a mean score of 3.0704

As it relates to the OR environment, Malone emphasized that numerous judgments are made daily when engaging with colleagues, patients and their families. Because so many of these perceptions are formed quickly, loyalty is harder to sustain now than ever before, despite the fact that we’re more digitally connected to one another than ever. Malone concludes that we’ve been missing half the picture: the half driven largely by warmth perceptions, which results in human trust and loyalty.

Next year’s OR Excellence will be held October 12-14, 2016 at the Hyatt Regency Coconut Point in Bonita Springs, Florida. For more information, visit http://www.orexcellence.com/

For more information about ‘The Human Brand’, visit thehumanbrand.com

Gastroscopes, Anesthesia Trolleys and Operating Tables, Oh My!

The medical device industry's use of ARS has long been sporadic, mostly out of uncertainty over how to use it in a live venue. Gauging the audience's comfort with, and knowledge of, an existing or new product can go a long way, not only by showing your audience that you care about their thoughts, but by serving that helpful pillar: understanding your audience.

The fear of regurgitating or recycling that which is already known can be eliminated simply by getting familiar with who you're addressing. Doing so can also enhance your Q & A discussion later in the presentation, which you can kick-start with ARS using deeper questions and content.

It's more than promoting medical device software to an audience of medical buyers or caregivers. It's qualifying your audience. And it can allow you to better position yourself and help your target audience learn more.