All-Star Comparative Scoring


The complaint was that everyone was doing what was needed to be in the high category, so there wasn't enough room to score. So they ditched it and went to comparative. What you should do is iterate and improve. I would actually say last year's scoresheet was a success: it got everyone to compartmentalize and do what was needed. It just needed an adjustment to make it fit better.
Also, I think if you took difficulty scores out of live judging and saved that for video review, the whole thing might seem less chaotic. The only things you'd be judging live are dance, creativity, technique, and overall impression.

Plus, it would be easier to go back if necessary.
 
Also, I think if you took difficulty scores out of live judging and saved that for video review, the whole thing might seem less chaotic. The only things you'd be judging live are dance, creativity, technique, and overall impression.

Plus, it would be easier to go back if necessary.

I think a coach can argue difficulty, but never live execution. And then allow them to pay $500 if they want to petition a difficulty score. If they rescore, give the $500 back. If not, the judges keep it.
 
I think a coach can argue difficulty, but never live execution. And then allow them to pay $500 if they want to petition a difficulty score. If they rescore, give the $500 back. If not, the judges keep it.
That $500 would be killer. I argued difficulty (correctly, with email proof from the scoring director of the EP to back me up) at our first competition. Difficulty score was raised (to the correct range) by 0.3 points. That is a huge difference and allowed us to win a paid US Finals bid. I make exactly $0 coaching for this team. The idea of paying to ensure judges do their jobs correctly (just get into the right range - I'm not arguing scores once they are within the correct range)... Yeah, that would suck if that policy were implemented.
 
That $500 would be killer. I argued difficulty (correctly, with email proof from the scoring director of the EP to back me up) at our first competition. Difficulty score was raised (to the correct range) by 0.3 points. That is a huge difference and allowed us to win a paid US Finals bid. I make exactly $0 coaching for this team. The idea of paying to ensure judges do their jobs correctly (just get into the right range - I'm not arguing scores once they are within the correct range)... Yeah, that would suck if that policy were implemented.
Glad you won that one. I was wondering how that turned out. #coachsarahforthewin
 
That's one thing I always liked about Cheersport: video judging. I believe they judge an entire division, then review the whole division for correct scoring, then after scores are released you still have like an hour or so to challenge your scores.
 
That $500 would be killer. I argued difficulty (correctly, with email proof from the scoring director of the EP to back me up) at our first competition. Difficulty score was raised (to the correct range) by 0.3 points. That is a huge difference and allowed us to win a paid US Finals bid. I make exactly $0 coaching for this team. The idea of paying to ensure judges do their jobs correctly (just get into the right range - I'm not arguing scores once they are within the correct range)... Yeah, that would suck if that policy were implemented.

The reverse is that if there is no barrier to requesting a score increase, every team will request one.
 
I think a coach can argue difficulty, but never live execution. And then allow them to pay $500 if they want to petition a difficulty score. If they rescore, give the $500 back. If not, the judges keep it.
I'd be fine with restricting arguing to difficulty.

I do think you should have some type of cap (be it fiscal or a set number).
 
Question for all you judges out there. Is there ever a situation where you go back to the score sheet for a team who perhaps performed first in the division and rescore? I.e., you gave them a middle-of-the-range score and realize, after more teams in the division have performed, that they are better than you originally scored them.

Otherwise, how can you give a team an accurate score when you are using "comparative" scoring and have nothing to compare them to?

I hope that makes sense.
 
Question for all you judges out there. Is there ever a situation where you go back to the score sheet for a team who perhaps performed first in the division and rescore? I.e., you gave them a middle-of-the-range score and realize, after more teams in the division have performed, that they are better than you originally scored them.

Otherwise, how can you give a team an accurate score when you are using "comparative" scoring and have nothing to compare them to?

I hope that makes sense.
Follow-up question based on the above.

If you do go back and adjust a previous team's score, is it with the assistance of video replay or from memory?


*If 2 is better than 1, does that make a double standard more enjoyable?*
 
The reverse is that if there is no barrier to requesting a score increase, every team will request one.
I have Jam's score check sheet pre-filled with the expected score and every single skill that is choreographed under each category, along with the time it happens in the routine. I take note of when skills are not performed as choreographed (although I usually have an overabundance of skills just in case a kid or two happens to omit one - although that's rare). I realize that I only coach levels 1-3 and this may be more difficult to do for higher levels, but I can't imagine not going into a competition fully prepared to justify completely why my teams should score into a particular range. Maybe I'm an anomaly this way? But I would hope not. Again, I don't argue with judges if the teams are scored in the proper range. But you'd better be sure that I will be arguing if they are out of range. What else is score check for than to verify that objective scores meet your choreographed expectations?

As to comparative scoring... Jam did it all last year and I was always okay with the results, even in terms of cross-level comparison (rec divisions usually have a grand champion named for all of the rec divisions, inclusive of levels 1-4), but then again, we had the same panel of judges scoring all of the divisions so the comparisons may have been more fair.
 
Otherwise, how can you give a team an accurate score when you are using "comparative" scoring and have nothing to compare them to?

You leave room for other teams to be scored above or below you. The idea is that an accurate score is only accurate compared to the competition. You can't compare your score for one comp to your score at another.

People on my team like to talk about how we got our highest score ever at Worlds last year, and I want to shake them and say that it doesn't mean anything!
 
You leave plenty of room for other teams to be scored above or below you. The idea is that an accurate score is only accurate compared to the competition. You can't compare your score for one comp to your score at another.

People on my team like to talk about how we got our highest score ever at Worlds last year, and I want to shake them and say that it doesn't mean anything!


When it comes to judging, I don't necessarily "leave room." I mean, on difficulty, what they do is what they do. End of story. Rarely do you have perfect execution, so that is somewhat where there can be room. However, by the time you add all three scoresheets and deductions, it works out. The first-performing team can easily win, if they are better, but they need to truly be BETTER!
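
To put hypothetical numbers on "it works out," here's a minimal sketch of that arithmetic: three judges' sheets summed, with deductions subtracted at the end. The categories, point values, and deduction amounts below are invented for illustration and don't reflect any EP's actual scoresheet or weighting.

```python
# Rough illustration only: three judges' scoresheets are summed, then deductions
# subtracted. All category names and numbers are invented, not any EP's real
# scoring formula.

def final_score(sheets, deductions):
    """Add up every category on every judge's sheet, then subtract deductions."""
    return sum(sum(sheet.values()) for sheet in sheets) - deductions

# Team that performed first: a touch less difficulty, but clean (no deductions).
first_team = final_score(
    [{"difficulty": 8.5, "execution": 8.0},
     {"difficulty": 8.4, "execution": 8.2},
     {"difficulty": 8.6, "execution": 8.1}],
    deductions=0.0,
)

# Later team: higher difficulty, weaker execution, plus a fall.
later_team = final_score(
    [{"difficulty": 8.8, "execution": 7.5},
     {"difficulty": 8.7, "execution": 7.6},
     {"difficulty": 8.9, "execution": 7.4}],
    deductions=1.5,
)

print(round(first_team, 1), round(later_team, 1))  # 49.8 47.4 -- the first team wins
```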
 
Question for all you judges out there. Is there ever a situation where you go back to a score sheet for a team who perhaps preformed first in the division and rescore? ie. You gave them a middle of the range score and realize, after more teams preformed in the division, that they are better then you originally scored.

Otherwise how else can you give a team an accurate score if you are using "comparative" scoring when you have nothing to compare them to?

I hope that makes sense.


I'll talk to this, because it made me so mad when I was judging. When I judge, I always have in my mind roughly where execution should fall. For example, you hit a lib but it's not strong, or the toe isn't at the knee, but it's not bad overall: you're going to get a 6 or 7. Difficulty usually has a low/med/high rubric, so that's easier to place. I've always looked both at what the team does and at how they did in comparison to other teams. That way, if I've scored Team A too high or too low, other teams are judged the same.

At one competition, I was arguing a score with my fellow panel judge. Our scores needed to be within 1 point of each other, as coaches were complaining about big spreads from judge to judge. I gave Team B a 4 on stunt execution; she gave them a 6. We both presented our reasoning, but she wouldn't budge. I finally said, "Who was cleaner, Team A or Team B?" She said Team A, and I then asked her why she had scored Team A lower than Team B. She got the point and made the score change. I was shocked that there were judges who didn't think about these things.

As for scores across competitions, I've often had my kids do much better at a later competition but score the same as or lower than at other competitions they've competed in. It's all about the different panels of judges and what they're looking for. The rankings don't usually bother me, because I find they're quite accurate, but it can be hard when the kids know they did better and still receive a lower score. It's also hard on them because we use their previous scores to set goals for the next competition (OK, a 6 on jump execution, let's push for 7-8 at the next one, etc.). Even if they do better, it's hard when that's not reflected, but that's part of being a judged sport.
 
When it comes to judging, I don't necessarily "leave room." I mean, on difficulty, what they do is what they do. End of story. Rarely do you have perfect execution, so that is somewhat where there can be room. However, by the time you add all three scoresheets and deductions, it works out. The first-performing team can easily win, if they are better, but they need to truly be BETTER!

Fair enough, but would you put TGLC at the very top of the scoresheet if they went first and you knew Cheetahs was coming later? Or vice versa?

There's only a rough rubric for difficulty, so it's not like a team throwing double-ups automatically equals some set score.
 
