All-Star Idea For How To Score Better


But we all seem to agree that judging difficulty 'live' is really ineffective. Getting an objective, accurate difficulty score later seems fair, as long as the judges judging live determine the performance and execution scores.
I agree!!! Now, how can we (parents, gym admins, coaches, judges, and owners) help you all move this process forward and legitimize the sport that we all care so much about...
 
Right now we ask trained judges to judge, in the moment and by eyesight, all the parts of difficulty AND execution/performance at once. Well... sometimes easy skills are performed so well and with such flair that they seem more difficult. And sometimes skills that are insanely harder than everyone else's struggle a hair with execution and are therefore rewarded less.

Then you throw in the smoke and mirrors of how many people actually do a skill versus how many are on the floor.

So, what if we did this:

There are 3 judges (just a number that popped off the top of my head) who judge the execution of every category, plus the performance and creativity of the routine, live. And that is it. They do not do difficulty. Then a separate judge later looks at a video, goes through, and rewards the skills difficulty-wise more objectively. HD video is easily possible (we already do it for legalities). If you do 5 standing fulls in a group of 15 people to try and make it look like more, someone could actually count.
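To make the split concrete, here is a minimal sketch of how the two pieces could be combined. None of this comes from a real scoresheet; the names, weights, and scale are my own assumptions.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class LiveScore:
    """One live judge's sheet: execution/performance/creativity only, no difficulty."""
    execution: float
    performance: float
    creativity: float

def total_score(live_panel, video_difficulty: float) -> float:
    """Average the live panel's subjective totals, then add the
    difficulty score assigned afterwards from the video review."""
    live_avg = mean(s.execution + s.performance + s.creativity for s in live_panel)
    return round(live_avg + video_difficulty, 2)

# Three live judges, one video-reviewed difficulty score (made-up numbers).
panel = [LiveScore(8.5, 9.0, 8.0), LiveScore(8.7, 8.8, 8.2), LiveScore(8.4, 9.1, 8.1)]
print(total_score(panel, video_difficulty=9.25))
```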

Reasons I like this idea:

The high-energy performance of the routines would not be lost while judging. In fact, judges would have only one piece of the puzzle to worry about: how well did they do what they attempted to do? They don't care how easy or how hard it was. Strictly based on how well it was executed.

Difficulty would no longer be hidden or covered. If you have a faker, you won't be rewarded, because there is video evidence of it. Ratios would actually matter, and all the fake stunts that we hope people don't notice would finally matter.

If a team was improperly rewarded for difficulty, it could actually be argued; but performance, creativity, and all that stuff that can only be judged live would not be changed.

Thoughts?
I like this idea!! I think that the person judging difficulty should not be the person judging execution. How can you watch and judge both simultaneously? I don't know how the video would work... maybe it would add time. But I think if you have a difficulty judge and an execution judge, that would solve some of the issues.
 

Honestly, the way to move it forward is to talk it out here. Figure out the amount of personnel needed. Then present it to the NACCC, USASF, or whoever, and see what happens.
 
Once you know the elements of the skills performed, in ratio to how many are on the floor, it is possible to determine (in no way different than now) how difficult it is. Though I would allow skills to be worth .XX instead of just .X in difficulty.
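A hypothetical illustration of both points, the participation ratio and the two-decimal precision. The base value and the straight-ratio scaling are assumptions, not a rule from any real scoresheet.

```python
def skill_credit(base_value: float, performing: int, on_floor: int) -> float:
    """Scale a skill's base difficulty value by the fraction of athletes
    actually doing it, reported to two decimals (.XX) instead of .X."""
    return round(base_value * (performing / on_floor), 2)

# 5 standing fulls out of 15 on the floor earn a third of the full credit.
print(skill_credit(base_value=3.0, performing=5, on_floor=15))   # 1.0
print(skill_credit(base_value=3.0, performing=15, on_floor=15))  # 3.0
```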

But given a sheet with everything on it, versus a one-time live viewing, it becomes objective enough that you could score the routine without actually watching it: if 2 teams did 7 of the same stunt sequence they would receive the same score, but if one had a front spot it would change.

Or, a real example: Stingrays Lime does quite a few jump-to-handsprings. Does the third time doing essentially the same thing still increase the difficulty, versus a team that does the same skill once in the routine?
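Here is a sketch of what scoring straight from the sheet could look like: difficulty is computed purely from the declared list, so identical lists always score identically, and a front spot or a repeated skill changes the number in a defined way. The base values, the front-spot deduction, and the halved credit for repeats are invented placeholders, exactly the kind of rules a real scoresheet would have to pin down.

```python
from collections import Counter

# Placeholder values; a real scoresheet would define these.
BASE = {"stunt_sequence": 0.50, "jump_to_handspring": 0.30}
FRONT_SPOT_PENALTY = 0.05   # assumed: a front spot lowers the stunt's value
REPEAT_FACTOR = 0.5         # assumed: each repeat of a skill is worth half as much

def difficulty(skills):
    """skills: list of (skill_name, has_front_spot) in performance order."""
    seen = Counter()
    total = 0.0
    for name, front_spot in skills:
        value = BASE[name] * (REPEAT_FACTOR ** seen[name])
        if front_spot:
            value -= FRONT_SPOT_PENALTY
        total += value
        seen[name] += 1
    return round(total, 2)

# Two teams with 7 of the same stunt sequence: same list, same score,
# unless one of them uses a front spot.
team_a = [("stunt_sequence", False)] * 7
team_b = [("stunt_sequence", True)] + [("stunt_sequence", False)] * 6
print(difficulty(team_a), difficulty(team_b))  # 0.99 0.94
```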
 

In theory it should. Doesn't always. Some scoresheets reward skills better than others.
 
I just think those are the types of things that would need to be addressed if difficulty gets judged by an exact list of skills. We need to know what will score what, whether doing the same thing twice will increase it, and whether having a second, easier stunt will hurt your score more than it helps.
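For instance, whether that second, easier stunt helps or hurts depends entirely on a rule the sheet would have to spell out: is credit summed, or averaged? The numbers below are placeholders.

```python
hard_stunt, easy_stunt = 1.0, 0.4   # assumed credit values

summed   = hard_stunt + easy_stunt        # 1.4 -> the extra stunt helps
averaged = (hard_stunt + easy_stunt) / 2  # 0.7 -> the extra stunt hurts
print(summed, averaged)
```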
 

But this is a different question. How is what happens now different than the system I am suggesting?
 
I'm all for taking it one step at a time. First get something like this implemented, then work on setting up a COP (code of points) for each skill.
 

The nice thing is, with the difficulty judging being done by video and NOT live, it is actually possible to have a COP.
 
Would it help if coaches put a time of execution on each skill that needs to be judged for difficulty, in addition to the number of skills performed on the sheets they turned in ahead of time? That way the judge watching the video can fast forward through a lot of choreography that is irrelevant to the difficulty score. To make this work, the video would have to start at the same time that the music starts, so that the timing is exact.
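A sketch of what such a sheet might look like and how the video judge could use it, assuming the recording is synced to the music start. The format and field names are hypothetical.

```python
# Hypothetical coach-submitted sheet: each declared skill carries an
# offset from the start of the music.
declared_skills = [
    {"time": "0:12", "skill": "standing full", "count": 5},
    {"time": "0:48", "skill": "elite stunt sequence", "count": 4},
    {"time": "1:55", "skill": "jump to handspring", "count": 7},
]

def to_seconds(mm_ss: str) -> int:
    minutes, seconds = mm_ss.split(":")
    return int(minutes) * 60 + int(seconds)

# Because the recording starts with the music, these offsets map directly
# onto the video, so the judge seeks to each skill instead of scrubbing.
for entry in declared_skills:
    print(f"seek to {to_seconds(entry['time'])}s -> verify {entry['count']} x {entry['skill']}")
```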
 

I think THAT exact isn't necessary. Remember, I did one of the hardest, most intricate routines in about 12 minutes with not the best camera work and equipment. If you have done this enough, you can skip ahead and back fairly easily.

I would be curious (and this all depends on BlueCat's willingness to share), if I gave difficulty scores, how close I would be to what Cheetahs got.
 
I approve of this idea. At the end of the day, difficulty is set and limited by the routine: what you perform is what you perform, and the difficulty will not change. Therefore a fixed score should be created for this based on the routine content, whether that be through 1 person objectively watching the video or looking at a script (by whatever means) and giving the correct score for difficulty, and penalising fakes should be done. You shouldn't get scores for difficulty when it is false, just because it was not spotted. Then a set of judges can score the variable parts of the routine that are subjective - creativity etc. This would mean the judges' average scores would just be based on those elements, so it would be 100% fair for every team on the mat, as difficulty is easily standardised and the subjective elements of the score are done as an average from judges who have focused purely on them. This is how I think it should be judged.
 
Could the judge with the replay do it with one viewing at half speed from the front, then a second full-speed view to confirm/check? Or is that not enough time?
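Rough arithmetic on that, assuming a standard 2:30 routine (adjust if the division runs shorter):

```python
routine_seconds = 2 * 60 + 30            # 2:30 routine, assumed length
half_speed_pass = routine_seconds * 2    # watching at half speed doubles the time
full_speed_check = routine_seconds
total_minutes = (half_speed_pass + full_speed_check) / 60
print(total_minutes, "minutes of viewing per routine")  # 7.5
```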
 
I get that what happens now is essentially the same, and nothing has to be different. But it's kinda like an 8-color crayon box versus the 96: if you see something and it's blue, with just the 8-pack it will be just blue, but with 96 there are 8 different blues.

If you are going to have a comprehensive list, there should be an objective way to evaluate the list. Then we would actually know if our routines have maxed out their difficulty.
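In other words, once the list scores to a fixed number, "maxed out" becomes a simple check against whatever cap the division uses. The cap below is a made-up placeholder.

```python
DIVISION_MAX_DIFFICULTY = 10.0   # assumed cap, not a real value

def maxed_out(routine_difficulty: float) -> bool:
    """True if the declared routine already earns the full difficulty credit."""
    return routine_difficulty >= DIVISION_MAX_DIFFICULTY

print(maxed_out(9.4))    # False: room to add or upgrade skills
print(maxed_out(10.0))   # True: difficulty is already maxed
```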
 

I think you are putting the cart before the horse. At the moment there is no way to have an objective list at all. I am suggesting setting up a system that would allow a list to be developed later on. So it would allow for more accuracy later.
 
