All-Star: How Would You Change All-Star Scoring?


There are multiple ways to do that 1 toss. Are those differences enough to change the start value?

Having different arm positions, to me, does not change the difficulty of the basket, so no.
 
This is the first novel I have written in a while.

I shortened the novel. :) (To write my own, apparently.)

I have loved this idea since you first posted about it, basically to make the objective part of the score as accurate as possible.

I have a few thoughts (in no particular order):

1. Difficulty score - maybe it is feasible to submit a video of a routine in advance for a "difficulty score range" for each component, which a coach could keep in his/her pocket to help initiate a dispute. (I still like the $200 idea because it will hopefully weed out frivolous claims.) With routine changes throughout the year, maybe this isn't feasible, or maybe it is a service gyms pay for at the outset that lets them challenge without the upfront cost each time?

2. Execution score review in large divisions - in divisions with more than a certain number of teams, OR with a certain amount of elapsed time between the first and last team. I totally understand that scores earlier in a division will run lower than at the end of a large one, but going first in a division shouldn't be a "death sentence," especially in competitive divisions where, say, the top 5-6 teams out of 14 are separated by less than a point. (I already cringe at something like The Summit, where some divisions have 20+ teams, all of them among the best in their divisions. In theory, scores in most of those large divisions should be fairly close, and I already pity the teams that go first. I know some EPs reverse day 1 order on day 2 to make up for that, but I have to imagine the day 1 bias/standings still affect day 2 scores.)

3. As a fan/parent, I am still baffled by the fact that scores are not released by many EPs. I think it should be required that aggregate category scores are released for each team (though at this point I'd be happy with a final score). I can't imagine going to any other sporting event and just finding out team B won... and that's it. I would sacrifice real-time scores (like we get in other judged sports) because I want there to be time for reviews, but once the event is finished, I would love to see how the teams stack up.


 
I shortened the novel. :)

1. I honestly think we should keep the current way we score right now (a person decides the difficulty based on the current criteria) and add the new video-judging part. No, it will not be 100% right, but I think changing too many things at once will cause avoidable issues while making structural changes. The only skills list I think someone could submit that would actually help the process is one listing the order the skills come in the routine and what the team wants scored in which category.

2. The reason that happens is that judges want to leave room on the scoresheet for teams later in the division that might be better. It isn't the judges' fault; the way we currently ask them to score forces them to score like this. If you separate out difficulty and execution, you solve this problem. Here is how:

On execution, it is OK to give out perfect scores repeatedly. If every team that came through executed their routine perfectly, no matter the difficulty, it is fine in the system I am describing to give them all a perfect score. If a level 1 team entered a level 5 division, that level 1 team with a level 1 routine could score a perfect on execution. It isn't what you do, it is how you do what you do. So if the first team that goes is PERFECT, then a judge can reward that.

The difficulty judge can break down categories methodically. I also see how a list showing how the routine is structured could help make scores more accurate on the difficulty end.

3. I think scores aren't released because people could find fault in the scoring even if the result was right.


It is OK to compare apples to oranges as long as you do it the right way: is this orange more orangey than this apple is appley? If you create the right criteria, you can compare a lot of stuff.
 
1. Agreed on keeping simultaneous changes to a minimum. I think the struggle with defining actual values for skills comes because people will always debate which skills are harder. This is definitely a case where, to start, it is better to be generally right than precisely wrong. Throw up some values before team selection, or definitely before choreography, and now it is on the gyms to choreograph based on how they want their routine to score. Sure, the starting point may not be perfect, but we have to start somewhere.

2. I like it!! I definitely see how that could work.

3. I agree that's why, but I think it's a cop-out. I think your proposed judging process would make overall judging more accurate. Hopefully that will give the judges/EPs the tools to be more confident about releasing their scores. My current perception is that they don't release them because they aren't as confident in the scores as they should be. In a perfect world, I would like to see not only category scores but the breakdowns of difficulty and execution in each category. I don't even mind if they are released a week after the event. I understand being afraid of the potential backlash, but as parents become empowered with some knowledge, the resistance will become much weaker. (Of course we will never get rid of the Suzie's moms, but don't punish the rest of us who just want to understand the sport our kids love.)


 

I say coaches should be completely allowed to argue their difficulty score, for a price (I stole this part from gymnastics). If a coach believes the difficulty score for their team is not correct, they can pay $200 to have that score independently reviewed by someone else. If the new judge's score is outside a certain range (say the team was given a .7 and the new judge's score differs by more than .1, e.g. a .9 or a .5), the coach gets their money back. If not, the competition keeps the money. Basically, a gym will only argue if they really know their score was messed up: $200 per category they think was wrong.
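The refund rule above can be sketched in a few lines. This is a hypothetical illustration, not any event producer's actual system: the fee, the 0.1 threshold, and the assumption that the reviewer's score always stands are all taken or extrapolated from the post, and every name in the code is made up.

```python
# Hypothetical sketch of the challenge rule above: pay $200 per challenged
# category; the fee comes back only if the independent re-judge moves the
# difficulty score by more than 0.1. All names and values are assumptions.
CHALLENGE_FEE = 200      # dollars per challenged category
REFUND_THRESHOLD = 0.1   # minimum swing that triggers a refund

def challenge_outcome(original_score, review_score):
    """Return the fee outcome for one challenged category."""
    swing = abs(review_score - original_score)
    refunded = swing > REFUND_THRESHOLD
    return {
        "swing": swing,
        "fee_refunded": refunded,
        "fee_kept_by_event": 0 if refunded else CHALLENGE_FEE,
    }

# The post's example: team given a .7, reviewer gives a .9 (a .2 swing)
print(challenge_outcome(0.7, 0.9))   # fee refunded
print(challenge_outcome(0.7, 0.75))  # only a .05 swing: event keeps the fee
```

The point of the threshold is exactly what the post describes: a coach only comes out ahead when the original score really was off, which prices out frivolous challenges.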

I would have gotten it all back, but I think I made at least 6 or 7 trips to the accuscore table in Myrtle Beach, and I don't want to have to carry a couple thousand in cash on me at competitions. That said, the process, at least from my perspective, ran smoothly, but it helps to be able to present a clear case with knowledge of the rules, scoring, and deduction system. I had a sheet where I would write out the rule or range requirement, what we did in the routine to meet it, and why the current score, deduction, or rules interpretation was incorrect. I only really had to "argue" (an in-person discussion involving several layers of judges) one thing, and it was a rule interpretation.
 

The fact that you had to do that is the issue, not that you did do it.
 
I really like the idea of having a declaration of skills. Someone above mentioned how it created discrepancies while judging; however, I believe there are ways to avoid that.
Athletes should be made aware of the skills their coach claims for them, and it is their job as athletes to perform those particular skills.
As a coach, the decision has to be made: will this skill be performed, does the team have the stamina, will this individual get over this mental block, etc.
I think in some ways the coach-athlete relationship will have more 'trust,' if that's the right word, because communication has to be open.
And if there is a penalty for not abiding by the declaration, the pressure to perform a flawless, choreographed routine is increased and you might have a more cohesive team.
I say this because I don't know how many times throughout dance, gymnastics, and cheer I've been told 'the judges don't know what you're supposed to do, so perform like you didn't make a mistake,' but with cheer being such a heavily weighted team sport, the last thing a team wants is for an individual to not pull their weight in skills choreographed for them.
 
I appreciated having a process, versus just "oh well, that's what the judges saw," or having to actually argue and then just getting bumped to the bare bottom of the range rather than getting a legitimate score.

But they were essentially doing what you had proposed, they were video judging the routine for what was being challenged.

One major problem with video judging, and it happened with my team during one of our challenges: there was a glitch in the video (it was clear that there was a glitch), but it essentially erased the skill in question. We did toe touch BHS toe touch; in the video we did kind of a hummingbird-looking toe touch. If that had been judged on video alone, we would have received the incorrect score.
 

I would say the number of times the video will mess up, compared to the number of times a judge will mess up because they are judging live, would be a much more acceptable amount.
 
True, but there may need to be a live person taking notes in addition to the video judge, as a backup.

So we need:
3 live technique/execution/performance/creativity judges
1 live difficulty judge
2 video difficulty judges
1 video rules judge
1 head judge
and someone to handle challenges

What is the current typical judging set up?
 

Why not have just a second video camera as a backup?
 
During two-day events, would the teams submit the written list each day, or would they submit it once and only be able to amend it in case of injury?

I thought I would get some suggestions on ways to improve the scoring process for all star cheer.

Here are a few of mine:

1. All the scores are made public. EVERY number written down on any judge's sheet should be made available to the public. The comments are given only to the coaches of the teams.

2. Universal scoring process (with customizable variables). Having a single system would improve judging, as all training could be done on one system. My "tweak" is to allow EPs, if they choose, to add a variable "multiplier" to each category to change the weighting of the various areas of cheer. The final score would look the same, but EPs could adjust the value of each category if they chose. The important part is that the judging process would not change at all; the computer would just spit out the adjusted results.

3. Unofficial scoring is announced as you go. It increases total interest at the event. (Yes, awards is less dramatic, but OVERALL there is more drama throughout the event as you anticipate and compare scores all day long.) It also lessens the appearance of politics. Judges can still keep track of the scores they have given as a reference. IMPORTANT: Scores/placements are not final until coaches have had the chance to quickly review their scoresheets and lodge any protests. (Correct math errors, out-of-range scoring, etc.) This also has the benefit of allowing teams to compose themselves before being put under the award ceremony microscope.

NEW:

4. Coaches turn in a skills declaration before their teams compete. The judges have a written list of the skill elements in the routine to use as a reference in deciding difficulty. This would be in the order that they are performed in the routine. (Execution would still be subjective, and a major part of the final score.) Penalties would be given if athletes changed their skills to something easier. (Athlete throws a tuck instead of a double, flyer singles down instead of doubles, etc.) A judge sitting with the deduction judge would watch video to determine compliance with written skills. Coaches would have the ability to make last-minute changes in the case of injury or water-down decisions.

5. Expert panel annually ranks the difficulty of the most common skill elements. Is a 1.5-up to stretch harder than a tic-tock? Is a 1 to double harder than a 2 to whip double? Is an assisted toss stretch harder than an unassisted toss extension? There currently is NO standard by which coaches (or judges) can decide which skills will be rewarded. Coaches may be performing skills they think they are being rewarded for when the judges may not actually think they are harder. We basically need a frame of reference.
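Point 2's multiplier idea is straightforward to sketch: judges fill in the same raw category scores at every event, and each EP supplies its own weights. A minimal illustration, where the category names and weight values are invented for the example:

```python
# Sketch of the "universal sheet + EP multipliers" idea from point 2.
# Judges produce the same raw category scores everywhere; each EP can
# weight the categories differently. Categories and weights are made up.
def weighted_total(raw_scores, multipliers):
    """Combine raw category scores using an EP's chosen multipliers."""
    return sum(score * multipliers.get(cat, 1.0)
               for cat, score in raw_scores.items())

raw = {"stunts": 9.2, "tumbling": 8.8, "pyramid": 9.0, "dance": 8.5}

# An EP that wants stunt-heavy routines rewarded more than dance:
ep_weights = {"stunts": 1.2, "dance": 0.8}  # unlisted categories stay at 1.0

print(weighted_total(raw, ep_weights))
```

Because the weighting is applied after judging, the judges' training and process stay identical across events, which is the point made above: only the computed final score changes.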

Number 5 is a tricky one.
Is there a guide to declare what is worth more points numerically? Because, for instance, the 360 ball-up tick tock was never performed until Maddie Gardner did it at 2010 Worlds. I'm sure more teams will continue to surprise us with brand new tumbling passes, stunts, pyramid inversions, etc. in the years to come. So when something new is introduced, what would it count for in comparison to other less difficult, or just as difficult, skills?

Another thing: do you mean that a tick tock and a full up, for example, would have to count for the same number of points? That confuses me, because I don't feel that is the right way to judge it when neither is clearly more difficult than the other. I feel that the content of the skill shouldn't be the only factor along with execution; I think it should also be about the utilization of people and the variation of difficult skills. But then again, it wouldn't be fair if a team that did a full up and a tick tock got the same number of points as a team with 1.5-ups and a double tick tock. I think what you're saying is very reasonable and I completely agree, but I don't understand how it'd work and what factors would play a role in deciding the difficulty of different skills.
 
I'm not a fan of #4... I lived in Germany for a bit, and they had a system in place where each element was given a certain point value (more points = more difficulty). For example, in Level 6 a 421/221 would be worth 5.0, but a 211 would be worth 7.0.
You'd think it would be great, because everyone would know what they could expect for what they put into their routine. However, that was not really the case.

What you ended up with were teams that all had the most difficult elements in their routines, so that every routine ended up looking EXACTLY the same... Mount, hit lib X, dismount. Walk. Mount, hit pyramid X, dismount. Walk. Etc. Then you had teams that did not possess the technique or skill to be executing half of what they did, simply because if you didn't try that Wolf Wall or Swedish Fall, etc., you had no chance of scoring.

Point 4 reminds me of this concept: a tic-toc to X is worth Y points, but a 1.5-up is worth Z points.

Unless your idea is that this "point scale" is kept internal, for judges only, so that they have a reference point for deciding who gets scored at the upper or lower end of difficulty in, for example, elite stunts, etc.
 