> Who are they? I don't know all of the individual team names for ACE, but you said "not nearly as dramatic" so I'm guessing they're not a level 5 team?

The Bows & Arrows are the Special Needs team from ACE.
I have always wondered how the scoring would change if cheer comps were like dance comps, where you wouldn't know the name of the team or gym until awards. The unis would have no team or gym designation, and teams would be identified only by a number and division level.
How would this affect the scoring and the data if all teams competed in the same uniform, unnamed to the judges? No team voice-overs would be allowed. In essence, the judges would be scoring the true routine and not the team.
I completely agree with you. Now, can you find arguable examples of teams that realistically could have been in the top 10? That would help our discussion.
> The Bows & Arrows are the Special Needs team from ACE.

Gotcha! Thanks!
> Gotcha! Thanks!

American Elite Allstars and Ohio Explosion Rockets were the two special needs teams this year at Worlds.
Does anyone know the names of the two special needs teams that performed/competed at Worlds? I managed to get pics of them, but not their names. This one girl was so freakin adorable throughout the entire routine; it's almost as if she were the captain, she was so ready and/or fired up for everyone to be where they were supposed to be.
> American Elite Allstars and Ohio Explosion Rockets were the two special needs teams this year at Worlds.

Those must be the ones I got pictures of then :)
> Did anyone notice that the scores (at least in the LG Coed division) were so much higher on day 2 with what (to me) were the same routines? Cali Coed scored higher on day 2 than they did on day one, with deductions. This right here is what confuses me: how is it possible to score higher with 7.5 pts in deductions on day 2 than with 0 deductions on day one? I'm really wondering if this was something that happened in other divisions and if anyone else noticed it, or am I just overthinking it?

If you watch the scoring video, it says that day 2 scores may be higher because the judges have seen most of the routines and generally know what's coming, but they would still leave some cushion at the top of the scoring range.
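To put rough numbers on that (the raw scores below are hypothetical; I'm just illustrating the arithmetic): for a routine to win day 2 while carrying 7.5 points in deductions, the judges' raw marks on day 2 had to climb by more than 7.5 over a clean day 1.

```python
# Hypothetical numbers only -- illustrating how a routine can net a higher
# score on day 2 despite 7.5 points in deductions.
day1_raw, day1_deductions = 90.0, 0.0
day2_raw, day2_deductions = 98.0, 7.5

day1_net = day1_raw - day1_deductions   # 90.0
day2_net = day2_raw - day2_deductions   # 90.5

# The raw marks had to inflate by more than the deduction total
# (here 8.0 points) for the day-2 net to come out ahead.
print(day2_net > day1_net)  # True
```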
If you agree the data suggests "score inflation" as the day goes on in divisions with large numbers of teams, it raises the question: what about the process of judging lends itself to a higher-score bias as the day progresses? Universal score sheet or no, it appears to me that any judging is going to be directly influenced, in a comparative way, by the most recent cohort of routines a judge has seen: what they saw that day, most recently, plus any routine from earlier in the year that really stuck in their memory. I would also add the "fish story" phenomenon here: high-impact routines seen earlier, possibly at other competitions, clearly improve beyond their original execution as time goes on unless compared side by side on video.
For instance, if you had a 100-team division and the 1st team was clearly superior in every aspect of the score sheet, they would set a high standard in each category that no other team should surpass. I would suggest that will rarely, if ever, happen, due to the complex nature of the routines and the developing parity in major competition cheer divisions. You would then have to assume that all scores are relative to the highest and lowest scores a judge has seen and given that day, and that standard will shift, both up and down, as more teams are seen.
Would the natural tendency, watching routine after routine, be for your comparative "high score" to drift upwards, even across teams with essentially similar routines, just from the sheer number of routines watched and the cumulative effect of seeing routines that looked close but slightly better or worse than the judge's memory of an earlier one? Say an early routine set a very high bar compared to everything you had seen in that division that year, and you gave it a 9.5 out of 10. If a couple of later teams outperformed it, they might get 9.8s; but would a team that was worse than the 9.5 team by the same margin the better teams were better only be scored down to, say, a 9.4? Is the tendency to score "better" comparatively higher than "not as good" is scored lower? I think this is also an area where well-known teams may have a bit of an advantage over others. I would suspect that coaches well positioned in the industry may have enough influence on a judging panel and an EP to command a higher scoring bias, but that is another discussion.
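Here's a toy simulation of that drift hypothesis (everything about it is assumed: the bias model, the parameters, the noise; it's a sketch, not measured judge behavior). Every "routine" has the same true quality, but the judge's internal anchor ratchets up more when a routine beats it than it ratchets down otherwise, and later scores ride that anchor:

```python
import random

random.seed(1)

def simulate_day(num_teams: int, pull: float = 0.4) -> list[float]:
    """Score num_teams identical-quality routines in sequence.

    Assumed bias model: the judge keeps an internal anchor for 'a great
    routine'. Routines that beat the anchor drag it up strongly; weaker
    routines barely drag it down (the asymmetry described above). Each
    score then gets a boost proportional to how far the anchor has risen.
    """
    start = 7.5
    anchor = start
    scores = []
    for _ in range(num_teams):
        perceived = 7.5 + random.gauss(0, 0.3)       # same routine, noisy eyes
        score = min(10.0, perceived + pull * (anchor - start))
        scores.append(score)
        if perceived > anchor:
            anchor += 0.5 * (perceived - anchor)     # strong upward ratchet
        else:
            anchor -= 0.02 * (anchor - perceived)    # weak downward correction
    return scores

scores = simulate_day(100)
print(f"first 10 teams avg: {sum(scores[:10]) / 10:.2f}")
print(f"last 10 teams avg:  {sum(scores[-10:]) / 10:.2f}")
# With these assumed parameters, the late-day average tends to come out
# a few tenths higher even though every routine was 'the same'.
```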
If this stuff is real, I would suggest the most important fix is to limit each judge's exposure to the fewest teams possible, so the routines stay fresh in memory and the comparison is fairest, while also limiting the portion of the routine each judge is responsible for to the smallest amount. I think as long as all the staffs know the sheets, the form of the score sheet may matter less than the capability and tendencies of a judge's mind. Or you could move away from solely real-time judging and go to tape review. I personally think that would give better results.
> Did anyone notice that the scores (at least in the LG Coed division) were so much higher on day 2 with what (to me) were the same routines? Cali Coed scored higher on day 2 than they did on day one, with deductions. This right here is what confuses me: how is it possible to score higher with 7.5 pts in deductions on day 2 than with 0 deductions on day one? I'm really wondering if this was something that happened in other divisions and if anyone else noticed it, or am I just overthinking it?
Yeah, the best way to battle this is to have separate people judge difficulty and execution/performance.
Your performance/execution judges watch live. Execution can be scored on a 0-10 or 0-1 scale (like Varsity has now). Execution scores should be based on the level of perfection and would most likely look like a bell curve. Some teams could score a perfect; in fact, multiple teams could score perfect. If a mini level 1 team walks into the level 5 division and PERFECTLY executes their routine, they can get a perfect score. The judge wouldn't have to leave room for another perfect routine. A judge would be perfectly able to watch the routine, judge it off execution, and enjoy it. Scores for performance value can live in there as well. This would also FORCE people to be more creative, because the performance/execution judges are not concerned with difficulty. They want to be entertained. That is one of the largest parts of cheer I think people enjoy.
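A minimal sketch of what that absolute execution scale could look like (the error categories and deduction values are invented for illustration, not any real score sheet): the score depends only on what the team did on the mat, so every clean team can hit a 10 and the judge never has to save room.

```python
# Hypothetical absolute execution rubric: start from a perfect 10 and
# subtract for observed technique errors. Nothing here depends on what
# other teams in the division did, so multiple perfect 10s are possible.
EXECUTION_DEDUCTIONS = {
    "bobble": 0.25,
    "bent_legs": 0.1,
    "off_count": 0.2,
    "fall": 1.0,
}

def execution_score(errors):
    """errors: mapping of error type -> count observed in the routine."""
    total = sum(EXECUTION_DEDUCTIONS[kind] * n for kind, n in errors.items())
    return max(0.0, 10.0 - total)

# A flawless mini level 1 routine scores a perfect 10, same as a flawless
# level 5 routine -- difficulty is someone else's job on this model.
print(execution_score({}))                          # 10.0
print(execution_score({"bobble": 2, "fall": 1}))    # 8.5
```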
On the other half, difficulty does not and should not be judged live. Have a skills cheer nerd (someone like myself) go through the tape and quantify the routine's difficulty. This is a GREAT thing, because of all the tapes I have watched, not a single one has ever conveyed the energy a team puts out on the floor; a skills judge won't be influenced by how well something is performed, just the difficulty itself. Difficulty is important to allstar gyms because it is what allows you to charge for tumbling classes and stunt classes. This way you can also make adjustments if the difficulty score was off: a challenge system costing $200 per team would give you recourse when there are issues. You see something off, you can challenge. The execution/performance score is different, in that your goal is to make the judges feel the routine was executed and performed well. There is no challenging that, because you get one chance to 'impress' the judges. But difficulty is what it is. Judges currently hold room on the difficulty sheet for something later (which is the LEAST WORST solution until they implement what I am talking about), and in doing that they leave room on the execution side as well. Right now, if you only mildly impress the judges, they may miss that you had 7 kick kick doubles, the most in your division, and that you should have a higher difficulty score than anyone else. If someone collected all the sheets from a large division and I could see the difficulty and execution scores, I bet we would find that they are RARELY very different: you get a 15 out of 20 on difficulty and a 16 out of 20 on execution. So judges hold room on the execution side as well.
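And the difficulty half could be a literal tally off the tape, something like this (skill names and point weights are made up; a real chart would come from the division's rules). Because it's a count, it's reproducible, which is exactly what makes a $200 challenge system workable: a dispute gets settled by rewatching and recounting.

```python
# Hypothetical difficulty tally a tape-review "skills nerd" could apply.
# Skill weights are invented for illustration; a real chart would come
# from the score sheet / rules for the division.
SKILL_WEIGHTS = {
    "standing_back_tuck": 1.0,
    "running_full": 1.5,
    "kick_kick_double": 2.0,
    "rewind": 2.5,
}

def difficulty_score(skills_thrown, cap=20.0):
    """skills_thrown: mapping of skill name -> how many athletes threw it."""
    raw = sum(SKILL_WEIGHTS[s] * n for s, n in skills_thrown.items())
    return min(cap, raw)

routine = {"kick_kick_double": 7, "running_full": 3}
score = difficulty_score(routine)
print(score)  # 18.5 -- and because it's a count, a challenge can be
              # settled by rewatching the tape and recounting.
```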
I want to end on two final thoughts. First, I think judges try really hard and do the best possible job available to them in the system provided. Because the system is flawed, they take a lot of heat for it. It's a hard job and they are not set up to win. They take a lot of pride in that difficult job, so they are not likely to change to something else; they would believe it would still be equally as hard and that we are just arguing moot points. I see this ALL the time in my day job: people take pride in jobs that are hard and not effective, and almost resent someone saying it can be done better and easier. Sorta like your grandpa talking about how you have it easy because he used to walk 4 miles in the snow, uphill both ways, to school every day, and your mom drops you off. You should be walking the 4 miles because it's good for you. But in reality, grandpa, what you took pride in is dumb. You WISH your mom had a car to drop you off.
Second, I have had the case made to me that order is more important than the scores: if they get the order right, then the scores don't matter as much. I think that is a HUGE part of the problem. Look at this data. Judges should never worry about personally trying to get the placement right. A judge should be focused on their individual section, and the winner should be a product of the scores, not the scores a product of the winner.
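In code terms, the order falling out of the scores (rather than judges steering toward a winner) is just this; the team names and numbers are hypothetical:

```python
# Hypothetical panel sheets: each judge scores only their own section,
# never the "who should win" question. Placement is just the sorted sum.
panel_scores = {
    "Team A": {"difficulty": 18.5, "execution": 17.0, "performance": 8.5},
    "Team B": {"difficulty": 17.0, "execution": 18.5, "performance": 9.0},
    "Team C": {"difficulty": 19.0, "execution": 15.5, "performance": 8.0},
}

standings = sorted(panel_scores.items(),
                   key=lambda item: sum(item[1].values()),
                   reverse=True)

for place, (team, sections) in enumerate(standings, start=1):
    print(place, team, sum(sections.values()))
# The order is a byproduct of the section scores -- no judge ever had to
# decide the winner directly.
```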