Why I Love the 2013 Beach Stylers Judging System

There was a judging experiment this year at Beach Stylers, and I loved it.

The Beach Stylers judging system was a simplified version of the FPA system. Two panels of judges scored routines. The first panel handled Execution and Artistic Impression. The second panel handled Difficulty. Key changes across the board created a competition that served our sport by rewarding ambitious play.

Execution
Penalties for mistakes were reduced and collapsed into fewer deduction categories. With 0.2 as the worst penalty for any single mistake, taking risks became very attractive.

Difficulty
Difficulty was scored by phrase. Normally this doesn’t affect risk incentive because the easier, transitional phrases mute the effect of the peak moments. At Beach Stylers, only the top 10 phrases counted, creating an incentive to go bigger and bigger. Every time a team replaced a weaker move with a stronger one, their mark went up noticeably. Combined with the reduced penalties from execution deductions, the top 10 approach encouraged players to push their limits.
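
Here’s a minimal sketch of that top 10 idea in Python. The 0–10 phrase scale and averaging the kept phrases are my assumptions (the rules may sum them instead); the point is that swapping any weaker top 10 phrase for a stronger one raises the mark directly.

```python
# Sketch of the top-10 Difficulty approach (assumed: 0-10 phrase scores,
# averaged). Only the strongest phrases count, so weak transitional
# phrases no longer dilute the peak moments.
def difficulty_score(phrase_scores, top_n=10):
    top = sorted(phrase_scores, reverse=True)[:top_n]
    return sum(top) / len(top)

routine = [6.0, 7.5, 4.0, 8.0, 9.0, 5.5, 7.0, 8.5, 6.5, 7.0, 5.0, 9.5]
print(difficulty_score(routine))  # replace a weak top-10 phrase and this goes up
```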

Artistic Impression
AI was simplified but still touched enough elements to measure the performance without becoming a burden. With the added responsibility of judging Execution, it was helpful for AI judges to track fewer subcategories.

Linking Execution/Artistic Impression
In the Beach Stylers system, the AI score and the Ex score are multiplied together. This is a cool approach to reducing the skewed impact AI and Ex traditionally have on the final score while preserving the importance of Difficulty. Here’s how it works. AI/Ex can contribute a maximum of 50 points to the score. Let’s say a team maxes out in AI for 50 points (10 x 5 subcategories). But they have 3 drops. At 0.2 per drop, that’s a 0.6 deduction, resulting in an Ex score of 9.4 and an Ex multiplier of 0.94 (Ex score divided by 10). The AI/Ex score is 50 x 0.94, or 47 points.
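
A quick sketch of that calculation, assuming every drop costs the full 0.2 penalty from the Execution deductions above (the function and constant names are mine):

```python
# Sketch of the AI/Ex linkage: AI points are scaled by an Execution
# multiplier derived from per-mistake deductions. Assumed: 0.2 per drop.
AI_MAX = 50.0        # 10 points x 5 AI subcategories
DROP_PENALTY = 0.2   # worst-case deduction per mistake

def ai_ex_score(ai_points, num_drops):
    ex_score = 10.0 - num_drops * DROP_PENALTY  # 3 drops -> 9.4
    ex_multiplier = ex_score / 10.0             # -> 0.94
    return ai_points * ex_multiplier

print(round(ai_ex_score(AI_MAX, 3), 1))  # 47.0
```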

This is a reversal of the scoring dynamic from the FPA system, where AI adds points to the score and Difficulty is locked in a narrow averaged range. At Beach Stylers, Difficulty was unleashed by the top 10 approach, allowing teams to add to their score in a tangible way every time they replaced a weaker top 10 combination with a stronger one. Meanwhile, AI/Ex stayed in a solid range, generating modest distinctions between teams. Teams that sacrificed difficulty for AI were likely to be hurt more than teams that sacrificed AI for difficulty. That said, I saw a team or two lose points by not addressing AI.

The Judging Experience
I judged only AI/Ex, and it wasn’t taxing. Cooperation among judges helped to minimize Execution tracking errors. It’s possible to judge AI without taking many notes, so focusing on Execution marks while taking in the whole performance felt relatively effortless.

Let’s Do This More Often
This judging approach is a breath of fresh air. Like the turboshred approach, it incentivizes state-of-the-art freestyle play. It unleashes us. It’s an engraved invitation to step up. Turboshred carries a presumption of mistakes that the general public understands; that’s not usually the case in team play. Beach Stylers addresses this by building in enough incentive for clean, cooperative play that routines stay fun for the general public to watch. Let’s try this approach to competition more often!