Judging Criteria
Judging Process
- Pre-Judgement — Core jury reviews all submissions, filters rule violations, and reduces each category to up to 100 photos
- Jury Review — Category jury evaluates each photo and submits their personal Top 10
- Winner Selection — Top 5 and 1 Winner per category are determined through weighted voting
- Jury Validation — Category jury reviews and confirms the Top 5 and Winner selection
- Ultimate Picture of the Year — Selected from category winners (process being finalized)
Below is a detailed breakdown of each step in the judging process. Our methodology combines multiple evaluation techniques (absolute scoring, pairwise comparisons, and weighted voting) to ensure fair, transparent results. Each step is designed to minimize individual bias while rewarding truly exceptional work.
Step 1: Pre-Judgement
Who: Core Jury members
Goal: Review all submissions (~1000 photos), filter out rule violations and irrelevant entries, and reduce each category to a maximum of 100 photos to minimize load on category jurors.
The Process
Each photo is reviewed by at least 3 core jury members. Each jury member casts one of three votes:
- Yes — 1 pt. The photo is relevant to the category and deserves to advance
- Maybe — 0.5 pt. The photo has merit but shows noticeable issues or is borderline for the category
- No — 0 pt. The photo violates submission rules (e.g. contains watermarks), is irrelevant to the category, or clearly does not meet quality standards
Mean Score Calculation
For each photo, we calculate the mean score from all votes:
Mean Score = (sum of all vote values) / (number of votes)
Example: A photo with votes [Yes, Maybe, No] has a mean score of (1.0 + 0.5 + 0.0) / 3 = 0.5
Disagreement Penalty
When jury members disagree significantly on a photo, we apply a penalty to account for uncertainty:
Agreement Penalty = 1 - (most common vote count / total votes)
- Penalty = 0 → Full agreement (all jurors voted the same)
- Higher penalty → More disagreement among jurors
This ensures that photos with controversial or uncertain reception are ranked lower than photos with clear consensus.
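The two formulas above can be sketched in a few lines of Python. This is a minimal illustration of the stated definitions; the function names are ours, not part of the official rules:

```python
from collections import Counter

# Vote values as defined above: Yes = 1.0, Maybe = 0.5, No = 0.0
VOTE_VALUES = {"Yes": 1.0, "Maybe": 0.5, "No": 0.0}

def mean_score(votes):
    """Average of the numeric vote values across all jurors."""
    return sum(VOTE_VALUES[v] for v in votes) / len(votes)

def agreement_penalty(votes):
    """1 - (share of the most common vote); 0 means full agreement."""
    most_common_count = Counter(votes).most_common(1)[0][1]
    return 1 - most_common_count / len(votes)

votes = ["Yes", "Maybe", "No"]
print(mean_score(votes))         # 0.5, matching the example above
print(agreement_penalty(votes))  # ~0.667: three different votes, maximum disagreement
```

Note that with full agreement (e.g. three Yes votes) the penalty is exactly 0, as described above.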
Pairwise Comparisons
To resolve ambiguity in the "Maybe" range, jury members also compare photos directly in pairs. For each comparison, one photo wins and one loses.
Pairwise Score = (wins) / (total comparisons)
This captures relative quality between photos, which is especially useful when absolute Yes/Maybe/No votes are noisy.
Final Composite Score
The final ranking combines all three signals:
Final Score = (0.6 × Mean Score) + (0.3 × Pairwise Score) - (0.25 × Agreement Penalty)
| Weight | Factor | Purpose |
|---|---|---|
| 0.6 | Mean Score | Absolute quality assessment |
| 0.3 | Pairwise Score | Relative ranking among similar photos |
| -0.25 | Agreement Penalty | Reduces score for uncertain/controversial photos |
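Combining the three signals with the weights from the table gives, for instance (the pairwise numbers here are invented for illustration):

```python
def final_score(mean_score, pairwise_score, agreement_penalty):
    """Weighted combination from the table above; weights are fixed by the rules."""
    return 0.6 * mean_score + 0.3 * pairwise_score - 0.25 * agreement_penalty

# Photo with votes [Yes, Maybe, No]: mean 0.5, penalty 2/3,
# and, say, 3 wins out of 4 pairwise comparisons -> pairwise score 0.75
score = final_score(0.5, 3 / 4, 2 / 3)
print(round(score, 3))  # 0.358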
Selection
- Rank all photos by Final Score
- Top 100 photos per category advance to category jury
- Photos that rank below the top 100 in a category are deprioritized — shown under the main pool. These photos are not permanently rejected; jury members can review and recover them. This prevents overwhelming the category jury while preserving the ability to reconsider edge cases.
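The selection step then reduces to a sort and a cut. A hypothetical `select_advancing` helper (the real pipeline may differ) could look like:

```python
def select_advancing(photos, limit=100):
    """photos: list of (photo_id, final_score) pairs.
    Returns (advancing, deprioritized): the top `limit` photos advance;
    the rest are deprioritized but remain recoverable by the jury."""
    ranked = sorted(photos, key=lambda p: p[1], reverse=True)
    return ranked[:limit], ranked[limit:]
```

Deprioritized photos stay in the returned second list rather than being discarded, mirroring the recoverability described above.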
Why This Algorithm Is Fair
- No single juror decides — Each photo requires multiple votes and is reviewed by at least 3 jury members
- Multiple evaluation methods — Combines absolute and relative judgments for stronger signal
- Jury can recover pre-judged photos — Deprioritized photos are not hidden and can be reconsidered by category jury members
- Based on established methods — Draws from peer review systems, crowdsourcing aggregation, and tournament ranking theory
Step 2: Jury Review
Who: Category Jury members assigned to each category
Goal: Evaluate up to 100 photos per category and submit a personal Top 10 ranking.
Each photo is scored on multiple dimensions. The total score ranges from -15 to +50 points.
Relevance Check
Photos irrelevant to the specific category will be discarded.
Scoring Framework
Creativity (0 to +10 points)
Storytelling elements. Appropriate and original interpretation and representation of the cosplay character.
Composition (0 to +10 points)
Use of compositional rules (do they make sense, do they add value to the photo), and the pose of the photographed person in terms of composition.
Lighting (0 to +10 points)
Creates the right atmosphere, is used appropriately, and is well-balanced (exposure).
Environment (0 to +10 points)
Use of colors, location choice, props used to decorate the environment and support storytelling. Use of special effects that enhance the story.
Interaction with the Model (0 to +10 points)
Pose, expressions, timing, preparation of the model (attention to detail).
Negative Impact of Photo Editing (0 to -15 points)
Deductions for poor editing, such as:
- Poor skin retouching or unnatural-looking skin
- Poor hair retouching or unnatural blending of hair
- Excessive color toning that diminishes the photo's value
- Excessive sharpening that harms image quality
- Excessive added or existing noise that reduces the photo's quality
- Unnaturally shaped body parts due to overuse of liquify tools
- Excessive special effects or overlays that remove value from the photo
- Unbalanced contrast that reduces the image's value
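Putting the scoring framework together: each of the five positive dimensions contributes 0 to +10 points, and the editing deduction subtracts 0 to 15, giving the stated -15 to +50 range. A minimal sketch (the function name and argument order are ours):

```python
def jury_score(creativity, composition, lighting, environment, interaction,
               editing_penalty):
    """Per-photo total: five dimensions scored 0..10 each, minus an
    editing penalty of 0..15. Resulting range: -15 to +50."""
    for dim in (creativity, composition, lighting, environment, interaction):
        assert 0 <= dim <= 10, "each positive dimension is scored 0..10"
    assert 0 <= editing_penalty <= 15, "editing deduction is 0..15"
    return (creativity + composition + lighting + environment + interaction
            - editing_penalty)

print(jury_score(8, 7, 9, 6, 8, editing_penalty=3))  # 35
```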
After evaluating all photos, each jury member submits their personal Top 10 ranking.
Step 3: Category Winner Selection
Who: Aggregated from all category jury votes
Goal: Determine the Top 5 and 1 Winner per category using a mathematically fair voting system.
Weighted Borda Count
We use a Weighted Borda Count — a well-established voting method used in elections, sports rankings, and academic competitions. Our implementation uses Fibonacci-based weights to create a "steep" curve that strongly favors exceptional work over safe, broadly agreeable picks.
Position Weights
Each position in a jury member's Top 10 receives a specific weight:
| Position | Weight | Rationale |
|---|---|---|
| 1st | 21 | Top pick receives maximum weight |
| 2nd | 13 | Strong preference, significant gap from 1st |
| 3rd | 8 | Clear favorite |
| 4th | 5 | Above average |
| 5th | 3 | Solid choice |
| 6th | 2 | Honorable mention |
| 7th–10th | 1 each | Minimal weight, included in top 10 but not standout |
The steep drop-off (21 → 13 → 8) ensures that a single #1 ranking (21 points) carries as much weight as a #2 and a #3 from two other jurors combined (13 + 8). This prevents "safe" photos that everyone finds "pleasant" from beating truly exceptional work that some jurors ranked at the top.
Score Calculation
For each photo that appears in at least one ballot:
Total Points = sum of (weight for each position where photo appears)
Inclusion Count = number of jurors who included this photo
Peak Weight = highest single weight received
Tie-Breaking (Lexicographic Order)
When photos have equal Total Points, we resolve ties in order:
- Consensus (Inclusion Count) — The photo selected by more jurors wins
- Conviction (Peak Weight) — The photo with the highest individual ranking wins
This ensures:
- Photos with broad support beat niche favorites (Consensus)
- Among equally supported photos, the one with strongest advocate wins (Conviction)
Example
| Photo | Juror 1 | Juror 2 | Juror 3 | Total Points | Inclusion | Peak |
|---|---|---|---|---|---|---|
| A | 1st (21) | 3rd (8) | — | 29 | 2 | 21 |
| B | 2nd (13) | 2nd (13) | 5th (3) | 29 | 3 | 13 |
Photos A and B both have 29 points. Photo B wins because it has the higher Inclusion Count (3 > 2).
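The whole aggregation, including the lexicographic tie-break, can be sketched as follows. This is an illustrative implementation of the rules above, not the official code; the filler photo ids in the sample ballots are invented:

```python
# Fibonacci-based position weights for ranks 1st..10th, from the table above
WEIGHTS = [21, 13, 8, 5, 3, 2, 1, 1, 1, 1]

def rank_photos(ballots):
    """ballots: one Top 10 list per juror (photo ids, best first).
    Sorts photos by (Total Points, Inclusion Count, Peak Weight), descending."""
    totals, inclusions, peaks = {}, {}, {}
    for ballot in ballots:
        for pos, photo in enumerate(ballot):
            w = WEIGHTS[pos]
            totals[photo] = totals.get(photo, 0) + w
            inclusions[photo] = inclusions.get(photo, 0) + 1
            peaks[photo] = max(peaks.get(photo, 0), w)
    return sorted(totals,
                  key=lambda p: (totals[p], inclusions[p], peaks[p]),
                  reverse=True)

# The worked example: A = J1 1st + J2 3rd; B = J1 2nd + J2 2nd + J3 5th
ballots = [["A", "B"],                      # Juror 1
           ["X", "B", "A"],                 # Juror 2
           ["P", "Q", "R", "S", "B"]]       # Juror 3
print(rank_photos(ballots)[0])  # B (29 points, inclusion 3 beats A's 2)
```

Ballots shorter than ten entries work unchanged, since each position simply maps to its weight.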
Why This Algorithm Is Fair
- Transparent weights — Every juror knows exactly how their ranking translates to points
- Steep curve rewards excellence — Prevents mediocre consensus picks from winning
- Multiple tie-breakers — Ensures deterministic results without arbitrary decisions
- Grounded in established theory — The Borda Count is a widely studied and trusted voting method used in many competitions worldwide
Step 4: Jury Validation
Who: Category Jury members
Goal: Review and confirm the Top 5 and Winner selection.
The jury reviews the algorithmically determined Top 5 and Winner to ensure the results make sense. This step allows for collective discussion and final agreement before public announcement.
The jury may override the aggregation algorithm's results through discussion if they collectively agree that the algorithmic outcome doesn't reflect the best choices. The algorithm provides a strong starting point, but human judgment has the final say.
Step 5: Ultimate Picture of the Year
Who: All jury members
Goal: Select the single best photo across all categories.
Candidates are selected from the Top 5 highest-scoring photos in each category. These selected images form the final pool of candidates, from which the Ultimate Picture of the Year is chosen by collective evaluation of all jury members.
The detailed selection process for Ultimate Picture of the Year is being finalized.