
Judging Criteria

Judging Process

  1. Pre-Judgement — Core jury reviews all submissions, filters rule violations, and reduces each category to at most 100 photos
  2. Jury Review — Category jury evaluates each photo and submits their personal Top 10
  3. Winner Selection — Top 5 and 1 Winner per category are determined through weighted voting
  4. Jury Validation — Category jury reviews and confirms the Top 5 and Winner selection
  5. Ultimate Picture of the Year — Selected from category winners (process being finalized)

Below is a detailed breakdown of each step in the judging process. Our methodology combines multiple evaluation techniques: absolute scoring, pairwise comparisons, and weighted voting. Together, these ensure fair, transparent results. Each step is designed to minimize individual bias while rewarding truly exceptional work.


Step 1: Pre-Judgement

Who: Core Jury members

Goal: Review all submissions (~1000 photos), filter out rule violations and irrelevant entries, and reduce each category to a maximum of 100 photos to minimize load on category jurors.

The Process

Each photo is reviewed by at least 3 core jury members. Each jury member casts one of three votes:

  • Yes — 1pt. The photo is relevant to the category and deserves to advance
  • Maybe — 0.5pt. The photo has merit but has noticeable issues or is borderline for the category
  • No — 0pt. The photo violates submission rules (e.g. contains watermarks), is irrelevant to the category, or clearly does not meet quality standards

Mean Score Calculation

For each photo, we calculate the mean score from all votes:

Mean Score = (sum of all vote values) / (number of votes)

Example: A photo with votes [Yes, Maybe, No] has a mean score of (1.0 + 0.5 + 0.0) / 3 = 0.5
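
As a minimal sketch in Python (the vote labels and helper name are ours, chosen for illustration), the calculation looks like this:

```python
# Vote values as defined above: Yes = 1.0, Maybe = 0.5, No = 0.0
VOTE_VALUES = {"yes": 1.0, "maybe": 0.5, "no": 0.0}

def mean_score(votes):
    """Average vote value across all jurors for one photo."""
    return sum(VOTE_VALUES[v] for v in votes) / len(votes)

print(mean_score(["yes", "maybe", "no"]))  # 0.5
```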

Disagreement Penalty

When jury members disagree significantly on a photo, we apply a penalty to account for uncertainty:

Disagreement Penalty = 1 - (most common vote count / total votes)
  • Penalty = 0 → Full agreement (all jurors voted the same)
  • Higher penalty → More disagreement among jurors

This ensures that photos with controversial or uncertain reception are ranked lower than photos with clear consensus.
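
A minimal sketch of the penalty calculation, using the same illustrative vote labels as above (the function name is ours):

```python
from collections import Counter

def disagreement_penalty(votes):
    """1 - (share of the most common vote); 0 means full agreement."""
    most_common_count = Counter(votes).most_common(1)[0][1]
    return 1 - most_common_count / len(votes)

print(disagreement_penalty(["yes", "yes", "yes"]))   # 0.0  (full agreement)
print(disagreement_penalty(["yes", "maybe", "no"]))  # ~0.67 (maximum disagreement for 3 votes)
```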

Pairwise Comparisons

To resolve ambiguity in the "Maybe" range, jury members also compare photos directly in pairs. For each comparison, one photo wins and one loses.

Pairwise Score = (wins) / (total comparisons)

This captures relative quality between photos, which is especially useful when absolute Yes/Maybe/No votes are noisy.

Final Composite Score

The final ranking combines all three signals:

Final Score = (0.6 × Mean Score) + (0.3 × Pairwise Score) - (0.25 × Disagreement Penalty)

Weight | Factor | Purpose
0.6 | Mean Score | Absolute quality assessment
0.3 | Pairwise Score | Relative ranking among similar photos
-0.25 | Disagreement Penalty | Reduces score for uncertain/controversial photos
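
Putting the three signals together, a minimal sketch with the weights from the table above (the example vote and comparison numbers are hypothetical):

```python
def pairwise_score(wins, total_comparisons):
    """Fraction of pairwise comparisons this photo won."""
    return wins / total_comparisons if total_comparisons else 0.0

def final_score(mean, pairwise, penalty):
    """Composite score using the weights from the table above."""
    return 0.6 * mean + 0.3 * pairwise - 0.25 * penalty

# Hypothetical photo: votes [Yes, Maybe, No] (mean 0.5, penalty 2/3),
# and 3 wins out of 4 pairwise comparisons.
print(final_score(0.5, pairwise_score(3, 4), 2 / 3))  # ≈ 0.358
```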

Selection

  1. Rank all photos by Final Score
  2. Top 100 photos per category advance to category jury
  3. Photos that rank below the top 100 in a category are deprioritized — shown under the main pool. These photos are not permanently rejected; jury members can review and recover them. This prevents overwhelming the category jury while preserving the ability to reconsider edge cases.
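
A minimal sketch of this selection step, assuming photos and their final scores are already available (the function and variable names are hypothetical):

```python
def split_pool(photos, final_scores, limit=100):
    """Rank photos by Final Score; the top `limit` advance, the rest are
    deprioritized (still visible and recoverable, not rejected)."""
    ranked = sorted(photos, key=lambda p: final_scores[p], reverse=True)
    return ranked[:limit], ranked[limit:]

# advancing, deprioritized = split_pool(category_photos, scores_by_photo)
```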

Why This Algorithm Is Fair

  • No single juror decides — Each photo receives multiple votes and is reviewed by at least 3 core jury members
  • Multiple evaluation methods — Combines absolute and relative judgments for stronger signal
  • Jury can recover pre-judged photos – Deprioritized photos are not hidden and can be reconsidered by category jury members
  • Based on established methods — Draws from peer review systems, crowdsourcing aggregation, and tournament ranking theory

Step 2: Jury Review

Who: Category Jury members assigned to each category

Goal: Evaluate up to 100 photos per category and submit a personal Top 10 ranking.

Each photo is scored on multiple dimensions. The total score ranges from -15 to +50 points.

Relevance Check

Photos that are irrelevant to the specific category will be discarded.

Scoring Framework

Creativity (0 to +10 points)

Storytelling elements. Appropriate and original interpretation and representation of the cosplay character.

Composition (0 to +10 points)

Use of compositional rules (do they make sense, do they add value to the photo), and the pose of the photographed person in terms of composition.

Lighting (0 to +10 points)

Creates the right atmosphere, is used appropriately, and is well-balanced (exposure).

Environment (0 to +10 points)

Use of colors, location choice, props used to decorate the environment and support storytelling. Use of special effects that enhance the story.

Interaction with the Model (0 to +10 points)

Pose, expressions, timing, preparation of the model (attention to detail).

Negative Impact of Photo Editing (0 to -15 points)

Deductions for poor editing, such as:

  • Poor skin retouching or unnatural-looking skin
  • Poor hair retouching or unnatural blending of hair
  • Excessive color toning that diminishes the photo's value
  • Excessive sharpening that harms image quality
  • Excessive added or existing noise that reduces the photo's quality
  • Unnaturally shaped body parts due to overuse of liquify tools
  • Excessive special effects or overlays that remove value from the photo
  • Unbalanced contrast that reduces the image's value

After evaluating all photos, each jury member submits their personal Top 10 ranking.
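
As an illustration, one way to represent a single juror's score sheet, with the ranges from the framework above (the field names are ours):

```python
from dataclasses import dataclass

@dataclass
class JuryScore:
    """One juror's score sheet for one photo (ranges from the framework above)."""
    creativity: int        # 0 to +10
    composition: int       # 0 to +10
    lighting: int          # 0 to +10
    environment: int       # 0 to +10
    interaction: int       # 0 to +10
    editing_penalty: int   # 0 to -15 (negative impact of photo editing)

    def total(self):
        # Overall score ranges from -15 to +50
        return (self.creativity + self.composition + self.lighting
                + self.environment + self.interaction + self.editing_penalty)

print(JuryScore(8, 7, 9, 6, 8, -3).total())  # 35
```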


Step 3: Category Winner Selection

Who: Aggregated from all category jury votes

Goal: Determine the Top 5 and 1 Winner per category using a mathematically fair voting system.

Weighted Borda Count

We use a Weighted Borda Count — a well-established voting method used in elections, sports rankings, and academic competitions. Our implementation uses Fibonacci-based weights to create a "steep" curve that strongly favors exceptional work over safer, less controversial picks.

Position Weights

Each position in a jury member's Top 10 receives a specific weight:

Position | Weight | Rationale
1st | 21 | Top pick receives maximum weight
2nd | 13 | Strong preference, significant gap from 1st
3rd | 8 | Clear favorite
4th | 5 | Above average
5th | 3 | Solid choice
6th | 2 | Honorable mention
7th–10th | 1 each | Minimal weight, included in top 10 but not standout

The steep drop-off (21 → 13 → 8) means a single #1 ranking carries as much weight as a #2 and a #3 from two other jurors combined (21 = 13 + 8). This prevents "safe" photos that everyone finds "pleasant" from beating truly exceptional work that some jurors ranked at the top.

Score Calculation

For each photo that appears in at least one ballot:

Total Points = sum of (weight for each position where photo appears)
Inclusion Count = number of jurors who included this photo
Peak Weight = highest single weight received
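
A minimal sketch of this aggregation, assuming each ballot is an ordered list from 1st to 10th place (function and variable names are ours):

```python
from collections import defaultdict

# Fibonacci-based position weights from the table above (positions 7-10 all weigh 1)
POSITION_WEIGHTS = {1: 21, 2: 13, 3: 8, 4: 5, 5: 3, 6: 2, 7: 1, 8: 1, 9: 1, 10: 1}

def aggregate(ballots):
    """ballots: one ordered Top 10 list per juror, 1st place first.
    Returns {photo: (total_points, inclusion_count, peak_weight)}."""
    totals, inclusions, peaks = defaultdict(int), defaultdict(int), defaultdict(int)
    for ballot in ballots:
        for position, photo in enumerate(ballot, start=1):
            weight = POSITION_WEIGHTS[position]
            totals[photo] += weight
            inclusions[photo] += 1
            peaks[photo] = max(peaks[photo], weight)
    return {photo: (totals[photo], inclusions[photo], peaks[photo]) for photo in totals}
```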

Tie-Breaking (Lexicographic Order)

When photos have equal Total Points, we resolve ties in order:

  1. Consensus (Inclusion Count) — The photo selected by more jurors wins
  2. Conviction (Peak Weight) — The photo with the highest individual ranking wins

This ensures:

  • Photos with broad support beat niche favorites (Consensus)
  • Among equally supported photos, the one with strongest advocate wins (Conviction)

Example

Photo | Juror 1 | Juror 2 | Juror 3 | Total Points | Inclusion | Peak
A | 1st (21) | 3rd (8) | not ranked | 29 | 2 | 21
B | 2nd (13) | 2nd (13) | 5th (3) | 29 | 3 | 13

Photos A and B both have 29 points. Photo B wins because it has the higher Inclusion Count (3 > 2).
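
Applying the tie-break ordering to this example, as a quick sketch (the tuples mirror the Total Points, Inclusion, and Peak columns above):

```python
# (Total Points, Inclusion Count, Peak Weight) from the example table
results = {
    "A": (29, 2, 21),
    "B": (29, 3, 13),
}

# Lexicographic ordering: Total Points, then Inclusion Count, then Peak Weight
ranking = sorted(results, key=lambda photo: results[photo], reverse=True)
print(ranking)  # ['B', 'A'] -> B wins the tie on Inclusion Count (3 > 2)
```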

Why This Algorithm Is Fair

  • Transparent weights — Every juror knows exactly how their ranking translates to points
  • Steep curve rewards excellence — Prevents mediocre consensus picks from winning
  • Multiple tie-breakers — Ensures deterministic results without arbitrary decisions
  • Grounded in voting theory — Borda Count is a widely used and well-studied voting method applied in competitions worldwide

Step 4: Jury Validation

Who: Category Jury members

Goal: Review and confirm the Top 5 and Winner selection.

The jury reviews the algorithmically determined Top 5 and Winner to ensure the results make sense. This step allows for collective discussion and final agreement before public announcement.

The jury may override the aggregation algorithm's results through discussion if they collectively agree that the algorithmic outcome doesn't reflect the best choices. The algorithm provides a strong starting point, but human judgment has the final say.


Step 5: Ultimate Picture of the Year

Who: All jury members

Goal: Select the single best photo across all categories.

Candidates are selected from the Top 5 highest-scoring photos in each category. These selected images form the final pool of candidates, from which the Ultimate Picture of the Year is chosen by collective evaluation of all jury members.

The detailed selection process for Ultimate Picture of the Year is being finalized.