Methodology of Ranx

The average ranking of an item in a ‘rankable’ list on Ranx is determined by the average score obtained by the item. The general formula for calculating an item’s average score is a mean over the scores it receives from rankers.

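One plausible reading of this formula, not necessarily Ranx’s exact definition, is the arithmetic mean of the scores submitted for the item: if each of the n rankers of the list assigns the item a numeric score s_i (for example, the position at which they place it), then

\[
\bar{S} = \frac{1}{n}\sum_{i=1}^{n} s_i
\]
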
The items of a list are ‘ranked’ in increasing order according to their average score, i.e., the item with the lowest average score is ranked #1, the item with the second-lowest average score is ranked #2, and so on. The rank derived from an item’s average score determines its position in the rearranged list and is displayed as such to the ‘List Viewers’.
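
As an illustration of this ordering step only (a sketch in Python, not Ranx’s implementation, with made-up item names and scores):

    def rank_items(average_scores):
        """Return (rank, item, average score) tuples, lowest average score first."""
        ordered = sorted(average_scores.items(), key=lambda pair: pair[1])
        return [(position + 1, item, score)
                for position, (item, score) in enumerate(ordered)]

    # The item with the lowest average score is ranked #1, and so on.
    print(rank_items({"Item A": 2.4, "Item B": 1.7, "Item C": 3.1}))
    # [(1, 'Item B', 1.7), (2, 'Item A', 2.4), (3, 'Item C', 3.1)]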

The items in a list can be altered by the ‘List Creator’ in four ways that are likely to affect the average scores of items. Items can be (i) added when creating the list, (ii) added when editing the list, (iii) removed when editing the list, and (iv) restored to the list after being removed in an earlier edit. Adding or restoring items to a published list can create a discrepancy in the influence exerted by ‘List Rankers’: those who rank the list after the alteration would have a greater impact on the overall rankings of items.

To prevent unfair representation of the ranks of added or restored items, and to prevent rank manipulation of a list by such means, a newly added item is assigned a fictitious score equal to the score of the middle (median) position in the list: all ‘non-rankers’ (i.e., those who ranked the list when the item was not yet in it) are assumed to have submitted this identical median score for the added item. The assigned score is denoted Canr and is a constant value. Using the median ensures that the newly added item is more visible to List Viewers than if it were placed at the bottom of the list.
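
As a sketch of this rule (writing the assigned score Canr as C_{anr}, and treating the exact choice of median position as an assumption): for a published list of m ranked items, the assigned score could be

\[
C_{anr} = \bar{S}_{(\lceil m/2 \rceil)} ,
\]

the average score of the item currently occupying the middle position of the list, with every non-ranker treated as if they had submitted this value for the added item.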

When determining the overall average score of a newly added item after it has been ranked by some new rankers, both the average score from the rankers (i.e., those who ranked the list after the new item was added) and the assigned score from the ‘non-rankers’ are used in a modified form of the above formula. This new average score is updated every time a new ranking of the list is submitted by new rankers.
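
One plausible form of this modified average, not necessarily Ranx’s exact formula, treats the n_{nr} non-rankers as if each had submitted C_{anr} while the n_r new rankers contribute their actual scores s_1, …, s_{n_r}:

\[
\bar{S}_{\text{new}} = \frac{\sum_{i=1}^{n_r} s_i + n_{nr}\,C_{anr}}{n_r + n_{nr}}
\]

The smoothing coefficient C introduced further below adjusts how heavily the fictitious n_{nr} C_{anr} term weighs in this blend.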

Similarly, when a previously removed item is restored to a list, the new rank or position of the item is computed using both the item’s original average score (from before its removal) and the median score assigned to the non-rankers.
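
Under the same illustrative notation, the restored item’s score could, for example, blend its pre-removal average with the fictitious median scores of those who ranked the list while the item was absent:

\[
\bar{S}_{\text{restored}} = \frac{n_o\,\bar{S}_o + n_{nr}\,C_{anr}}{n_o + n_{nr}} ,
\]

where \bar{S}_o is the item’s average score from its n_o rankers before removal and n_{nr} counts those who ranked the list during its absence.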

When prior rankers ‘rerank’ a list after the addition of an item, their status changes from ‘non-ranker’ to ‘ranker’: their assigned fictitious median ‘non-ranker’ score is subtracted and their new actual score is added when computing the new average score of the item.
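
A minimal sketch of this bookkeeping (names and structure assumed for illustration; this is not Ranx’s code, and it omits the smoothing coefficient C described below) might look like:

    class AddedItemScore:
        """Running average for an item added to an already-ranked list."""

        def __init__(self, c_anr, prior_ranker_ids):
            self.c_anr = c_anr                        # fictitious median score per non-ranker
            self.non_rankers = set(prior_ranker_ids)  # prior rankers who have not reranked yet
            self.actual_scores = {}                   # ranker id -> actual score for this item

        def submit_score(self, ranker_id, score):
            # A prior ranker who reranks stops being a 'non-ranker': their fictitious
            # median score is dropped and their actual score is recorded instead.
            self.non_rankers.discard(ranker_id)
            # Resubmitting simply overwrites the ranker's earlier score (see next paragraph).
            self.actual_scores[ranker_id] = score

        def average_score(self):
            total = sum(self.actual_scores.values()) + len(self.non_rankers) * self.c_anr
            count = len(self.actual_scores) + len(self.non_rankers)
            return total / count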

When a ranker submits a new ranking of a previously ranked list (regardless of whether the list has been edited), their previous scores are removed and the new score for each item is used in the rankings.
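
Using the hypothetical AddedItemScore sketch above, this overwrite behaviour would look like:

    item = AddedItemScore(c_anr=3.0, prior_ranker_ids={"alice", "bob"})
    item.submit_score("alice", 5.0)   # alice reranks: her fictitious 3.0 becomes an actual 5.0
    item.submit_score("alice", 2.0)   # alice ranks the list again: 5.0 is replaced by 2.0
    print(item.average_score())       # (2.0 + 3.0) / 2 = 2.5; bob is still counted at the median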

To ensure that the median score assigned to ‘non-rankers’ does not unfairly and disproportionately reduce the influence of actual rankings on the average score of an item, a smoothing coefficient, C, is used. It compares the proportion of ‘non-rankers’ with the assigned fictitious score against the proportion of rankers with actual scores.

This coefficient allows the average score of the item to transition smoothly from the assigned score to the actual average score. If the number of ‘non-rankers’ is greater than the number of new rankers of an item, the value of C is greater than zero and the assigned fictitious score influences the item’s average score. When the number of new rankers becomes equal to the number of ‘non-rankers’, the value of C becomes zero and the effect of the fictitious score assigned to ‘non-rankers’ is nullified. Once the number of new rankers of an added item exceeds the number of ‘non-rankers’, the transition is complete and the average score is thereafter computed solely from the scores of real rankers.
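
The formula for C is not given above, so the following is only a sketch of one coefficient with the described behaviour, together with the blended average it would produce:

\[
C = \max\!\left(0,\ \frac{n_{nr} - n_r}{n_{nr}}\right),
\qquad
\bar{S} = \frac{\sum_{i=1}^{n_r} s_i + C\,n_{nr}\,C_{anr}}{n_r + C\,n_{nr}} .
\]

With no new rankers (n_r = 0), C = 1 and the average equals C_{anr}; as n_r grows, the weight of the fictitious term shrinks; once n_r reaches or exceeds n_{nr}, C = 0 and the average depends only on the real rankers’ scores.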