Problems on this site are rated automatically, by looking at who solves or fails to solve them. Each problem has an __Elo__ rating, and so does every user. Each attempt is evaluated as if these two rated entities were playing a match. So when a user solves a problem, the problem's rating decreases slightly and the user's rating increases slightly, with the size of each adjustment depending on the rating difference between them and on how stable each rating is considered to be.
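As a minimal sketch, this is how a standard Elo update between a user and a problem could look. The function names, K-factors, and the idea of using a smaller K for more stable ratings are illustrative assumptions, not the site's actual implementation:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(user_rating: float, problem_rating: float, solved: bool,
           k_user: float = 32.0, k_problem: float = 16.0) -> tuple[float, float]:
    """Return updated (user, problem) ratings after one attempt.

    The K-factors stand in for rating stability: a well-established
    rating would get a smaller K and therefore move less per attempt.
    (These particular values are made up for illustration.)
    """
    score = 1.0 if solved else 0.0          # user's result: win = solve
    exp = expected_score(user_rating, problem_rating)
    new_user = user_rating + k_user * (score - exp)
    # The problem's result is the mirror image of the user's.
    new_problem = problem_rating + k_problem * ((1.0 - score) - (1.0 - exp))
    return new_user, new_problem
```

With equal ratings, a solve moves the user up and the problem down by half a K-factor each; the further apart the ratings are, the smaller the adjustment for the expected outcome and the larger for an upset.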

The Elo rating is then mapped to the conventional __go rating__ nomenclature of kyu and dan ranks. The mapping is mostly a linear function, with some stretching at both the low and high ends. It is not scientific, but is the result of looking at decades of site data and trying to get a reasonable mapping plus a reasonable distribution of ratings.
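A mapping of this shape could be sketched as follows. Every breakpoint and slope here is an invented placeholder to show the structure (linear middle, stretched ends), not the site's real table:

```python
def elo_to_go_rank(elo: float) -> str:
    """Illustrative Elo -> go-rank mapping: linear middle, stretched ends.

    All breakpoints below are assumptions for demonstration only.
    """
    if elo >= 2500:                      # high end: stretched, 200 Elo per dan
        dan = min(9, 5 + int((elo - 2500) // 200))
        return f"{dan}d"
    if elo >= 2100:                      # linear: 100 Elo per dan rank
        return f"{1 + int((elo - 2100) // 100)}d"
    if elo >= 600:                       # linear: 100 Elo per kyu rank
        return f"{1 + int((2000 - elo) // 100)}k"
    # low end: stretched, 200 Elo per kyu rank
    return f"{min(30, 15 + int((600 - elo) // 200))}k"
```

The stretching at the ends means a single rank there spans more Elo points, which keeps the extreme ranks from being reached after only a few lucky or unlucky attempts.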

Traditionally in problem books, the ratings are arbitrarily assigned by the problem author. There are pros and cons to this, but we generally believe that basing the rating on data works better.