13th Feb, 2020 · By Softmints · 6 minutes

Suppose you are in charge of a company developing the next competitive online game. Naturally it will be on your mind to have a matchmaking system, so that players can dive right into the competition.

The question is, should you let players see their MMR?

I'm going to discuss some possibilities by drawing on ideas from a completely different discipline: the microeconomics of certification.

In particular, there's a great paper called "Coarse Grades: Informing the Public by Withholding Information" which inspired me to share and adapt its ideas for the lane-pushing space. Yes, that title seems like an oxymoron, and I do encourage opening the paper itself (it's free to download) to follow along.

The Certification Industry

There are many people, products, and institutions in the world. At some point, we need to navigate them and make informed decisions about them, but there's so much information out there, and some of it might be inaccurate.

Enter the certifier. Their job is to examine things and publish their findings for the use of parties who need reliable, standardised information.

The Fairtrade Foundation is a simple example. Their work is to certify that products meet certain production standards. Many producers will spend time and money to become certified with Fairtrade, because their customers assign value to the mark.

Another example would be an Apple Authorized Reseller, or a L'Oréal Professionnel hairdresser. In these cases, a company will certify that the people delivering their products offer quality consistent with the products themselves.

Universities and colleges are also certifiers. They offer two services: a teaching service which consumes the bulk of their time, and an accreditation service so other people (future employers) have reliable information about what was achieved. Indeed, a great deal of a university's resources will be invested in improving its credibility.

There are of course bodies which specialise in just one service. Cram schools provide only learning, while a chartered accountants' association primarily certifies based on performance in their exams.

Coarse versus Fine

During their process, the certifier usually gathers considerably more information than they eventually publish. An examiner knows you got 73%, but this is published as a 'B'. The film board may record exactly which swear words are uttered, when, and how many times, but all the public sees is the final 'Teen' rating.

'G' stamp issued by the Irish Film Classification Office
This article is okay for everyone to read!
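That collapsing step is simple to picture in code. Here's a minimal sketch, with grade boundaries invented purely for illustration:

```python
# The certifier measures a fine-grained score but publishes only the band.
# These grade boundaries are made up; real examiners set their own.

def coarsen(score: float) -> str:
    """Collapse an exact percentage into a letter grade."""
    for cutoff, grade in [(85, "A"), (70, "B"), (55, "C"), (40, "D")]:
        if score >= cutoff:
            return grade
    return "F"

print(coarsen(73))  # "B" is all the public ever sees
```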

Why does a certifier, whose service is providing accurate and useful information, not report everything they have taken the trouble to obtain? In some cases, repackaging and reducing the information actually adds cost!

One answer is that the consumers may prefer a simplified grade, though there is nothing stopping a certifier from publishing the full information and a summary.

...or is there?

The paper linked above makes the following claim:

...a certifier who wants to maximize information [to their audience] needs to consider how the grading scheme affects the willingness of senders to be certified at all. Coarsening can increase the amount of information receivers get in equilibrium by inducing more participation.

If a certifier shares absolutely everything that they discover, then candidates may become less willing to participate. Nobody wants to be exposed as "barely passable" at what they do. What they do want is to be grouped with better candidates. That's where coarse grading schemes help.

The higher the quality of the candidates you might be grouped with, the more attractive it becomes to make the effort to get certified. Particularly so for weaker candidates! And with increased participation, the audience gets better overall information* even though less information is shared about individuals.

*The paper provides a mathematical argument for this, showing that coarse grades minimise the audience's mean squared error when estimating a given candidate's quality.
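To make the starred claim concrete, here's a toy Monte Carlo sketch. The participation rates are my own illustrative assumptions (under exact disclosure, only the strongest bother to certify; under pass-fail, everyone above the threshold joins the pool), not the paper's equilibrium derivation:

```python
import random

random.seed(42)
N = 200_000
qualities = [random.random() for _ in range(N)]  # true skill, uniform on [0, 1]

def mean(xs):
    return sum(xs) / len(xs)

def mse(estimates):
    """Audience's mean squared error across the whole population."""
    return sum((e - q) ** 2 for e, q in zip(estimates, qualities)) / N

# Fine grading: exact scores published. Assume only the top decile is
# willing to be certified (nobody wants "barely passable" on record);
# everyone else is lumped together at the uncertified average.
hidden = mean([q for q in qualities if q < 0.9])
fine_est = [q if q >= 0.9 else hidden for q in qualities]

# Coarse pass-fail: a single badge at the median. Assume everyone above
# the threshold certifies, because they get pooled with the very best.
passed = mean([q for q in qualities if q >= 0.5])
failed = mean([q for q in qualities if q < 0.5])
coarse_est = [passed if q >= 0.5 else failed for q in qualities]

print(f"fine grading MSE: {mse(fine_est):.3f}")   # ≈ 0.061
print(f"pass-fail MSE:    {mse(coarse_est):.3f}") # ≈ 0.021
```

Under these assumptions the coarse scheme wins despite revealing only one bit per participant, because the exact-score scheme leaves 90% of the population in one undifferentiated blob.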

Choosing To Play Ranked

Let's bring the conversation back to online games for a bit.

Like a school that provides both learning and accreditation, online games provide both gameplay and "badges".

To what extent are game developers in the certification business? To what extent are they interested in the public, or their players, or themselves, having the most accurate information about player skill in the game?

Supposing the developer is interested in maximising accurate information to the public: is it best practice to coarsen visible player rankings to induce higher participation in ranked modes?

And for a game's community: if the above is true, does a third party providing a perfect reverse-engineering of every player's MMR result in a net loss of information?

Pass or Fail

A further claim of the paper is that among coarse grading schemes, the one that provides maximum information to an audience about candidates is maximally coarse: pass or fail!

The reason is that in a pass-fail system, a weaker candidate can share the same certificate as the very best (and all the opportunities that come with that), so it's a really good idea for weaker candidates to make the effort. Assuming the threshold to pass is set appropriately, this maximises participation and minimises mean squared error.

A downside of pass-fail is that sometimes the very best don't appreciate being grouped with weaker candidates. To counteract this, the honours grading system uses pass-fail with an additional shout-out for the top achievers, who are reported exactly. This often takes the form of public award ceremonies, such as those for the top achievers in a university class.

The paper argues that either pass-fail or honours will be as good as or better than any other coarse grading scheme. Quite the claim!
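Setting participation aside and assuming everyone certifies, the mechanical gain from honours over plain pass-fail can be computed in closed form for a toy uniform-skill model (the bands and numbers here are mine, not the paper's):

```python
# Skill uniform on [0, 1]; the audience's best estimate within a pooled band
# is the band's mean, so a band [lo, hi) contributes P(band) * Var(band),
# i.e. w * (w**2 / 12) for a uniform band of width w.

def band_mse(lo: float, hi: float) -> float:
    """MSE mass contributed by pooling everyone in [lo, hi)."""
    w = hi - lo
    return w * (w ** 2) / 12

# Pass-fail at the median: two pooled bands.
pass_fail = band_mse(0.0, 0.5) + band_mse(0.5, 1.0)

# Honours: same badge, but the top decile is reported exactly (zero error).
honours = band_mse(0.0, 0.5) + band_mse(0.5, 0.9)

print(f"pass-fail MSE: {pass_fail:.4f}")  # 0.0208
print(f"honours MSE:   {honours:.4f}")    # lower: the top band costs nothing
```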

Tell Everybody Less

If this holds true, should game developers follow suit? What might a revised system look like?

Suppose a new game comes out, and runs one of these honours grading systems. There is a single badge for players who qualify, and a leaderboard which gives the exact ranking for the top 50 or 500 players in a region. Beyond that, nothing is made public.

It seems like a hard sell. There is such a massive skill disparity between players in competitive online games that pass-fail loses an enormous amount of information. Is there any way to salvage such a system?

Two approaches come to mind. The first is that the developer might operate a pass-fail system on some metric which is not skill. Perhaps the certification is that the system trusts them not to be a leaver, flamer, or troll. Perhaps it goes further, and identifies players who "strive to win", or "are improving", or whatever is the one thing that would positively dispose you towards a team-mate who bore the mark.

Certification At Scale

A second approach is also alluded to in the paper: in the real world there are many certifiers, and candidates will generally choose the best one available at their price point. Consider this option: open the field for clans to offer certification! Membership is a classic pass-fail system.

In the olden days of DotA Allstars, being a member of Clan TDA was the gold standard. They operated in-house games, maintained an exclusive channel, and had close access to the development team. I have fond memories of my own time in Clan DCE, which was thankfully more social than competitive!

Perhaps our hypothetical game's system would look something like this:

  • The developer officially pass-fail certifies players who prove to be "desirable teammates", perhaps in some way that only the developer is well-positioned to assess.
  • Clans issue an honours certification, where up to 80 members may wear their badge, and the top 15 have their exact rank within the clan shown.
  • Clans are subject to an honours certification, where up to 80 clans per region are acknowledged as the top clans, and the top 15 have their exact rank shown.

Good? Bad? I have no idea. But if the lesson about coarse grading is to be applied, and applied at scale, maybe this is what the outcome looks like.

I want us to consider the future we're building for lane-pushing games. Are current certification systems meeting all our needs? Could entwining clans with public competitive ranking give them a new relevance that offers benefit to the game's social experience?

Closing Thoughts

Should we expose MMR? It's not clear that there's a "one size fits all" answer.

Coarser grading schemes influence participation "at the margin", where players are undecided about being certified or not. If the certification we're providing is related to player skill, then hiding MMR and using honours grading seems like it would maximise participation in ranked modes.

On the other hand, Dota 2 has a competitively-minded player base (60-70% of matches are ranked), and seems to benefit a lot from community sites making use of its API, including for the purpose of letting the community hunt down cheaters. Better enforcement against abuse might improve information to the public in a different way.

I'm left curious about what certification we should be providing. Of course there's value to players understanding each other's relative skill, but maybe we shouldn't limit ourselves to that. Modern games offer a diverse range of achievements: they don't only reward winning.

If we certify for victory, does that induce competitively minded players? If we certify for something else, say creativity or cooperation, would that be induced instead?

I bet someone out there has ideas. See you in the comments!