So, reffing is one of the most important activities that goes on in the league, and despite all the changes the league has gone through in the past six years, it's stayed more or less the same, except for how people get paid for doing it. There are some consistent problems with reffing, too: it's hard, so not many people can do it (there are never enough referees to accommodate all the battles that people want to have), and it takes a long time. I'd like to try to change that as much as possible: increase the referee:player ratio and encourage faster reffings. I'm not really sure how to go about it, though, so: discussion. There are multiple things to talk about, but each one is going to get tl;dr enough on its own, so we'll go one at a time.
We'll start with approving referees. The current system is pretty terrible: a single round of reffing can't tell you much about how a person is going to handle all the possible situations that will arise in an actual battle, and once you're approved there's no further oversight. If you mess up and get told to re-apply, all you can do is write up another reffing and wait a few more weeks, hoping that one goes better. It's obviously not a complete failure, since we *do* have referees, but it both discourages plenty of people who might have stuck around if the system were more responsive and lets through people who don't really know what they're doing.
In general, it would be nice if there were some sort of reffing test that would allow a person to show off their ability to handle reffing a variety of different situations, while getting more immediate feedback on how they were doing. There are a few ways that this has been done in the past:
1. Just allow anybody to ref battles as long as the people involved agree, and if they do a good enough job, at the end they'll get approved. This is how I became a referee.
2. Set up some kind of mentorship program, where an approved referee agrees to apprentice a prospective referee. The novice picks up a battle, and as they go through and ref it, the approved referee is there to correct any mistakes they make and give feedback on the process. At the end of the battle, the referee can recommend the novice for approval, or suggest that they try again on a different battle.
3. Applicants are required to ref an extended mock battle rather than a single mock round.
All of these options are more forgiving than the current system, in that if you mess up one round, you get told what you did wrong but still get to proceed to the next round immediately. As long as it's clear, by the end, that you've gotten the hang of things, then you're good, no matter how much you messed up in the beginning. They also give much more information about how competent a referee someone would actually be, since they have to respond to a much wider range of real battle situations and have to keep up with battlers' command schedules just as they would once they became full referees.
The problem is that these are all more labor-intensive for the approver(s) than the current system. It's a royal pain to evaluate even a single round of mock reffing; these options almost all require the evaluation of many rounds, ideally with a much faster turnaround than current ref approvals.
This is somewhat mitigated in the first case, where the reffing is evaluated only at the end of the match. However, it does mean that if someone who really has no clue what they're doing starts reffing something, the battlers are going to have to be constantly correcting them--if they even recognize that something's wrong! This places more of a burden on the trainers involved to police the reffings, and there's the potential for frustration if someone really isn't up to the task and is just kind of blundering along.
On the other hand, if an approved referee is in charge, then there's someone to step in, point out mistakes, and make corrections if it becomes necessary, so the battlers don't have to worry about it. The referee does, though, which brings us back to the issue of how much work it is to evaluate battle rounds.
The extended mock battle option is the one I'm least fond of, since it requires the approvers to not only evaluate the prospective referee's work but also come up with what they get to work on at each step of the way. It does allow for the widest range of competence testing, since you can make sure there's at least one round where there are too many conditionals, or where a pokémon is ordered to use something it doesn't know, etc., which might not happen at all in a real battle. Other than that, I don't see any particular advantages to this option.
Ultimately, I don't really know which of these, if any, would be better than the current system. Thoughts on these, or alternative suggestions? What I'm really getting at is this: how do we encourage the maximum number of people to take on the challenge of becoming referees, and see to it that as many of them as possible are successful and good at what they do?