But shouldn't they begin at home, by agreeing on standard methods of measuring success, and even go a step further by issuing fairly detailed guidelines on how these figures should be used? For instance: students interested in a career in marketing should look at scores X, Y and Z, weighted at a%, b% and c% respectively.
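To make the idea concrete, here is a minimal sketch of such a weighted composite in Python. The score names (X, Y, Z) and the 50/30/20 weighting are purely illustrative placeholders, not figures from any actual guideline:

    def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
        """Combine per-metric scores into one number using percentage weights."""
        total_weight = sum(weights.values())
        if abs(total_weight - 100) > 1e-9:
            raise ValueError(f"weights must sum to 100%, got {total_weight}")
        return sum(scores[name] * weights[name] / 100 for name in weights)

    # Hypothetical weighting for a marketing-oriented candidate.
    scores = {"X": 82.0, "Y": 74.0, "Z": 90.0}
    weights = {"X": 50.0, "Y": 30.0, "Z": 20.0}
    print(composite_score(scores, weights))  # 82*0.5 + 74*0.3 + 90*0.2 = 81.2

The point is not the particular numbers but that, once the metrics and weights are published, any candidate can reduce the comparison to simple arithmetic.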
Considering that most school websites look much alike, that education agents have a vested interest in recommending one course over another (how do we know they earn only fees and not commissions?), that magazine ratings are unreliable (the No. 1 school on list A may not even appear on list B), and that all sorts of rumours make the rounds, candidates cannot be blamed for being totally confused about where to go.
On the face of it, it is not in a school's self-interest to help, given the immense information asymmetry in its favour. But do we have conclusive proof that this is indeed so?
On second thoughts, two figures are readily available: first, the application fee, and second, the tuition fee. The number of publications by students and faculty may be useful too, particularly for the more academically inclined, but is that figure as easy to find and collate? Can't college associations make it mandatory to publish certain data, and make that data accessible to everyone with, say, a GMAT score?