Monday 23 June, 2008

Business schools' annual reports

Business school professors shout themselves hoarse about the importance of standardised scores (the only score that matters in marketing, they insist, is the Net Promoter Score) and of standardised scoring methods (to compare Company A's profits with Company B's, one must first be sure that both were calculated in more or less the same way).

Very right.

But shouldn't they begin at home, by agreeing on standard methods of measuring success, and even go a step further by issuing fairly detailed guidelines on how these figures should be used? For instance: students interested in a career in marketing should look at scores X, Y and Z, whose weights are a%, b% and c%, respectively.
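(For the numerically minded, here is a minimal sketch of such a weighted composite in Python. The metric names, weights and numbers are invented placeholders, not a recommendation.)

    # A minimal sketch of the weighted composite suggested above.
    # Metrics and weights are hypothetical: a marketing-oriented
    # candidate might weight placement, brand and network differently.
    WEIGHTS = {"placement_rate": 0.5, "brand_survey": 0.3, "alumni_network": 0.2}

    def composite_score(school: dict) -> float:
        """Weighted sum of standardised (0-100) metric scores."""
        return sum(WEIGHTS[metric] * school[metric] for metric in WEIGHTS)

    school_a = {"placement_rate": 92, "brand_survey": 70, "alumni_network": 85}
    print(f"Composite: {composite_score(school_a):.1f}")  # 0.5*92 + 0.3*70 + 0.2*85 = 84.0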

Considering that most school sites look very much alike, that education agents have a self-interest in recommending one course over another (how do we know they earn only fees and not commissions?), that magazine rankings are unreliable (the No. 1 in list A doesn't even appear in list B), and that all sorts of rumours do the rounds, candidates cannot be blamed for being totally confused about where to go.

On the face of it, it is not in a school's self-interest to help, given the immense asymmetry of information. But do we have conclusive proof that this is indeed so?

It goes without saying that true excellence cannot be compared, still less quantified. However, that is as true for the marketplace as for the academic ivory tower. Why shouldn't the ivory-tower dwellers practise what they preach?

On second thoughts, two scores are readily available: first, the application fee, and second, the tuition fee. The number of publications by students and faculty may be useful too, particularly for the more academically inclined. But are these readily available and easy to collate? Can't college associations make it mandatory to publish certain data, and make the data accessible to everyone with, say, a GMAT score?

2 comments:

Anonymous said...

Leaving aside the practical in favour of The True, how would you go about scoring an MBA school anyway? Median starting salaries? Percentage of graduates who make it to CXO by the end of their careers? Scoring systems would differ depending on the objective of the prospective student - but that's OK if all the data everyone wants is available.

But what data IS that?

N&P said...

Dear Anonymous,

Err... did you read what I had written? Or did you assume stupidity?

Allow me to repeat: I start by agreeing that one set of figures cannot fulfil all needs. Nonetheless, companies have to publish a certain set of figures. Analysts and investors make what they can of these figures, picking and combining them as per their needs.

Similarly, institutions can agree on some figures.

The two that you suggest can feature. Some other figures are widely reported (average GPA, average GMAT, average experience), but not so widely understood. They also don't mean much without standard deviations or ranges.
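(A toy illustration in Python, with invented numbers: two cohorts can share the same average GMAT and yet tell very different stories.)

    # Two hypothetical cohorts with the same average GMAT but very
    # different spreads; the reported average alone hides this.
    from statistics import mean, stdev

    school_a = [700, 700, 700, 700, 700]  # tight band
    school_b = [560, 620, 700, 780, 840]  # wide band, same mean

    for name, scores in [("A", school_a), ("B", school_b)]:
        print(f"School {name}: mean={mean(scores):.0f}, "
              f"stdev={stdev(scores):.0f}, range={min(scores)}-{max(scores)}")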

So academics can put their minds together on what these figures should be, and come out with a comprehensive list. (That such a list will not be perfect can hardly be an excuse for not bringing it out. Life, usually, is far from perfect.)

Finally, I point out that application and tuition fees may be useful (if not universal) indicators, and should be collated.

'All the data everyone wants' will never be available. So what? We can start somewhere, can't we?