Credit scoring has never been used to evaluate charities, only borrowers and companies.
So far I have been treating the question as an "either/or" situation, but discriminant analysis and credit scoring actually give philanthropists a score along a continuum, which allows us to distinguish the very best charities, the well run, the moderately managed, the under-par, and the hopeless, those that philanthropists should avoid at all costs.
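To make the idea concrete, here is a minimal sketch in Python of how a linear, discriminant-style score might be mapped onto rating bands. The measurements, weights, and cutoffs are all invented for illustration; they are not the actual charity scoring equation.

```python
# Purely illustrative: a linear score over quantified measurements,
# mapped onto rating bands. The weights and cutoffs are invented for
# this example; they are not the real charity scoring equation.

def score(measurements, weights):
    """Weighted average of measurements, each already scaled to 0-100."""
    return sum(w * m for w, m in zip(weights, measurements)) / sum(weights)

def rating(s):
    """Map the continuous score onto discrete rating bands."""
    for cutoff, band in [(90, "AAA"), (80, "AA"), (65, "A"),
                         (50, "BBB"), (0, "avoid at all costs")]:
        if s >= cutoff:
            return band

# Three hypothetical measurements with hypothetical weights.
print(rating(score([85, 70, 92], [2.0, 1.0, 1.5])))  # -> AA
```

The point of the continuum is visible here: the same equation that flags an AAA also flags the hopeless cases, with every grade in between.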
We use 42 different measurements in our charity scoring equation, far more data points than are normally used for for-profit companies, because charities are much harder to evaluate, as most philanthropists already know.
One reason, as I have already mentioned, is that charities lack effective metrics such as profit or market share.
That should not deter us, however; it only makes the analysis tougher, calling for proxy variables for qualities such as efficiency and profitability.
From a statistical point of view, this only means that the variance of our classification of charities is somewhat larger than it is for companies.
The probability of misclassifying a charity as an AAA when it is in fact only an AA is higher than when we classify the senior debt of listed companies.
We had to use more indirect measures, such as the frequency of board meetings.
We know for a fact that boards that meet every month are more effective than boards that meet every six months; meeting frequency is therefore one of the criteria that discriminate between well managed and poorly managed charities.
The 42 measurements fall into six broad areas:
1. Administrative effectiveness
2. Growth
3. Financial stability
4. Quality of administrative controls
5. Legal compliance
6. Public recognition
Every single one of the 42 variables or performance measurements is quantifiable; there is not a single subjective measurement in the process.
This is what makes the selection scientific. Anyone using the same criteria and the same set of data would come up with the same selection of charities.
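To illustrate that reproducibility, here is another hedged sketch: given fixed weights over the six areas and the same data, the ranking is fully deterministic. The area weights and the charity figures below are hypothetical, chosen only to show the mechanism.

```python
# Illustrative only: a deterministic ranking from quantified area scores.
# Area weights and charity data are hypothetical, not the actual model.

AREAS = ["administrative effectiveness", "growth", "financial stability",
         "quality of administrative controls", "legal compliance",
         "public recognition"]
WEIGHTS = [0.25, 0.15, 0.20, 0.15, 0.15, 0.10]  # hypothetical weights

def total_score(area_scores):
    """Weighted sum over the six areas; no subjective input anywhere."""
    return sum(w * s for w, s in zip(WEIGHTS, area_scores))

charities = {
    "Charity A": [80, 60, 90, 70, 100, 50],
    "Charity B": [70, 90, 60, 80, 100, 65],
}

# Same criteria + same data -> the same ranking, every single time.
ranking = sorted(charities, key=lambda c: total_score(charities[c]),
                 reverse=True)
print(ranking)  # -> ['Charity A', 'Charity B']
```

Run it twice, or hand it to a different analyst: the output cannot change, which is exactly the property a committee vote lacks.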
Compare this with the usual selection process for NGO awards, in which normally 10 representatives of the charity sector meet for an afternoon cup of tea and select 10 leaders or 10 projects from a list of previously submitted entries.
Depending on the mood those 10 members wake up in, the results could be completely different.
Have the same committee members choose again six months later, after a bout of amnesia, and I will bet the results will not be the same.
That is why peer-review prizes never attain the credibility of Olympic medals, which are always based on measurements, such as times or distances thrown.
Peer-review prizes are invariably tarred with the cronyism, favoritism, and injustices that blemish the prize winner.
Professor Kanitz,
I must say that this article is fantastic and, in my opinion, it should be open to all readers, so that more people would have the chance to learn about this important aspect of the third sector, which is so undeservedly ignored.
I feel really sorry that this project ended, and I can imagine how abandoned NGO managers, employees, and volunteers feel without a serious award for the sector. I hope someday you can restart this project and also write a book about the subject.
Best regards from a big fan!
Well, that was a nice comment. Thank you.