[pullquote]Credit scoring, for some unknown reason, had never been applied to charity evaluation [/pullquote]
Which are the best run charities and NGOs in America?
How can we evaluate charities from a scientific rather than a subjective point of view?
Which are the best run NGOs in the world?
Which are the charitable organizations that really deserve to receive our donations?
Which are the NGOs that will make the best use of our money?
These are the most pressing questions in philanthropy, and the first ones any potential philanthropist asks himself.
Unfortunately, there is hardly an objective, technical and well-supported answer.
The benchmarks generally used to evaluate for-profit organizations are not suitable for NGOs.
Many surrogate measures for profit and efficiency have been devised, such as “zero-based budgeting”, “budget control”, “benchmarking” and “project evaluation”, but none is truly satisfactory, and they help execution, not donor pre-evaluation.
None is as effective as the measures used for companies, and this fact alone drives many decision behaviors that are dysfunctional in philanthropy, such as the fixation on projects that are “replicable” or “scalable”.
No one measures “good management”; in fact, the most commonly used criterion is “low administrative costs”, which may select just the opposite: badly run charities.
This is counterproductive, and it leads philanthropists to create their own foundations and “pet” projects rather than invest in well-managed charities that already exist.
Contrast this with the way foundations invest their funds: in well-managed “blue chip” charities, rarely in “venture capital” or “pet start-up” projects.
It may not be a coincidence that Warren Buffett chose not to create a foundation of his own, but to give his fortune to a successful foundation already in existence.
Donating to well run charities allows for a much better use of our limited funds, so why do so few philanthropists follow this simple and proven road?
Why are so many philanthropists sidetracked into discussing “replicating” initiatives, “viral” charities, “mezzanine financing”, “take-off theories” and ways to make small charities grow, when the best run charities are already there?
The answer is simple.
No one knew how to identify the best run charities, nor how to evaluate charities at all.
In 1995, we devised a method for identifying the best run charities in Brazil, and created a National Award for the Best Run Charities of the Year, the “Premio Bem Eficiente”.
By March 2006, the “Premio Bem Eficiente” had 15,000 references on Google, against 350 for the Peter Drucker Prize, which selects the most innovative charity in America.
The “Premio Bem Eficiente” created a revolution in Brazil.
Making public the best run charities in Brazil brought the predicted consequences into effect:
- The 50 best run charities of the year doubled their donation income in the following 3 years.
- Most of the additional money came from people who had never donated before.
- Charities that did not receive the award but submitted data saw a 30% increase in donations in the following years, so there is a spillover effect.
- Charities as a whole increased revenues by 5%.
- The award brought home the message that charities had to strive to become more efficient.
The message was that charities would also have to deliver the cost savings and efficiency improvements that globalization has demanded of every company in the world.
Doing more charity with the funding we already have proved a tough paradigm to break, for a sector accustomed to doing more only as long as more funding was available.
The “Premio Bem Eficiente” was 20 years in the making, and started at Harvard Business School, where I learned the rudiments of credit scoring with Prof. Charlie Williams.
Credit scoring is a statistical method using discriminant analysis that allows bankers and lenders to discriminate between good debtors and bad debtors.
In a bank, one analyzes the balance sheets and any other available information of companies known to have gone bankrupt or insolvent in the following years, and compares them with the same set of data from good companies.
This inevitably gives rise to two clusters of data, the good and the bad.
Then we look at a prospective borrower, analyze its set of data, and determine whether it belongs to the favourable group or the unfavourable group.
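To make the mechanics concrete, here is a minimal sketch of that two-cluster idea using linear discriminant analysis in Python. The three financial ratios and the tiny dataset are invented purely for illustration; they are not the variables any particular bank uses.

```python
# Minimal sketch of credit scoring with discriminant analysis.
# The features and figures are hypothetical, chosen only to show the mechanics.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Historical data: each row is [debt/equity, interest coverage, profit margin]
# for companies whose fate we already know.
good = np.array([[0.8, 5.2, 0.12], [1.1, 4.0, 0.09], [0.6, 6.5, 0.15]])   # stayed solvent
bad = np.array([[3.5, 0.9, -0.02], [4.2, 1.1, 0.01], [2.9, 0.7, -0.05]])  # became insolvent

X = np.vstack([good, bad])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = good debtor, 0 = bad debtor

model = LinearDiscriminantAnalysis().fit(X, y)

# A prospective borrower: does its data fall in the favourable or the unfavourable cluster?
prospect = np.array([[1.4, 3.2, 0.07]])
print(model.predict(prospect))        # predicted cluster (1 = good, 0 = bad)
print(model.predict_proba(prospect))  # how confidently it sits in each cluster
```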
I was one of the pioneers in introducing credit scoring in Brazil, in 1972, and in 1974 I used the same technique to create the Best Run Company Award as the editor of the Brazilian equivalent of the Fortune 500.
In this case, rather than looking for the “bad” cluster, we focused on the “good” cluster, and identified the best companies in the country.
The fact that the best companies were chosen by a set of objective criteria, measured correctly by a rigorous statistical method, gave enormous credibility to the selection process, and the publication Melhores e Maiores became a success.
I had been the editor of that publication for 25 years when I decided to use my expertise to create a similar award for the best run charities.
Credit scoring, for some unknown reason, had never been applied to charity evaluation, and one can try to guess why.
The immediate mental barrier is to discard the idea because we do not “lend” to charities: we do not expect the money to be repaid, as banks normally do; the money is simply donated.
But that is not really an accurate description of banking.
Bankers do not particularly want to see their loans repaid either; lending is how they make money, it is their business.
What banks really worry about is whether their borrowers are making good enough use of the loans they take to pay the interest and, if need be, repay the debt.
Once one understands the true spirit of lending, one realizes why credit scoring can be used to score charities with the same positive results.
Philanthropists worry about the good use of their money, even if they don’t expect to earn interest or see their donations paid back.
From this perspective, we set up a comprehensive research project comparing data from failed charities with data from extremely successful ones, to determine which variables, performance measurements and data actually distinguish good charities from bad ones.
More about that, in the next chapter.
I have been addressing the question so far as an “either-or” situation, but discriminant analysis and credit scoring actually give philanthropists a continuous score, which allows us to distinguish the very best charities, the well run, the moderately managed, the under-par, and the hopeless, those that philanthropists should avoid at all costs.
We use 42 different measurements to define our charity scoring equation, more data points than are normally used for for-profit companies, because charities are much more difficult to evaluate, a fact already known to most philanthropists.
One of the reasons, as I have already mentioned, is the lack of effective metrics such as profits or market share when it comes to charities.
However that should not deter us; it only makes the analysis tougher, calling for identification of surrogate proxies for variables such as efficiency and profitability.
From a statistical point of view, this only means that the variance of our classification of charities is somewhat broader than that of companies.
The probability of misclassifying a charity as an AAA when in fact it is only an AA is higher than when we are classifying senior debt of listed companies.
We had to use more indirect measures, such as frequency of board meetings.
We know for a fact that boards that meet every month are more effective than boards that meet every six months; meeting frequency is one of the criteria that discriminate between well and badly managed charities.
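To give a feel for how a continuous score and an indirect proxy such as board-meeting frequency can be made fully quantitative, here is a small sketch. The tier labels, cut-off values and the 0-to-1 scaling are my own assumptions for illustration, not the actual thresholds of the Premio Bem Eficiente.

```python
# Sketch: an indirect proxy turned into a number, and a continuum score turned into tiers.
# All thresholds and labels are hypothetical.

def board_meeting_feature(meetings_per_year: int) -> float:
    """Scale board-meeting frequency to a 0-1 proxy for management quality:
    monthly boards score 1.0, semi-annual boards score far lower."""
    return min(meetings_per_year, 12) / 12.0

def tier(score: float) -> str:
    """Map a 0-100 continuum score to a rating tier."""
    if score >= 90:
        return "AAA - the very best"
    if score >= 75:
        return "AA - well run"
    if score >= 60:
        return "A - moderately managed"
    if score >= 40:
        return "B - under par"
    return "C - avoid at all costs"

print(board_meeting_feature(12))  # 1.0: a board that meets every month
print(board_meeting_feature(2))   # ~0.17: a board that meets every six months
print(tier(82.5))                 # "AA - well run"
```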
The 42 measurements are basically divided into 6 broad areas:
1. Administrative effectiveness
2. Growth
3. Financial stability
4. Quality of administrative controls
5. Legal compliance
6. Public recognition
Every single one of the 42 variables or performance measurements is quantifiable; there is not a single subjective measurement in the process.
This is what makes the selection scientific.
Anyone using the same criteria and the same set of data would come up with the same selection of charities.
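As a rough sketch of what such a fully quantified, reproducible pipeline looks like, the snippet below groups placeholder scores into the six areas and produces a ranking. The area weights and the two example charities are invented; the real equation uses 42 measurements, but the point is only that the same criteria applied to the same data always give the same selection.

```python
# Sketch of a deterministic scoring pipeline grouped into the six areas.
# Weights and per-area scores are placeholders for illustration.

AREAS = {
    "administrative_effectiveness": 0.25,
    "growth": 0.15,
    "financial_stability": 0.20,
    "quality_of_controls": 0.20,
    "legal_compliance": 0.10,
    "public_recognition": 0.10,
}

def score(charity: dict) -> float:
    """Weighted sum of per-area scores (each already scaled 0-100).
    Every input is a number and every step is deterministic, so anyone
    rerunning this on the same data gets the same ranking."""
    return sum(charity[area] * weight for area, weight in AREAS.items())

charities = {
    "Charity A": {"administrative_effectiveness": 92, "growth": 70, "financial_stability": 88,
                  "quality_of_controls": 90, "legal_compliance": 100, "public_recognition": 60},
    "Charity B": {"administrative_effectiveness": 55, "growth": 80, "financial_stability": 40,
                  "quality_of_controls": 50, "legal_compliance": 70, "public_recognition": 85},
}

ranking = sorted(charities, key=lambda name: score(charities[name]), reverse=True)
print(ranking)  # same data, same criteria -> same selection, every time
```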
Compare this with the usual selection process for NGO awards, in which normally 10 representatives of the charity sector meet for an afternoon cup of tea and select 10 leaders or 10 projects from a list of previously submitted entries.
Depending on the mood those 10 members wake up in, the results could be completely different.
Have the same committee members choose again six months later, after a bout of amnesia, and I will bet the results will be completely different.
That is why peer-review-based prizes never attain the same credibility as Olympic Games prizes, which are always based on measurements, such as times or distances.
Peer review prizes are invariably tarred with cronyism, favoritism and injustices that blemish the prize winner.
Credit scoring charities does away with all this subjectivity.
One can criticize the criteria used, one can question whether the number of variables should be 50 or 55, but one cannot claim that the outcome was subjective.
We revisit our criteria every year, dropping some and adding others if the statistics demand it.
The Best Run Companies and the Best Run Charities, the two selection processes I conducted for 30 years, never received an accusation of favoritism or cronyism.
One of the common criticisms was that we did not develop sector-specific criteria, for health and education for example.
This is tougher to contest but basically we were trying to classify different charities, in different fields, and that requires common denominators.
Nor are sector specific evaluation criteria easy to determine objectively.
Education may require a 30-year evaluation span before one can tell whether it was indeed effective.
The gist of our evaluation is management, transparency, information flow, and the premise that well managed charities will be carrying out what donors expect them to do, whatever the mission is or specific field the charity is in.
By the way, stock analysis follows the same basic rule. No Wall Street research house or broker test drives GM cars, or Procter and Gamble diapers, before issuing a BUY recommendation.
They basically analyze management and the financial structure of these companies.
Another issue is the refinement of existing measurements or the introduction of new, non-sector-specific ones.
Over the last 10 years we have refined the evaluation process, introducing new measurements along the way.
What we always ask is whether the new refinement or variable really adds value to the process.
More often than not, the new classification or list of charities comes out practically the same, identifying say 49 out of the original 50 charities.
When you already have 42 variables, the next one usually contributes very little, for two reasons. First, the variables overlap considerably: we could actually give the award using only 17 of them with practically the same degree of confidence as using all 42, though we keep the redundancy because we are treading on new ground (the sketch after this paragraph shows how such an agreement check works).
Secondly, well run charities usually maintain a standard of excellence in everything they do.
If we were to create a totally new evaluation criterion, chances are we would find the same degree of efficiency as in all the rest.
If rooms are clean, chances are bathrooms and kitchens also are.
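One simple way to test whether an extra variable, or a reduced set of 17, actually changes the outcome is to score the same charities under both equations and count how much the top 50 overlap. The sketch below assumes the two sets of scores are already computed; here they are simulated with placeholder numbers just to show the comparison.

```python
# Sketch: measuring how much a different set of variables changes the selection.
# scores_full would come from the 42-variable equation, scores_subset from a
# reduced one; below they are simulated placeholders.
import random

def top_n(scores: dict, n: int = 50) -> set:
    """Names of the n highest-scoring charities."""
    return set(sorted(scores, key=scores.get, reverse=True)[:n])

def agreement(scores_a: dict, scores_b: dict, n: int = 50) -> float:
    """Fraction of the top-n selection on which both scoring equations agree."""
    return len(top_n(scores_a, n) & top_n(scores_b, n)) / n

random.seed(0)
names = [f"charity_{i}" for i in range(200)]
scores_full = {name: random.random() for name in names}
# A reduced equation that tracks the full one closely (small added noise).
scores_subset = {name: s + random.gauss(0, 0.01) for name, s in scores_full.items()}

print(agreement(scores_full, scores_subset))  # close to 1: nearly the same 50 charities
```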
The best part of the award is the award ceremony itself. It is a very emotional event: devoted volunteers and managers break down in tears, and for many it is their first public recognition in years, for some the first in their lives.
For-profit companies have the annual distribution of dividends as their reward; charities have nothing.
The “Premio Bem Eficiente” is a coveted and well deserved award for those that receive it and a landmark in the history of philanthropy.
1 Comment on How To Evaluate Charities
Master Kanitz, a question came to my mind while reading this article.
Here you pointed out perfectly that some philanthropists, like Gates, decided to start their own foundations instead of investing in well-managed NGOs, and the fact that measuring and finding the best non-profits is definitely not easy is probably a reason for that. But in the case of companies’ foundations, don’t you think that their main motivation is promoting their name and creating a sustainable, socially correct image for themselves?