I doubt that these studies just made up their figures, but the problem with self-funded studies is that it's so easy to skew results in more subtle ways.
But let's also give credit where credit is due -- at least most of the studies acknowledge that they were funded by the vendor. Back in 1999, when Microsoft funded the original Mindcraft study, early reports didn't acknowledge Microsoft's funding at all. Yet that study was funded by Microsoft, the Microsoft systems were specially optimized by Microsoft engineers for the test, and the tests (including those of Microsoft's competitor) were even performed at Microsoft. There was an understandable outcry!
In contrast, if you look at the current crop of studies carefully, all but one of the "independent" studies referenced by Microsoft acknowledge that Microsoft funded the study (I didn't find any such statement from Embedded Market Forecasters; perhaps it was truly independent). IDC, to its credit, places the statement "Sponsored by Microsoft Corporation" in bold letters right under the author names, so it's hard to miss, but its report isn't even in the independent list (though it's in another list that Microsoft provides). So I commend those study authors for acknowledging this potential conflict of interest. In a few places, Microsoft's "Get the Facts" page even acknowledges when a study was funded by the company, but it really should specifically identify every self-funded study (not just some of them).
There may be useful information in the self-funded studies, but I don't have any way to be confident in them. There may have been no manipulation at all, but the money flow creates a strong incentive for it, and there's no way to know otherwise. The Object Watch study does claim that there was no editorial control, and suggests that the funding wasn't total -- that's very encouraging, but it's also very hard for someone like me to verify. Most of the other studies don't even say that. The problem is that self-funded studies have a built-in conflict of interest that an independent observer can't really examine. Even indirect funding can be a problem ("give me a good report, and I'll give you some/more money later for something else").
What's really needed is more independent studies that are clearly independent, and not funded directly or indirectly by a vendor.
NewsForge: You often come across as an ardent Linux partisan. Aren't your studies suspect because of that perceived bias?
Wheeler: Actually, I'm not a Linux advocate. I'm an advocate for considering the use of open source software / free software (OSS/FS). As I clearly state in my "Why OSS/FS? Look at the Numbers!" paper, I think it's a serious problem that "many people fail to even consider OSS/FS products." In fact, my paper's goal is to "show that you should consider using OSS/FS when you're looking for software" (and many more consider OSS/FS now than when I first wrote the paper). But as I also note in the paper, "I use both proprietary and OSS/FS products myself."
I work hard to be unbiased. In particular, I wasn't paid by either side (proprietary or OSS/FS) for writing my papers contrasting them. You can (and should) "follow the money," but in my case, you'll find I have no incentive to be generous to either side.
Do I perceive some advantages for OSS/FS? Sure; there's no point in considering an option if it has no advantages. OSS/FS tends to be more flexible (since you can modify the code), and the openness of the code has fundamental advantages for security. Mature OSS/FS tends to have a lower initial purchase cost, though total cost calculations are more complicated. Most importantly, OSS/FS frees users from the control of any particular vendor; a user can later self-support or switch to a different supplier of the same software, options unavailable to users of proprietary software. I believe in the value of competition, and anything that introduces competition into a market (as OSS/FS is doing) usually has a very positive impact.
But a particular proprietary program can have key advantages over a particular OSS/FS program, and that's the sort of comparison you have to make on a case-by-case basis.
NewsForge: Who can we trust to do independent studies? Is anyone truly independent and unbiased?
Wheeler: In the end, the only way to be really sure that you have unbiased results is to do the comparison yourself -- which you have to do anyway, because some measures like total cost of ownership (TCO) and performance are incredibly sensitive to specific environments.
Before you do your own measures, you can certainly try to gain insight from other reports. I highly recommend trying to identify how a given report was funded, and giving more weight to reports that were clearly not paid for by any side.
But even potentially biased reports can give you some useful data, as long as you're careful with them. A report paid for by a vendor to review its own product will often raise issues that the vendor thinks are to its advantage -- but those issues might be very important to you, and thus worth thinking about (and examining the competitor for that attribute). Also, these vendor-sponsored papers often identify who that vendor thinks is valid competition -- so make sure you include that other vendor in your evaluation!
For example, Microsoft has information comparing OpenOffice.org to Microsoft Office (previously noted on Slashdot). So as an acquirer, that's a tip-off that if I'm thinking of buying/upgrading Microsoft Office, I'd better also consider OpenOffice.org.
NewsForge: You say, "What's really needed is more independent studies that are clearly independent, and not funded directly or indirectly by a vendor." Who will do these studies? Who will pay for them? And do you know of any already out there we should look at?
Wheeler: If that were easy to answer, there wouldn't be a need for more independent studies. But I think part of the answer lies in groups and organizations that are funded by potential customers, not vendors. Consumer Reports is a good example of this (though they don't focus on software reviews). Magazines can sometimes play this role, though magazine funding is often dominated by vendor advertising, making it difficult to stay objective. And I can imagine organizations banding together (each contributing a certain amount of money) until they can collectively fund a particular review.
Often some of the most interesting and objective studies come from people who are really interested in investigating something else, and through their investigations find interesting new information. The Fuzz studies were like that: these were academics who devised a new method for measuring reliability, and decided to use it to measure both proprietary and open source software. There was no monetary reason to report one way or another; they simply needed results to demonstrate the method. Reasoning, Inc., has used its tools to examine the source code of proprietary and open source software; its goal is to market the value of its tools and services, and it doesn't care which software is "better."
Another great source is previous customers who have already done the analysis themselves. After all, if you're looking at the alternatives, others have probably done so before you. Please, please, please -- if you've done an in-depth analysis of products on a particular subject, post it on the Web, or at least offer it for sale! You'll get free advertising for your organization, and you'll get free, useful corrections and clarifications.
As far as what studies should be looked at, my "Why OSS/FS? Look at the Numbers!" paper tries to identify any cases where I suspected a potential conflict of interest, even for the studies claiming an OSS/FS advantage.
But in the end, as I said before, the best independent study is the one you do yourself.