From the moment that the Internet and various social platforms made it easy to communicate a message to a vast number of people, and to receive back from them some form of response, crowd-sourcing became an inevitable phenomenon. So much so that the term has come to be used for many different types of activities.
I use the term crowd-sourcing much as Wikipedia has defined it:
Crowdsourcing is a process that involves outsourcing tasks to a distributed group of people. This process can occur both online and offline. Crowdsourcing differs from ordinary outsourcing in that the task or problem is outsourced to an undefined public rather than to a specific body, such as the paid employees of a company.
I would only add that a “crowd” implies a large number of persons participating in the performance of the tasks.
A particular application of crowd-sourcing, one that traces its roots in the on-line world to the BBSs of several decades ago, is as a source of information and knowledge. Typical examples would be the need to find support for troubleshooting a problem, or for making a decision in an area in which one has little experience. In practice, the crowd-sourcing of knowledge and information may follow one of two modes that I will call statistical sourcing and Babylonian sourcing.
Statistical sourcing
By statistical sourcing, I refer to a technique based on the assumption that a large number of opinions drawn from self-selecting sources will, in the aggregate, provide more accurate information than the knowledge seeker is likely to determine on his or her own. The concept, albeit not under this name, was popularized in James Surowiecki’s The Wisdom of Crowds (2004).
Surowiecki illustrates the principle with the television game show Who Wants to Be a Millionaire? It has been demonstrated that the audience has a much better chance than the contestant, left to his or her own devices, of finding the right answers to the questions that perplex the uninitiated. Other examples include the ability of a group to estimate the number of beans in a jar better than most individuals can, or to estimate the current temperature, and so forth. The basic principle is that the mean value of the estimates of the group tends to be more accurate than the estimates of all but a few members of that group.
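This principle is easy to verify numerically. Below is a minimal simulation of the bean-jar experiment in Python; the true value, the size of the crowd and the spread of the individual errors are invented parameters, chosen only to illustrate the claim, not taken from Surowiecki.

```python
import random

random.seed(42)

TRUE_VALUE = 850     # hypothetical number of beans in the jar
CROWD_SIZE = 1000    # hypothetical number of guessers

# Each individual guesses with substantial, unbiased error.
estimates = [random.gauss(TRUE_VALUE, 200) for _ in range(CROWD_SIZE)]

crowd_mean = sum(estimates) / len(estimates)
crowd_error = abs(crowd_mean - TRUE_VALUE)

# Count the individuals whose guesses beat the aggregated estimate.
better = sum(1 for e in estimates if abs(e - TRUE_VALUE) < crowd_error)

print(f"Crowd mean: {crowd_mean:.1f} (error: {crowd_error:.1f})")
print(f"Individuals more accurate than the crowd: {better} of {CROWD_SIZE}")
```

Because the individual errors are unbiased, the error of the mean shrinks roughly with the square root of the size of the crowd, so only a handful of individuals typically beat the aggregate, which is precisely Surowiecki’s observation.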
Babylonian sourcing
The roots of what I call Babylonian sourcing are probably as old as human society, if not older. To emphasize its hoary antiquity, I name the practice for the description provided by Herodotos of how the Babylonians would diagnose diseases:
The following custom seems to me the wisest of their institutions next to the one lately praised. They have no physicians, but when a man is ill, they lay him in the public square, and the passers-by come up to him, and if they have ever had his disease themselves or have known any one who has suffered from it, they give him advice, recommending him to do whatever they found good in their own case, or in the case known to them; and no one is allowed to pass the sick man in silence without asking him what his ailment is.
[Persian Wars I:197]
Indeed, the very word used by Herodotos for the place where medical information is exchanged, agora, is the Greek equivalent of the Latin forum, which we have adopted today to describe our on-line exchanges.
Whatever the terms used, the principles remain unchanged. A person requiring knowledge of how to resolve a situation makes the need known in a public place and the denizens of that place communicate whatever information they might have.
Babylonians and Statistics
Both the Babylonian approach and the statistical approach require additional processing of the data and information collected before they can be used to support any decision-making. In the former case, the knowledge seeker should assess the various bits of information provided, filtering out the ones that seem inapplicable or unreliable and finally deciding which advice to follow, assuming that any of the advice is usable. Many different assessment criteria are used, such as the following (a toy sketch of how they might be combined appears after the list):
- the fit with the context
- the internal coherence of the information
- the number of times the same advice is proffered
- the reputation of the advisers
- the degree to which the advice confirms the seeker’s preconceptions
- the emotional response of the seeker to the advice
- etc.
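Such criteria resist formalization, but a toy sketch can make the filtering concrete. In the following Python fragment every field name and every weight is an invented assumption rather than a validated model; the point is only that the Babylonian seeker fuses heterogeneous, and not always rational, signals into a single judgment.

```python
# A toy sketch of Babylonian filtering. All names and weights are
# illustrative assumptions, not a real scoring model.

def score_advice(advice: dict, preconception: str) -> float:
    score = 0.0
    score += 2.0 * advice["context_fit"]         # fit with the context (0..1)
    score += 1.5 * advice["coherence"]           # internal coherence (0..1)
    score += 1.0 * advice["times_proffered"]     # frequency of the same advice
    score += 1.0 * advice["adviser_reputation"]  # reputation of the adviser (0..1)
    if advice["recommendation"] == preconception:
        score += 0.5                             # confirmation bias also counts, alas
    return score

candidates = [
    {"recommendation": "reboot the server", "context_fit": 0.9,
     "coherence": 0.8, "times_proffered": 3, "adviser_reputation": 0.4},
    {"recommendation": "reinstall the application", "context_fit": 0.5,
     "coherence": 0.9, "times_proffered": 1, "adviser_reputation": 0.9},
]

best = max(candidates, key=lambda a: score_advice(a, "reboot the server"))
print(best["recommendation"])
```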
In the latter case, the processing of the data is done in a more objective way. We are reminded of ITIL’s 7-step improvement process. Once collected, the data is filtered for obvious anomalies. It is then normalized and formatted. Finally, it is subjected to one or more algorithms to calculate some aggregated statistics, such as the mean, the distribution, the standard deviation (σ), etc.
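For numeric estimates, that pipeline is short enough to sketch in Python. The fragment below is a minimal illustration rather than a prescribed implementation; in particular, the median-based outlier rule and its factor of 5 are arbitrary assumptions.

```python
import statistics

def aggregate_estimates(raw: list[float]) -> dict:
    """Filter, then aggregate, crowd-sourced numeric estimates."""
    # 1. Filter obvious anomalies. A median-based rule resists extreme
    #    outliers better than a mean-based one; the factor of 5 is arbitrary.
    med = statistics.median(raw)
    mad = statistics.median([abs(x - med) for x in raw])
    cleaned = [x for x in raw if abs(x - med) <= 5 * mad]

    # 2. Normalize and format. Trivial for plain numbers; real data might
    #    require unit conversions or the parsing of free-text answers.

    # 3. Calculate the aggregated statistics.
    return {
        "n": len(cleaned),
        "mean": statistics.mean(cleaned),
        "median": statistics.median(cleaned),
        "sigma": statistics.stdev(cleaned),
    }

# The last estimate is an obvious anomaly and is filtered out.
print(aggregate_estimates([780.0, 810.0, 850.0, 900.0, 870.0, 5000.0]))
```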
Though I have presented these two approaches as a binary opposition, in reality most crowd-sourcing of knowledge combines aspects of both. As we have already seen, a Babylonian may be influenced by a specific statistic, the mode: the advice that is proffered the most frequently. Similarly, statistical analyses often hide behind a supposed objectivity, even when the underlying algorithm has been demonstrated to be of little, or even pernicious, value.
Experts vs. Crowds
What is the role of the expert in relation to crowd-sourced knowledge? Indeed, why do we source knowledge from crowds when experts are available? Let us examine the second question first. There are numerous reasons for having recourse to crowd-sourcing as opposed to an expert:
- Expert advice may be costly, whereas the advice from crowds is generally freely given. A corollary to this issue is the inability of many knowledge seekers to assess adequately the value of the knowledge they seek. On the other hand, some seekers harbor the prejudice that information for which they have paid has more value than free information.
- Finding and validating someone as an expert takes time, whereas crowd-sourcing often gives results very rapidly. O tempora! o mores!
- Expert advice is not always trusted. People love to debunk the experts, telling tales of how companies following expert advice managed to put themselves out of business.
- Expert advice is not always understood, whereas the advice of crowds is often written in plainer language. Some experts may indeed hide behind obfuscating language; others just like to have some fun – “neither cast ye your pearls before swine” (μηδὲ βάλητε τοὺς μαργαρίτας ὑμῶν ἔμπροσθεν τῶν χοίρων) [Matthew 7:6]. However, crowd-sourced information is often so vague or ambiguous, and its context so generally unknown, that the seeker is hardly any better off than with the mumbo-jumbo of the expert.
- Sometimes, experts can only help to avoid foolish mistakes, but cannot provide definitive answers. Ask your stock broker or banker which stocks will go up and which will go down. Advice from anonymous crowd sources often has the appearance of being definitive, of being a comforting guide to the perplexed.
- Some so-called experts seem incapable of providing any advice. All they can do is tell you that “it depends”. We are reminded of Harry Truman’s quip that he needed to find a one-handed economist. At least crowd-sourced information rarely has this failing.
- Many seekers reserve experts for serious and consequential issues and are more interested in the social exchange with the crowd than in getting the right answer. For example, they might resort to self-help books, online forums, shamans and homeopathic treatments and elixirs for daily, non-lethal health complaints, but broken bones, major infections and life-threatening diseases send them running to titled physicians.
- Some consider that crowd-sourcing is a means toward empowerment. Many are happy to be liberated from the crushing domination that Mandarins might have in certain contexts. If such people fail, at least they fail due to their own misguided assessments and not due to the bad or misapplied advice of experts.
- Some simply do not know how to find or to identify someone as an expert. They may confuse expertise with charlatanism. This issue is exacerbated by the easy access to self-styled experts throughout the world, via the same platforms that provide crowd-sourced knowledge. How is one to distinguish between a simple voice in the crowd and an expert in sheep’s clothing?
- Seekers may get more value out of the pilgrimage toward knowledge than from a pre-wrapped solution delivered by an expert. Wandering through the labyrinth of crowd-sourced information may be part of their journeys to enlightenment. Such pilgrims are more in need of a guru than an expert. Think of the role of Jonah, Eliyahu Goldratt’s alter ego, in The Goal.
What crowd-sourced knowledge is not
We have seen various reports about how seemingly intractable problems have been resolved via crowd-sourcing. The typical story tells how a team of scientists fails to resolve a problem and, in desperation, opens that problem up to the general public. Within a short amount of time, some completely unknown genius finds an elegant solution.
I do not consider this scenario to be of the same ilk as the Babylonian and statistical approaches. It is not really crowd-sourcing of knowledge at all. Rather, it is using the same media we use for crowd-sourcing to identify an expert who can provide a solution or the desired knowledge. The knowledge is validated by its intrinsic coherence, not by the social act of adopting advice from a crowd.
Does expert knowledge beat crowd-sourced knowledge?
In his examples of crowd-sourced knowledge, Surowiecki crows that only a few individuals ever beat the crowd’s estimates. But are these individuals simply the outliers to be expected in any random distribution, or are they experts? It is not within our means to repeat the experiments cited by Surowiecki. It is interesting, however, to compare his results with the results described by Douglas W. Hubbard in his book How to Measure Anything (2nd ed., 2010).
Hubbard presents data that appear to conflict with the statistical approach to crowd-sourced knowledge. He states that the ability to estimate a statistical range within which a certain value probably lies is a skill that can be developed. He refers to persons who have developed this skill as being “calibrated”. Furthermore, according to the research of Nobel prize winner Daniel Kahneman and Amos Tversky, people who have not been calibrated routinely overstate their knowledge—they are overconfident. While calculating the mean of all these overconfident estimates might compensate for their individual inaccuracies, the question remains whether calibrated estimators give more reliable answers than the crowd. Would it be so strange, then, for a very few calibrated experts to provide more useful knowledge than a crowd can provide?
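Hubbard’s contrast can also be illustrated with a small simulation. In the Python fragment below, the true value, the spread of the guessing error and the interval widths are all invented parameters; the point is that an uncalibrated estimator’s “90% confident” interval captures the true value far less than 90% of the time, while a calibrated estimator’s interval earns its label.

```python
import random

random.seed(1)
TRUE_VALUE = 850   # hypothetical quantity being estimated
TRIALS = 10_000

def hit_rate(guess_sigma: float, half_width: float) -> float:
    """Fraction of trials in which a stated interval contains the truth."""
    hits = 0
    for _ in range(TRIALS):
        guess = random.gauss(TRUE_VALUE, guess_sigma)
        if guess - half_width <= TRUE_VALUE <= guess + half_width:
            hits += 1
    return hits / TRIALS

# Overconfident estimator: the stated "90%" interval is far too narrow.
print(f"Overconfident: {hit_rate(200, 100):.0%} of intervals contain the truth")

# Calibrated estimator: +/- 1.645 sigma is a genuine 90% interval for
# Gaussian error, so the stated confidence matches reality.
print(f"Calibrated: {hit_rate(200, 1.645 * 200):.0%} of intervals contain the truth")
```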
Why is this question important for service management?
Until now, I have escaped mentioning service management at all. What is the significance of crowd-sourced knowledge as opposed to expert knowledge when it comes to managing services?
The vast majority of service management activities are forms of knowledge work. That is to say, these activities concern finding and applying useful knowledge in order to make decisions that influence the quality of a service. We might call the “traditional” approach to this knowledge work “expert-based”. In other words, an organization is composed of a series of more or less specialized experts who take responsibility, either individually or as members of teams, for making different types of decisions. For example, when an incident has caused a service to fail, the traditional approach is to identify which expert is most likely to understand what needs to be done to restore that service as rapidly as possible, and to assign to that expert the responsibility for doing so.
More recently, it has been argued that these decisions would be made more effectively by taking a more “social” approach. This approach entails opening the search for information and knowledge to a more vaguely defined public, rather than a specific, pre-defined person or team. This is precisely, then, crowd-sourcing as per the definition provided initially in this discussion. It is an example of what I have called the Babylonian approach.
Thus, rather than relying on finding the one person or team that can handle an incident, the social approach opens the issue to the broader set of stakeholders, including anyone within the service provider organization, the users of the service, in all likelihood the third-party suppliers upon whom the service depends and, if the organization can tolerate complete transparency, any interested member of the general public. Instead of pushing the responsibility onto a single team or individual, any stakeholder may pull the case into his or her queue and provide whatever information or knowledge is deemed appropriate. The knowledge seeker then processes whatever information is returned in order to resolve the case.
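The difference between the two assignment models can be sketched in a few lines of Python. The fragment below is purely illustrative; the class and method names are invented and are not drawn from any real service management tool.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """An open issue, visible to every stakeholder."""
    summary: str
    contributions: list[str] = field(default_factory=list)

class OpenBoard:
    """Pull model: instead of pushing a case to one designated expert,
    publish it and let any stakeholder pull it and contribute."""

    def __init__(self) -> None:
        self.cases: list[Case] = []

    def publish(self, summary: str) -> Case:
        case = Case(summary)
        self.cases.append(case)
        return case

    def contribute(self, case: Case, stakeholder: str, advice: str) -> None:
        case.contributions.append(f"{stakeholder}: {advice}")

board = OpenBoard()
incident = board.publish("Payment service intermittently returns errors")
board.contribute(incident, "in-house engineer", "restart the failing node")
board.contribute(incident, "supplier", "a patch exists for this defect")
board.contribute(incident, "user", "it fails only during month-end processing")

# The seeker then assesses the contributions, Babylonian-style.
print(incident.contributions)
```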
Which is better—expert knowledge or crowd-sourced knowledge?
There are as yet no data to substantiate whether the social approach yields better results than the expert approach. The argument made is not so much that crowd-sourcing will yield better results. Instead, the argument is that today’s youth are growing up preferring crowd-sourcing to experts in their daily lives. As they become the dominant force in the workplace, they will naturally bring with them the practices (and prejudices) to which they are accustomed. The old guard (such as myself) is given fair and due warning that it must prepare for the future, either participating in its making or being swept aside as irrelevant.
It may nonetheless be possible to provide a more rational basis for choosing the most useful combination of approaches. For the people working in the service provider organization will undoubtedly continue to have personal responsibilities and to maintain individual realms of expertise. To the extent that an expert exists in-house, will it not always be more efficient to assign a task requiring that expertise directly to the expert?
If our services were that simple, I believe the answer would be self-evident. Unfortunately, we have succeeded in creating systems of increasingly convoluted complexity where it is often necessary to bring together two or more different realms of expertise to resolve an issue. There are several consequences of this situation:
- It becomes increasingly difficult to staff an organization with the competencies one might wish to have.
- Even if the skills are present, it becomes increasingly difficult for other members of the staff to be aware of who knows what.
- Consequently, organizations are increasingly required to multi-source their human resources.
Since human resources do not scale well, the alternatives are either to grow an increasingly large organization, which is not feasible in most cases, given the revenues available; or to multi-source those resources, opening the work of service management beyond the confines of the individual enterprise.
One might ask why anyone, especially someone paid to work for one company, would ever contribute to resolving the issues of another company. Why would this be tolerated at all? There are two answers, neither of which gives entire satisfaction. First, the assumption is that the successful application of information and knowledge in one case will help improve management everywhere. This would be true if there were regular feedback from the seeker, explaining what advice worked and what did not, and if that feedback were publicly available. Unfortunately, this is rarely the case. The second answer evokes the altruism of the social hive. Any one knowledge worker is prepared to sacrifice his or her time and energy to an issue, on the assumption that society as a whole will thrive and that someday the favor may be returned.
Are you ready to strike the death-blow into the heart of the capitalist system?
See also: Yang, J.; Adamic, L.; Ackerman, M. (2008). “Crowdsourcing and Knowledge Sharing: Strategic User Behavior on Taskcn”. Proceedings of the 9th ACM Conference on Electronic Commerce.

