Content Sourcing and the Return of Editors

Not managing the sources of your content is like having someone you don’t know come in and redecorate your house while you’re away on vacation.

Content abundance and ubiquity long ago blurred the border between givers and takers, providers and consumers, makers and users.

The chances are excellent that any party’s collection of content is either already experiencing overload or is highly vulnerable to it. The simplicity of searching and retaining copies of content is matched only by the ease of contributing more content to the world at large.

One of the major developments we have all grown accustomed to is having content rated for quality by the parties that use it. This “validation” of value gives us confidence that we will experience the same value ourselves when we first decide to use the content.

But who is validating the validators?

A major casualty of this development has been our regular reliance on editors. How did this happen?

The current safety net is membership in groups that already screen their members. This screening is not necessarily aggressive or unwelcome. Usually the group already exists, and the need to use content within the group is what triggers content reviews for the group’s purposes.

Notably different from that is the at-large “social network”: an “open group” of content users that evolves as users make public recommendations, either through descriptions or through voting.

Social ranking can be enormously helpful, mainly because it forces into the foreground content that might not otherwise have been noticed. But this benefit has a limit. In the social arena, we usually don’t know why people voted the way they did unless we agree with them after the fact. If it turns out that we disagree, the reasons may simply remind us that we have skipped the opportunity to look at something less highly ranked but quite possibly more valuable.

To get beyond the boost that ranking gives to discovery, the evaluation needs to associate the ranking with a reason. Rather than take a ranking at face value, we should consider the source of the ranking. Knowing how and why the content was used puts the evaluation into a context that any new observer can take as a sign of relevance.

The relevance of the content usage should be the key factor in deciding how the content should be prioritized (higher or lower) for keeping and re-use.
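To make that concrete, here is a minimal sketch in Python (all names are hypothetical, not drawn from any particular system) of a rating that travels with its reason and usage context, with usage relevance rather than raw score driving the keep/re-use priority:

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    score: int   # the raw rating, e.g. 1-5
    reason: str  # why the evaluator rated it this way
    usage: str   # how the evaluator actually used the content

@dataclass
class ContentItem:
    title: str
    evaluations: list  # a list of Evaluation records

def relevance(item: ContentItem, my_usage: str) -> float:
    """Weight the ratings by whether the evaluator's usage matches ours.

    A rating counts only when the evaluator used the content the way
    we intend to use it; unexplained popularity contributes nothing.
    """
    if not item.evaluations:
        return 0.0
    matched = [e.score for e in item.evaluations if e.usage == my_usage]
    return sum(matched) / len(item.evaluations)

# A highly rated item with no evaluations relevant to our usage ranks
# below a modestly rated item whose evaluators share our context.
a = ContentItem("Popular piece", [Evaluation(5, "fun read", "entertainment")])
b = ContentItem("Niche guide", [Evaluation(4, "answered my question", "reference")])
ranked = sorted([a, b], key=lambda i: relevance(i, "reference"), reverse=True)
print([item.title for item in ranked])  # ['Niche guide', 'Popular piece']
```

The point of the sketch is the shape of the record, not the arithmetic: once a ranking carries its reason and usage, re-prioritizing against your own context becomes a mechanical step.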

That prioritization is where the decision should first have an impact: on the list of things to which we have already given the status of Reference content.

What becomes more evident over time is that new content keeps arriving, far more than we can inspect. So it becomes more and more important to attend to the far smaller number of evaluators who can explain why they vote the way they do.

A good content Source is, in effect, an evaluator who can supply the criteria used for rating the content. Their activity is sometimes like that of a broker (who advocates for the content user) and sometimes like that of an agent (who advocates for the content producer). Either way, making the criteria explicit makes the Source more credible and useful. Sources provide the first line of defense against continual further overload in your reference content collection.
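As a closing sketch of what “making the criteria explicit” might look like (again in Python, with hypothetical names), a Source could publish its rating rubric alongside each rating, so a new observer can inspect, and even re-weight, the judgment:

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str
    # The published rubric: criterion -> weight. Making this explicit is
    # what lets a reader audit the rating instead of taking it on faith.
    criteria: dict = field(default_factory=dict)

    def rate(self, scores: dict) -> float:
        """Combine per-criterion scores using the published weights."""
        total = sum(self.criteria.values())
        return sum(w * scores.get(c, 0) for c, w in self.criteria.items()) / total

editor = Source("tech-editor", {"accuracy": 0.5, "clarity": 0.3, "timeliness": 0.2})
print(editor.rate({"accuracy": 5, "clarity": 4, "timeliness": 2}))  # roughly 4.1
```

A reader who cares nothing about timeliness can zero out that weight and re-run the rating, which is exactly the kind of scrutiny an opaque vote count never permits.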