What’s The Least Popular?
Can Least Popular Be Good?
On my last podcast, I discussed the differences between relevancy and popularity, and how our major portals to the internet have very quickly become slaves to popularity over relevancy, to “head” content over “tail” content. Even if the tail content is more relevant, it is passed over in the search results in favor of the more popular content. This is how Google, Facebook, iTunes and YouTube, our four main “portals” to the internet, work today. It wasn’t always this way.
When the internet first started becoming a commercial venture, back in the mid-1990s, it had the huge potential to allow any one person to communicate with multitudes of other people. Then came Google and its original algorithm, which, if you think about it, was very much based on relevancy = popularity: the PageRank algorithm determined that a web page was relevant based on the sheer number of links pointing to it. It assumed that the more people who linked to a page (i.e., referred to it), the more authoritative, and therefore more relevant, that page was.
Before this, most engines used text analytics and other analytics to determine relevancy, or good old-fashioned human curation (that’s what Yahoo! did; when they first started, it was painstaking human curation). I distinctly remember meeting with Jerry Yang at their offices in Mountain View back in September of 1995, and seeing the office full of real humans reviewing webpages and determining relevancy. At the time, you couldn’t beat human relevancy, since the algorithms, other than PageRank, didn’t do so well. It was the input of the crowd that made PageRank useful: if you think about it, it was an early form of automated crowdsourcing, without actually having to go to the crowd to ask them. Of course, this was also the beginning of the end for human curation; the web was getting too big for it anyway, as it started to grow exponentially.
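To make the PageRank idea concrete, here is a minimal sketch of the underlying “links as votes” mechanism, not Google’s actual implementation. The link graph, damping factor, and iteration count are all illustrative assumptions.

```python
# A minimal power-iteration sketch of the PageRank idea: a page's score
# grows with the scores of the pages linking to it. The graph below and
# the damping factor are illustrative assumptions, not Google's real
# data or code.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                # a page splits its rank evenly among the pages it links to
                share = rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += damping * share
            else:
                # dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A toy link graph: everyone links to "popular", so it ranks highest.
graph = {
    "popular": ["blog_a"],
    "blog_a": ["popular"],
    "blog_b": ["popular"],
    "blog_c": ["popular"],
}
ranks = pagerank(graph)
```

Run it on the toy graph and the heavily linked-to page comes out on top, which is exactly the relevancy = popularity equation the post describes.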
So for a while, that worked quite well. Early Google did a very good job of delivering the most relevant content because it wasn’t all that easy to link to another website: you had to have the technical skills to write HTML, or have someone do it for you. So even then, the pool of curators was very small.
Then came blogs and bloggers, who made it much easier to create websites with links. Link-based relevancy started to be gamed, so other methods, such as textual analysis, were added to the algorithm. At the same time, Google became the leading search engine on the web and basically realized that it owned the majority of the access to the rest of the internet. Despite their “don’t be evil” motto, they started to revise the results to feature “head” content (the popular stuff) over “tail” content (which might be more relevant).
To test this theory yourself, break tradition the next time you do a Google search: page down to the 10th or 12th page of results. I’ll bet that you will see some very interesting, very relevant content down there, stuff that you would typically never see on the first page above the fold, since Google has figured out that it makes more money by featuring the head content over the tail content. They used to mix some tail content in there, but now it’s mostly all head.
The same cycle occurred with blogging, podcasting, and video blogging. When Apple launched iTunes, it was all head content from the major record labels. When podcasting was hot the first time around, Apple decided to pull in all podcast feeds and make podcasts part of its content. It too, for a brief time, featured these “tail” podcasts, but soon after dropped that in favor of the same strategy as Google’s. Rarely are good “tail” podcasts featured. The same goes for YouTube: in the past, it was much easier for “tail” videos to go viral; now the place has been taken over by “head” content.
What this is telling me is that there is a huge opportunity here to build a new kind of search engine, one that places the “tail” content back in contention, as long as it’s relevant. In fact, for a while there, I was thinking about building an anti-search engine, one that would take Google results and remove all of the popular sites, or simply display the results in reverse order. I still might do that.
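The anti-search idea above is simple enough to sketch. Here is a toy version that strips results from well-known “head” domains and then reverses what remains, so the deepest “tail” results surface first. The domain list and result URLs are made-up examples; a real version would wrap an actual search API.

```python
# Toy sketch of the "anti-search" idea: drop head-domain results,
# then reverse the remaining order. The HEAD_DOMAINS set and the
# results list are illustrative assumptions, not real data.
from urllib.parse import urlparse

HEAD_DOMAINS = {"wikipedia.org", "youtube.com", "amazon.com"}  # illustrative

def is_head(url, head_domains=HEAD_DOMAINS):
    """True if the URL's host is (or is a subdomain of) a head domain."""
    host = urlparse(url).netloc
    return any(host == d or host.endswith("." + d) for d in head_domains)

def anti_search(results, head_domains=HEAD_DOMAINS):
    """Filter out head-domain results and reverse the remaining order,
    so the least popular surviving result comes first."""
    tail = [url for url in results if not is_head(url, head_domains)]
    return list(reversed(tail))

# Hypothetical ranked results, most popular first:
results = [
    "https://en.wikipedia.org/wiki/Podcast",
    "https://www.youtube.com/watch?v=abc123",
    "https://smallblog.example/podcasting-tips",
    "https://tailsite.example/relevant-post",
]
```

Feeding the hypothetical list through `anti_search` leaves only the two small-site results, with the one buried deepest in the original ranking now on top.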
Some people think that the “tail” is irrelevant, that only the “head” matters. I disagree: user-generated content that is not necessarily super popular needs a voice too.
To test this theory in the real world, I started doing something that you might find unorthodox.
When you go to a restaurant, do you ask the server “what’s the most popular dish?” when you aren’t quite sure what you want? I wonder why we do that. Are we not individuals with our own tastes? I suppose you could assume that if you are in a specific restaurant, then that is enough self-selection: if other people like you go to this restaurant, then what they like is probably what you’ll like, right?
Try this next time: instead of asking “What’s the most popular item?” ask “What’s the least popular item?” Typically, when I do this, the servers are either flummoxed, say “Everything is good,” or, in rare instances, actually name a dish. Try the dish – I bet you’ll be pleasantly surprised.
There are tons of gold out there; we just need to dig it up.
— image: Robert N