Can Distributed Search Engines Challenge Google’s Dominance?

Who would have thought that the narrative that fueled the emergence of the internet would later birth Google, a company that has since dominated the ever-growing internet landscape?

Although this entity has been at the forefront of the global transition from an industrial society to a post-industrial economy, another paradigm shift of similar magnitude is now underway, and there is no telling how it will affect Google’s dominance.

The world is slowly gravitating towards a digital economy, and it remains to be seen whether this colossal force will retain its place at the top of the internet food chain.

Herein lies the essence of this article, as it touches on the revolutionary thinking of innovators pushing for a new internet built around decentralized technologies.

Web 3.0, as many call it, is a new frontier that looks to strip today’s powerhouses of their hard-earned positions as the custodians of information.

It is this possibility that has led some startups to push the concept of a distributed search engine, one that looks to embody the decentralized ideology behind Web 3.0.

However, seeing that many challengers have fallen over the years in their quest to dethrone Google in the search engine market, it is only fair to question the potency of this new concept and whether it stands a chance against a well-oiled, centralized, and dominant system.

What Is A Distributed Search Engine?

Proponents of distributed search engines base their case on the argument that a monopolized internet erodes net neutrality. They look at the colossal entity that is the Google search engine, with its roughly 90% market share, and see a monstrosity that has no place in the proposed next phase of the internet ecosystem.

To this group of radical minds, a centralized search engine would only undermine the freedom and censorship resistance that a decentralized web affords.

And so, we can define a distributed search engine as a system that enables a search model free of any single point of control across all of the tasks relating to crawling, indexing, querying, and data mining.

Unlike traditional search engines, this system relies on a distributed network of computers for its functionalities.
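To make the definition above concrete, here is a minimal, hypothetical sketch of the idea: each node crawls and indexes its own shard of documents, and a query fans out to every node and merges the partial results, so no single server holds the whole index. All names are illustrative, and real systems are far more involved.

```python
# Hypothetical sketch: a distributed inverted index with no central server.
# Each node indexes only its own shard; queries fan out and merge.

class IndexNode:
    def __init__(self):
        self.index = {}  # term -> set of document ids held by this node

    def crawl(self, doc_id, text):
        """Index a document locally (the crawling/indexing tasks)."""
        for term in text.lower().split():
            self.index.setdefault(term, set()).add(doc_id)

    def query(self, term):
        """Answer a query from the local shard only."""
        return self.index.get(term.lower(), set())


def distributed_search(nodes, term):
    """Fan the query out to every node and merge the partial results."""
    results = set()
    for node in nodes:
        results |= node.query(term)
    return results


nodes = [IndexNode(), IndexNode()]
nodes[0].crawl("doc1", "decentralized web search")
nodes[1].crawl("doc2", "distributed search engine")
print(sorted(distributed_search(nodes, "search")))  # ['doc1', 'doc2']
```

The key property is that `distributed_search` treats every node identically: any node can drop out or join without a coordinator, which is the single-point-of-control elimination the definition describes.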

While this definition only vaguely describes the workings of a distributed search engine, there are, nonetheless, several technical aspects of the solution that make it difficult to achieve.

Regardless of the complexities of creating a fully functional and viable distributed search engine, a few startups have continued to tinker with different models for achieving one.

One such model is found in the framework for Cyber Network, which utilizes a protocol that compiles “information onto a knowledge graph.” After compiling this information, the protocol, in conjunction with the InterPlanetary File System (IPFS), a distributed hypermedia protocol, ranks information using digital tokens and the present “parameters of the network’s load.”

Notably, all this computation is carried out by validators, which eliminates the need for a central server like the Google search engine.

Therefore, the network as a whole is not susceptible to middlemen who push content higher in the rankings for monetary reasons or censor what users see in search results.
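A loose sketch can illustrate token-weighted ranking in the spirit of the Cyber Network description above. The formula here is purely illustrative, not Cyber’s actual algorithm: each content link’s score is the tokens staked on it, scaled down as network load rises.

```python
# Hypothetical token-weighted ranking sketch (not Cyber's real algorithm).
# Each link's score = tokens staked on it, dampened by network load.

def rank_results(links, network_load):
    """links: list of (content_hash, tokens_staked) pairs.
    network_load: 0.0 (idle) .. 1.0 (saturated) -- assumed load parameter."""
    scale = 1.0 / (1.0 + network_load)  # assumed load adjustment
    scored = [(h, stake * scale) for h, stake in links]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


links = [("hash_a", 120), ("hash_b", 300), ("hash_c", 45)]
for content_hash, score in rank_results(links, network_load=0.25):
    print(content_hash, round(score, 1))
```

The point of the sketch is that ranking is a pure function of publicly verifiable inputs (stakes and load), which any validator can recompute, rather than a score held on a private central server.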

Another commendable framework for a distributed search engine is Dweb. Dweb proposes that every internet user run their own instance of the search engine protocol, and it plans to achieve this by combining already operational technologies, IPFS and IOTA, to provide a distributed database layer and a distributed network.

In essence, an author (content creator or provider) can upload content to the distributed network and permit others to access it. Because every piece of information uploaded to the Dweb network is signed, it becomes possible to mitigate spam and nefarious activities.

With this, users can directly block a known source of malicious content.
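A hypothetical sketch of the signing-and-blocking idea follows. Real networks use public-key signatures; here an HMAC with a per-author key stands in purely for illustration, and the author registry and blocklist are invented for the example.

```python
import hashlib
import hmac

# Hypothetical stand-in for public-key content signing: each author has
# a secret key, and consumers verify signatures and apply a local blocklist.

AUTHOR_KEYS = {"alice": b"alice-key", "mallory": b"mallory-key"}  # assumed registry
BLOCKED = {"mallory"}  # a user's local blocklist of known malicious sources


def sign(author, content):
    """Author-side: sign content before uploading it to the network."""
    return hmac.new(AUTHOR_KEYS[author], content.encode(), hashlib.sha256).hexdigest()


def accept(author, content, signature):
    """Consumer-side: accept content only if the signature verifies
    and the author is not blocked locally."""
    if author in BLOCKED or author not in AUTHOR_KEYS:
        return False
    expected = sign(author, content)
    return hmac.compare_digest(expected, signature)


good = sign("alice", "hello dweb")
print(accept("alice", "hello dweb", good))                  # True
print(accept("mallory", "spam", sign("mallory", "spam")))   # False (blocked)
```

Because the blocklist is evaluated locally, each user decides for themselves which sources to filter, with no central moderator involved.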

These projects are just two of a growing number of solutions looking to decentralize the way we search the web. But these platforms must find a way around the current frailties of the distributed search engine. For one, speed is Google’s ultimate weapon.

To challenge for the top spot, distributed search engines ought to log search speeds that are on par with the standards set by Google’s search infrastructure.

Likewise, these networks must work their socks off in the area of indexing. Here, it is important to design frameworks that allow distributed nodes to retrieve information relevant to search queries without being susceptible to fake or manipulated entries.

This might require a consensus mechanism, similar in spirit to the one found on the Bitcoin blockchain, which would allow validators to vote on the quality of a search result and where it should rank.
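The voting idea can be sketched minimally. Unlike Bitcoin’s proof-of-work, this is a simple majority vote over submitted results, just to illustrate consensus-based ranking; the threshold and vote format are assumptions, not any production protocol.

```python
# Hypothetical sketch: validators vote on result quality; a result is
# kept only if more than `threshold` of its votes approve, and kept
# results are ranked by approval ratio.

from collections import Counter


def consensus_rank(votes, threshold=0.5):
    """votes: list of (result_id, approve: bool) pairs from validators."""
    approvals, totals = Counter(), Counter()
    for result_id, approve in votes:
        totals[result_id] += 1
        approvals[result_id] += int(approve)
    kept = [(r, approvals[r] / totals[r]) for r in totals
            if approvals[r] / totals[r] > threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)


votes = [("doc1", True), ("doc1", True), ("doc1", False),
         ("doc2", False), ("doc2", False), ("doc2", True)]
print(consensus_rank(votes))  # doc1 kept (2/3 approve); doc2 dropped (1/3)
```

A real network would also need Sybil resistance, e.g. weighting votes by stake, so that an attacker cannot outvote honest validators simply by spinning up fake identities.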

Final Thoughts

Like the concept of a decentralized web, distributed search engines pack enough punch to disrupt a market as monopolized as the Google-dominated search engine market.

While there is little evidence, in the form of a working product, to back up this claim, the possibilities fueling this paradigm shift and the apparent room for improvement are reason enough to believe in the viability of distributed search engines.
