Risson and Moors \cite{rissonmoors2006} mention three different
approaches for indexing data in P2P overlay networks: \textit{centralized}, \textit{local} and \textit{distributed} indexing. 
Centralized indexing relies on a dedicated server that maintains a global index for all data stored in the network. Its main disadvantage is that the server constitutes a single point of failure. Moreover, such a server is relatively expensive to maintain when the alternative is to utilize the computation and storage capacity of the peers themselves. Napster \cite{napster} is an example of this approach.
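The centralized scheme can be sketched as a single global map from object keys to the peers that hold them; a lookup is then one query to the server, but losing the server loses the entire index. The following Python sketch is purely illustrative (class and method names are our own, not part of any cited system):

```python
class CentralIndexServer:
    """Hypothetical global index: every peer registers its objects here,
    so a lookup is a single query -- but the server is a single point of
    failure for the whole network."""

    def __init__(self):
        self.index = {}  # object key -> set of peer ids holding it

    def register(self, peer_id, key):
        self.index.setdefault(key, set()).add(peer_id)

    def lookup(self, key):
        return self.index.get(key, set())


server = CentralIndexServer()
server.register("peerA", "song.mp3")
server.register("peerB", "song.mp3")
holders = server.lookup("song.mp3")  # every peer holding the object
```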

With local indexing, every peer maintains an index only for the objects it has itself submitted to the network. This has two disadvantages. First, since each peer is solely responsible for its own data, that data is lost when the peer leaves the system. Second, searching local indexes requires flooding the network with queries. Flooding consumes considerable bandwidth, and because queries are typically bounded by a time-to-live, a search only reaches a limited perimeter around the requester. Consequently, rare items are unlikely to be found. Gnutella \cite{klingbergmanfredi2002} is an example of this approach.
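The limited search perimeter of flooding can be demonstrated with a small sketch: a breadth-first forwarding of the query over a neighbor graph, decremented by a time-to-live at each hop. The topology and function names below are our own illustration, not Gnutella's actual wire protocol:

```python
def flood_search(peers, start, key, ttl=2):
    """TTL-limited flooding: forward the query to neighbors, decrementing
    the TTL at each hop. Peers beyond the TTL perimeter are never asked,
    which is why rare items held by distant peers may go unfound.
    `peers` maps peer id -> (set of local keys, list of neighbor ids)."""
    hits, visited = set(), {start}
    frontier = [start]
    while frontier and ttl >= 0:
        next_frontier = []
        for p in frontier:
            local_keys, neighbors = peers[p]
            if key in local_keys:
                hits.add(p)
            for n in neighbors:
                if n not in visited:
                    visited.add(n)
                    next_frontier.append(n)
        frontier, ttl = next_frontier, ttl - 1
    return hits


# Line topology A - B - C - D: with ttl=2, a query from A reaches C but
# never D, so an item stored only on D is not found.
peers = {
    "A": (set(), ["B"]),
    "B": (set(), ["A", "C"]),
    "C": ({"rare"}, ["B", "D"]),
    "D": ({"very_rare"}, ["C"]),
}
found = flood_search(peers, "A", "rare", ttl=2)       # {"C"}
missed = flood_search(peers, "A", "very_rare", ttl=2)  # empty set
```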

Distributed indexing spreads pointers to the data over many peers, so that each peer stores references to objects held by other nodes. Although more complex, this approach has several benefits: it avoids traffic congestion at individual peers while providing more complete search results during network churn. Freenet \cite{clarkesandbergwileyhung} is an example of this approach.
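One common way to decide which peer stores the pointer for a given key is consistent hashing on an identifier ring, as used by DHT designs; the sketch below illustrates only the placement rule, with hypothetical function names and no routing or replication:

```python
import hashlib
from bisect import bisect_left


def ring_hash(s):
    # Map a string to a point on a 2**32 identifier ring.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**32


def responsible_peer(peer_ids, key):
    """DHT-style placement sketch: the pointer for `key` lives on the
    first peer clockwise from hash(key) on the ring, so a lookup can be
    routed deterministically to that peer instead of flooding everyone."""
    ring = sorted((ring_hash(p), p) for p in peer_ids)
    points = [pt for pt, _ in ring]
    i = bisect_left(points, ring_hash(key)) % len(ring)  # wrap around
    return ring[i][1]


peers = ["peer1", "peer2", "peer3"]
owner = responsible_peer(peers, "song.mp3")  # same key -> same peer
```

Because placement is a pure function of the key and the current peer set, every node can compute the same answer independently, which is what makes the index both decentralized and complete.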

