BLOCKCHAIN
Definition and Functionality
The InterPlanetary File System (IPFS) is a network and protocol for creating a content-addressable, peer-to-peer method of storing and sharing hypermedia in a distributed file system. IPFS was invented by Juan Benet and is now an open-source project developed with the help of a community.
What is the InterPlanetary File System?
IPFS is a distributed peer-to-peer file system that attempts to connect all computing devices to the same file system. In a way, IPFS is similar to the World Wide Web, but it could also be seen as a single BitTorrent swarm exchanging objects within one Git repository. In other words, IPFS provides a high-throughput block storage model with content-addressed hyperlinks. This forms a generalized Merkle directed acyclic graph (DAG).
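The idea of content-addressed links can be sketched in a few lines. This is a minimal illustration, not IPFS's actual block format: real IPFS uses multihash-based CIDs rather than bare SHA-256 hex digests, and the `content_id` helper below is a hypothetical stand-in.

```python
import hashlib

def content_id(data: bytes) -> str:
    """Address a block by the hash of its bytes (a simplified stand-in for a CID)."""
    return hashlib.sha256(data).hexdigest()

# Two leaf blocks, addressed purely by their content.
leaf_a = b"hello"
leaf_b = b"world"

# A parent object links to its children by their hashes, not by location.
# Because the parent's bytes embed the children's hashes, changing any child
# changes the parent's own ID -- the defining property of a Merkle DAG.
parent = f"links:{content_id(leaf_a)},{content_id(leaf_b)}".encode()
print(content_id(parent))
```

Any tampering with a leaf propagates upward: the parent's identifier no longer matches, so integrity is verifiable from the root hash alone.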
IPFS combines a distributed hash table, an incentive-driven block exchange, and a self-certifying namespace. IPFS has no single point of failure and nodes do not have to trust each other except for the nodes they are connected to.
Distributed content delivery saves bandwidth and mitigates the kind of DDoS attacks that HTTP servers have to contend with. The file system can be accessed in various ways, including via FUSE and via HTTP. A local file can be added to the IPFS file system to make it available to the world.
Files are identified by their hashes, which makes them cache-friendly. Users who view content help make it available to others on the network. IPFS also has a naming service called IPNS, a PKI-based global namespace. It is used to build trust chains, is compatible with other name systems, and can map DNS, .onion, .bit, and similar names to IPNS.
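The indirection IPNS provides can be sketched as a mutable pointer from a stable, key-derived name to the hash of whatever content is current. This is a heavily simplified model: in real IPNS the record is signed with the publisher's private key and resolved through the network, neither of which is shown here, and the dictionary-based registry is a hypothetical stand-in.

```python
import hashlib

def content_id(data: bytes) -> str:
    """Simplified content address: the hash of the bytes."""
    return hashlib.sha256(data).hexdigest()

# An IPNS-style name is derived from a public key, so it stays stable
# even as the content it points to changes. (Real IPNS records are
# signed by the matching private key; that step is omitted here.)
name = hashlib.sha256(b"publisher-public-key").hexdigest()
ipns = {}  # toy name registry

ipns[name] = content_id(b"site version 1")
ipns[name] = content_id(b"site version 2")  # republish under the same name

# Readers resolve the stable name to the latest immutable content hash.
print(ipns[name] == content_id(b"site version 2"))
```

The design choice is the same one DNS makes for the web: immutable, hash-addressed data underneath, with one small mutable layer on top for human-meaningful, stable names.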
IPFS version control systems
Another powerful feature of the Merkle DAG structure is the ability to build a distributed version control system (VCS). The most popular example is Git, which allows developers to work on projects at the same time (most visibly on hosting platforms such as GitHub). Files in a Git repository are stored and versioned with a Merkle DAG. It allows users to independently duplicate and edit multiple versions of a file, save those versions, and later merge changes back into the original file.
IPFS uses a similar model for data objects: As long as objects corresponding to the original data and all new versions are accessible, the entire file history can be retrieved. Since data blocks are stored locally on the network and can be cached indefinitely, this means that IPFS objects can be stored permanently.
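The version-retrieval idea above can be sketched with a tiny content-addressed store in which each version object links to its predecessor by hash. The `put` helper and the JSON-based object encoding are illustrative assumptions, not IPFS's actual object format.

```python
import hashlib
import json

def put(store: dict, obj: dict) -> str:
    """Store an object under the hash of its canonical bytes; return that hash."""
    data = json.dumps(obj, sort_keys=True).encode()
    cid = hashlib.sha256(data).hexdigest()
    store[cid] = obj
    return cid

store = {}
v1 = put(store, {"content": "draft",  "prev": None})
v2 = put(store, {"content": "edited", "prev": v1})
v3 = put(store, {"content": "final",  "prev": v2})

# As long as every version object remains retrievable, walking the
# chain of "prev" links from the newest hash recovers the full history.
history, cid = [], v3
while cid is not None:
    obj = store[cid]
    history.append(obj["content"])
    cid = obj["prev"]
print(history)  # ['final', 'edited', 'draft']
```

Because each version's identifier covers its predecessor's hash, history cannot be silently rewritten: altering an old version changes every hash downstream of it.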
In addition, IPFS does not depend on the standard Internet protocols alone: data can be distributed over overlay networks built on top of any other network. These properties are remarkable because they are core elements of a censorship-resistant network. IPFS could therefore be a useful tool for promoting freedom of expression and countering the spread of Internet censorship around the world.
Self-certifying file system
The last essential component of IPFS is the Self-certifying File System (SFS). It is a distributed file system that does not require any special authorizations for data exchange. It is “self-certifying” because the data provided to a client is authenticated by the file name (signed by the server). The result? Users can securely access remote content with the transparency of local storage.
IPFS builds on this concept to create the InterPlanetary Name Space (IPNS). It is an SFS that uses public-key cryptography to self-certify objects published by network users. All objects on IPFS can be uniquely identified, but so can nodes. Each node on the network has a key pair (a public and a private key) and a node ID, which is the hash of its public key. Nodes can, therefore, use their private keys to sign data objects they publish, and the authenticity of that data can be verified against the sender’s public key.
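The self-certifying step can be sketched as follows. Only the name-to-key check is shown: signing published objects would use a real asymmetric scheme (such as Ed25519), which is omitted here, and the placeholder key bytes are purely illustrative.

```python
import hashlib

def node_id(public_key: bytes) -> str:
    """A node's ID is the hash of its public key."""
    return hashlib.sha256(public_key).hexdigest()

def self_certifies(claimed_id: str, public_key: bytes) -> bool:
    """Anyone can check a presented public key against a claimed node ID
    without a trusted third party: the name certifies itself."""
    return node_id(public_key) == claimed_id

# A stand-in for real key material; an actual node would hold a key pair
# and sign the objects it publishes with the private half.
pub = b"example-public-key-bytes"
nid = node_id(pub)

print(self_certifies(nid, pub))        # True: key matches the name
print(self_certifies(nid, b"forged"))  # False: a substituted key fails
```

This is why no certificate authority is needed: possessing the name is enough to verify that a presented public key is the right one.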
Here is a brief summary of the most important components:
- With the Distributed Hash Table, nodes can store and share data without central coordination.
- Public-key cryptography enables immediate authentication and verification of the exchanged data.
- The Merkle DAG yields uniquely identified, tamper-proof, and permanently storable data.
- Users can access earlier versions of edited data via the version control system.
- All of this rests on a simple conceptual framework.
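The first component in the list, the distributed hash table, can be illustrated with a toy model in which each node stores the keys whose hashes are "closest" to its own ID by XOR distance, in the style of Kademlia-like DHTs. The node names, table layout, and `dht_put`/`dht_get` helpers are illustrative assumptions, not IPFS's actual DHT implementation.

```python
import hashlib

def h(data: bytes) -> int:
    """Hash bytes into the DHT's integer key space."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Four toy nodes, each identified by the hash of a name.
node_ids = [h(f"node-{i}".encode()) for i in range(4)]
tables = [dict() for _ in node_ids]  # each node's local key-value store

def responsible_node(key: bytes) -> int:
    """The node whose ID has the smallest XOR distance to the key's hash."""
    kid = h(key)
    return min(range(len(node_ids)), key=lambda i: node_ids[i] ^ kid)

def dht_put(key: bytes, value: str) -> None:
    tables[responsible_node(key)][key] = value

def dht_get(key: bytes) -> str:
    return tables[responsible_node(key)][key]

# Any participant can compute which node holds a key -- no central index.
dht_put(b"block-cid-123", "block bytes...")
print(dht_get(b"block-cid-123"))
```

Because every participant computes the same key-to-node mapping locally, lookups need no central coordinator, which is the property the summary list refers to.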