I see you're using JSON-LD in ConceptNet. If you start passing that data around using distributed systems, you'll inevitably want to start incorporating content-addressed links into the data. I recommend looking into IPLD as a data model for handling that. https://ipld.io The spec is still open -- this would be a good time to give feedback and/or spell out your use cases in this space.
Censorship is the suppression or prohibition of content. Putting Wikipedia on IPFS makes it strongly resistant to many forms of censorship because it uses content-addressing. This means that suppressed content can be redistributed through alternate channels using the same cryptographically verifiable identifier. It also means that you have clarity about which version of the content you're viewing, so if some entity publishes a censored version of your content, you have a way to distinguish between the two versions.
If you suppress it in one place, people can put it up somewhere else.
If you block one path, people can make the content available through another path.
If you modify it, people know that you modified it, have clear ways to distinguish between your copy and the unmodified copy, and can request the unmodified version without wondering which version they're getting.
If you destroy all the copies on the network, people can add new copies later and all of the existing links will still work.
Etc...
IPFS can't protect people from a government physically tracking down every copy of the censored content and destroying it -- that requires other efforts external to the protocol (i.e. moving copies outside their jurisdiction). It does, however, make it possible to move many copies of the content around the world, passing through many hands, serving it through a broad and growing range of paths, without the content losing integrity.
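The points above all follow from content-addressing: an item's identifier is derived from its bytes, so any faithful copy has the same address and any modification is immediately detectable. A minimal sketch (plain SHA-256 here as a simplified stand-in for IPFS's multihash/CID scheme):

```python
import hashlib

def content_address(data):
    # Content-addressed systems like IPFS derive the identifier from the
    # bytes themselves. Real IPFS uses multihash/CIDs over chunked data;
    # a single SHA-256 digest is a simplified stand-in.
    return hashlib.sha256(data).hexdigest()

original = b"The article as originally published."
mirror   = b"The article as originally published."  # a copy obtained elsewhere
censored = b"The article with a paragraph removed."

# Any faithful copy, redistributed through any channel, has the same address...
assert content_address(original) == content_address(mirror)

# ...while a modified copy is immediately distinguishable from the original.
assert content_address(original) != content_address(censored)
```

This is why destroying copies doesn't break links: a new copy added later hashes to the same address, so all existing links to it still resolve.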
Doesn't the scope of your definition also cover cases of academic journals rejecting low-quality submissions? They are, in essence, censoring mere low quality, not even falsehoods.
An overly broad scope means that censorship loses its moral oomph.
I take it as being about who holds (and therefore owns) the data. It's not that the corporations are evil, it's that we're operating in a system where only the corporations get to accumulate the data in their hands.
+1. And you read it because you're visiting hackernews (also central, private, for-profit). It's a systemic problem. That's why we need new tools that let us operate outside that system.
Agree, I could share (a subset of) my sources as could you.
We could even automate it so if my post reader notices several of my sources following another source it will ask me if I want to look at or follow that source, etc.
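That automation could be quite simple. A hypothetical sketch (the data shape and `suggest` function are made up for illustration): given a map from each source I follow to the sources *they* follow, suggest anything that several of my sources follow but I don't:

```python
from collections import Counter

def suggest(my_sources, threshold=2):
    # my_sources: dict mapping each source I follow to the set of
    # sources that source follows. Suggest any source followed by at
    # least `threshold` of my sources that I don't already follow.
    counts = Counter(s for follows in my_sources.values() for s in follows)
    already = set(my_sources)
    return {s for s, n in counts.items() if n >= threshold and s not in already}

follows = {
    "alice": {"carol", "dave"},
    "bob":   {"carol", "erin"},
}
assert suggest(follows) == {"carol"}  # carol is followed by both of my sources
```

The reader could surface these suggestions and ask before following, keeping the decision with the user rather than a central recommender.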
That's federated, which is a different kind of decentralized than the one implied in the article. Federated systems are less centralized, but still require regular clients to trust the system they are federated with.
All the tools to create personal websites existed more than 20 years ago. They haven't disappeared.
If anything, they're probably cheaper, easier to find and configure now, and also more diverse, slicker, and more featureful. And as others have pointed out, distributed social networks have existed for about ten years now.
And yes, the fact that the author chose to publish this pompous tripe on Medium is laughable.
The big shift to mobile, however, means that people can no longer use their computers to host a server. Napster would probably not achieve critical mass nowadays.
[CORRECTION/RETRACTION]: I misunderstood IPFS. You can totally edit your DAG in IPFS (which gives you "delete" ability). So yes, cypherpunks01 is completely right.
The explanation of how you do "deletes" in a DAG still stands.
[Original comment:]
ipfs doesn't have delete. It's basically a global immutable data store. Brilliantly well suited for a "permanent web" but not well suited for the kind of communication where people want to edit or delete the stuff they've put out in the past.
If you append to a merkle-dag then yes it's immutable, but you can always do the equivalent of a git rebase -- effectively removing a portion of the tree and then selectively re-applying the parts that you want to keep. That creates a new DAG, one that diverges from the DAG that was shared before. For example, if you accidentally commit a database password into your git repository and push it to github, you can go back, edit the git history to exclude that commit, and force-push the new tree to github. This completely removes the info from your git repository. Of course, if someone has already pulled that old code from github, they already have your password -- you can't change that!
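The rebase idea can be sketched with a toy merkle chain, where each node's identifier hashes over its payload and its parent's identifier (as in a git commit or an IPFS merkle-dag link). The `node` helper and payloads are made up for illustration:

```python
import hashlib

def node(payload, parent):
    # A node's identifier is a hash over its own payload plus its
    # parent's identifier, so identity depends on the whole history.
    return hashlib.sha256("{}|{}".format(payload, parent).encode()).hexdigest()

# Original history: a -> b (the accidentally committed password) -> c
a = node("initial code", None)
b = node("config with DB password", a)
c = node("more work", b)

# "Rebase": rebuild the chain, selectively re-applying only the parts we
# want to keep. Dropping b changes every downstream identifier.
c_rebased = node("more work", a)

assert c_rebased != c  # the new DAG diverges from the one shared before
```

Because the leaky node is no longer reachable from the new head, it is effectively deleted from your copy -- but, as with git, anyone who already fetched the old DAG still has it.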
I don't see a problem with the truth of the internet being reflected in ipfs.
Consider the internet to be write-only, with "forget" happening only maybe (you are not in control of that). If you accidentally commit a password somewhere, the correct response is to change that password.
That works for passwords, but not for everything. What happens when someone goes "Oops, didn't mean to drop those nudes into my gallery of vacation photos"? Are they irrevocably published? Or can you unpublish them and hope that nobody noticed while they were up?
IPFS reflects the truth of the public internet (and public speech/media in general): With IPFS, and any other internet technology, you can choose to stop sharing data with others, but you can't force other people to stop sharing copies they already have (except through some means outside the protocol, such as DDoS or the law).
I think a design goal of IPFS is that if you publish your nude photos to your personal IPFS node, they won't be sent to another node unless that node explicitly requests them by content address. So you can use it to share sensitive data, and you can always layer encryption on top. [1]