Proposal: Content addressed data #2192
Comments
Why not abandon the protocol directly? Please don't do duplicate work. Starting from content addressing, are you going to implement DHT and other stuff that is already in IPFS? What's your opinion on IPZN? BTW, you can find me on Telegram.
Adding IPFS protocol support is also a possible option, but I don't want to depend on an external application. DHT support with many uploaded files would be very inefficient: e.g. if you want to announce your IP for 100,000 files, then you have to connect to thousands of different computers, because the DHT buckets are distributed between random computers.
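To give a rough sense of why per-file DHT announces get expensive, here is a back-of-the-envelope simulation; the network size, replication factor and key-to-node mapping are made-up assumptions for illustration, not ZeroNet or IPFS behaviour:

```python
# Back-of-the-envelope model, not ZeroNet or IPFS code: the network size,
# replication factor and key-to-node mapping below are made-up assumptions.
import hashlib
import random

NUM_NODES = 10_000      # assumed DHT size
NODES_PER_KEY = 8       # assumed number of nodes responsible for each key
NUM_FILES = 100_000     # files to announce, as in the comment above

contacted = set()
for i in range(NUM_FILES):
    key = hashlib.sha1(b"file-%d" % i).digest()
    # model "the nodes closest to the key" as a key-seeded random choice
    rng = random.Random(key)
    contacted.update(rng.sample(range(NUM_NODES), NODES_PER_KEY))

print("distinct nodes contacted: %d of %d" % (len(contacted), NUM_NODES))
# With these numbers essentially every node in the table ends up being
# contacted, which is the inefficiency described above.
```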
So you don't agree on modularity?
Instead of saying "adding IPFS support", I'd say it's a radical change.
What's ZeroNet for? Anti-censorship? Right? There's a saying: 'the grapes are sour when you can't eat them'.
While modularity is important, using IPFS as the main backend doesn't look good to me. One of the reasons is depending on an external tool (that's what nofish said). We can never be sure there are no radical changes that might make ZeroNet stop working. Also, we'd have to spend much more time switching to IPFS and making it compatible with classic ZeroNet than we'd have to spend if we just tuned the classic protocol a bit.
I might want to reword that better: I'm not against an IPFS-based system, but that shouldn't be connected to ZeroNet. |
Are you sure?
Yeah, but I will be connected to IPZN
Basically, impossible.
It is.
Sure, you can develop a decentralized system yourself, but don't call it ZeroNet. If it turns out to be better, we'll switch.
POSIX is going to be alive for quite a long while. Same with Python 3.7.
Additionally, I'm not quite sure, but I believe that IPFS + IPZN is slower than classic ZeroNet.
It isn't.
I don't want to call it ZeroNet
It depends on IPFS. Do you know that DHT is not the only option for routing in IPFS?
It is. We have a rather big base for new features.
Ok, then don't say that IPZN is better than ZeroNet. It might be better than ZeroNet in the future if you finish it. @krixano and I worked on another decentralized system that could possibly be better (though somewhat non-classical), but we didn't end up implementing it. We didn't advertise it here and there.
Quite probably, but adding DHT (and others) to ZeroNet should be easier than switching to a completely different architecture.
But why does IPFS have so much code? Is the code unnecessary? No.
What?
As a result, I know nothing about your project.
Maybe, but as I said, are the tons of IPFS code unnecessary? It implies there are a lot of features that need to be done. When you find all the features are done, you will also realize you have re-implemented IPFS. I mean IPFS has more features and we should not duplicate work, just switch to IPFS. So I'd rather re-implement application-layer code instead of lower-layer code. That's easier.
It's unnecessary for ZeroNet's use cases.
A typo, sorry; it should be "could possibly be better".
Yes, that's what I'm talking about! Don't announce before an alpha version, we don't want another BolgenOS.
Some of them are unnecessary for ZeroNet use cases.
See above.
That's how it works today: add 10 levels of abstraction and let it figure itself out! We should think about being better, not being easy to build.
Do you want more awesome features?
We need ideas and improvements on paper to achieve a better design.
Modularity is better, as well as easy to build.
What's that project?
List them. I believe that most of them can be easily implemented in classic ZeroNet and even more easily after adding content-addressed files.
It looks like you learned a new buzzword, "IPFS", and now you're saying "IPFS supports more features, go use IPFS!" First, say what you're missing and how rewriting all ZeroNet code to support IPFS will be faster or easier (that's what you're appealing to) than adding those features as classic ZeroNet plugins.
We don't want to depend on an external service. We could separate ZeroNet into a backend and a frontend later when we grow bigger, but we can't just take someone else's project and use it, mostly because we can't add features or fix bugs if the IPFS guys don't like that.
This is mostly not related to ZeroNet, so I'll keep it short. Think of it as a decentralized Gopher-like system.
For example, FileCoin.
Why don't you add more features to TCP/IP/HTTP/HTTPS?
I doubt you have ever read about IPFS.
Another non-buzzword one please. And even then, FileCoin can be implemented as a plugin.
Is this sarcasm?
Sure I did. Please don't ignore my questions and answer: what IPFS features can't be added to ZeroNet?
Huh, do you think you guys have enough effort?
Yeah, of course. I mean your concern is nonsense and will never happen, because it's infrastructure like http.
Nonsense.
IPLD claims to be a "merkle forest" that supports all datatypes. Thus I find it entirely pointless to fight over what's best.
Don't make us do what you want to. Do it yourself: either write your own network or bring features to ours.
What the heck, merger sites were added, optional files were added, big files were added!...
It looks like a classic "no u".
@mkg20001 Your arguments look better. While I wouldn't use IPFS, libp2p might be a better solution because it's at least used by many projects, so it's unlikely that breaking changes will be added. So, is the plan to switch to libp2p?
Of course, open source is voluntary.
You think these are features? They're just workarounds for bad design.
Go and read the IPFS papers again. I don't know what to say.
Right. nofish can't be forced to do something unless he or ZeroNet community finds it important (in the latter case, we'll either end up with forking or with convincing nofish). Go find those who like your idea and start development.
Uh, what? Sure, big files might be a hotfix, but how is optional/required file separation bad design?
We can even start at the IPFS homepage:
See? Add a file. ZeroNet is not just about files: PeerMessage works without files, and it never should be about them.
Huh, you definitely don't know about pubsub and IPFS's plan for the dynamic web.
Quite probable. Now show me a working implementation of pubsub and the IPFS-like dynamic web in Python.
You were saying "well, pubsub is not ready yet" and "IPFS development is slow", and now you're asking us to switch to something that's not ready!
Take a look at https://gitlab.com/ipzn/ipzn/wikis/Notes, these are WIP IPFS features
So what? Just wait. Do you want a toy project?
Yes. You can't implement something before its dependencies are ready!
ZeroNet already contains plugins for some low-level network communications. Other protocols can also be added as new plugins. I agree that it would be good to have ZeroNet more modular, but this can be done in the existing code. But too much layering and modularity aren't really needed. ZeroNet is a full self-contained network with support for sites, users and (almost) real-time updates. So it doesn't really need to be very layered, as most features are already built in. But IPFS is only a file system for storage without any other functionality. It needs to be modular because developers have to implement most functionality themselves. I don't really see IPFS as a ZeroNet competitor but as an addition. So ideally, IPFS would be a plugin to ZeroNet, so ZeroNet would be able to use either the ZeroNet protocol, the BitTorrent protocol, the IPFS protocol or any other protocol depending on what is needed.
If this is implemented in a good way, it can also be efficient. And it is good to support multiple protocols, to make the network bigger and reach more users. It could actually be more efficient. For example, you could use BitTorrent, IPFS or Swarm (depending on what is most appropriate/available/not blocked) for big static content, as they are mostly made for it. Then you could use the ZeroNet protocol or libp2p for dynamic content. The same goes for DNS systems for ZeroNet. But to do this, you don't need a completely new project. Most of the things can be done with plugins and some of them with some core changes.
What makes Freenet and GNUnet "outdated" and IPFS "very modern"? By development and releases, certainly not, as both (actually all three) of them have very active development histories. And they also have a lot of users. OK, IPFS is newer and has some better features, but how would you make sure that there will be no better solution than IPFS in the future?
Why not? Yes, to make ZeroNet more robust. If more protocols were supported, it would be harder to block all of them. Also see above and other comments for more details.
Sure?
Sure? I mentioned pubsub so many times before.
Do you have that much effort? If not, a single best protocol is enough.
IPFS is not a 'plugin' or an alternative protocol, it will be the main protocol.
Yeah, of course. IPFS is just fking modern.
That's not the thing we should think about now.
Not really.
However, the main reason is that I want to take full control of the project. I forgot to say this, uh.
@HelloZeroNet I want to mention something here that's somewhat related to this and to deduplication: currently, if we have two separate zites that use the same merged zite type, both of these zites will have a database that contains the exact same information. I believe we should be able to fix this by having one database per merged zite type handled by the core, and then these two zites would be able to query this one database using the standard dbQuery. But there's a major problem with this: both of these zites could have different dbschemas. So... just something to think about. This isn't a full proposal or anything, which is why it's not in a separate issue.
De-duplicating the database is not possible, because the dbschema is defined by the merger site, not by the merged one. You can share a database between sites by using the Cors permission and
If we don't have a 'database', we can. Refer to orbit-db.
Yeah, I know about how merger sites work, hence... "But there's a major problem with this, both of these zites could have different dbschema's." (I've also created many sites that are merger sites, but it seems you've forgotten about that... or you just don't pay attention to ZeroNet devs).
Right... but this doesn't have write permission, so you end up having to resort to merged zites for that.
Sure, I know, but this is a public conversation and I try to give answers that are as general as possible to make them useful to others as well.
I didn't read all of these expensive comments. Why start a browser-wars-like fight? It's pretty simple to keep compatibility: use a .dat folder inside every site to keep its versioning snapshots on every update, and use IPFS to generate hashes of the blobs inside this folder.
Keeping dirty folder structure?
I don't care about the dirty compatibility
That's... not how that works. It may seem intuitive, but it will turn out to get really horrible really quickly or it won't work for most of the cases (for example every user would need to track their own .dat folder for user data, but this wouldn't be enforceable, at least without tons of hacks)
Every second spent fighting in this thread could instead have been spent writing code that actually proves one solution better than another.
Of course, I will soon give you all a proof-of-concept, but for now I have to wait. You don't understand the theory, and so you say 'give me the proof-of-concept'.
The first thing you could do is make your search engine work better than Zirch.
@blurHY: You've added nothing meaningful to this conversation. Yes, we get it: you're deeply and hopelessly infatuated with IPFS, presumably due to the psychology of previous investment in your InterPlanetary ZeroNet (IPZN). The overwhelming majority of us disagree with your hardline position. Can we please move on constructively? Also, stop belligerently polluting this and other ZeroNet threads. This includes #2198, #2194, #2189, #2062, and the list goes on and on. #2090 is probably the only issue where you offer sound advice unburdened by sarcasm, vitriol, or one-liner exhortations extolling the virtues of IPFS and denigrating everything else in existence. tl;dr: More threads like #2090. Fewer threads like this and all of your other commentary. Thank you.
As I said, it's centralized and it has already been abandoned. Moreover, that's not the point. PS: I have nothing more to say, for you are too lazy to discover other things better than ZeroNet.
Both ZeroNet and IPFS support a decent search engine.
It will soon be banned in China once it is used by many people.
Hello @HelloZeroNet, I think the first thing you must do is separate the static content from any .json! The content should be under a different folder, not in the same one where the json is! Also, it would be great to have an option to use ZeroNet with no headers and no sandbox, simply serving static content! Verification is already done by the network using the Bitcoin address! Much like IPFS, the content is accessible under the hash, in this case the Bitcoin address. The main problem with ZeroNet is that it is very difficult for most people to unset the headers and correctly proxy all requests to the backend from the clearnet. For example, I am able to use any normal TLD proxied to the ZeroNet backend, stripped of the frame and headers. The back-end proxies incoming requests from the front-end to 127.0.0.1:43110/raw/example.bit. On the front-end I add my own headers. I can work perfectly on my local machine, publish and sign everything on localhost and use ZeroNet over Tor. The back-end downloads my updates... Then people who come to my example.org domain are eventually proxied correctly to the ZeroNet back-end behind the firewall where the STATIC example.bit is located, and voila! Everything works perfectly. I think this is even better than IPFS! You only need a server which acts as a front-end and some ZeroNet backend servers as load balancers. All publishing/updating is ONLY done on localhost. :) Decentralized? Most certainly! I think this is even better than IPFS's _dnslink, which clearly exposes the gateway...
I will possibly write a guide on how to proxy any TLD to content hosted on ZeroNet. As I said, it is way better than IPFS. You just need to rethink how many people you allow to use ZeroNet. By allowing all headers to be removed and the frame to be disabled, SOLELY for the purpose of acting as a back-end, you will eventually open up ZeroNet to the entire world! The frame and headers you included in ZeroNet are only useful when there is no need to proxy traffic to it, like when someone installs it for the first time on their local machine... The back-end doesn't need to sign anything, just download the updated sites...
@BugsAllOverMe See #2214 for DNS support. |
With a single-user site it would be easier to allow downloading big files by default too. Some websites contain large files in specific directories. It can be confusing to users and site owners that some users cannot access large files in an easy way, because there is no way to set up automatic downloading of all large files when the user downloads the zite. A button would be easier: one that shows the total size of the large files in a single site, and only restricts large file downloads if the user clicks the button when the site loads and doesn't want them. If the user downloads the zite, clicks the button to skip the big files, and later decides he wants them after all, a single button in the top-right 0 menu that shows the total size of all large files in the current zite and allows downloading them with one click would be better. Also, allow every single large file download to be resumed with a simple button click in the client: when a user clicks a large file, show a non-frustrating message with a download button, without requiring site owners to integrate a +Seed button. This would work with both single-user and multi-user zites. These easy things could solve many seeding, downloading and +Seed button integration issues. Many large files are currently hard to download, poorly seeded, or dead. If we combine this idea with site-independent file storage and de-duplication, the files can become much healthier.
@HelloZeroNet Here is a ZeroTalk comment that says:
Nofish confirmed that this isn't how ZeroNet currently works. Also, it would be a benefit if the ZeroNet-located bigfile name contained (not necessarily equalled) the original human-friendly file name, so one does not need to rely only on ZeroNet to find the file.
Use SHAKE-256 for the file hash.
@slrslr If you add (copy or symlink) a file or a directory to the data/__static__/__add__ directory, then you don't have to write or store your files multiple times, and the file names and the content will be the same. If you upload the file using HTTP POST, then I think there is no way to avoid it.
Note that content-addressed data should also be accessible from mutable addresses. This would be useful for updating content and getting the newest version easily. This could be similar to IPNS, where existing zite functionality with public and private keys is used, so a zite would actually link to content-addressed data.
It would be important that if the same file is uploaded under another file name, the program can detect it and merge the seeders/leechers, independently of the site. If we upload the same file to ZeroUp or KopyKate, the filename, and thus the file reference, is changed, but the file is absolutely the same. This way you can't merge the users who seed the same file: the program cannot detect that somebody uploaded the same file to ZeroUp, KopyKate etc. It would be easier for a user to be warned if the current file already exists somewhere else, on another site, because otherwise you need more copies of the same file seeded by different people. This is a huge waste of resources. It's not a good thing that ZeroUp and KopyKate rename the files, e.g. the original file.mp4 to 1234567890-file.mp4. This way you and the users can't seed the same file that exists on another site without manually editing the .json file. Existing seeders are lost if the same file is re-uploaded or uploaded with another name, and unnecessary copies are stored which take up space on your hard drive.
The problem will be solved by this feature as the files will be stored/shared independently from the sites. |
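To illustrate why site-independent, content-addressed storage solves the renamed-duplicate problem, a minimal sketch (not ZeroNet code), assuming the 64-character sha512t hashes used elsewhere in this proposal:

```python
# Not ZeroNet code: the hash depends only on the bytes, never on the file name,
# so the same video uploaded as "file.mp4" and "1234567890-file.mp4" maps to
# one content address, and its seeders can be merged.
import hashlib

def sha512t(data):
    # sha512 truncated to 256 bits, matching the 64-character hashes in this issue
    return hashlib.sha512(data).hexdigest()[:64]

payload = b"example video payload"
assert sha512t(payload) == sha512t(payload)  # the name plays no role at all
print(sha512t(payload))
```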
Content addressed data access
Why?
What?
Store and access based on file's hash (or merkle root for big files)
How?
File storage
data/static/[download_date]/[filename].[ext]
data/__static__/[download_date]/[hash].[ext]
data/__static__/[download_date]/[partial_hash].[ext]
data/__static__/[partial_hash]/[hash].[ext]
Possible alternatives to the static content root directory (instead of data/__static__/):
data-static/
data/__immutable__/
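A minimal sketch (not the actual implementation) of how a file could be hashed and placed under the data/__static__/[download_date]/[hash].[ext] candidate above, assuming "hash" means sha512 truncated to 256 bits (the sha512t used later in this proposal):

```python
# Sketch only: store a file under data/__static__/[download_date]/[hash].[ext].
import hashlib
import os
import shutil
import time


def sha512t_file(path, buf_size=1024 * 1024):
    """First 256 bits of the file's SHA-512 digest, as 64 hex characters."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(buf_size), b""):
            h.update(chunk)
    return h.hexdigest()[:64]


def store_static(src_path, root="data/__static__"):
    download_date = time.strftime("%Y-%m-%d")
    ext = os.path.splitext(src_path)[1].lstrip(".")
    file_hash = sha512t_file(src_path)
    dst_dir = os.path.join(root, download_date)
    os.makedirs(dst_dir, exist_ok=True)
    dst_path = os.path.join(dst_dir, "%s.%s" % (file_hash, ext))
    if not os.path.exists(dst_path):  # identical content collapses to one file
        shutil.copy2(src_path, dst_path)
    return dst_path
```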
Variables:
Url access
http://127.0.0.1:43110/f/[hash].[ext] (for non-big file)
http://127.0.0.1:43110/bf/[hash].[ext] (for big file)
The file name could optionally be added, but the hash does not depend on the filename:
http://127.0.0.1:43110/f/[hash]/[anyfilename].[ext]
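A sketch (not the actual UiServer routing) of how these URLs could be parsed; the 64-hex-character hash length and the /d/ prefix for directories are taken from later parts of this proposal:

```python
# Illustration only: split /f/, /bf/ and /d/ URLs into hash, optional file
# name and extension; assumes 64 hex character (sha512t) hashes.
import re

STATIC_URL_RE = re.compile(
    r"^/(?P<prefix>f|bf|d)/(?P<hash>[0-9a-f]{64})"
    r"(?:/(?P<filename>[^/]+)|\.(?P<ext>[A-Za-z0-9]+))$"
)


def parse_static_url(path):
    match = STATIC_URL_RE.match(path)
    if not match:
        return None
    return {
        "big": match.group("prefix") == "bf",       # /bf/ marks big files
        "directory": match.group("prefix") == "d",  # /d/ marks directory uploads
        "hash": match.group("hash"),
        "filename": match.group("filename"),        # optional human-readable name
        "ext": match.group("ext"),
    }


print(parse_static_url("/f/" + "0" * 64 + ".jpg"))
print(parse_static_url("/bf/" + "0" * 64 + ".mkv"))
print(parse_static_url("/f/" + "0" * 64 + "/any_file.jpg"))
```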
File upload
data/__static__/__add__: Copy the file to this directory, visit the ZeroHello Files tab, and click on "Hash added files".
File download process
Directory upload
For directory uploads we need to generate a content.json that contains the references to the other files. Basically these would be sites where the content.json is authenticated by its sha512t hash instead of by the public address of the owner. Example:
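As an illustration only (the exact schema is not specified here), a sketch of how such a content.json could be generated and addressed; the files/sha512/size field names are assumptions borrowed from existing ZeroNet content.json conventions:

```python
# Hypothetical generator for a directory-upload content.json.
import hashlib
import json
import os


def sha512t(data):
    return hashlib.sha512(data).hexdigest()[:64]


def build_directory_content_json(directory):
    files = {}
    for root, _dirs, names in os.walk(directory):
        for name in names:
            full_path = os.path.join(root, name)
            rel_path = os.path.relpath(full_path, directory).replace(os.sep, "/")
            with open(full_path, "rb") as f:
                data = f.read()
            files[rel_path] = {"sha512": sha512t(data), "size": len(data)}
    content = json.dumps({"files": files}, sort_keys=True)
    # The directory is then addressed by the hash of this generated content.json,
    # which gives the /d/{sha512t of generated content.json}/... URLs below.
    return content, sha512t(content.encode("utf-8"))
```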
These directories can be accessed on the web interface using http://127.0.0.1:43110/d/{sha512t hash of generated content.json}/any_file.jpg (a file list can be displayed on directory access).
Downloaded files and the content.json are stored in the data/static/[download_date]/{Directory name} directory. Each file in the directory is also accessible using
http://127.0.0.1:43110/f/602b8a1e5f3fd9ab65325c72eb4c3ced1227f72ba855bef0699e745cecec2754/any_file.jpg
As an optimization, if the files are accessed using a directory reference, the peer list can be fetched using findHashId/getHashId from other peers, without accessing the trackers.
Possible problems
Too many tracker requests
Announcing and keeping track of peers for a large number (10k+) of files can be problematic.
Solution #1
Send tracker requests only for large (10MB+) files.
To get the peer list for smaller files we use the current getHashfield/findHashId solution.
Cons:
Solution #2
Announce all files to zero:// trackers, reduce the re-announce time to e.g. 4 hours (re-announce within 1 minute if a new file is added).
(Sending this number of requests to BitTorrent trackers could be problematic.)
Don't store peers for files that you have 100% downloaded.
Request size for 10k files: 32 bytes * 10k = 320 kB (optimal case)
Possible optimization #1:
Change the tracker communication to request a client id token and only communicate hash additions/deletions until the expiry time.
The token expiry time extends with every request.
Possible optimization #2:
Take some risk of hash collision and allow the tracker to specify how many characters it needs from the hashes
(based on how many hashes it stores).
Estimated request size to announce 22k files:
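As a rough, assumption-based illustration of the sizes involved (32 bytes per full sha512t hash, shorter prefixes for optimization #2, and a fixed per-request overhead are all assumed values, not measurements):

```python
# Rough illustration with assumed sizes, not measured ZeroNet traffic.
NUM_FILES = 22_000
OVERHEAD = 100                 # assumed fixed bytes per announce request

def announce_size(num_files, bytes_per_hash, overhead=OVERHEAD):
    """Bytes needed to announce every file hash in a single request."""
    return num_files * bytes_per_hash + overhead

print("full 32-byte hashes: %d bytes" % announce_size(NUM_FILES, 32))  # ~704 kB
print("8-byte prefixes:     %d bytes" % announce_size(NUM_FILES, 8))   # ~176 kB
print("4-byte prefixes:     %d bytes" % announce_size(NUM_FILES, 4))   # ~88 kB
# With optimization #1 a re-announce only carries the hashes added or deleted
# since the last request, so the steady-state cost is far smaller again.
```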
Cons:
Download all optional files / help initial seed for specific user
Downloading all optional files in a site, or all files uploaded by a specific user, won't be possible anymore:
The optional files will no longer be stored in the files_optional node of the user's content.json file.
node.Solution #1
Add a files_link node to content.json that stores the files uploaded in the last X days
(with sha512, ext, size and date_added nodes).