This 32768 block limit makes the block size rather huge when recovering from small random errors is the main goal, e.g. the 512-byte sectors of a slightly defective hard drive (non-AF).
Example:
A 200 MB file ends up with about 32768 blocks of roughly 6000 bytes each, whereas I would expect 1024 bytes per block. So my recovery possibilities are divided by six... and you can imagine how much recovery granularity is lost with a 2 GB file.
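Rough sketch of the math, assuming par2 simply divides the file size by the 32768-block cap and rounds up to a multiple of 4 (the spec requires the block size to be a multiple of 4); `min_block_size` is just an illustrative helper, the exact size par2cmdline picks may differ slightly:

```python
# Smallest usable PAR2 block size for a given file size, assuming the
# 32768-input-block cap and a block size that is a multiple of 4.
MAX_BLOCKS = 32768

def min_block_size(file_size_bytes: int) -> int:
    size = -(-file_size_bytes // MAX_BLOCKS)   # ceiling division
    return ((size + 3) // 4) * 4               # round up to a multiple of 4

print(min_block_size(200 * 1024 * 1024))  # 200 MB -> 6400 bytes per block (the "about 6000" above)
print(min_block_size(2 * 1024**3))        # 2 GB   -> 65536 bytes per block
```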
It takes about 7 seconds to create this PAR2 set with about 10 recovery blocks (on an old Core 2 at about 3.6 GHz), and I could easily wait ten times longer to get a more efficient PAR2 file.
So, please, I suggest raising this tiny limit; it is not needed anymore twenty years later, now that we routinely work with multi-gigabyte files.
I have had a look at the source and spotted some checks on this 32768 value, but I wonder whether there is other stuff under the hood to take into account.
I think raising the block count limit to 24 bits (or even 32 bits) is the change to make.
Thanks!
PAR2 is spec'd to use 16-bit GF, hence the file format is fundamentally restricted to 32768 input blocks.
To go beyond that, you need a different file format.
Note that the block size doesn't have to match some underlying sector size - blocks can be larger and still work fine.
There's a draft PAR3 specification which currently supports arbitrarily sized GF, which enables more than 32768 input blocks.
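For the curious, here is a minimal sketch of where the 32768 figure comes from, assuming my reading of the PAR2 spec is right: each input block gets a distinct constant 2^e in GF(2^16) whose exponent e is not divisible by 3, 5, 17 or 257 (the prime factors of 65535), and there are exactly Euler-phi(65535) = 32768 such exponents:

```python
# Count the usable PAR2 constants: exponents e in [1, 65535] that are
# coprime to 65535 (i.e. not divisible by 3, 5, 17 or 257).
from math import gcd

usable = sum(1 for e in range(1, 65536) if gcd(e, 65535) == 1)
print(usable)  # 32768
```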
Okay! So it seems we are stuck with this limit in PAR2, and we can't do anything without rewriting from scratch.
I was not aware of this PAR3 project under construction; thanks for mentioning it, I will follow and test it.
A bigger block size also works, yes, but it wastes bytes that aren't needed just to repair a 512-byte sector, and it also reduces the number of recovery blocks you get for a given PAR2 file size.
But be warned: the case I'm talking about can't be treated as "business as usual"; anything can happen with a defective drive, from a few tiny random unreadable sectors to very large damaged areas, or even a drive that is no longer detected at all.
I just want to give myself a small chance of recovery without using too much space.
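To put rough numbers on the waste (hypothetical worst case where every unreadable 512-byte sector hits a different input block, so one recovery block is needed per damaged block):

```python
# Recovery data needed to cover N scattered bad 512-byte sectors, in the
# worst case of one damaged input block per bad sector: one recovery block
# per damaged block, so the cost scales with the block size.
def recovery_bytes(bad_sectors: int, block_size: int) -> int:
    return bad_sectors * block_size

for block_size in (512, 6400):
    print(block_size, recovery_bytes(100, block_size))
# 512  -> 51200 bytes of recovery data for 100 bad sectors
# 6400 -> 640000 bytes for the same damage, over 12x more
```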