plans to add streaming API? #12

Open
charles-cooper opened this issue May 19, 2015 · 5 comments

@charles-cooper

Are there any plans to add support for the new LZ4 streaming API? It would be useful for programs that don't want to decompress an entire stream into memory at once.

http://fastcompression.blogspot.com/2014/05/streaming-api-for-lz4.html
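
For reference, the decode side of that API is only a few entry points; a minimal FFI sketch (untested, and the Haskell names here are just placeholders rather than anything lz4hs currently exposes) might look like this:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.Ptr (Ptr)
import Foreign.C.Types (CChar, CInt)

-- Opaque LZ4_streamDecode_t state from lz4.h
data LZ4StreamDecode

foreign import ccall unsafe "lz4.h LZ4_createStreamDecode"
  c_createStreamDecode :: IO (Ptr LZ4StreamDecode)

foreign import ccall unsafe "lz4.h LZ4_freeStreamDecode"
  c_freeStreamDecode :: Ptr LZ4StreamDecode -> IO CInt

-- Decompresses one block, using previously decoded data as the dictionary,
-- so consecutive blocks of a stream can be fed one at a time.
foreign import ccall unsafe "lz4.h LZ4_decompress_safe_continue"
  c_decompressSafeContinue
    :: Ptr LZ4StreamDecode  -- decode state
    -> Ptr CChar            -- compressed source
    -> Ptr CChar            -- destination buffer
    -> CInt                 -- source size in bytes
    -> CInt                 -- destination capacity in bytes
    -> IO CInt              -- bytes written, or a negative value on error

-- Tiny smoke test (link with -llz4): allocate and free a decode stream.
main :: IO ()
main = do
  s <- c_createStreamDecode
  _ <- c_freeStreamDecode s
  putStrLn "allocated and freed an LZ4 decode stream"
```

The point is that LZ4_decompress_safe_continue carries dictionary state between calls, so blocks can be decoded one at a time without materialising the whole stream.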

@jjl commented Apr 10, 2016

I've started integrating this in my 'frame' branch jjl@65c9823

@mwotton (Owner) commented Apr 12, 2016

That's great to hear, please PR when you're done :)

@charles-cooper (Author)

@jjl you may be interested in some work I did in https://github.com/charles-cooper/lz4hs/blob/master/Codec/Compression/LZ4/Decompress.hs

@jjl commented Apr 14, 2016

@charles-cooper thanks for that. My Haskell FFI knowledge is a bit limited, so I'm looking forward to cribbing from your attempt :)

Just to manage expectations, I'm anticipating closing this off sometime in April depending on how my other work goes.

Some of my notes so far:

  • Opaque C types are annoying. Did I do it right?
  • For backcompat, I haven't removed the existing code
  • There doesn't seem to be a way to apply HC over streaming

My use case is a special snowflake: I'm aiming for compressed blocks of at most 64kb. Rather than letting the last piece overflow the buffer, I'd like to detect that it would overflow and hold it back whole for the next block, so that data isn't split across blocks. I'm intending to use it in conjunction with a Rope of ByteStrings.
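
A rough sketch of that packing decision, assuming a worst-case size bound per piece (compressBound here just mirrors the LZ4_COMPRESSBOUND formula from lz4.h; in practice I'd check the actual compressed size):

```haskell
import qualified Data.ByteString as BS

-- Worst-case compressed size of one piece, mirroring LZ4_COMPRESSBOUND
-- from lz4.h: isize + isize/255 + 16.
compressBound :: BS.ByteString -> Int
compressBound bs = n + n `div` 255 + 16
  where n = BS.length bs

maxBlock :: Int
maxBlock = 64 * 1024

-- Greedily pack pieces into one block.  A piece that would push the block
-- past the 64kb budget is held back whole for the next block rather than
-- being split.  Returns (pieces in this block, leftover pieces).
-- (A single piece that is itself oversized would need special handling.)
packBlock :: [BS.ByteString] -> ([BS.ByteString], [BS.ByteString])
packBlock = go 0 []
  where
    go _    acc []       = (reverse acc, [])
    go used acc (p : ps)
      | used + cost <= maxBlock = go (used + cost) (p : acc) ps
      | otherwise               = (reverse acc, p : ps)
      where cost = compressBound p
```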

What are other people looking to use this for? I'd like to bear this in mind when I finish the work.

Cheers

@nh2 commented Mar 27, 2017

Note: I have picked up the task of writing an LZ4 frame implementation as part of a conduit (looking into what you guys have in your branches, of course). It's not quite done yet (compression works and is compatible with the lz4 command-line utility, but I still have to do decompression), but I thought I'd leave a note here to avoid duplicate work.
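
For illustration only, the streaming shape is roughly the following; the header/update/endMark arguments are hypothetical stand-ins for wrappers around LZ4F_compressBegin, LZ4F_compressUpdate and LZ4F_compressEnd, not the actual implementation:

```haskell
import Data.ByteString (ByteString)
import Data.Conduit (ConduitT, awaitForever, yield)

-- Shape of a frame-compressing conduit: emit the frame header once, then
-- one compressed chunk per incoming chunk, then the end-of-frame mark.
compressFrame
  :: Monad m
  => ByteString                  -- frame header bytes
  -> (ByteString -> ByteString)  -- compress one chunk
  -> ByteString                  -- end-of-frame bytes
  -> ConduitT ByteString ByteString m ()
compressFrame header update endMark = do
  yield header
  awaitForever (yield . update)
  yield endMark
```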
