Releases · janhq/cortex.llamacpp
0.1.18-25.06.24
What's Changed
- chore: Update llama.cpp submodule to latest release b3218 by @vansangpfiev in #116
- feat: add noavx linux build by @vansangpfiev in #115
- fix: readme by @vansangpfiev in #117
Full Changelog: v0.1.18-22.06.24...v0.1.18-25.06.24
0.1.12-25.06.24
- Customized release to support stable Vulkan
Full Changelog: v0.1.18-25.06.24...v0.1.12-25.06.24
0.1.19
Changes
- Update llama.cpp submodule to latest release b3197 @jan-service-account (#114)
- feat: request larger context for llava-v1.6 @vansangpfiev (#105)
- Update llama.cpp submodule to latest release b3188 @jan-service-account (#112)
- Update llama.cpp submodule to latest release b3184 @jan-service-account (#108)
- Update llama.cpp submodule to latest release b3180 @jan-service-account (#104)
- add clean step @hiento09 (#103)
Contributors
@github-actions[bot], @hiento09, @jan-service-account, @sangjanai and @vansangpfiev
0.1.18-22.06.24
What's Changed
- feat: request larger context for llava-v1.6 by @vansangpfiev in #105
- Update llama.cpp submodule to latest release b3197 by @jan-service-account in #114
Full Changelog: v0.1.17-20.06.24...v0.1.18-22.06.24
0.1.17-20.06.24
What's Changed
- Update llama.cpp submodule to latest release b3188 by @jan-service-account in #112
Full Changelog: v0.1.17-19.06.24...v0.1.17-20.06.24
0.1.17-19.06.24
What's Changed
- add clean step by @hiento09 in #103
- Update llama.cpp submodule to latest release b3180 by @jan-service-account in #104
- Update llama.cpp submodule to latest release b3184 by @jan-service-account in #108
Full Changelog: v0.1.18...v0.1.17-19.06.24
0.1.18
Changes
- Update llama.cpp submodule to latest release b3166 @jan-service-account (#102)
- Linux cuda separate cpu instruction and enable sccache @hiento09 (#101)
- feat: add model_path parameter @vansangpfiev (#89)
- Use note auto-generate @hiento09 (#98)
- Correct script create tag @hiento09 (#97)
- Update llama.cpp submodule to latest release b3153 @jan-service-account (#96)
- Change condition checking pr status @hiento09 (#95)
- Correct condition trigger for quality gate @hiento09 (#92)
- Add nightly Build CI @hiento09 (#90)
- Feature sccache for windows local @hiento09 (#85)
- Update llama.cpp submodule to latest release b3140 @jan-service-account (#84)
Contributors
@github-actions[bot], @hiento09, @jan-service-account and @vansangpfiev
0.1.17-15.06.24
What's Changed
- Update llama.cpp submodule to latest release b3140 by @jan-service-account in #84
- Feature sccache for windows local by @hiento09 in #85
- Add nightly Build CI by @hiento09 in #90
- Correct condition trigger for quality gate by @hiento09 in #92
- Change condition checking pr status by @hiento09 in #95
- Update llama.cpp submodule to latest release b3153 by @jan-service-account in #96
- Correct script create tag by @hiento09 in #97
Full Changelog: v0.1.17...v0.1.17-15.06.24
0.1.17
Changes
- feat: enable flash attention by default @vansangpfiev (#82)
- feat: support use_mmap option in parameter @vansangpfiev (#79)
- fix: remove avx2 check since we have it at cortex-cpp layer @vansangpfiev (#78)
- chore: changes windows CI runners @vansangpfiev (#81)
- feat: enable caching by default @vansangpfiev (#77)
- Update llama.cpp submodule to latest release b3091 @jan-service-account (#76)
- feat: add cache_type parameter @vansangpfiev (#75)
- Update llama.cpp submodule to latest release b3088 @jan-service-account (#74)
- fix: use inference stop words by default, fallback to loading model stop words @vansangpfiev (#72)
- feat: add stop words when loading model @vansangpfiev (#71)
- Update llama.cpp submodule to latest release b3078 @jan-service-account (#70)
- Update llama.cpp submodule to latest release b3070 @jan-service-account (#69)
- Update llama.cpp submodule to latest release b3051 @jan-service-account (#68)
- Update llama.cpp submodule to latest release b3040 @jan-service-account (#67)
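Several of the 0.1.17 and 0.1.18 entries above add or change model-loading options (use_mmap in #79, cache_type in #75, stop words in #71, model_path in #89). The sketch below is purely illustrative of how such options might be passed to a locally running server; the endpoint path, port, default values, and exact field names are assumptions and are not confirmed by these release notes.

```python
# Hypothetical sketch only: shows the load-time options named in these notes
# being sent to a local server. Endpoint, port, and field values are assumed.
import requests

payload = {
    "model_path": "/models/llava-v1.6.gguf",  # hypothetical model file (#89)
    "use_mmap": True,                         # use_mmap option (#79)
    "cache_type": "f16",                      # cache_type parameter (#75); value assumed
    "stop": ["</s>"],                         # stop words set when loading the model (#71)
}

resp = requests.post("http://127.0.0.1:3928/loadmodel", json=payload, timeout=60)
print(resp.status_code, resp.text)
```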
Contributors
0.1.16
Changes
- Update llama.cpp submodule to latest release b3091 @jan-service-account (#76)
- feat: add cache_type parameter @vansangpfiev (#75)
- Update llama.cpp submodule to latest release b3088 @jan-service-account (#74)
- fix: use inference stop words by default, fallback to loading model stop words @vansangpfiev (#72)
- feat: add stop words when loading model @vansangpfiev (#71)
- Update llama.cpp submodule to latest release b3078 @jan-service-account (#70)
- Update llama.cpp submodule to latest release b3070 @jan-service-account (#69)
- Update llama.cpp submodule to latest release b3051 @jan-service-account (#68)
- Update llama.cpp submodule to latest release b3040 @jan-service-account (#67)