sync : llama.cpp #955

Merged
merged 13 commits on Sep 8, 2024

Commits on Sep 8, 2024

  1. Threadpool: take 2 (llama/8672)

    * Introduce ggml_compute_threadpool
    
    - OpenMP functional: check
    - Vanilla ggml functional: Check
    - ggml w/threadpool functional: Check
    - OpenMP no regression: No glaring problems
    - Vanilla ggml no regression: No glaring problems
    - ggml w/threadpool no regression: No glaring problems
    
    * Minor fixes
    
    * fixed use after release bug
    
    * fixed a harmless race condition
    
    * Fix Android build issue
    
    * fix more race conditions
    
    * fix deadlock for cases where cgraph.n_nodes == 1, and fix the --poll case
    
    * threadpool: use cpu_get_num_math to set the default number of threadpool threads
    
    This way we avoid using E-Cores and Hyperthreaded siblings.
    
    * bench: create fresh threadpool for each test
    
    For benchmarking it's better to start a fresh pool for each test with the exact number of threads
    needed for that test. Having larger pools is suboptimal (causes more load, etc).
    
    * atomics: always use stdatomics with clang and use relaxed memory order when polling in ggml_barrier
    
    This also removes sched_yield() calls from ggml_barrier() to match OpenMP behavior.
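
    (A minimal sketch of the polling pattern this describes, under the assumptions above; names are illustrative, not the actual ggml_barrier implementation:)
    
    ```c
    #include <stdatomic.h>
    
    // Spin-barrier sketch: waiting threads poll a "pass" counter with relaxed
    // loads (no sched_yield), and only synchronize once the barrier is released.
    struct spin_barrier {
        atomic_int n_arrived; // threads that have reached the barrier
        atomic_int n_passed;  // how many times the barrier has been released
        int        n_threads;
    };
    
    static void spin_barrier_wait(struct spin_barrier * b) {
        const int pass = atomic_load_explicit(&b->n_passed, memory_order_relaxed);
    
        if (atomic_fetch_add_explicit(&b->n_arrived, 1, memory_order_seq_cst) == b->n_threads - 1) {
            // last thread to arrive: reset the counter and release everyone
            atomic_store_explicit(&b->n_arrived, 0, memory_order_relaxed);
            atomic_fetch_add_explicit(&b->n_passed, 1, memory_order_release);
        } else {
            // relaxed polling until the barrier is released
            while (atomic_load_explicit(&b->n_passed, memory_order_relaxed) == pass) {
                // spin
            }
            atomic_thread_fence(memory_order_acquire); // pairs with the release above
        }
    }
    ```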
    
    * threadpool: make polling the default to match openmp behavior
    
    All command line args now allow for setting poll to 0 (false).
    
    * threadpool: do not wakeup threads in already paused threadpool
    
    * fix potential race condition in check_for_work
    
    * threadpool: do not create two threadpools if their params are identical
    
    * threadpool: reduce pause/resume/wakeup overhead in common cases
    
    We now start the threadpool in a paused state only if we have two.
    Resume is now implicit (i.e. on new work), which allows for reduced locking and context-switch overhead.
    
    * threadpool: add support for hybrid polling
    
    poll params (--poll, ...) now specify the "polling level", i.e. how aggressively we poll before waiting on the cond. var.
    poll=0 means no polling, 1 means poll for 128K rounds then wait, 2 for 256K rounds, ...
    
    The default value of 50 (i.e. 50x128K rounds) seems like a decent default across modern platforms.
    We can tune this further as things evolve.
    
    * threadpool: reduce the number of barriers required
    
    New work is now indicated with an atomic counter that is incremented for
    each new graph that needs to be computed.
    This removes the need for an extra barrier to clear "new_work" and
    removes the special case for trivial graphs.
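
    (A simplified sketch of how a worker waits for new work under this scheme, combining the graph counter with the hybrid polling described above; the names and the 128K constant are illustrative, not the actual ggml code:)
    
    ```c
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    
    struct tp_sketch {
        atomic_int      n_graphs; // incremented by the main thread for each new graph
        atomic_bool     stop;
        pthread_mutex_t mutex;
        pthread_cond_t  cond;
    };
    
    // Returns true if there is a new graph to compute, false if the pool is stopping.
    static bool worker_wait_for_graph(struct tp_sketch * tp, int n_done, int poll_level) {
        const uint64_t n_rounds = (uint64_t) poll_level * 128 * 1024;
    
        // polling phase: relaxed loads only, no syscalls
        for (uint64_t i = 0; i < n_rounds; i++) {
            if (atomic_load_explicit(&tp->n_graphs, memory_order_relaxed) > n_done) {
                return true;
            }
            if (atomic_load_explicit(&tp->stop, memory_order_relaxed)) {
                return false;
            }
        }
    
        // waiting phase: block on the condition variable until the counter advances
        pthread_mutex_lock(&tp->mutex);
        while (atomic_load_explicit(&tp->n_graphs, memory_order_relaxed) <= n_done &&
               !atomic_load_explicit(&tp->stop, memory_order_relaxed)) {
            pthread_cond_wait(&tp->cond, &tp->mutex);
        }
        pthread_mutex_unlock(&tp->mutex);
    
        return !atomic_load_explicit(&tp->stop, memory_order_relaxed);
    }
    ```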
    
    * threadpool: remove special-casing for disposable threadpools
    
    With the efficient hybrid polling there is no need to make disposable pools any different.
    This simplifies the overall logic and reduces branching.
    
    Include n_threads in debug print for disposable threadpool.
    
    Declare pause and stop flags as atomic_bool
    This doesn't actually generate any memory barriers and simply informs
    the thread sanitizer that these flags can be written & read by different
    threads without locking.
    
    * threadpool: do not clear barrier counters between graphs computes (fixes race with small graphs)
    
    This fixes the race condition with very small graphs where the main thread happens to
    start a new graph while the workers are just about to exit from barriers.
    
    * threadpool: use relaxed order for chunk sync
    
    A full memory barrier is overkill for this since each thread works on a different chunk.
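
    (For illustration, the chunk handout amounts to something like the following; names are hypothetical:)
    
    ```c
    #include <stdatomic.h>
    
    // Each worker claims the next chunk index with a relaxed fetch_add; since no
    // thread reads another thread's chunk data, stronger ordering is not needed.
    static int next_chunk(atomic_int * current_chunk) {
        return atomic_fetch_add_explicit(current_chunk, 1, memory_order_relaxed);
    }
    
    // usage inside a worker:
    //   for (int ic = next_chunk(&cc); ic < n_chunks; ic = next_chunk(&cc)) {
    //       process_chunk(ic);
    //   }
    ```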
    
    * threadpool: remove abort_callback from threadpool state
    
    * threadpool: better naming for thread/cpumask related functions
    
    * threadpool: consistent use of int type for n_threads params
    
    * threadpool: add support for ggml_threadpool_params_default/init
    
    Also removes the need for the explicit mask_specified param.
    An all-zero cpumask means use the default (usually inherited) CPU affinity mask.
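
    (Based on the API names in this commit log, usage ends up looking roughly like the sketch below; exact signatures may differ, check ggml.h and llama.h:)
    
    ```c
    #include "ggml.h"
    #include "llama.h"
    
    // Create a threadpool with default params and attach it to a llama context.
    // An all-zero cpumask (the default) means the threads inherit the process
    // CPU affinity mask.
    void run_with_threadpool(struct llama_context * ctx, int n_threads) {
        struct ggml_threadpool_params params = ggml_threadpool_params_default(n_threads);
    
        struct ggml_threadpool * tp = ggml_threadpool_new(&params);
        llama_attach_threadpool(ctx, tp, /*threadpool_batch =*/ NULL);
    
        // ... llama_decode(...) calls here ...
    
        llama_detach_threadpool(ctx);
        ggml_threadpool_free(tp);
    }
    ```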
    
    * threadpool: move typedef into ggml.h
    
    * threadpool: fix apply_priority() function name
    
    * threadpool: fix swift wrapper errors due to n_threads int type cleanup
    
    * threadpool: enable --cpu-mask and other threadpool related options only if threadpool is enabled
    
    * threadpool: replace checks for compute_thread ret code with proper status check
    
    * threadpool: simplify threadpool init logic and fix main thread affinity application
    
    Most of the init code is now exactly the same between threadpool and openmp.
    
    * threadpool: update threadpool resume/pause function names
    
    * threadpool: enable openmp by default for now
    
    * threadpool: don't forget to free workers state when omp is enabled
    
    * threadpool: avoid updating process priority on the platforms that do not require it
    
    On Windows we need to change the overall process priority class in order to set thread priorities,
    but on Linux, Mac, etc. we do not need to touch the overall process settings.
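
    (A hedged sketch of the platform difference; the actual ggml helpers may differ:)
    
    ```c
    #if defined(_WIN32)
    #include <windows.h>
    #else
    #include <sys/resource.h>
    #endif
    
    // App-level step: only Windows needs the process-wide priority class raised
    // so that per-thread priorities above "normal" take effect.
    static void app_raise_process_priority(void) {
    #if defined(_WIN32)
        SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS);
    #endif
        // Linux/macOS: nothing to do at the process level.
    }
    
    // Per-thread step: each thread adjusts only its own priority.
    static void thread_raise_priority(void) {
    #if defined(_WIN32)
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL);
    #else
        // On Linux, setpriority with who == 0 affects the calling thread;
        // negative nice values may require elevated privileges.
        setpriority(PRIO_PROCESS, 0, -10);
    #endif
    }
    ```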
    
    * threadpool: update calling thread prio and affinity only at start/resume
    
    This avoids extra syscalls for each graph_compute()
    
    * llama-bench: turn threadpool params into vectors, add output headers, etc
    
    * llama-bench: add support for cool off between tests --delay
    
    This helps with long-running tests on platforms that are thermally limited (phones, laptops, etc).
    --delay (disabled by default) introduces a sleep of N seconds before starting each test.
    
    * threadpool: move process priority setting into the apps (bench and cli)
    
    This avoids changing the overall process priority on Windows for the apps
    that use ggml/llama.cpp directly.
    
    * threadpool: move all pause/resume logic into ggml
    
    * threadpool: further API cleanup and prep for future refactoring
    
    All threadpool related functions and structs use ggml_threadpool prefix.
    
    * threadpool: minor indent fixes
    
    * threadpool: improve setpriority error message
    
    * Update examples/llama-bench/llama-bench.cpp
    
    Co-authored-by: slaren <[email protected]>
    
    * threadpool: fix indent in set_threadpool call
    
    * use int32_t for n_thread type in public llama.cpp API
    
    * threadpool: use _new and _free instead of _create and _release
    
    * fix two more public APIs to use int32_t for n_threads
    
    * build: set _GNU_SOURCE for Android
    
    ---------
    
    Co-authored-by: Max Krasnyansky <[email protected]>
    Co-authored-by: fmz <[email protected]>
    Co-authored-by: Max Krasnyansky <[email protected]>
    Co-authored-by: slaren <[email protected]>
    5 people authored and ggerganov committed Sep 8, 2024
    Commit: 49dbb39
  2. llama : support RWKV v6 models (llama/8980)

    * convert_hf_to_gguf: Add support for RWKV v6
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Add RWKV tokenization
    
    * Fix build
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Do not use special tokens when matching in RWKV tokenizer
    
    * Fix model loading
    
    * Add (broken) placeholder graph builder for RWKV
    
    * Add workaround for kv cache
    
    * Add logits conversion to rwkv5
    
    * Add rwkv5 layer norms
    
    * Add time mix KVRG & correct merge mistake
    
    * Add remaining time mix parameters
    
    * Add time mix output loading
    
    * Add placeholder llm_build_time_mix
    
    * Fix build
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Load more tensors for rwkv v6
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Fix rwkv tokenizer
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * ggml: Add unary operator Exp
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * RWKV v6 graph building
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Add ``rescale_every_n_layers`` parameter
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Add ``wkv.head_size`` key for RWKV
    
    so it doesn't reuse Mamba ssm parameters
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Fix offloading layers to CUDA
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Fix parallel inferencing for RWKV
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Remove trailing whitespaces
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * build_rwkv: Avoid using inplace operations
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * convert_hf_to_gguf: rwkv: Avoid using ``eval``
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * convert_hf_to_gguf: rwkv tokenizer: Don't escape sequences manually
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Update convert_hf_to_gguf.py
    
    Co-authored-by: compilade <[email protected]>
    
    * ggml: Add backward computation for unary op ``exp``
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Update convert_hf_to_gguf.py
    
    Co-authored-by: compilade <[email protected]>
    
    * Update convert_hf_to_gguf.py
    
    Co-authored-by: compilade <[email protected]>
    
    * Use MODEL_ARCH.RWKV6 instead of MODEL_ARCH.RWKV
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * build_rwkv6: Simplify graph
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Detect model.type
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Fix tensor loading for 7B/14B models
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Fix group_norm assertion failure with Metal
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Clean up
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Add quantization tensor exclusion
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Use the new advanced batch splits
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * Update src/llama.cpp
    
    Co-authored-by: compilade <[email protected]>
    
    * llama: rwkv6: Use ``ggml_norm`` instead of ``ggml_group_norm``
    
    Co-authored-by: compilade <[email protected]>
    
    * llama: rwkv6: Apply code style and misc changes
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * converter: Use class name ``Rwkv6Model``
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Make use of key ``feed_forward_length``
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Add kv ``time_mix_extra_dim`` and ``time_decay_extra_dim``
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * converter: Match ``new_name`` instead of ``name`` for float32 explicit tensors
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Keep ``time_mix_w1/w2`` as F32
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Remove unused nodes
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Apply code format changes
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * llama: rwkv6: Add lora for some supported tensors
    
    Currently att.key/receptance/value/gate/output, ffn.receptance/key/value, as well as head.weight
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    * rwkv : speed-up tokenization using trie
    
    * minor : style + indentation
    
    * llama: rwkv6: Avoid division by zero
    
    Co-authored-by: compilade <[email protected]>
    
    * ggml: rwkv_wkv: Avoid copying the state
    
    Signed-off-by: Molly Sophia <[email protected]>
    
    ---------
    
    Signed-off-by: Molly Sophia <[email protected]>
    Co-authored-by: Layl Bongers <[email protected]>
    Co-authored-by: compilade <[email protected]>
    Co-authored-by: Georgi Gerganov <[email protected]>
    4 people committed Sep 8, 2024
    Commit: 345f566
  3. ggml : add pthread includes on FreeBSD (llama/9258)

    yuri@FreeBSD authored and ggerganov committed Sep 8, 2024
    Commit: a2bcd99
  4. Fix DMMV dequantization (llama/9279)

    Fixed DMMV dequantization for ncols == GGML_SYCL_DMMV_X
    OuadiElfarouki authored and ggerganov committed Sep 8, 2024
    Commit: 30fd902
  5. ggml : AVX2 support for Q4_0_8_8 (llama/8713)

    * Add AVX2 based implementations for quantize_q8_0_4x8, ggml_gemv_q4_0_8x8_q8_0 and ggml_gemm_q4_0_8x8_q8_0 functions
    
    * Update code to fix issues occurring with MSVC when the number of elements to be processed is not a multiple of 16
    
    * Update comments and indentation
    
    * Make updates to reduce number of load instructions
    Srihari-mcw authored and ggerganov committed Sep 8, 2024
    Commit: 8e6b2c5
  6. Commit 3ac8a80
  7. ggml-quants : ternary packing for TriLMs and BitNet b1.58 (llama/8151)

    * ggml-quants : 1.625 bpw ternary packing for BitNet 1.58b
    
    * ggml-quants : faster 1.625 bpw AVX2 vec_dot
    
    Not using a lookup table anymore makes it match q4_0 speed.
    
    * gguf-py : fix formatting
    
    * llama : remove spaces on empty line
    
    * ggml-quants : subtract 1 when back in epi8
    
    This makes the 1.625 bpw type go faster than q4_0. Still not the fastest.
    
    * ggml-quants : Q2_2 now faster than Q4_K with AVX2
    
    * ggml-quants : cleanup Q1_3 code formatting
    
    * ggml-quants : ARM NEON vec_dot for q2_2 and q1_3
    
    * ggml-quants : use ceiling division when quantizing q1_3
    
    * convert-hf : simplify BitNet pre-quantization
    
    This still results in the exact same tensor weights and scales,
    but it reveals some weirdness in the current algorithm.
    
    * convert-hf : allow converting the weird BitNet 1.3B
    
    Its FFN size is 5460 which is not convenient.
    The offending tensors are kept in F16,
    which makes the final model 5.01 bpw.
    
    * bitnet : replace 1.58b with b1.58, as in the paper
    
    * ggml-quants : fix build failure on Windows
    
    * ggml-quants : attempt to fix Arm 32-bit support
    
    * ggml : add some informative comments in q1_3 vec_dot
    
    * ggml : add TQ1_0 and TQ2_0 ternary quantization types
    
    * ggml : even faster TQ2_0
    
    * ggml : also faster TQ1_0
    
    Same optimization as for TQ2_0 by offsetting the sum instead of the weights.
    This makes TQ1_0 almost as fast as Q8_0 on AVX2.
    
    * ggml : fix build issues in certain environments
    
    * ggml : add NEON vec_dot implementation for TQ1_0 and TQ2_0
    
    * ggml : avoid directly using vmlal_high_s8, for 32-bit ARM compat
    
    The compiler seems smart enough to use the same instruction
    even when using vget_high_s8 instead.
    
    * ggml : remove q1_3 and q2_2
    
    No more 1.625 bpw and 2.000 bpw,
    now instead using 1.6875 bpw and 2.0625 bpw
    with TQ1_0 and TQ2_0, respectively.
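
    (For reference, those bits-per-weight figures fall out as follows, assuming 256-element blocks each carrying one f16 scale; this is a back-of-the-envelope sketch, the exact block layouts are defined in ggml:)
    
    - TQ2_0: 2 bits per ternary value -> 256 * 2 / 8 = 64 bytes, + 2 bytes scale = 66 bytes, i.e. 66 * 8 / 256 = 2.0625 bpw
    - TQ1_0: 5 ternary values per byte (3^5 = 243 fits in 8 bits) -> 240 / 5 = 48 bytes, plus 16 values at 4 per byte = 4 bytes, + 2 bytes scale = 54 bytes, i.e. 54 * 8 / 256 = 1.6875 bpw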
    
    * llama : remove the separate scale tensors of BitNet b1.58
    
    They won't be needed, since the remaining ternary quant types have
    built-in scales.
    
    * ggml-quants : rename fields of TQ1_0 and TQ2_0 structs for consistency
    
    * ggml-quants : allow using vdotq_s32 in TQ2_0 vec_dot
    
    Not yet tested on hardware which supports it,
    might not work or might not even compile. But also it might.
    It should make the performance better on recent ARM CPUs.
    
    * ggml-quants : remove comment about possible format change of TQ2_0
    
    Making it slightly more convenient for AVX512
    but less convenient for everything else is not worth the trouble.
    
    * gguf-py : Numpy (de)quantization for TQ1_0 and TQ2_0
    
    * ggml-quants : use roundf instead of nearest_int for TQ1_0 and TQ2_0
    
    This does not change anything for ternary models,
    since their values should never end up being in halfway cases anyway.
    
    * convert : allow direct conversion to TQ1_0 and TQ2_0
    
    The token embeddings and output tensors are kept in F16
    to allow quantizing them to Q4_K and Q6_K with llama-quantize.
    
    * llama : handle fallback for TQ1_0 and TQ2_0 with Q4_0
    
    Q4_0 is not completely symmetric (so not lossless for ternary models),
    but it should be good enough.
    
    * ggml-quants : allow using ARM dot product instructions for TQ1_0
    
    * ggml-quants : deduplicate TQ1_0 and TQ2_0 __ARM_FEATURE_DOTPROD support
    
    * ggml : remove unused ggml_mul special case
    
    It would otherwise conflict with the more general
    optimization coming with Mamba-2.
    
    * ggml : handle TQ1_0 and TQ2_0 in dequantization-based operators
    
    * test-backend-ops : add TQ1_0 and TQ2_0 comments for later
    
    Not adding them uncommented yet, because some backends like SYCL and Metal
    do not properly handle unknown types in supports_op for GGML_OP_MUL_MAT
    (and Metal also doesn't handle it for GGML_OP_GET_ROWS).
    Support for TQ1_0 and TQ2_0 on backends other than CPU
    will be added in follow-up pull requests.
    compilade authored and ggerganov committed Sep 8, 2024
    Commit: ea921fb
  8. Improve Vulkan shader build system (llama/9239)

    * Improve Vulkan shader build system
    
    - Add dependency to vulkan-shaders-gen to rebuild shaders when changing the shader compilation utility.
    - Add option to generate debug info for Vulkan shaders to provide shader source to Vulkan shader profiling tools
    
    * remove unnecessary self-dependency
    mtavenrath authored and ggerganov committed Sep 8, 2024
    Commit: 4c58012
  9. ggml : fix missing cpu_set_t on emscripten (llama/9336)

    * ggml : fix missing cpu_set_t on emscripten
    
    * better version
    
    * bring back android part
    ngxson authored and ggerganov committed Sep 8, 2024
    Commit: 218c953
  10. Commit d64663e
  11. Commit d9950b2
  12. sync : llama.cpp

    ggerganov committed Sep 8, 2024
    Commit: d1fd200
  13. Commit e08f270