Releases: TheBlackPlague/StockDory
Starfish (0.1)
The StockDory Authors proudly present the latest version of StockDory, codenamed Starfish.
StockDory is the C++ rewrite of the renowned StockNemo. It has improved at a similar pace: since the project started, it has gained considerable strength. Notable improvements include a larger neural network architecture, faster neural network inference, improved transposition table usage, accurate static exchange evaluation, better time management, and numerous assembly-level optimizations.
ELO Gained
Book: UHO_XXL_+0.90_+1.19
```
ELO   | 218.64 +- 4.45 (95%)
CONF  | 10.0+0.10s Threads=1 Hash=16MB
GAMES | N: 20008 W: 12702 L: 1546 D: 5760

ELO   | 203.54 +- 4.12 (95%)
CONF  | 60.0+0.60s Threads=1 Hash=256MB
GAMES | N: 20000 W: 11830 L: 1292 D: 6878
```
Rating Lists
CCRL:
- Blitz: 3384 (50th), tested version: v3
Neural Architecture
```
      IN          ACCUMULATOR                            HIDDEN                            OUT
 ______________     _______     ______________________________________________________________     _____
| WHITE: (768) | -> | (384) | -> | ClippedReLU(0, 1) -> (384) \                                |  |     |
|              |    |       |    |                             \                               |  |     |
|              |    |       |    |  CONCATENATE(ColorToMove): (768)                            | -> (1) |
|              |    |       |    |                             /                               |  |     |
| BLACK: (768) | -> | (384) | -> | ClippedReLU(0, 1) -> (384) /                                |  |     |
 --------------      -------     --------------------------------------------------------------    -----
```
Codename: Aurora
ID: 334ab2818f
Changelog
- FIX: Use Transposition Table Stored Move in Move Ordering by @TheBlackPlague in #4
- IMP: Better Static Evaluation Bounds by @TheBlackPlague in #2
- IMP: Prefetching Transposition Table Entries by @TheBlackPlague in #3
- IMP: Alter PV and Transposition Move with respect to Alpha Changes by @TheBlackPlague in #5
- DOC: README Addition by @TheBlackPlague in #7
- IMP: Dynamic NMP Formula based on Static Evaluation by @TheBlackPlague in #9
- IMP: Move Iteration respecting History Bonus by @TheBlackPlague in #11
- NEW: Neural Network JSON to NNUE Converter by @TheBlackPlague in #12
- IMP: Support Aurora Architecture in Neural Network Converter by @TheBlackPlague in #13
- IMP: Allocate more thinking time for Search by @TheBlackPlague in #14
- NN: Aurora-855bc8d7cb by @TheBlackPlague in #15
- IMP: Increase QSearch Max Depth by @TheBlackPlague in #17
- IMP: Improve Time Allocation and Optimal Usage by @TheBlackPlague in #18
- IMP: Improve Transposition Table Entry Alignment by @TheBlackPlague in #19
- IMP: Compile same architecture networks from Testing Framework by @TheBlackPlague in #21
- IMP: Better CMake CPM Installation Logging by @TheBlackPlague in #22
- NN: Aurora-20581290fd by @unixwizard in #23
- IMP: LMR - Reduce Less if Checking by @TheBlackPlague in #24
- NN: Aurora-24de239dff by @unixwizard in #26
- IMP: Improve Time Checking by @TheBlackPlague in #27
- IMP: Implement Limitation System & Node Limits by @TheBlackPlague in #29
- IMP: Enable FLTO & PGO by @TheBlackPlague in #30
- IMP: Remove QSearch Limit & Increase Repetition History Storage by @TheBlackPlague in #31
- IMP: Add a King SEE Value by @TheBlackPlague in #32
- IMP: Accurate SEE (Static Exchange Evaluation) by @TheBlackPlague in #33
- NN: Aurora-5b3145691f by @TheBlackPlague in #34
- IMP: SEE (Static Exchange Evaluation) Policy by @TheBlackPlague in #35
- NN: Aurora-334ab2818f by @unixwizard in #36
- IMP: Switch to BS Thread Pool Library by @TheBlackPlague in #37
- NEW: Automated Build Continuous Integration by @TheBlackPlague in #1
- IMP: Simplify Transposition Table Lookup by @TheBlackPlague in #38
New Contributors
- @unixwizard made their first contribution at #23
Full Changelog: https://github.com/TheBlackPlague/StockDory/commits/0.1
Download
The executable binaries provided in the assets are named according to operating system and x86-64 micro-architecture level (as agreed upon by Intel, AMD, and other CPU authorities). Depending on your CPU, some of these binaries will perform better and others will not work at all, so it is essential that you download the right one.
List of CPU instruction sets that each micro-architecture level can use:
- Default: CMPXCHG16B, LAHF/SAHF, POPCNT, SSE3, SSE4.1, SSE4.2, SSSE3
- V2: AVX, CMPXCHG16B, LAHF/SAHF, POPCNT, SSE3, SSE4.1, SSE4.2, SSSE3
- V3: AVX2, BMI1, BMI2, F16C, FMA, LZCNT, MOVBE, AVX, CMPXCHG16B, LAHF/SAHF, POPCNT, SSE3, SSE4.1, SSE4.2, SSSE3
- V4: AVX512F, AVX512BW, AVX512CD, AVX512DQ, AVX512VL, AVX2, BMI1, BMI2, F16C, FMA, LZCNT, MOVBE, AVX, CMPXCHG16B, LAHF/SAHF, POPCNT, SSE3, SSE4.1, SSE4.2, SSSE3
With access to the most instruction sets, the V4 binaries are the most performant. However, not all CPUs support those instruction sets, and on CPUs that lack them the binary simply won't run. Match your CPU to the highest micro-architecture level it supports to get the most performant version of the engine.
For Apple's custom SoCs, the micro-architecture levels don't apply. Instead:
- M1: Apple's M1 Chip
- M2: Apple's M2 Chip
Support for Apple's custom SoCs is currently limited.