Add count tokens function #17
base: master
Conversation
I don't actually want to encode into tokens for my use case; I just want to quickly count tokens to check that my request won't exceed the limit. This should be faster since we don't initialize the memory for the output array.

```
const crypto = require('crypto');
const { encode, decode, countTokens } = require('gpt-3-encoder');

// Generate a random hex string of a given byte length (2 hex chars per byte)
function generateRandomString(length) {
  return crypto.randomBytes(length).toString('hex');
}

let str = 'This is an example sentence to try encoding out on!';

let encoded = encode(str);
console.log('Encoded this string looks like: ', encoded);

console.log('We can look at each token and what it represents');
let tokencount = 0;
for (let token of encoded) {
  tokencount++;
  console.log({ token, string: decode([token]) });
}
console.log('There are n tokens: ', tokencount);

let decoded = decode(encoded);
console.log('We can decode it back into:\n', decoded);

// Benchmark encode() vs countTokens() on a random 20,000-character string
str = generateRandomString(10000);

// First encode warms up the BPE cache so the later timings are comparable
console.time('fencode');
encoded = encode(str);
console.log('First encode to cache string n stuff in mem');
console.timeEnd('fencode');
console.log(`Original string length: ${str.length}`);

// Benchmark the encode function
console.time('encode');
encoded = encode(str);
console.log(`Encoded string length: ${encoded.length}`);
console.timeEnd('encode');

// Benchmark the countTokens function
console.time('countTokens');
let tokenCount = countTokens(str);
console.log(`Number of tokens: ${tokenCount}`);
console.timeEnd('countTokens');

console.log(`Original string length: ${str.length}`);
console.log(`Encoded string length: ${encoded.length}`);
console.log(`Number of tokens: ${tokenCount}`);
```

```
We can decode it back into:
 This is an example sentence to try encoding out on!
First encode to cache string n stuff in mem
fencode: 163.57ms
Original string length: 20000
Encoded string length: 11993
encode: 124.265ms
Number of tokens: 11993
countTokens: 29.2ms
Original string length: 20000
Encoded string length: 11993
Number of tokens: 11993
```
Co-authored-by: Andrew Healey <[email protected]>
Co-authored-by: Kier <[email protected]>
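For reference, the idea behind `countTokens` is to do the same pre-tokenization and BPE pass as `encode`, but only increment a counter instead of building the output array. Here's a minimal sketch; `pat`, `encodeStr`, `byteEncoder`, and `bpe` stand in for the encoder's internal helpers and are assumptions for illustration, not the actual patch:

```
// Minimal sketch of countTokens: same regex walk and BPE merge as encode(),
// but no output array is ever allocated.
// NOTE: `pat`, `encodeStr`, `byteEncoder`, and `bpe` are stand-ins for the
// encoder's internals (assumed for illustration).
function countTokens(text) {
  let count = 0;
  for (const match of text.matchAll(pat)) {
    // Convert the chunk to its byte-level string form, run BPE, and count
    // how many merged pieces it produced instead of mapping them to ids.
    const chunk = encodeStr(match[0]).map((b) => byteEncoder[b]).join('');
    count += bpe(chunk).split(' ').length;
  }
  return count;
}
```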
I applied this in my fork: https://www.npmjs.com/package/@nick.heiner/gpt-3-encoder.
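For anyone who just needs the limit check, usage against the fork would look something like this (assuming the fork exposes the same `countTokens` export; the 4096 limit is illustrative, not something the package defines):

```
const { countTokens } = require('@nick.heiner/gpt-3-encoder');

// Illustrative context limit; use the real limit for your target model
const MAX_TOKENS = 4096;

function fitsInContext(prompt) {
  // Counts tokens without allocating the encoded output array
  return countTokens(prompt) <= MAX_TOKENS;
}

console.log(fitsInContext('This is an example sentence to try encoding out on!')); // true
```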
Merge back the one other change, and follow Nick in moving to version 1.2 for the feature addition.
package.json
"name": "gpt-3-encoder", | ||
"version": "1.1.3", | ||
"name": "@nick.heiner/gpt-3-encoder", | ||
"version": "1.2.0", |
Could we revert the diff on the package.json here?
We can probably do it more incrementally, but if you're interested in biting the bullet, I made a few more improvements. Tests pass in my fork, everything seems to work well, and I even got browserify to bundle it all. I did this npm revert to 1.2.0-rc0.
README.md
```
@@ -1,3 +1,7 @@
+# This is a fork of https://github.com/latitudegames/GPT-3-Encoder. I made this fork so I could apply some PRs that had been sent to the upstream repo.
```
Another spot where we could revert this change to clean it up for merging.
Yep, we don't want to pull in the changes from my fork.
Is this package abandoned? If so, I'll move to your fork, @syonfox.