Development #12

Merged
merged 89 commits on Sep 25, 2024

Commits
f2fca2b
wip: putting together the scaffolding
b08x Jul 5, 2024
9f8153a
demo working
b08x Jul 5, 2024
f76e2d6
wip: examples
b08x Jul 5, 2024
8636655
added examples command
b08x Jul 6, 2024
1dfca70
wip
b08x Jul 7, 2024
1cda980
setting interface output colors here results in ascii chars sent to r…
b08x Jul 7, 2024
e26a3ff
added gems
b08x Jul 7, 2024
ac9939f
added example
b08x Jul 8, 2024
9a52ed2
added provider placeholder
b08x Jul 8, 2024
4b92754
wip: flowise api
b08x Jul 8, 2024
b8ac68e
added helper module from monadic-chat, wip: flowise api working
b08x Jul 8, 2024
fcb1478
added setup instructions for python libs
b08x Jul 11, 2024
dd90e4c
wip ToT example, workflow architecture
b08x Jul 11, 2024
da07c8c
added original example
b08x Jul 11, 2024
64b593a
moved cartridges to nano-bot registry
b08x Jul 11, 2024
cac3135
wip
b08x Jul 11, 2024
1209b36
added singleton class for spacy tasks
b08x Jul 11, 2024
0752561
wip ERROR -- : No valid words found in the provided documents
b08x Jul 11, 2024
baa9def
Moved the require statement for text_processing_workflow to after oth…
b08x Jul 11, 2024
8d3e423
Changed logging level from DEBUG to INFO. Commented out most of the b…
b08x Jul 11, 2024
5bd7e34
Added more detailed logging during document processing. Modified the …
b08x Jul 11, 2024
5165cbf
Removed unused imports and dependencies. Reorganized require statemen…
b08x Jul 11, 2024
ee0a4db
- Modularized topic modeling functionality
b08x Jul 11, 2024
3250ccc
- Extracted train_model and infer_topics methods
b08x Jul 12, 2024
378d8e1
adding tty-box functions
b08x Jul 12, 2024
f4aec89
moved workflows, renamed components
b08x Jul 15, 2024
174fc77
future utils
b08x Jul 15, 2024
ca378e4
wip: error handler
b08x Jul 15, 2024
4e4c74d
wip: ui
b08x Jul 15, 2024
04972bf
added error handling cartridge
b08x Jul 16, 2024
8f34ce3
separated cli module from main
b08x Jul 16, 2024
1a958d3
a nice and accurate exceptionhandler :)
b08x Jul 16, 2024
2f20bb4
snapshot
b08x Jul 16, 2024
6cbb089
wip: almost back together
b08x Jul 16, 2024
d53440c
working in ohm
b08x Jul 16, 2024
69137b1
adjusted to output exception reports in markdown
b08x Jul 16, 2024
af469b1
1. ExceptionAgent improvements:
b08x Jul 16, 2024
b413254
set messages to print
b08x Jul 16, 2024
e10fa93
added cartridges
b08x Jul 16, 2024
797ed8f
snapshot: working
b08x Jul 16, 2024
bd483d3
wip: created task to display results
b08x Jul 17, 2024
aa42d69
removed redundant includes
b08x Jul 17, 2024
735524e
assets
b08x Jul 17, 2024
4899d7e
added cartridges
b08x Jul 19, 2024
eb164fd
set text segmentation as its own task
b08x Jul 20, 2024
e6d0f29
added Fileloader task, added tokenizer, adjusted ohm models
b08x Jul 20, 2024
0d9c471
wip: working, set batch_size or else large datasets overflow mem
b08x Jul 20, 2024
93a06a0
Refactor topic modeling workflow and improve text processing pipeline
b08x Jul 27, 2024
8272a37
added treetop grammar, working on clean interrupt
b08x Jul 27, 2024
9ab80b9
wip: grammar parser
b08x Jul 27, 2024
0cabc9e
Refactor text processing workflow and improve YAML front matter parsing
b08x Jul 27, 2024
7e72f4a
added sublayer gem
b08x Jul 31, 2024
4ead316
wip: text processing ohm models
b08x Jul 31, 2024
3fef1c7
renamed textfile to Sourcefile
b08x Jul 31, 2024
3cc4425
wip: create new ohm objects
b08x Jul 31, 2024
105ca52
Key Components and Changes:
b08x Jul 31, 2024
a92b73c
Merge branch 'feature/topicmodel_trainer' into development
b08x Jul 31, 2024
770e734
readme updates
b08x Jul 31, 2024
162be66
last for the day
b08x Jul 31, 2024
658c7a5
wip: full wf run
b08x Aug 1, 2024
dadba1e
one nice thing, each error has been different.
b08x Aug 1, 2024
7c9f01e
edits
b08x Aug 1, 2024
398431e
this one just to save, not working
b08x Aug 1, 2024
afc5b39
check
b08x Aug 1, 2024
5d6f9b4
edits
b08x Aug 1, 2024
9662387
strip and refactor time!
b08x Aug 3, 2024
73059f4
Update README.md
b08x Aug 3, 2024
46172e7
adding exception reports for posterity
b08x Aug 6, 2024
74ad038
leaving examples
b08x Aug 6, 2024
6d02aff
Merge branch 'development' of https://github.com/b08x/flowbots into d…
b08x Aug 6, 2024
41e3886
this works at least
b08x Aug 6, 2024
31c4bfb
update readme
b08x Aug 6, 2024
cdb9758
set preprocess task to get the current_textfile_id in the workflow
b08x Aug 8, 2024
e03904e
add engtagger task wip: text compressor
b08x Aug 9, 2024
da110bf
added rdocs
b08x Aug 9, 2024
dc8377f
documentation
b08x Aug 9, 2024
7614aad
extras
b08x Aug 10, 2024
1173a8e
fix: linear logic for detecting file type
b08x Aug 22, 2024
27b7ee1
wip
b08x Aug 23, 2024
e5085a2
Refactor tasks and implement uniform input retrieval (Epics 1 & 2)
b08x Aug 23, 2024
05ca137
added lemmas ohm model
b08x Aug 23, 2024
c228855
ui improvements
b08x Aug 23, 2024
bf55826
UI improvements
b08x Aug 23, 2024
fe27538
cartridge updates
b08x Aug 23, 2024
a266a7e
ui improvements
b08x Aug 23, 2024
456f219
adjusted readme
b08x Aug 23, 2024
66143b7
Merge branch 'topicmodeler' into development
b08x Aug 23, 2024
9de05ab
submodule update
b08x Sep 25, 2024
ef9c04f
Merge branch 'main' into development
b08x Sep 25, 2024
102 changes: 102 additions & 0 deletions Rakefile
@@ -189,3 +189,105 @@ Rake::RDocTask.new do |rdoc|
rdoc.rdoc_files.include "lib/workflows/topic_model_trainer_workflow.rb"
rdoc.rdoc_files.include "lib/workflows/topic_model_trainer_workflowtest.rb"
end

desc "Build all images"
task "build-all" do
ALL_IMAGES.each do |image|
Rake::Task["build/#{image}"].invoke
end
end

desc "Tag all images"
task "tag-all" do
ALL_IMAGES.each do |image|
Rake::Task["tag/#{image}"].invoke
end
end

desc "Push all images"
task "push-all" do
ALL_IMAGES.each do |image|
Rake::Task["push/#{image}"].invoke
end
end
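
The three aggregate tasks above assume an ALL_IMAGES constant and per-image build/<image>, tag/<image>, and push/<image> tasks defined earlier in the Rakefile, outside this hunk. A minimal sketch of what those definitions could look like follows; the image names, registry namespace, tag, and docker commands are illustrative assumptions, not values taken from the repository.

```ruby
# Illustrative sketch only: ALL_IMAGES and the per-image tasks are assumed to
# live earlier in the Rakefile. The image names, registry namespace, and tag
# below are placeholders.
ALL_IMAGES = %w[flowbots nlp].freeze

ALL_IMAGES.each do |image|
  desc "Build the #{image} image"
  task "build/#{image}" do
    sh "docker build -t b08x/#{image}:latest -f docker/#{image}/Dockerfile ."
  end

  desc "Tag the #{image} image"
  task "tag/#{image}" do
    sh "docker tag b08x/#{image}:latest b08x/#{image}:0.1.0"
  end

  desc "Push the #{image} image"
  task "push/#{image}" do
    sh "docker push b08x/#{image}:latest"
  end
end
```

With per-image tasks like these in place, `rake build-all` simply invokes each `build/<image>` task in turn.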

Rake::RDocTask.new do |rdoc|
rdoc.title = "flowbots v0.1"
rdoc.rdoc_dir = "#{APP_ROOT}/doc"
rdoc.options += [
"-w",
"2",
"-H",
"-A",
"-f",
"darkfish", # This bit
"-m",
"README.md",
"--visibility",
"nodoc",
"--markup",
"markdown"
]
rdoc.rdoc_files.include "README.md"
rdoc.rdoc_files.include "LICENSE"
rdoc.rdoc_files.include "exe/flowbots"

rdoc.rdoc_files.include "lib/api.rb"
rdoc.rdoc_files.include "lib/cli.rb"
rdoc.rdoc_files.include "lib/flowbots.rb"
rdoc.rdoc_files.include "lib/helper.rb"
rdoc.rdoc_files.include "lib/logging.rb"
rdoc.rdoc_files.include "lib/tasks.rb"
rdoc.rdoc_files.include "lib/ui.rb"
rdoc.rdoc_files.include "lib/workflows.rb"

rdoc.rdoc_files.include "lib/integrations/flowise.rb"

rdoc.rdoc_files.include "lib/processors/GrammarProcessor.rb"
rdoc.rdoc_files.include "lib/processors/NLPProcessor.rb"
rdoc.rdoc_files.include "lib/processors/TextProcessor.rb"
rdoc.rdoc_files.include "lib/processors/TextSegmentProcessor.rb"
rdoc.rdoc_files.include "lib/processors/TextTaggerProcessor.rb"
rdoc.rdoc_files.include "lib/processors/TextTokenizeProcessor.rb"
rdoc.rdoc_files.include "lib/processors/TopicModelProcessor.rb"

rdoc.rdoc_files.include "lib/tasks/accumulate_filtered_segments_task.rb"
rdoc.rdoc_files.include "lib/tasks/display_results_task.rb"
rdoc.rdoc_files.include "lib/tasks/file_loader_task.rb"
rdoc.rdoc_files.include "lib/tasks/filter_segments_task.rb"
rdoc.rdoc_files.include "lib/tasks/llm_analysis_task.rb"
rdoc.rdoc_files.include "lib/tasks/load_text_files_task.rb"
rdoc.rdoc_files.include "lib/tasks/nlp_analysis_task.rb"
rdoc.rdoc_files.include "lib/tasks/preprocess_text_file_task.rb"
rdoc.rdoc_files.include "lib/tasks/text_segment_task.rb"
rdoc.rdoc_files.include "lib/tasks/text_tagger_task.rb"
rdoc.rdoc_files.include "lib/tasks/text_tokenize_task.rb"
rdoc.rdoc_files.include "lib/tasks/tokenize_segments_task.rb"
rdoc.rdoc_files.include "lib/tasks/topic_modeling_task.rb"
rdoc.rdoc_files.include "lib/tasks/train_topic_model_task.rb"

rdoc.rdoc_files.include "lib/components/ExceptionAgent.rb"
rdoc.rdoc_files.include "lib/components/ExceptionHandler.rb"
rdoc.rdoc_files.include "lib/components/FileLoader.rb"
rdoc.rdoc_files.include "lib/components/OhmModels.rb"
rdoc.rdoc_files.include "lib/components/WorkflowAgent.rb"
rdoc.rdoc_files.include "lib/components/WorkflowOrchestrator.rb"
rdoc.rdoc_files.include "lib/components/word_salad.rb"

rdoc.rdoc_files.include "lib/grammars/markdown_yaml.rb"

rdoc.rdoc_files.include "lib/utils/command.rb"
rdoc.rdoc_files.include "lib/utils/transcribe.rb"
rdoc.rdoc_files.include "lib/utils/tts.rb"
rdoc.rdoc_files.include "lib/utils/writefile.rb"

rdoc.rdoc_files.include "lib/workflows/text_processing_workflow.rb"
rdoc.rdoc_files.include "lib/workflows/topic_model_trainer_workflow.rb"
rdoc.rdoc_files.include "lib/workflows/topic_model_trainer_workflowtest.rb"
end

Gokdok::Dokker.new do |gd|
gd.remote_path = "" # Put into the root directory
gd.repo_url = "[email protected]:b08x/flowbots.git"
gd.doc_home = "#{APP_ROOT}/doc"
end
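
For context on how these pieces fit together: Rake::RDocTask (the older name for RDoc::Task) defines rdoc, rerdoc, and clobber_rdoc tasks that render the listed files into rdoc_dir, and the gokdok gem adds a task that pushes doc_home to the repository named in repo_url. A hedged sketch of chaining the two is below; the name of the gokdok-provided task is an assumption, so check `rake -T` for the actual name.

```ruby
# Hypothetical convenience task, not part of this diff. Rake::RDocTask defines
# rdoc, rerdoc, and clobber_rdoc; the gokdok-provided task is assumed here to
# be named :gokdok -- verify with `rake -T` before relying on it.
desc "Rebuild the RDoc output and publish it with gokdok"
task "docs:publish" => ["rerdoc", "gokdok"]
```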
16 changes: 8 additions & 8 deletions compressed_prompt_test.rb
@@ -1,20 +1,20 @@

**Evaluation Test:**

Now, to evaluate the effectiveness of the condensed prompt, I will employ various criteria:

- Clarity: Does the optimized version provide a clear and concise explanation of the benefits of using a visual representation? Does it eliminate ambiguity and ensure that the message is easily understandable?

Evaluation: The optimized version excels in clarity. By elaborating on the advantages of the visual representation, it provides a more explicit explanation. The use of phrases like "elucidating the inherent sequential nature" and "providing insights into the flow of data" offers a precise understanding of how visuals aid in comprehension and communication. The message is straightforward and free from ambiguity, making it easily comprehensible to the target audience.

- Completeness: Does the optimized version retain all the crucial information from the original prompt? Does it provide a comprehensive understanding of the topic without omitting relevant details?

Evaluation: The optimized version effectively maintains the completeness of information. It expands on the ideas presented in the original prompt, offering a more detailed yet concise explanation. The addition of phrases like "distinct process stages" and "shared understanding of the system's architecture" ensures that the full scope of the visual representation's utility is conveyed. No relevant details have been omitted, and the optimized version successfully captures the core message of the original prompt.

- Eliciting Desired Responses: Will the optimized version be more effective in eliciting the desired response from the AI assistant or model? Does it provide a clear directive, enabling the AI to generate a more accurate and contextually appropriate response?

Evaluation: The optimized version is designed to elicit a more focused and accurate response from the AI assistant. By providing additional context and clarity, the AI has a better understanding of the specific benefits attributed to the visual representation. The use of phrases like "graphical depiction," "shared understanding," and "communication of complex ideas" offers a clear framework for the AI to generate a response that aligns with the prompt's intent. The optimized version reduces potential ambiguity and enhances the likelihood of receiving a contextually relevant and high-quality response from the AI.

Overall Conclusion:

Based on the evaluation test, the optimized version of the prompt demonstrates superior effectiveness compared to the original. It achieves a higher standard of clarity by providing explicit and detailed explanations while maintaining the completeness of the information conveyed. The optimized version is also tailored to elicit more accurate and contextually appropriate responses from AI assistants or models, ensuring a more productive and efficient interaction. This comprehensive test underscores the value of careful prompt design and analysis, highlighting the potential for enhanced AI performance and output quality.