Several open shells won't synchronize history #41

Open
ehogstrand opened this issue Mar 20, 2019 · 5 comments

Comments

@ehogstrand

When running several instances of PowerShell, z doesn't synchronize history between them.
An improvement would be to read the history before writing, so that the history from other instances is remembered and shared.
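
A minimal sketch of that read-before-write flow, assuming a pipe-delimited `path|rank|timestamp` line layout (the actual `.cdHistory` format, and the function name used here, are assumptions for illustration):

```powershell
# Hypothetical sketch: re-read the on-disk history and merge it before writing,
# so rank updates made by other shells are not silently overwritten.
# Assumes each line looks like "path|rank|timestamp"; the real .cdHistory layout may differ.
$historyFile = Join-Path $HOME '.cdHistory'

function Update-DirectoryRank {
    param([string]$Path)

    # Re-read the file on every update so other shells' changes are picked up.
    $entries = @{}
    if (Test-Path $historyFile) {
        foreach ($line in Get-Content $historyFile) {
            $parts = $line -split '\|'
            if ($parts.Count -eq 3) {
                $entries[$parts[0]] = [pscustomobject]@{
                    Rank = [double]$parts[1]
                    Time = [long]$parts[2]
                }
            }
        }
    }

    # Bump (or create) the entry for the directory that was just visited.
    $now = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
    if ($entries.ContainsKey($Path)) {
        $entries[$Path].Rank += 1
        $entries[$Path].Time = $now
    } else {
        $entries[$Path] = [pscustomobject]@{ Rank = 1.0; Time = $now }
    }

    # Write the merged view back out.
    $entries.GetEnumerator() |
        ForEach-Object { '{0}|{1}|{2}' -f $_.Key, $_.Value.Rank, $_.Value.Time } |
        Set-Content $historyFile
}
```

Note that read-then-rewrite still leaves a small window where two shells can overwrite each other's updates, which is where the append-only ideas discussed below come in.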

@badmotorfinger
Owner

This would involve reading the history file before every command lookup.

Some type of cross-process store would be more efficient.

@gerardog

May I look into this?

@badmotorfinger
Owner

@gerardog - That would be great; I would appreciate it.

@gerardog

Proposal: modify the format of the ".cdHistory" text file database just a little, so it can be used as the cross-process store.

Currently each line in that file represents a folder, its ranking, and its last accessed date. The changes are:

  • Allow duplicate items: a folder appearing twice, once with rank 25 and once with rank 1, has an effective rank of 26.
  • Every process can append lines to the file (a read-append shared lock). For example, every time you call 'z', you append a line that increases the rank of a directory.
  • Each process also reads the new lines appended by other processes. Each process remembers the position it has read up to ($lastPosition); on each invocation, if the file size or timestamp has changed, only the new part of the file is read, which keeps this cheap. (A rough sketch of the append/incremental-read flow follows this list.)
  • This turns the database into a journal-like structure, so a cleanup/compaction pass is needed every so often. A possible trigger is finding more than some threshold (30?) of repeated items while reading the DB. The cleanup reads the whole file, rewrites it with duplicates merged, and, if the size exceeds $max_size (300?), drops the lowest-ranked items.
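
A rough sketch of the append and incremental-read parts, again assuming a pipe-delimited `path|rank|timestamp` line format; names like `$script:lastPosition` and the function names are placeholders, not existing z internals:

```powershell
# Hypothetical sketch of the journal idea: each invocation appends a rank
# increment, and readers only consume bytes added since their last read.
$historyFile = Join-Path $HOME '.cdHistory'
$script:lastPosition = 0
$script:ranks = @{}    # path -> effective rank (sum of its journal entries)

function Add-JournalEntry {
    param([string]$Path, [double]$RankDelta = 1.0)

    $now  = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
    $line = '{0}|{1}|{2}' -f $Path, $RankDelta, $now

    # Open for append with a shared lock so other shells can read/append too.
    $fs = [System.IO.File]::Open($historyFile,
        [System.IO.FileMode]::Append,
        [System.IO.FileAccess]::Write,
        [System.IO.FileShare]::ReadWrite)
    try {
        $writer = New-Object System.IO.StreamWriter($fs)
        $writer.WriteLine($line)
        $writer.Flush()
    } finally {
        $fs.Dispose()
    }
}

function Read-NewJournalEntries {
    if (-not (Test-Path $historyFile)) { return }

    # Only read what other processes appended since $script:lastPosition.
    $fs = [System.IO.File]::Open($historyFile,
        [System.IO.FileMode]::Open,
        [System.IO.FileAccess]::Read,
        [System.IO.FileShare]::ReadWrite)
    try {
        if ($fs.Length -le $script:lastPosition) { return }
        $fs.Seek($script:lastPosition, [System.IO.SeekOrigin]::Begin) | Out-Null
        $reader = New-Object System.IO.StreamReader($fs)
        while ($null -ne ($line = $reader.ReadLine())) {
            $parts = $line -split '\|'
            if ($parts.Count -eq 3) {
                # Duplicate paths simply add up: rank 25 + rank 1 => 26.
                $script:ranks[$parts[0]] += [double]$parts[1]
            }
        }
        $script:lastPosition = $fs.Length
    } finally {
        $fs.Dispose()
    }
}
```

Compaction would then just be: read everything, merge duplicates, rewrite the file under an exclusive lock, and reset $lastPosition.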

@badmotorfinger
Owner

I like this idea.

What about something like a SQLite database? It seems like it would be better suited to a cross-process read/write scenario. What do you think?
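
For comparison, a minimal sketch of what the SQLite variant could look like, assuming the third-party PSSQLite module is used to talk to SQLite from PowerShell (this is just one option, not anything z currently does):

```powershell
# Hypothetical sketch only: assumes the third-party PSSQLite module
# (Install-Module PSSQLite); z does not currently use SQLite.
Import-Module PSSQLite

$db = Join-Path $HOME '.cdHistory.sqlite'

# One row per folder; SQLite serializes concurrent writers for us.
$schema = @'
CREATE TABLE IF NOT EXISTS history (
    path        TEXT PRIMARY KEY,
    rank        REAL NOT NULL DEFAULT 0,
    last_access INTEGER NOT NULL
);
'@
Invoke-SqliteQuery -DataSource $db -Query $schema

function Update-DirectoryRank {
    param([string]$Path)

    $now = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()

    # Upsert: bump the rank if the folder is already known.
    # (The ON CONFLICT upsert syntax needs SQLite 3.24 or newer.)
    $query = @'
INSERT INTO history (path, rank, last_access) VALUES (@path, 1, @now)
ON CONFLICT(path) DO UPDATE SET rank = rank + 1, last_access = @now;
'@
    Invoke-SqliteQuery -DataSource $db -Query $query -SqlParameters @{ path = $Path; now = $now }
}
```

The trade-off versus the journal file is an extra dependency (or bundling a SQLite assembly with the module) in exchange for getting locking, upserts, and compaction for free.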
