The way I'm using cacache is very slow #71
Index files are append-only in order to preserve the high-parallelism invariant. In the JS version of cacache, I wrote a "garbage collector" that could be run "offline" (that is, when you can reasonably guarantee single-process, single-threaded access to the cache); it would iterate over all entries and reduce each key to its latest entry value. You can pretty trivially write this yourself using the functions in the …
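The offline GC described above can be sketched generically. The `Entry` shape, its field names, and the `compact` function below are all hypothetical illustrations of reducing an append-only log to one live record per key; they are not cacache's actual on-disk format or API.

```rust
use std::collections::HashMap;

// Hypothetical record in an append-only index log. A `value` of None
// models a tombstone: as noted in this thread, a delete is just an
// insert of null. (Field names are illustrative, not cacache's.)
#[derive(Debug, Clone, PartialEq)]
struct Entry {
    key: String,
    time: u64,                 // logical timestamp of the append
    value: Option<String>,     // None = tombstone
}

// Reduce an append-only log to the latest live entry per key.
// Keys whose most recent record is a tombstone are dropped entirely,
// which is what shrinks the index.
fn compact(log: &[Entry]) -> Vec<Entry> {
    let mut latest: HashMap<String, Entry> = HashMap::new();
    for e in log {
        // Keep this record if it is at least as new as what we have.
        let newer = latest.get(&e.key).map_or(true, |cur| e.time >= cur.time);
        if newer {
            latest.insert(e.key.clone(), e.clone());
        }
    }
    let mut out: Vec<Entry> = latest
        .into_values()
        .filter(|e| e.value.is_some()) // drop tombstoned keys
        .collect();
    out.sort_by(|a, b| a.key.cmp(&b.key)); // deterministic output order
    out
}
```

Run only while you can guarantee single-process access, as the comment says; the compaction itself is just "last write wins, then discard tombstones".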
That's a massive caveat for our use case.
And there's a window between the delete and the insert where the index is not present. We were using cacache to protect us from power failures and similar failures.
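One standard way to avoid that missing-index window is to write the compacted contents to a temporary file and then rename it over the original. The sketch below is a generic illustration of that pattern; the `replace_atomically` helper is hypothetical, not part of cacache's API.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Rewrite a file without a window where it is missing: write the new
// contents to a temp file in the same directory, flush it to disk,
// then rename it over the original. On POSIX filesystems rename()
// replaces the target atomically, so a reader (or a power failure)
// observes either the old file or the new one, never neither.
fn replace_atomically(path: &Path, new_contents: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(new_contents)?;
        f.sync_all()?; // ensure the data is durable before the rename
    }
    fs::rename(&tmp, path) // atomic replace; no delete-then-insert gap
}
```

The temp file must live on the same filesystem as the target, since `rename` is only atomic within one filesystem.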
This doesn't seem to work:
It works as a cacache::read, but it doesn't seem to reduce index size. Any hints? Thanks.
At first glance, index::delete is an insert(null), which is a no-op here?
This works, but is neither pretty nor robust:
Oh, duh. I forgot that delete just inserts a null. Yeah, I think it would be nice to have built-in "vacuum"/GC support. I just haven't gotten around to it.
A cache read by key now takes about 30 seconds for my application.
A clue:
Usage pattern: write to a small number of keys (<10) every few seconds. On program start, read those keys.
The cache is used to dump state to disk so that it can be read on program start after unclean exit.
The index file for each key is about 280 MB, holding over 1M entries.
It appears that you're keeping the entire history? Is this just for reliability reasons? There doesn't appear to be an API to read older versions of a key. Is there a way to reliably trim the history to get my speed back?