
expirable-lru: cache.Get() not refreshing entry's TTL #186

Open

shellohunter opened this issue Jan 3, 2025 · 13 comments
@shellohunter

The following test case fails. It seems there's no way to keep an entry alive.
My understanding is that Peek retrieves data without updating the entry's TTL, but Get does. So if we keep Getting a value before it expires, we keep it alive. Am I correct?

func TestExpirableLRUKeepAlive(t *testing.T) {
	cache := expirable.NewLRU[string, string](5, nil, time.Millisecond*100)
	cache.Add("A", "Apple")

	ticker := time.NewTicker(time.Millisecond * 10)
	defer ticker.Stop()

	start := time.Now()
	for range ticker.C {
		elapsed := time.Since(start).Milliseconds()
		if v, ok := cache.Get("A"); !ok || v != "Apple" {
			t.Fatalf("entry expired after %dms", elapsed)
		}
		if elapsed > 500 {
			break
		}
		t.Logf("so far so good: %dms", elapsed)
	}
}
@guerinoni

What do you expect here? I see it fails after 100ms, and the expiration is in fact 100ms.

@shellohunter
Author

What I'm expecting is: when an item is Get from the cache, its TTL is restored to 100ms too. So if we keep Getting it before it expires, it always stays alive.

In my scenario I need a GC-like mechanism: a frequently used item stays alive in the cache as long as necessary, and gets purged once it has not been used for a given time.

@guerinoni

This makes sense. I have a patch for this and I can open a PR, but I found that the test TestLoadingExpired expects something different, i.e. my changes would alter the current behavior. I think NOT updating the expiration on a Get call is intended behavior, but I agree with your version.

@mukeshjc what are your thoughts on this?

@mukeshjc
Contributor

mukeshjc commented Jan 9, 2025

This is a design choice that impacts the semantics and usage of the cache.

  • The TTL acts as a hard expiration time from when the entry was last written
  • Reading an item frequently won't extend its lifetime
  • The only way to refresh an item's TTL is to explicitly Add it again

This is useful in scenarios where you want cache entries to expire at a predictable time regardless of access patterns. For example, if you cache authorization tokens that should expire at a fixed time (and be auto-evicted from the cache), you wouldn't want reads to extend their lifetime.

@shellohunter
Author

@mukeshjc IMO both scenarios are sensible, how about making it optional?

@mukeshjc
Contributor

mukeshjc commented Jan 9, 2025

Deeper changes would be needed to support that use case, and honestly it would overload the current implementation with multiple optionalities.

The current implementation has a clear, single responsibility. The LRU and TTL mechanisms serve distinct purposes: LRU for space management, TTL for time-based validity. They work independently and predictably.

Example edge case:

  1. When the cache is full and a new item is to be added, how do we predictably evict an entry? In case there is no entry with an expired TTL, do we evict the least recently used item even if its TTL hasn't expired? If yes, then the TTL becomes completely insignificant, right?

Additionally, the use case of keeping a value "alive" in the cache by Getting it is essentially a SimpleLRU implementation without TTL support at all: as long as we keep Getting the key, it stays at the front of the LRU queue and never gets evicted. Adding TTL-based expiry on top of that just muddies the water.

@guerinoni

The behaviour described by @shellohunter is useful when you set a key and then don't actually Get it for a full TTL, so it makes sense to evict it.
BTW I agree with both of your thoughts; in this specific case we can implement this as opt-in or not implement it at all :) (and the use case would have to be satisfied by another pkg)

@shellohunter
Author

This make sense, I have a patch for this and I can open a PR ....

@guerinoni could you share the patch somewhere? I'd like to see if I should stick with this lib in my scenario.

@guerinoni

@shellohunter if you would like, I can implement an expiring version here: https://github.com/guerinoni/sieve

@shellohunter
Author

Looks like sieve is not thread safe? That's another concern of mine.

@guerinoni

@shellohunter it is :) (there is a single-threaded version and a thread-safe one)

@shellohunter
Author

I'd like to try it. Please ping me when it's available, and thanks for sharing! @guerinoni

@guerinoni

@shellohunter try testing this and see if it fits your needs
