Refactor: Improve clarity and structure of README (#9)
This commit significantly revises the README.md to enhance clarity, readability, and user-friendliness.
Key improvements include:
- Restructured "Notable Features": Grouped features under thematic headings (Ease of Use, Robustness, Performance) for better scannability.
- Clarified "Cache Layer" vs. "Cache Library": Simplified the explanation in "The design" section, making the rationale behind `sc`'s `Set()`-less design more accessible.
- Enhanced "Usage" Example: Added detailed comments, demonstrated usage of an alternative cache backend (LRU), and included necessary imports and error handling for a more complete example.
- Language Refinement: Performed a general proofread for typos, grammar, and consistent terminology.
- Reorganized "Inspirations": Moved this section to the end under a new "Acknowledgements" heading to improve the overall flow of the document.
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
	// It will automatically call the given function if the value is missing.

	// Create a new cache instance:
	// - string: the type of the cache key.
	// - *HeavyData: the type of the value to be cached.
	// - retrieveHeavyData: the function to call when a cache miss occurs.
	// - 1*time.Minute: freshFor - how long an item is considered fresh.
	//   During this period, Get() returns the cached value directly.
	// - 2*time.Minute: ttl - time to live; the overall duration an item remains in the cache.
	//   If freshFor < ttl, then after freshFor has passed (but before ttl expires),
	//   Get() returns the stale data and triggers a background refresh.
	// - sc.WithLRUBackend(500): optional; specifies the cache backend.
	//   Here, an LRU cache with a capacity of 500 items is used.
	//   The default is an unbounded map-based cache.
	cache, err := sc.New[string, *HeavyData](
		retrieveHeavyData,
		1*time.Minute,          // freshFor
		2*time.Minute,          // ttl
		sc.WithLRUBackend(500), // use an LRU cache with capacity 500
	)
	if err != nil {
		panic(err)
	}

	// --- First call to Get ---
	// The cache is empty for key "foo", so retrieveHeavyData will be called.
	fmt.Println("Requesting 'foo' for the first time...")
	foo, err := cache.Get(context.Background(), "foo")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got foo: %+v\n", foo)

	// --- Second call to Get ---
	// "foo" is now cached and fresh, so retrieveHeavyData will NOT be called.
	fmt.Println("\nRequesting 'foo' again (should be cached)...")
	foo, err = cache.Get(context.Background(), "foo")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got foo again: %+v\n", foo)

	// --- Example for a different key ---
	fmt.Println("\nRequesting 'bar' for the first time...")
	bar, err := cache.Get(context.Background(), "bar")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got bar: %+v\n", bar)

	// Wait for longer than freshFor (1 min) but less than ttl (2 min)
	// to demonstrate behavior around the freshFor/ttl boundary.
	fmt.Println("\nWaiting for 1 minute and 5 seconds...")
	time.Sleep(1*time.Minute + 5*time.Second)

	// "foo" is now stale (past freshFor) but still within ttl, so graceful
	// replacement is active: Get() returns the stale data immediately and
	// triggers a background refresh. The exact timing of the sleep might not
	// always let you observe the background refresh's output in this demo,
	// but the mechanism is in place; the key point is that data remains available.
	fmt.Println("\nRequesting 'foo' after 1 min 5 sec (graceful refresh might occur if not already updated)...")
	foo, err = cache.Get(context.Background(), "foo")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got foo after wait: %+v\n", foo)
	// If retrieveHeavyData was called again for "foo" above, a background refresh happened.
}
```

For a more detailed guide, including other backend options and advanced configurations, see the [Go Reference](https://pkg.go.dev/github.com/motoki317/sc).

## Notable Features

sc offers a range of features designed for simplicity, robustness, and performance:

**Ease of Use & Idiomatic Design:**
- **Simple API:** Wrap your function with `New()` and retrieve values with `Get()`.
- **No `Set()` Method:** `Get()` handles value retrieval automatically, an idiomatic design that prevents [cache stampede](https://en.wikipedia.org/wiki/Cache_stampede) by construction (see "Why no `Set()` method?" below for details).

**Robustness & Modern Go Features:**
- **Generics Support:** Leverages Go 1.18 generics for type-safe keys and values; no `interface{}` or `any` is used in the internal implementation beyond type parameters.
- **Concurrency Safety:** All methods are safe for concurrent use from multiple goroutines.

**Performance & Concurrency Control:**
- **Single Flight Execution:** Ensures only one goroutine is launched per key to fetch a value, preventing redundant work.
- **Graceful Cache Replacement:** Serves stale data while a single background goroutine re-fetches a fresh value (when `freshFor` < `ttl`), minimizing latency spikes.
- **Strict Request Coalescing:** The `EnableStrictCoalescing()` option ensures that all callers receive fresh data, for niche use-cases.
## The design

### Why no `Set()` method? The "Cache Layer" Philosophy

**The Core Idea:** `sc` is intentionally designed as a **"cache layer"** that sits seamlessly between your application and your data source, rather than a general-purpose **"cache library"** that requires manual management. This distinction is key to its simplicity and robustness.

**`sc` as a Cache Layer:**
You provide `sc` with a function that knows how to fetch your data. From then on, you simply call `cache.Get()`. `sc` takes care of:
- Calling your function to fetch the data if it is not cached or is stale.
- Storing the data.
- Returning the cached data on subsequent calls.
- Automatically preventing issues like cache stampede (multiple simultaneous fetches of the same data).

**The Problem with a Manual `Set()` Method:**
Traditional cache libraries often provide `Get()` and `Set()` methods. A typical workflow looks like this:
1. Try to `Get()` data from the cache.
2. On a cache miss, fetch the data from the source.
3. `Set()` the fetched data into the cache.

While this offers flexibility, it also introduces potential pitfalls, especially in concurrent applications:
- **Cache Stampede:** Without careful locking, multiple requests that miss the cache may all fetch and set the data simultaneously, overwhelming the data source.
- **Key Mismatches:** Developers might accidentally use different keys for `Get()` and `Set()`, leading to inconsistent caching.
- **Inconsistent Data Loading:** Fetching logic may be scattered or duplicated if it is not centralized.

**`sc`'s Solution: No `Set()` by Design**
By omitting a `Set()` method and requiring the data-fetching logic upfront (at cache creation), `sc` inherently avoids these problems:
- **Built-in Cache Stampede Prevention:** `sc` manages data retrieval, ensuring only one fetch operation occurs per key at any given time.
- **Guaranteed Key Consistency:** The key passed to `Get()` is the same key handed to the data retrieval function.
- **Centralized Data Fetching Logic:** Your retrieval logic is defined once, making it easier to manage and reason about.

This design makes `sc` a "foolproof" cache layer: it handles the complexities of caching for you, reducing the likelihood of common caching-related bugs.

### What if I need to update or invalidate cached data?

`sc` operates as a "no-write-allocate" cache. This means your application should:
1. Update the original data in your primary data store (e.g., database).
2. Tell `sc` to remove the old entry from the cache by calling `cache.Forget(key)`.

The next time `cache.Get(key)` is called for that item, `sc` will automatically fetch the updated data from your data source using the function you provided at setup.

This approach keeps data consistency clear: your data store is the source of truth, and `sc` is a performance layer that reflects it. Writing data directly into the cache that differs from the data store could introduce inconsistencies; `sc`'s design prioritizes simplicity and predictability.

## Acknowledgements
I would like to thank the following libraries for giving me ideas: