// This module provides a relatively simple thread-safe pool of reusable
// objects. For the most part, it's implemented by a stack represented by a
// Mutex<Vec<T>>. It has one small trick: because unlocking a mutex is somewhat
// costly, in the case where a pool is accessed by the first thread that tried
// to get a value, we bypass the mutex. Here are some benchmarks showing the
// difference.
//
// 2022-10-15: These benchmarks are from the old regex crate and they aren't
// easy to reproduce because some rely on older implementations of Pool that
// are no longer around. I've left the results here for posterity, but any
// enterprising individual should feel encouraged to re-litigate the way Pool
// works. I am not at all certain it is the best approach.
//
// 1) misc::anchored_literal_long_non_match     21 (18571 MB/s)
// 2) misc::anchored_literal_long_non_match    107 (3644 MB/s)
// 3) misc::anchored_literal_long_non_match     45 (8666 MB/s)
// 4) misc::anchored_literal_long_non_match     19 (20526 MB/s)
//
// (1) represents our baseline: the master branch at the time of writing when
// using the 'thread_local' crate to implement the pool below.
//
// (2) represents a naive pool implemented completely via Mutex<Vec<T>>. There
// is no special trick for bypassing the mutex.
//
// (3) is the same as (2), except it uses Mutex<Vec<Box<T>>>. It is twice as
// fast because a Box<T> is much smaller than the T we use with a Pool in this
// crate. So pushing and popping a Box<T> from a Vec is quite a bit faster
// than for T.
//
// (4) is the same as (3), but with the trick for bypassing the mutex in the
// case of the first-to-get thread.
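//
// To make the bypass concrete, here is a rough sketch of the owner fast path
// (hedged: the real logic lives in inner::Pool::get below; this just mirrors
// its shape):
//
//     let caller = THREAD_ID.with(|id| *id);
//     let owner = self.owner.load(Ordering::Acquire);
//     if caller == owner {
//         // Bypass the mutex entirely and hand out the owner's value.
//         self.owner.store(THREAD_ID_INUSE, Ordering::Release);
//         return self.guard_owned(caller);
//     }
//     self.get_slow(caller, owner)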
//
// Why move off of thread_local? Even though (4) is a hair faster than (1)
// above, this was not the main goal. The main goal was to move off of
// thread_local and find a way to *simply* re-capture some of its speed for
// regex's specific case. So again, why move off of it? The *primary* reason is
// because of memory leaks. See https://github.com/rust-lang/regex/issues/362
// for example. (Why do I want it to be simple? Well, I suppose what I mean is,
// "use as much safe code as possible to minimize risk and be as sure as I can
// be that it is correct.")
//
// My guess is that the thread_local design is probably not appropriate for
// regex since its memory usage scales to the number of active threads that
// have used a regex, whereas the pool below scales to the number of threads
// that simultaneously use a regex. While neither case permits contraction,
// since we own the pool data structure below, we can add contraction if a
// clear use case pops up in the wild. More pressingly though, it seems that
// there are at least some use case patterns where one might have many threads
// sitting around that might have used a regex at one point. While thread_local
// does try to reuse space previously used by a thread that has since stopped,
// its maximal memory usage still scales with the total number of active
// threads. In contrast, the pool below scales with the total number of threads
// *simultaneously* using the pool. The hope is that this uses less memory
// overall. And if it doesn't, we can hopefully tune it somehow.
//
// It seems that these sorts of conditions happen frequently
// in FFI inside of other more "managed" languages. This was
// mentioned in the issue linked above, and also mentioned here:
// https://github.com/BurntSushi/rure-go/issues/3. And in particular, users
// confirm that disabling the use of thread_local resolves the leak.
//
// There were other weaker reasons for moving off of thread_local as well.
// Namely, at the time, I was looking to reduce dependencies. And for something
// like regex, maintenance can be simpler when we own the full dependency tree.
//
// Note that I am not entirely happy with this pool. It has some subtle
// implementation details and its overhead is still observable in benchmarks
// (even with the thread owner optimization). If someone wants to take a crack
// at building something better, please file an issue. Even if it means a
// different API. The API exposed by this pool is not the minimal thing that
// something like a 'Regex' actually needs. It could adapt to, for example,
// an API more like what is found in the 'thread_local' crate. That said, we
// really do need to support the no-std alloc-only context, or else the regex
// crate wouldn't be able to support no-std alloc-only. I'm generally okay
// with making the alloc-only context slower (as it is here), although I do
// find it unfortunate.

/*!
A thread safe memory pool.

The principal type in this module is a [`Pool`]. Its main use case is for
holding a thread safe collection of mutable scratch spaces (usually called
`Cache` in this crate) that regex engines need to execute a search. This then
permits sharing the same read-only regex object across multiple threads while
having a quick way of reusing scratch space in a thread safe way. This avoids
needing to re-create the scratch space for every search, which could wind up
being quite expensive.
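
A minimal sketch of that pattern (using a plain `Vec<u8>` as a stand-in for a
real `Cache` type; see the example on [`Pool`] for an end-to-end version):

```
use regex_automata::util::pool::Pool;

// The closure is called whenever the pool needs to create a fresh value.
let pool: Pool<Vec<u8>> = Pool::new(|| Vec::with_capacity(1024));
// Getting a value yields a guard. Dropping the guard puts the value back.
let mut scratch = pool.get();
scratch.push(0x61);
assert_eq!(1, scratch.len());
```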
*/

/// A thread safe pool that works in an `alloc`-only context.
///
/// Getting a value out comes with a guard. When that guard is dropped, the
/// value is automatically put back in the pool. The guard provides both a
/// `Deref` and a `DerefMut` implementation for easy access to an underlying
/// `T`.
///
/// A `Pool` impls `Sync` when `T` is `Send` (even if `T` is not `Sync`). This
/// is possible because a pool is guaranteed to provide a value to exactly one
/// thread at any time.
///
/// Currently, a pool never contracts in size. Its size is proportional to the
/// maximum number of simultaneous uses. This may change in the future.
///
/// A `Pool` is a particularly useful data structure for this crate because
/// many of the regex engines require a mutable "cache" in order to execute
/// a search. Since regexes themselves tend to be global, the problem is then:
/// how do you get a mutable cache to execute a search? You could:
///
/// 1. Use a `thread_local!`, which requires the standard library and requires
/// that the regex pattern be statically known.
/// 2. Use a `Pool`.
/// 3. Make the cache an explicit dependency in your code and pass it around.
/// 4. Put the cache state in a `Mutex`, but this means only one search can
/// execute at a time.
/// 5. Create a new cache for every search.
///
/// A `thread_local!` is perhaps the best choice if it works for your use case.
/// Putting the cache in a mutex or creating a new cache for every search are
/// perhaps the worst choices. Of the remaining two choices, whether you use
/// this `Pool` or thread through a cache explicitly in your code is a matter
/// of taste and depends on your code architecture.
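///
/// A minimal sketch of choice (3), threading the cache through explicitly
/// (shown with the hybrid regex API, mirroring the example further below):
///
/// ```
/// use regex_automata::hybrid::regex::Regex;
///
/// let re = Regex::new("foo[0-9]+bar").unwrap();
/// // The caller owns the cache and passes it to every search explicitly.
/// let mut cache = re.create_cache();
/// assert!(re.find(&mut cache, b"zzzfoo12345barzzz").is_some());
/// ```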
///
/// # Warning: may use a spin lock
///
/// When this crate is compiled _without_ the `std` feature, then this type
/// may use a spin lock internally. This can have subtle effects that may
/// be undesirable. See [Spinlocks Considered Harmful][spinharm] for a more
/// thorough treatment of this topic.
///
/// [spinharm]: https://matklad.github.io/2020/01/02/spinlocks-considered-harmful.html
///
/// # Example
///
/// This example shows how to share a single hybrid regex among multiple
/// threads, while also safely getting exclusive access to a hybrid's
/// [`Cache`](crate::hybrid::regex::Cache) without preventing other searches
/// from running while your thread uses the `Cache`.
///
/// ```
/// use regex_automata::{
///     hybrid::regex::{Cache, Regex},
///     util::{lazy::Lazy, pool::Pool},
///     Match,
/// };
///
/// static RE: Lazy<Regex> =
///     Lazy::new(|| Regex::new("foo[0-9]+bar").unwrap());
/// static CACHE: Lazy<Pool<Cache>> =
///     Lazy::new(|| Pool::new(|| RE.create_cache()));
///
/// let expected = Some(Match::must(0, 3..14));
/// assert_eq!(expected, RE.find(&mut CACHE.get(), b"zzzfoo12345barzzz"));
/// ```
pub struct Pool<T, F = fn() -> T>(alloc::boxed::Box<inner::Pool<T, F>>);

impl<T, F> Pool<T, F> {
    /// Create a new pool. The given closure is used to create values in
    /// the pool when necessary.
    pub fn new(create: F) -> Pool<T, F> {
        Pool(alloc::boxed::Box::new(inner::Pool::new(create)))
    }
}

impl<T: Send, F: Fn() -> T> Pool<T, F> {
    /// Get a value from the pool. The caller is guaranteed to have
    /// exclusive access to the given value. Namely, it is guaranteed that
    /// this will never return a value that was returned by another call to
    /// `get` but was not put back into the pool.
    ///
    /// When the guard goes out of scope and its destructor is called, then
    /// it will automatically be put back into the pool. Alternatively,
    /// [`PoolGuard::put`] may be used to explicitly put it back in the pool
    /// without relying on its destructor.
    ///
    /// Note that there is no guarantee provided about which value in the
    /// pool is returned. That is, calling get, dropping the guard (causing
    /// the value to go back into the pool) and then calling get again is
    /// *not* guaranteed to return the same value received in the first `get`
    /// call.
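    ///
    /// A small sketch of that non-guarantee (illustrative only):
    ///
    /// ```
    /// use regex_automata::util::pool::Pool;
    ///
    /// let pool: Pool<Vec<u8>> = Pool::new(|| vec![]);
    /// let g1 = pool.get();
    /// drop(g1); // the value goes back into the pool...
    /// let _g2 = pool.get(); // ...but this may or may not be the same value
    /// ```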
    #[inline]
    pub fn get(&self) -> PoolGuard<'_, T, F> {
        PoolGuard(self.0.get())
    }
}

impl<T: core::fmt::Debug, F> core::fmt::Debug for Pool<T, F> {
    fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
        f.debug_tuple("Pool").field(&self.0).finish()
    }
}

/// A guard that is returned when a caller requests a value from the pool.
///
/// The purpose of the guard is to use RAII to automatically put the value
/// back in the pool once it's dropped.
pub struct PoolGuard<'a, T: Send, F: Fn() -> T>(inner::PoolGuard<'a, T, F>);

impl<'a, T: Send, F: Fn() -> T> PoolGuard<'a, T, F> {
    /// Consumes this guard and puts it back into the pool.
    ///
    /// This circumvents the guard's `Drop` implementation. This can be useful
    /// in circumstances where the automatic `Drop` results in poorer codegen,
    /// such as calling non-inlined functions.
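    ///
    /// A short sketch of an explicit put (equivalent in effect to simply
    /// dropping the guard):
    ///
    /// ```
    /// use regex_automata::util::pool::{Pool, PoolGuard};
    ///
    /// let pool: Pool<Vec<u8>> = Pool::new(|| vec![]);
    /// let guard = pool.get();
    /// // Put the value back eagerly instead of waiting for the destructor.
    /// PoolGuard::put(guard);
    /// ```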
    #[inline]
    pub fn put(this: PoolGuard<'_, T, F>) {
        inner::PoolGuard::put(this.0);
    }
}

impl<'a, T: Send, F: Fn() -> T> core::ops::Deref for PoolGuard<'a, T, F> {
    type Target = T;

    #[inline]
    fn deref(&self) -> &T {
        self.0.value()
    }
}

impl<'a, T: Send, F: Fn() -> T> core::ops::DerefMut for PoolGuard<'a, T, F> {
    #[inline]
    fn deref_mut(&mut self) -> &mut T {
        self.0.value_mut()
    }
}

impl<'a, T: Send + core::fmt::Debug, F: Fn() -> T> core::fmt::Debug
    for PoolGuard<'a, T, F>
{
    fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
        f.debug_tuple("PoolGuard").field(&self.0).finish()
    }
}

#[cfg(feature = "std")]
mod inner {
    use core::{
        cell::UnsafeCell,
        panic::{RefUnwindSafe, UnwindSafe},
        sync::atomic::{AtomicUsize, Ordering},
    };

    use alloc::{boxed::Box, vec, vec::Vec};

    use std::{sync::Mutex, thread_local};

    /// An atomic counter used to allocate thread IDs.
    ///
    /// We specifically start our counter at 3 so that we can use the values
    /// less than it as sentinels.
    static COUNTER: AtomicUsize = AtomicUsize::new(3);

    /// A thread ID indicating that there is no owner. This is the initial
    /// state of a pool. Once a pool has an owner, there is no way to change
    /// it.
    static THREAD_ID_UNOWNED: usize = 0;

    /// A thread ID indicating that the special owner value is in use and not
    /// available. This state is useful for avoiding a case where the owner
    /// of a pool calls `get` before putting the result of a previous `get`
    /// call back into the pool.
    static THREAD_ID_INUSE: usize = 1;

    /// This sentinel is used to indicate that a guard has already been dropped
    /// and should not be re-dropped. We use this because our drop code can be
    /// called outside of Drop and thus there could be a bug in the internal
    /// implementation that results in trying to put the same guard back into
    /// the same pool multiple times, and *that* could result in UB if we
    /// didn't mark the guard as already having been put back in the pool.
    ///
    /// So this isn't strictly necessary, but it lets us define some
    /// routines as safe (like PoolGuard::put_imp) that we couldn't otherwise
    /// do.
    static THREAD_ID_DROPPED: usize = 2;
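
    // To summarize the sentinel scheme (all values below COUNTER's starting
    // point of 3):
    //
    //     0 => THREAD_ID_UNOWNED (the pool has no owner yet)
    //     1 => THREAD_ID_INUSE   (the owner's value is currently checked out)
    //     2 => THREAD_ID_DROPPED (guard already put back; must not be reused)
    //     3.. => real thread IDs handed out by COUNTER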

    /// The number of stacks we use inside of the pool. These are only used for
    /// non-owners. That is, these represent the "slow" path.
    ///
    /// In the original implementation of this pool, we only used a single
    /// stack. While this might be okay for a couple threads, the prevalence of
    /// 32, 64 and even 128 core CPUs has made it untenable. The contention
    /// such an environment introduces when threads are doing a lot of searches
    /// on short haystacks (a not uncommon use case) is palpable and leads to
    /// huge slowdowns.
    ///
    /// This constant reflects a change from using one stack to the number of
    /// stacks that this constant is set to. The stack for a particular thread
    /// is simply chosen by `thread_id % MAX_POOL_STACKS`. The idea behind
    /// this setup is that there should be a good chance that accesses to the
    /// pool will be distributed over several stacks instead of all of them
    /// converging to one.
    ///
    /// This is not a particularly smart or dynamic strategy. Fixing this to a
    /// specific number has at least two downsides. First is that it will help,
    /// say, an 8 core CPU more than it will a 128 core CPU. (But, crucially,
    /// it will still help the 128 core case.) Second is that this may wind
    /// up being a little wasteful with respect to memory usage. Namely, if a
    /// regex is used on one thread and then moved to another thread, then it
    /// could result in creating a new copy of the data in the pool even though
    /// only one is actually needed.
    ///
    /// And that memory usage bit is why this is set to 8 and not, say, 64.
    /// Keeping it at 8 limits, to an extent, how much unnecessary memory can
    /// be allocated.
    ///
    /// In an ideal world, we'd be able to have something like this:
    ///
    /// * Grow the number of stacks as the number of concurrent callers
    /// increases. I spent a little time trying this, but even just adding an
    /// atomic addition/subtraction for each pop/push for tracking concurrent
    /// callers led to a big perf hit. Since even more work would seemingly be
    /// required than just an addition/subtraction, I abandoned this approach.
    /// * The maximum amount of memory used should scale with respect to the
    /// number of concurrent callers and *not* the total number of existing
    /// threads. This is primarily why the `thread_local` crate isn't used,
    /// as some environments spin up a lot of threads. This led to multiple
    /// reports of extremely high memory usage (often described as memory
    /// leaks).
    /// * Even more ideally, the pool should contract in size. That is, it
    /// should grow with bursts and then shrink. But this is a pretty thorny
    /// issue to tackle and it might be better to just not.
    /// * It would be nice to explore the use of, say, a lock-free stack
    /// instead of using a mutex to guard a `Vec` that is ultimately just
    /// treated as a stack. The main thing preventing me from exploring this
    /// is the ABA problem. The `crossbeam` crate has tools for dealing with
    /// this sort of problem (via its epoch based memory reclamation strategy),
    /// but I can't justify bringing in all of `crossbeam` as a dependency of
    /// `regex` for this.
    ///
    /// See this issue for more context and discussion:
    /// https://github.com/rust-lang/regex/issues/934
    const MAX_POOL_STACKS: usize = 8;
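
    // As a quick illustration of the `thread_id % MAX_POOL_STACKS` mapping
    // (assuming the eight stacks above): thread IDs 3 and 11 share stack
    // index 3 (3 % 8 == 11 % 8), while IDs 3 through 10 spread across all
    // eight stacks.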

    thread_local!(
        /// A thread local used to assign an ID to a thread.
        static THREAD_ID: usize = {
            let next = COUNTER.fetch_add(1, Ordering::Relaxed);
            // SAFETY: We cannot permit the reuse of thread IDs since reusing a
            // thread ID might result in more than one thread "owning" a pool,
            // and thus, permit accessing a mutable value from multiple threads
            // simultaneously without synchronization. The intent of this panic
            // is to be a sanity check. It is not expected that the thread ID
            // space will actually be exhausted in practice. Even on a 32-bit
            // system, it would require spawning 2^32 threads (although they
            // wouldn't all need to run simultaneously, so it is in theory
            // possible).
            //
            // This checks that the counter never wraps around, since atomic
            // addition wraps around on overflow.
            if next == 0 {
                panic!("regex: thread ID allocation space exhausted");
            }
            next
        };
    );

    /// This puts each stack in the pool below into its own cache line. This is
    /// an absolutely critical optimization that tends to have the most impact
    /// in high contention workloads. Without forcing each mutex-protected
    /// stack into its own cache line, high contention exacerbates the
    /// performance problem by causing "false sharing." By putting each mutex
    /// in its own cache line, we avoid the false sharing problem and the
    /// effects of contention are greatly reduced.
    #[derive(Debug)]
    #[repr(C, align(64))]
    struct CacheLine<T>(T);
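
    // A quick illustration of the padding: `repr(align(64))` raises both the
    // alignment and (since a type's size is always a multiple of its
    // alignment) the size to a full cache line, even for a tiny payload:
    //
    //     assert_eq!(64, core::mem::align_of::<CacheLine<u8>>());
    //     assert_eq!(64, core::mem::size_of::<CacheLine<u8>>());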

    /// A thread safe pool utilizing std-only features.
    ///
    /// The main difference between this and the simplistic alloc-only pool is
    /// the use of std::sync::Mutex and an "owner thread" optimization that
    /// makes accesses by the owner of a pool faster than all other threads.
    /// This makes the common case of running a regex within a single thread
    /// faster by avoiding mutex unlocking.
    pub(super) struct Pool<T, F> {
        /// A function to create more T values when stack is empty and a caller
        /// has requested a T.
        create: F,
        /// Multiple stacks of T values to hand out. These are used when a Pool
        /// is accessed by a thread that didn't create it.
        ///
        /// Conceptually this is `Mutex<Vec<Box<T>>>`, but sharded out to make
        /// it scale better under high contention work-loads. We index into
        /// this sequence via `thread_id % stacks.len()`.
        stacks: Vec<CacheLine<Mutex<Vec<Box<T>>>>>,
        /// The ID of the thread that owns this pool. The owner is the thread
        /// that makes the first call to 'get'. When the owner calls 'get', it
        /// gets 'owner_val' directly instead of returning a T from 'stacks'.
        /// See comments elsewhere for details, but this is intended to be an
        /// optimization for the common case that makes getting a T faster.
        ///
        /// It is initialized to a value of zero (an impossible thread ID) as a
        /// sentinel to indicate that it is unowned.
        owner: AtomicUsize,
        /// A value to return when the caller is in the same thread that
        /// first called `Pool::get`.
        ///
        /// This is set to None when a Pool is first created, and set to Some
        /// once the first thread calls Pool::get.
        owner_val: UnsafeCell<Option<T>>,
    }

    // SAFETY: Since we want to use a Pool from multiple threads simultaneously
    // behind an Arc, we need for it to be Sync. In cases where T is Sync,
    // Pool<T> would be Sync. However, since we use a Pool to store mutable
    // scratch space, we wind up using a T that has interior mutability and is
    // thus itself not Sync. So what we *really* want is for our Pool<T> to be
    // Sync even when T is not Sync (but is at least Send).
    //
    // The only non-sync aspect of a Pool is its 'owner_val' field, which is
    // used to implement faster access to a pool value in the common case of
    // a pool being accessed in the same thread in which it was created. The
    // 'stacks' field is also shared, but a Mutex<T> where T: Send is already
    // Sync. So we only need to worry about 'owner_val'.
    //
    // The key is to guarantee that 'owner_val' can only ever be accessed from
    // one thread. In our implementation below, we guarantee this by only
    // returning the 'owner_val' when the ID of the current thread matches the
    // ID of the thread that first called 'Pool::get'. Since this can only ever
    // be one thread, it follows that only one thread can access 'owner_val' at
    // any point in time. Thus, it is safe to declare that Pool<T> is Sync when
    // T is Send.
    //
    // If there is a way to achieve our performance goals using safe code, then
    // I would very much welcome a patch. As it stands, the implementation
    // below tries to balance safety with performance. The case where a Regex
    // is used from multiple threads simultaneously will suffer a bit since
    // getting a value out of the pool will require unlocking a mutex.
    //
    // We require `F: Send + Sync` because we call `F` at any point on demand,
    // potentially from multiple threads simultaneously.
    unsafe impl<T: Send, F: Send + Sync> Sync for Pool<T, F> {}

    // If T is UnwindSafe, then since we provide exclusive access to any
    // particular value in the pool, the pool should therefore also be
    // considered UnwindSafe.
    //
    // We require `F: UnwindSafe + RefUnwindSafe` because we call `F` at any
    // point on demand, so it needs to be unwind safe on both dimensions for
    // the entire Pool to be unwind safe.
    impl<T: UnwindSafe, F: UnwindSafe + RefUnwindSafe> UnwindSafe for Pool<T, F> {}

    // If T is UnwindSafe, then since we provide exclusive access to any
    // particular value in the pool, the pool should therefore also be
    // considered RefUnwindSafe.
    //
    // We require `F: UnwindSafe + RefUnwindSafe` because we call `F` at any
    // point on demand, so it needs to be unwind safe on both dimensions for
    // the entire Pool to be unwind safe.
    impl<T: UnwindSafe, F: UnwindSafe + RefUnwindSafe> RefUnwindSafe
        for Pool<T, F>
    {
    }

    impl<T, F> Pool<T, F> {
        /// Create a new pool. The given closure is used to create values in
        /// the pool when necessary.
        pub(super) fn new(create: F) -> Pool<T, F> {
            // FIXME: Now that we require 1.65+, Mutex::new is available as
            // const... So we can almost mark this function as const. But of
            // course, we're creating a Vec of stacks below (we didn't when I
            // originally wrote this code). It seems like the best way to work
            // around this would be to use a `[Stack; MAX_POOL_STACKS]` instead
            // of a `Vec<Stack>`. I refrained from making this change at time
            // of writing (2023/10/08) because I was making a lot of other
            // changes at the same time and wanted to do this more carefully.
            // Namely, because of the cache line optimization, that `[Stack;
            // MAX_POOL_STACKS]` would be quite big. It's unclear how bad (if
            // at all) that would be.
            //
            // Another choice would be to lazily allocate the stacks, but...
            // I'm not so sure about that. Seems like a fair bit of complexity?
            //
            // Maybe there's a simple solution I'm missing.
            //
            // ... OK, I tried to fix this. First, I did it by putting `stacks`
            // in an `UnsafeCell` and using a `Once` to lazily initialize it.
            // I benchmarked it and everything looked okay. I then made this
            // function `const` and thought I was just about done. But the
            // public pool type wraps its inner pool in a `Box` to keep its
            // size down. Blech.
            //
            // So then I thought that I could push the box down into this
            // type (and leave the non-std version unboxed) and use the same
            // `UnsafeCell` technique to lazily initialize it. This has the
            // downside of the `Once` now needing to get hit in the owner fast
            // path, but maybe that's OK? However, I then realized that we can
            // only lazily initialize `stacks`, `owner` and `owner_val`. The
            // `create` function needs to be put somewhere outside of the box.
            // So now the pool is a `Box`, a `Once` and a function. Now we're
            // starting to defeat the point of boxing in the first place. So I
            // backed out that change too.
            //
            // Back to square one. Maybe we just don't make a pool's
            // constructor const and live with it. It's probably not a huge
            // deal.
            let mut stacks = Vec::with_capacity(MAX_POOL_STACKS);
            for _ in 0..stacks.capacity() {
                stacks.push(CacheLine(Mutex::new(vec![])));
            }
            let owner = AtomicUsize::new(THREAD_ID_UNOWNED);
            let owner_val = UnsafeCell::new(None); // init'd on first access
            Pool { create, stacks, owner, owner_val }
        }
    }

    impl<T: Send, F: Fn() -> T> Pool<T, F> {
        /// Get a value from the pool. This may block if another thread is also
        /// attempting to retrieve a value from the pool.
        #[inline]
        pub(super) fn get(&self) -> PoolGuard<'_, T, F> {
            // Our fast path checks if the caller is the thread that "owns"
            // this pool. Or stated differently, whether it is the first thread
            // that tried to extract a value from the pool. If it is, then we
            // can return a T to the caller without going through a mutex.
            //
            // SAFETY: We must guarantee that only one thread gets access
            // to this value. Since a thread is uniquely identified by the
            // THREAD_ID thread local, it follows that if the caller's thread
            // ID is equal to the owner, then only one thread may receive this
            // value. This is also why we can get away with what looks like a
            // racy load and a store. We know that if 'owner == caller', then
            // only one thread can be here, so we don't need to worry about any
            // other thread setting the owner to something else.
            let caller = THREAD_ID.with(|id| *id);
            let owner = self.owner.load(Ordering::Acquire);
            if caller == owner {
                // N.B. We could also do a CAS here instead of a load/store,
                // but ad hoc benchmarking suggests it is slower. And a lot
                // slower in the case where `get_slow` is common.
                self.owner.store(THREAD_ID_INUSE, Ordering::Release);
                return self.guard_owned(caller);
            }
            self.get_slow(caller, owner)
        }

        /// This is the "slow" version that goes through a mutex to pop an
        /// allocated value off a stack to return to the caller. (Or, if the
        /// stack is empty, a new value is created.)
        ///
        /// If the pool has no owner, then this will set the owner.
        #[cold]
        fn get_slow(
            &self,
            caller: usize,
            owner: usize,
        ) -> PoolGuard<'_, T, F> {
            if owner == THREAD_ID_UNOWNED {
                // This sentinel means this pool is not yet owned. We try to
                // atomically set the owner. If we do, then this thread becomes
                // the owner and we can return a guard that represents the
                // special T for the owner.
                //
                // Note that we set the owner to a different sentinel that
                // indicates that the owned value is in use. The owner ID will
                // get updated to the actual ID of this thread once the guard
                // returned by this function is put back into the pool.
                let res = self.owner.compare_exchange(
                    THREAD_ID_UNOWNED,
                    THREAD_ID_INUSE,
                    Ordering::AcqRel,
                    Ordering::Acquire,
                );
                if res.is_ok() {
                    // SAFETY: A successful CAS above implies this thread is
                    // the owner and that this is the only such thread that
                    // can reach here. Thus, there is no data race.
                    unsafe {
                        *self.owner_val.get() = Some((self.create)());
                    }
                    return self.guard_owned(caller);
                }
            }
            let stack_id = caller % self.stacks.len();
            // We try to acquire exclusive access to this thread's stack, and
            // if so, grab a value from it if we can. We put this in a loop so
            // that it's easy to tweak and experiment with a different number
            // of tries. In the end, I couldn't see anything obviously better
            // than one attempt in ad hoc testing.
            for _ in 0..1 {
                let mut stack = match self.stacks[stack_id].0.try_lock() {
                    Err(_) => continue,
                    Ok(stack) => stack,
                };
                if let Some(value) = stack.pop() {
                    return self.guard_stack(value);
                }
                // Unlock the mutex guarding the stack before creating a fresh
                // value since we no longer need the stack.
                drop(stack);
                let value = Box::new((self.create)());
                return self.guard_stack(value);
            }
            // We're only here if we couldn't get access to our stack, so just
            // create a new value. This seems like it could be wasteful, but
            // waiting for exclusive access to a stack when there's high
            // contention is brutal for perf.
            self.guard_stack_transient(Box::new((self.create)()))
        }

        /// Puts a value back into the pool. Callers don't need to call this.
        /// Once the guard that's returned by 'get' is dropped, it is put back
        /// into the pool automatically.
        #[inline]
        fn put_value(&self, value: Box<T>) {
            let caller = THREAD_ID.with(|id| *id);
            let stack_id = caller % self.stacks.len();
            // As with trying to pop a value from this thread's stack, we
            // merely attempt to get access to push this value back on the
            // stack. If there's too much contention, we just give up and throw
            // the value away.
            //
            // Interestingly, in ad hoc benchmarking, it is beneficial to
            // attempt to push the value back more than once, unlike when
            // popping the value. I don't have a good theory for why this is.
            // I guess if we drop too many values then that winds up forcing
            // the pop operation to create new fresh values and thus leads to
            // less reuse. There's definitely a balancing act here.
            for _ in 0..10 {
                let mut stack = match self.stacks[stack_id].0.try_lock() {
                    Err(_) => continue,
                    Ok(stack) => stack,
                };
                stack.push(value);
                return;
            }
        }

        /// Create a guard that represents the special owned T.
        #[inline]
        fn guard_owned(&self, caller: usize) -> PoolGuard<'_, T, F> {
            PoolGuard { pool: self, value: Err(caller), discard: false }
        }

        /// Create a guard that contains a value from the pool's stack.
        #[inline]
        fn guard_stack(&self, value: Box<T>) -> PoolGuard<'_, T, F> {
            PoolGuard { pool: self, value: Ok(value), discard: false }
        }

        /// Create a guard that contains a value from the pool's stack with an
        /// instruction to throw away the value instead of putting it back
        /// into the pool.
        #[inline]
        fn guard_stack_transient(&self, value: Box<T>) -> PoolGuard<'_, T, F> {
            PoolGuard { pool: self, value: Ok(value), discard: true }
        }
    }

    impl<T: core::fmt::Debug, F> core::fmt::Debug for Pool<T, F> {
        fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
            f.debug_struct("Pool")
                .field("stacks", &self.stacks)
                .field("owner", &self.owner)
                .field("owner_val", &self.owner_val)
                .finish()
        }
    }

    /// A guard that is returned when a caller requests a value from the pool.
    pub(super) struct PoolGuard<'a, T: Send, F: Fn() -> T> {
        /// The pool that this guard is attached to.
        pool: &'a Pool<T, F>,
        /// This is Err when the guard represents the special "owned" value.
        /// In which case, the value is retrieved from 'pool.owner_val'. And
        /// in the special case of `Err(THREAD_ID_DROPPED)`, it means the
        /// guard has been put back into the pool and should no longer be used.
        value: Result<Box<T>, usize>,
        /// When true, the value should be discarded instead of being pushed
        /// back into the pool. We tend to use this under high contention, and
        /// this allows us to avoid inflating the size of the pool. (Because
        /// under contention, we tend to create more values instead of waiting
        /// for access to a stack of existing values.)
        discard: bool,
    }

    impl<'a, T: Send, F: Fn() -> T> PoolGuard<'a, T, F> {
        /// Return the underlying value.
        #[inline]
        pub(super) fn value(&self) -> &T {
            match self.value {
                Ok(ref v) => &**v,
                // SAFETY: This is safe because the only way a PoolGuard gets
                // created for self.value=Err is when the current thread
                // corresponds to the owning thread, of which there can only
                // be one. Thus, we are guaranteed to be providing exclusive
                // access here which makes this safe.
                //
                // Also, since 'owner_val' is guaranteed to be initialized
                // before an owned PoolGuard is created, the unchecked unwrap
                // is safe.
                Err(id) => unsafe {
                    // This assert is *not* necessary for safety, since we
                    // should never be here if the guard had been put back into
                    // the pool. This is a sanity check to make sure we didn't
                    // break an internal invariant.
                    debug_assert_ne!(THREAD_ID_DROPPED, id);
                    (*self.pool.owner_val.get()).as_ref().unwrap_unchecked()
                },
            }
        }
        /// Return the underlying value as a mutable borrow.
        #[inline]
        pub(super) fn value_mut(&mut self) -> &mut T {
            match self.value {
                Ok(ref mut v) => &mut **v,
                // SAFETY: This is safe because the only way a PoolGuard gets
                // created for self.value=Err is when the current thread
                // corresponds to the owning thread, of which there can only
                // be one. Thus, we are guaranteed to be providing exclusive
                // access here which makes this safe.
                //
                // Also, since 'owner_val' is guaranteed to be initialized
                // before an owned PoolGuard is created, the unwrap_unchecked
                // is safe.
                Err(id) => unsafe {
                    // This assert is *not* necessary for safety, since we
                    // should never be here if the guard had been put back into
                    // the pool. This is a sanity check to make sure we didn't
                    // break an internal invariant.
                    debug_assert_ne!(THREAD_ID_DROPPED, id);
                    (*self.pool.owner_val.get()).as_mut().unwrap_unchecked()
                },
            }
        }

        /// Consumes this guard and puts it back into the pool.
        #[inline]
        pub(super) fn put(this: PoolGuard<'_, T, F>) {
            // Since this is effectively consuming the guard and putting the
            // value back into the pool, there's no reason to run its Drop
            // impl after doing this. I don't believe there is a correctness
            // problem with doing so, but there's definitely a perf problem
            // by redoing this work. So we avoid it.
            let mut this = core::mem::ManuallyDrop::new(this);
            this.put_imp();
        }

        /// Puts this guard back into the pool by only borrowing the guard as
        /// mutable. This should be called at most once.
        #[inline(always)]
        fn put_imp(&mut self) {
            match core::mem::replace(&mut self.value, Err(THREAD_ID_DROPPED)) {
                Ok(value) => {
                    // If we were told to discard this value then don't bother
                    // trying to put it back into the pool. This occurs when
                    // the pop operation failed to acquire a lock and we
                    // decided to create a new value in lieu of contending for
                    // the lock.
                    if self.discard {
                        return;
                    }
                    self.pool.put_value(value);
                }
                // If this guard has a value "owned" by the thread, then
                // the Pool guarantees that this is the ONLY such guard.
                // Therefore, in order to place it back into the pool and make
                // it available, we need to change the owner back to the owning
                // thread's ID. But note that we use the ID that was stored in
                // the guard, since a guard can be moved to another thread and
                // dropped. (A previous iteration of this code read from the
                // THREAD_ID thread local, which uses the ID of the current
                // thread which may not be the ID of the owning thread! This
                // also avoids the TLS access, which is likely a hair faster.)
                Err(owner) => {
                    // If we hit this point, it implies 'put_imp' has been
                    // called multiple times for the same guard which in turn
                    // corresponds to a bug in this implementation.
                    assert_ne!(THREAD_ID_DROPPED, owner);
                    self.pool.owner.store(owner, Ordering::Release);
                }
            }
        }
    }

    impl<'a, T: Send, F: Fn() -> T> Drop for PoolGuard<'a, T, F> {
        #[inline]
        fn drop(&mut self) {
            self.put_imp();
        }
    }

    impl<'a, T: Send + core::fmt::Debug, F: Fn() -> T> core::fmt::Debug
        for PoolGuard<'a, T, F>
    {
        fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
            f.debug_struct("PoolGuard")
                .field("pool", &self.pool)
                .field("value", &self.value)
                .finish()
        }
    }
}

// FUTURE: We should consider using Mara Bos's nearly-lock-free version of this
// here: https://gist.github.com/m-ou-se/5fdcbdf7dcf4585199ce2de697f367a4.
//
// One reason why I did things with a "mutex" below is that it isolates the
// safety concerns to just the Mutex, whereas the safety of Mara's pool is a
// bit more sprawling. I also expect this code to not be used that much, and
// so it is unlikely to get as much real world usage with which to test it.
// That means the "obviously correct" lever is an important one.
//
// The specific reason to use Mara's pool is that it is likely faster and also
// less likely to hit problems with spin-locks, although it is not completely
// impervious to them.
//
// The best solution to this problem, probably, is a truly lock free pool. That
// could be done with a lock free linked list. The issue is the ABA problem. It
// is difficult to avoid, and doing so is complex. BUT, the upshot of that is
// that if we had a truly lock free pool, then we could also use it above in
// the 'std' pool instead of a Mutex because it should be completely free of
// the problems that come from spin-locks.
#[cfg(not(feature = "std"))]
mod inner {
    use core::{
        cell::UnsafeCell,
        panic::{RefUnwindSafe, UnwindSafe},
        sync::atomic::{AtomicBool, Ordering},
    };

    use alloc::{boxed::Box, vec, vec::Vec};

    /// A thread safe pool utilizing alloc-only features.
    ///
    /// Unlike the std version, it doesn't seem possible(?) to implement the
    /// "thread owner" optimization because alloc-only doesn't have any concept
    /// of threads. So the best we can do is just a normal stack. This will
    /// increase latency in alloc-only environments.
    pub(super) struct Pool<T, F> {
        /// A stack of T values to hand out. Since there is no "thread owner"
        /// optimization here, every access goes through this stack.
        stack: Mutex<Vec<Box<T>>>,
        /// A function to create more T values when stack is empty and a caller
        /// has requested a T.
        create: F,
    }

    // If T is UnwindSafe, then since we provide exclusive access to any
    // particular value in the pool, it should therefore also be considered
    // RefUnwindSafe.
    impl<T: UnwindSafe, F: UnwindSafe> RefUnwindSafe for Pool<T, F> {}

    impl<T, F> Pool<T, F> {
        /// Create a new pool. The given closure is used to create values in
        /// the pool when necessary.
        pub(super) const fn new(create: F) -> Pool<T, F> {
            Pool { stack: Mutex::new(vec![]), create }
        }
    }

    impl<T: Send, F: Fn() -> T> Pool<T, F> {
        /// Get a value from the pool. This may block if another thread is also
        /// attempting to retrieve a value from the pool.
        #[inline]
        pub(super) fn get(&self) -> PoolGuard<'_, T, F> {
            let mut stack = self.stack.lock();
            let value = match stack.pop() {
                None => Box::new((self.create)()),
                Some(value) => value,
            };
            PoolGuard { pool: self, value: Some(value) }
        }

        #[inline]
        fn put(&self, guard: PoolGuard<'_, T, F>) {
            let mut guard = core::mem::ManuallyDrop::new(guard);
            if let Some(value) = guard.value.take() {
                self.put_value(value);
            }
        }

        /// Puts a value back into the pool. Callers don't need to call this.
        /// Once the guard that's returned by 'get' is dropped, it is put back
        /// into the pool automatically.
        #[inline]
        fn put_value(&self, value: Box<T>) {
            let mut stack = self.stack.lock();
            stack.push(value);
        }
    }

    impl<T: core::fmt::Debug, F> core::fmt::Debug for Pool<T, F> {
        fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
            f.debug_struct("Pool").field("stack", &self.stack).finish()
        }
    }

    /// A guard that is returned when a caller requests a value from the pool.
    pub(super) struct PoolGuard<'a, T: Send, F: Fn() -> T> {
        /// The pool that this guard is attached to.
        pool: &'a Pool<T, F>,
        /// This is None after the guard has been put back into the pool.
        value: Option<Box<T>>,
    }

    impl<'a, T: Send, F: Fn() -> T> PoolGuard<'a, T, F> {
        /// Return the underlying value.
        #[inline]
        pub(super) fn value(&self) -> &T {
            self.value.as_deref().unwrap()
        }

        /// Return the underlying value as a mutable borrow.
        #[inline]
        pub(super) fn value_mut(&mut self) -> &mut T {
            self.value.as_deref_mut().unwrap()
        }

        /// Consumes this guard and puts it back into the pool.
        #[inline]
        pub(super) fn put(this: PoolGuard<'_, T, F>) {
            // Since this is effectively consuming the guard and putting the
            // value back into the pool, there's no reason to run its Drop
            // impl after doing this. I don't believe there is a correctness
            // problem with doing so, but there's definitely a perf problem
            // by redoing this work. So we avoid it.
            let mut this = core::mem::ManuallyDrop::new(this);
            this.put_imp();
        }

        /// Puts this guard back into the pool by only borrowing the guard as
        /// mutable. This should be called at most once.
        #[inline(always)]
        fn put_imp(&mut self) {
            if let Some(value) = self.value.take() {
                self.pool.put_value(value);
            }
        }
    }

    impl<'a, T: Send, F: Fn() -> T> Drop for PoolGuard<'a, T, F> {
        #[inline]
        fn drop(&mut self) {
            self.put_imp();
        }
    }

    impl<'a, T: Send + core::fmt::Debug, F: Fn() -> T> core::fmt::Debug
        for PoolGuard<'a, T, F>
    {
        fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
            f.debug_struct("PoolGuard")
                .field("pool", &self.pool)
                .field("value", &self.value)
                .finish()
        }
    }

    /// A spin-lock based mutex. Yes, I have read spinlocks considered
    /// harmful[1], and if there's a reasonable alternative choice, I'll
    /// happily take it.
    ///
    /// I suspect the most likely alternative here is a Treiber stack, but
    /// implementing one correctly in a way that avoids the ABA problem looks
    /// subtle enough that I'm not sure I want to attempt that. But otherwise,
    /// we only need a mutex in order to implement our pool, so if there's
    /// something simpler we can use that works for our `Pool` use case, then
    /// that would be great.
    ///
    /// Note that this mutex does not do poisoning.
    ///
    /// [1]: https://matklad.github.io/2020/01/02/spinlocks-considered-harmful.html
    #[derive(Debug)]
    struct Mutex<T> {
        locked: AtomicBool,
        data: UnsafeCell<T>,
    }

    // SAFETY: Since a Mutex guarantees exclusive access, as long as we can
    // send it across threads, it must also be Sync.
    unsafe impl<T: Send> Sync for Mutex<T> {}

    impl<T> Mutex<T> {
        /// Create a new mutex for protecting access to the given value across
        /// multiple threads simultaneously.
        const fn new(value: T) -> Mutex<T> {
            Mutex {
                locked: AtomicBool::new(false),
                data: UnsafeCell::new(value),
            }
        }

        /// Lock this mutex and return a guard providing exclusive access to
        /// `T`. This blocks if some other thread has already locked this
        /// mutex.
        #[inline]
        fn lock(&self) -> MutexGuard<'_, T> {
            while self
                .locked
                .compare_exchange(
                    false,
                    true,
                    Ordering::AcqRel,
                    Ordering::Acquire,
                )
                .is_err()
            {
                core::hint::spin_loop();
            }
            // SAFETY: The only way we're here is if we successfully set
            // 'locked' to true, which implies we must be the only thread here
            // and thus have exclusive access to 'data'.
            let data = unsafe { &mut *self.data.get() };
            MutexGuard { locked: &self.locked, data }
        }
    }

    /// A guard that derefs to &T and &mut T. When it's dropped, the lock is
    /// released.
    #[derive(Debug)]
    struct MutexGuard<'a, T> {
        locked: &'a AtomicBool,
        data: &'a mut T,
    }

    impl<'a, T> core::ops::Deref for MutexGuard<'a, T> {
        type Target = T;

        #[inline]
        fn deref(&self) -> &T {
            self.data
        }
    }

    impl<'a, T> core::ops::DerefMut for MutexGuard<'a, T> {
        #[inline]
        fn deref_mut(&mut self) -> &mut T {
            self.data
        }
    }

    impl<'a, T> Drop for MutexGuard<'a, T> {
        #[inline]
        fn drop(&mut self) {
            // Drop means 'data' is no longer accessible, so we can unlock
            // the mutex.
            self.locked.store(false, Ordering::Release);
        }
    }
}

#[cfg(test)]
mod tests {
    use core::panic::{RefUnwindSafe, UnwindSafe};

    use alloc::{boxed::Box, vec, vec::Vec};

    use super::*;

    #[test]
    fn oibits() {
        fn assert_oibits<T: Send + Sync + UnwindSafe + RefUnwindSafe>() {}
        assert_oibits::<Pool<Vec<u32>>>();
        assert_oibits::<Pool<core::cell::RefCell<Vec<u32>>>>();
        assert_oibits::<
            Pool<
                Vec<u32>,
                Box<
                    dyn Fn() -> Vec<u32>
                        + Send
                        + Sync
                        + UnwindSafe
                        + RefUnwindSafe,
                >,
            >,
        >();
    }

    // Tests that Pool implements the "single owner" optimization. That is, the
    // thread that first accesses the pool gets its own copy, while all other
    // threads get distinct copies.
    #[cfg(feature = "std")]
    #[test]
    fn thread_owner_optimization() {
        use std::{cell::RefCell, sync::Arc, vec};

        let pool: Arc<Pool<RefCell<Vec<char>>>> =
            Arc::new(Pool::new(|| RefCell::new(vec!['a'])));
        pool.get().borrow_mut().push('x');

        let pool1 = pool.clone();
        let t1 = std::thread::spawn(move || {
            let guard = pool1.get();
            guard.borrow_mut().push('y');
        });

        let pool2 = pool.clone();
        let t2 = std::thread::spawn(move || {
            let guard = pool2.get();
            guard.borrow_mut().push('z');
        });

        t1.join().unwrap();
        t2.join().unwrap();

        // If we didn't implement the single owner optimization, then one of
        // the threads above is likely to have mutated the [a, x] vec that
        // we stuffed in the pool before spawning the threads. But since
        // neither thread was first to access the pool, and because of the
        // optimization, we should be guaranteed that neither thread mutates
        // the special owned pool value.
        //
        // (Technically this is an implementation detail and not a contract of
        // Pool's API.)
        assert_eq!(vec!['a', 'x'], *pool.get().borrow());
    }

    // This tests that if the "owner" of a pool asks for two values, then it
    // gets two distinct values and not the same one. This test failed in the
    // course of developing the pool, which in turn resulted in UB because it
    // permitted getting aliasing &mut borrows to the same place in memory.
    #[test]
    fn thread_owner_distinct() {
        let pool = Pool::new(|| vec!['a']);

        {
            let mut g1 = pool.get();
            let v1 = &mut *g1;
            let mut g2 = pool.get();
            let v2 = &mut *g2;
            v1.push('b');
            v2.push('c');
            assert_eq!(&mut vec!['a', 'b'], v1);
            assert_eq!(&mut vec!['a', 'c'], v2);
        }
        // This isn't technically guaranteed, but we expect to now get the
        // "owned" value (the first call to 'get()' above) now that it's back
        // in the pool.
        assert_eq!(&mut vec!['a', 'b'], &mut *pool.get());
    }

    // This tests that we can share a guard with another thread, mutate the
    // underlying value and everything works. This failed in the course of
    // developing a pool since the pool permitted 'get()' to return the same
    // value to the owner thread, even before the previous value was put back
    // into the pool. This in turn resulted in this test producing a data race.
    #[cfg(feature = "std")]
    #[test]
    fn thread_owner_sync() {
        let pool = Pool::new(|| vec!['a']);
        {
            let mut g1 = pool.get();
            let mut g2 = pool.get();
            std::thread::scope(|s| {
                s.spawn(|| {
                    g1.push('b');
                });
                s.spawn(|| {
                    g2.push('c');
                });
            });

            let v1 = &mut *g1;
            let v2 = &mut *g2;
            assert_eq!(&mut vec!['a', 'b'], v1);
            assert_eq!(&mut vec!['a', 'c'], v2);
        }

        // This isn't technically guaranteed, but we expect to now get the
        // "owned" value (the first call to 'get()' above) now that it's back
        // in the pool.
        assert_eq!(&mut vec!['a', 'b'], &mut *pool.get());
    }

    // This tests that if we move a PoolGuard that is owned by the current
    // thread to another thread and drop it, then the thread owner doesn't
    // change. During development of the pool, this test failed because the
    // PoolGuard assumed it was dropped in the same thread from which it was
    // created, and thus used the current thread's ID as the owner, which could
    // be different than the actual owner of the pool.
    #[cfg(feature = "std")]
    #[test]
    fn thread_owner_send_drop() {
        let pool = Pool::new(|| vec!['a']);
        // Establishes this thread as the owner.
        {
            pool.get().push('b');
        }
        std::thread::scope(|s| {
            // Sanity check that we get the same value back.
            // (Not technically guaranteed.)
            let mut g = pool.get();
            assert_eq!(&vec!['a', 'b'], &*g);
            // Now push it to another thread and drop it.
            s.spawn(move || {
                g.push('c');
            })
            .join()
            .unwrap();
        });
        // Now check that we're still the owner. This is not technically
        // guaranteed by the API, but is true in practice given the thread
        // owner optimization.
        assert_eq!(&vec!['a', 'b', 'c'], &*pool.get());
    }
}