root/jsr166/jsr166/src/main/java/util/concurrent/SynchronousQueue.java
Revision: 1.104
Committed: Thu Aug 28 12:59:36 2014 UTC by jsr166
Branch: MAIN
Changes since 1.103: +0 -13 lines
Log Message:
delete orphaned objectFieldOffset method

1 /*
2 * Written by Doug Lea, Bill Scherer, and Michael Scott with
3 * assistance from members of JCP JSR-166 Expert Group and released to
4 * the public domain, as explained at
5 * http://creativecommons.org/publicdomain/zero/1.0/
6 */
7
8 package java.util.concurrent;
9 import java.util.concurrent.locks.LockSupport;
10 import java.util.concurrent.locks.ReentrantLock;
11 import java.util.*;
12 import java.util.Spliterator;
13 import java.util.Spliterators;
14 import java.util.stream.Stream;
15 import java.util.function.Consumer;
16
17 /**
18 * A {@linkplain BlockingQueue blocking queue} in which each insert
19 * operation must wait for a corresponding remove operation by another
20 * thread, and vice versa. A synchronous queue does not have any
21 * internal capacity, not even a capacity of one. You cannot
22 * {@code peek} at a synchronous queue because an element is only
23 * present when you try to remove it; you cannot insert an element
24 * (using any method) unless another thread is trying to remove it;
25 * you cannot iterate as there is nothing to iterate. The
26 * <em>head</em> of the queue is the element that the first queued
27 * inserting thread is trying to add to the queue; if there is no such
28 * queued thread then no element is available for removal and
29 * {@code poll()} will return {@code null}. For purposes of other
30 * {@code Collection} methods (for example {@code contains}), a
31 * {@code SynchronousQueue} acts as an empty collection. This queue
32 * does not permit {@code null} elements.
33 *
34 * <p>Synchronous queues are similar to rendezvous channels used in
35 * CSP and Ada. They are well suited for handoff designs, in which an
36 * object running in one thread must sync up with an object running
37 * in another thread in order to hand it some information, event, or
38 * task.
39 *
40 * <p>This class supports an optional fairness policy for ordering
41 * waiting producer and consumer threads. By default, this ordering
42 * is not guaranteed. However, a queue constructed with fairness set
43 * to {@code true} grants threads access in FIFO order.
44 *
45 * <p>This class and its iterator implement all of the
46 * <em>optional</em> methods of the {@link Collection} and {@link
47 * Iterator} interfaces.
48 *
49 * <p>This class is a member of the
50 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
51 * Java Collections Framework</a>.
52 *
53 * @since 1.5
54 * @author Doug Lea and Bill Scherer and Michael Scott
55 * @param <E> the type of elements held in this collection
56 */
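As a usage sketch of the handoff behavior described in the javadoc above (class and variable names here are illustrative, not part of this file):

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        // The producer blocks in put() until a consumer arrives to take().
        Thread producer = new Thread(() -> {
            try {
                queue.put("hello");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        String item = queue.take(); // rendezvous: unblocks the producer
        producer.join();
        System.out.println(item);   // prints "hello"
    }
}
```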
57 public class SynchronousQueue<E> extends AbstractQueue<E>
58 implements BlockingQueue<E>, java.io.Serializable {
59 private static final long serialVersionUID = -3223113410248163686L;
60
61 /*
62 * This class implements extensions of the dual stack and dual
63 * queue algorithms described in "Nonblocking Concurrent Objects
64 * with Condition Synchronization", by W. N. Scherer III and
65 * M. L. Scott. 18th Annual Conf. on Distributed Computing,
66 * Oct. 2004 (see also
67 * http://www.cs.rochester.edu/u/scott/synchronization/pseudocode/duals.html).
68 * The (Lifo) stack is used for non-fair mode, and the (Fifo)
69 * queue for fair mode. The performance of the two is generally
70 * similar. Fifo usually supports higher throughput under
71 * contention but Lifo maintains higher thread locality in common
72 * applications.
73 *
74 * A dual queue (and similarly stack) is one that at any given
75 * time either holds "data" -- items provided by put operations,
76 * or "requests" -- slots representing take operations, or is
77 * empty. A call to "fulfill" (i.e., a call requesting an item
78 * from a queue holding data or vice versa) dequeues a
79 * complementary node. The most interesting feature of these
80 * queues is that any operation can figure out which mode the
81 * queue is in, and act accordingly without needing locks.
82 *
83 * Both the queue and stack extend abstract class Transferer
84 * defining the single method transfer that does a put or a
85 * take. These are unified into a single method because in dual
86 * data structures, the put and take operations are symmetrical,
87 * so nearly all code can be combined. The resulting transfer
88 * methods are on the long side, but are easier to follow than
89 * they would be if broken up into nearly-duplicated parts.
90 *
91 * The queue and stack data structures share many conceptual
92 * similarities but very few concrete details. For simplicity,
93 * they are kept distinct so that they can later evolve
94 * separately.
95 *
96 * The algorithms here differ from the versions in the above paper
97 * in extending them for use in synchronous queues, as well as
98 * dealing with cancellation. The main differences include:
99 *
100 * 1. The original algorithms used bit-marked pointers, but
101 * the ones here use mode bits in nodes, leading to a number
102 * of further adaptations.
103 * 2. SynchronousQueues must block threads waiting to become
104 * fulfilled.
105 * 3. Support for cancellation via timeout and interrupts,
106 * including cleaning out cancelled nodes/threads
107 * from lists to avoid garbage retention and memory depletion.
108 *
109 * Blocking is mainly accomplished using LockSupport park/unpark,
110 * except that nodes that appear to be the next ones to become
111 * fulfilled first spin a bit (on multiprocessors only). On very
112 * busy synchronous queues, spinning can dramatically improve
113 * throughput. And on less busy ones, the amount of spinning is
114 * small enough not to be noticeable.
115 *
116 * Cleaning is done in different ways in queues vs stacks. For
117 * queues, we can almost always remove a node immediately in O(1)
118 * time (modulo retries for consistency checks) when it is
119 * cancelled. But if it may be pinned as the current tail, it must
120 * wait until some subsequent cancellation. For stacks, we need a
121 * potentially O(n) traversal to be sure that we can remove the
122 * node, but this can run concurrently with other threads
123 * accessing the stack.
124 *
125 * While garbage collection takes care of most node reclamation
126 * issues that otherwise complicate nonblocking algorithms, care
127 * is taken to "forget" references to data, other nodes, and
128 * threads that might be held on to long-term by blocked
129 * threads. In cases where setting to null would otherwise
130 * conflict with main algorithms, this is done by changing a
131 * node's link to now point to the node itself. This doesn't arise
132 * much for Stack nodes (because blocked threads do not hang on to
133 * old head pointers), but references in Queue nodes must be
134 * aggressively forgotten to avoid reachability of everything any
135 * node has ever referred to since arrival.
136 */
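The lock-free dual structures described above provide rendezvous semantics that are easier to see in a deliberately simplified, lock-based sketch. The class below is hypothetical illustration only, not part of this implementation, and omits fairness, timeouts, and cancellation:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Single-slot rendezvous channel: put() blocks until a take() arrives and
// vice versa -- the semantics the dual stack/queue provide without locks.
class LockBasedRendezvous<E> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition changed = lock.newCondition();
    private E item; // non-null only while a producer waits to be taken

    public void put(E e) throws InterruptedException {
        if (e == null) throw new NullPointerException();
        lock.lock();
        try {
            while (item != null)   // slot busy: another producer is waiting
                changed.await();
            item = e;
            changed.signalAll();
            while (item == e)      // wait until a consumer removes our item
                changed.await();
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            E e;
            while ((e = item) == null) // wait for a producer
                changed.await();
            item = null;               // claim it; unblocks the producer
            changed.signalAll();
            return e;
        } finally {
            lock.unlock();
        }
    }
}
```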
137
138 /**
139 * Shared internal API for dual stacks and queues.
140 */
141 abstract static class Transferer<E> {
142 /**
143 * Performs a put or take.
144 *
145 * @param e if non-null, the item to be handed to a consumer;
146 * if null, requests that transfer return an item
147          * offered by a producer.
148 * @param timed if this operation should timeout
149 * @param nanos the timeout, in nanoseconds
150 * @return if non-null, the item provided or received; if null,
151 * the operation failed due to timeout or interrupt --
152 * the caller can distinguish which of these occurred
153 * by checking Thread.interrupted.
154 */
155 abstract E transfer(E e, boolean timed, long nanos);
156 }
157
158 /** The number of CPUs, for spin control */
159 static final int NCPUS = Runtime.getRuntime().availableProcessors();
160
161 /**
162 * The number of times to spin before blocking in timed waits.
163 * The value is empirically derived -- it works well across a
164 * variety of processors and OSes. Empirically, the best value
165 * seems not to vary with number of CPUs (beyond 2) so is just
166 * a constant.
167 */
168 static final int maxTimedSpins = (NCPUS < 2) ? 0 : 32;
169
170 /**
171 * The number of times to spin before blocking in untimed waits.
172      * This is greater than the timed value because untimed waits spin
173      * faster, since they don't need to check the time on each spin.
174 */
175 static final int maxUntimedSpins = maxTimedSpins * 16;
176
177 /**
178 * The number of nanoseconds for which it is faster to spin
179 * rather than to use timed park. A rough estimate suffices.
180 */
181 static final long spinForTimeoutThreshold = 1000L;
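The spin-then-park pattern these constants tune can be shown in isolation. This is a simplified, standalone sketch (not the actual waiting loops below): spin a bounded number of times hoping a value arrives soon, then fall back to parking until unparked.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

class SpinThenPark {
    static final int MAX_SPINS = 32; // analogous to maxTimedSpins

    // Wait for a non-null value: spin briefly, then park. The loop rechecks
    // after every park, so spurious wakeups are harmless.
    static <T> T await(AtomicReference<T> slot) {
        int spins = MAX_SPINS;
        T v;
        while ((v = slot.get()) == null) {
            if (spins > 0)
                --spins;            // busy-wait briefly
            else
                LockSupport.park(); // then block; producer unparks us
        }
        return v;
    }
}
```

A producer publishes with `slot.set(v)` followed by `LockSupport.unpark(waiter)`, mirroring how fulfillers here write the match and then unpark the waiter.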
182
183 /** Dual stack */
184 static final class TransferStack<E> extends Transferer<E> {
185 /*
186 * This extends Scherer-Scott dual stack algorithm, differing,
187 * among other ways, by using "covering" nodes rather than
188 * bit-marked pointers: Fulfilling operations push on marker
189 * nodes (with FULFILLING bit set in mode) to reserve a spot
190 * to match a waiting node.
191 */
192
193 /* Modes for SNodes, ORed together in node fields */
194 /** Node represents an unfulfilled consumer */
195 static final int REQUEST = 0;
196 /** Node represents an unfulfilled producer */
197 static final int DATA = 1;
198 /** Node is fulfilling another unfulfilled DATA or REQUEST */
199 static final int FULFILLING = 2;
200
201 /** Returns true if m has fulfilling bit set. */
202 static boolean isFulfilling(int m) { return (m & FULFILLING) != 0; }
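The mode encoding can be exercised directly; a fulfilling node carries FULFILLING ORed with the mode it is fulfilling. A standalone copy of the constants above:

```java
public class ModeBits {
    static final int REQUEST = 0, DATA = 1, FULFILLING = 2;

    static boolean isFulfilling(int m) { return (m & FULFILLING) != 0; }

    public static void main(String[] args) {
        System.out.println(isFulfilling(FULFILLING | DATA));    // true
        System.out.println(isFulfilling(FULFILLING | REQUEST)); // true
        System.out.println(isFulfilling(DATA));                 // false
        System.out.println(isFulfilling(REQUEST));              // false
    }
}
```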
203
204 /** Node class for TransferStacks. */
205 static final class SNode {
206 volatile SNode next; // next node in stack
207 volatile SNode match; // the node matched to this
208 volatile Thread waiter; // to control park/unpark
209 Object item; // data; or null for REQUESTs
210 int mode;
211 // Note: item and mode fields don't need to be volatile
212 // since they are always written before, and read after,
213 // other volatile/atomic operations.
214
215 SNode(Object item) {
216 this.item = item;
217 }
218
219 boolean casNext(SNode cmp, SNode val) {
220 return cmp == next &&
221 UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
222 }
223
224 /**
225              * Tries to match node s to this node and, if so, wakes up its thread.
226 * Fulfillers call tryMatch to identify their waiters.
227 * Waiters block until they have been matched.
228 *
229 * @param s the node to match
230 * @return true if successfully matched to s
231 */
232 boolean tryMatch(SNode s) {
233 if (match == null &&
234 UNSAFE.compareAndSwapObject(this, matchOffset, null, s)) {
235 Thread w = waiter;
236 if (w != null) { // waiters need at most one unpark
237 waiter = null;
238 LockSupport.unpark(w);
239 }
240 return true;
241 }
242 return match == s;
243 }
244
245 /**
246 * Tries to cancel a wait by matching node to itself.
247 */
248 void tryCancel() {
249 UNSAFE.compareAndSwapObject(this, matchOffset, null, this);
250 }
251
252 boolean isCancelled() {
253 return match == this;
254 }
255
256 // Unsafe mechanics
257 private static final sun.misc.Unsafe UNSAFE;
258 private static final long matchOffset;
259 private static final long nextOffset;
260
261 static {
262 try {
263 UNSAFE = sun.misc.Unsafe.getUnsafe();
264 Class<?> k = SNode.class;
265 matchOffset = UNSAFE.objectFieldOffset
266 (k.getDeclaredField("match"));
267 nextOffset = UNSAFE.objectFieldOffset
268 (k.getDeclaredField("next"));
269 } catch (Exception e) {
270 throw new Error(e);
271 }
272 }
273 }
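The cancel-by-self-reference idiom in tryCancel/isCancelled above can be shown with a plain AtomicReference. This sketch (a hypothetical standalone class, not SNode itself) uses the node as its own cancellation sentinel so that one CAS settles the node's fate exactly once:

```java
import java.util.concurrent.atomic.AtomicReference;

class CancellableNode {
    // null = waiting; another node = matched; this node = cancelled
    final AtomicReference<CancellableNode> match = new AtomicReference<>();

    boolean tryMatch(CancellableNode s) {
        return match.compareAndSet(null, s) || match.get() == s;
    }

    void tryCancel() {
        match.compareAndSet(null, this); // no-op if already matched
    }

    boolean isCancelled() {
        return match.get() == this;
    }
}
```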
274
275 /** The head (top) of the stack */
276 volatile SNode head;
277
278 boolean casHead(SNode h, SNode nh) {
279 return h == head &&
280 UNSAFE.compareAndSwapObject(this, headOffset, h, nh);
281 }
282
283 /**
284 * Creates or resets fields of a node. Called only from transfer
285 * where the node to push on stack is lazily created and
286 * reused when possible to help reduce intervals between reads
287 * and CASes of head and to avoid surges of garbage when CASes
288 * to push nodes fail due to contention.
289 */
290 static SNode snode(SNode s, Object e, SNode next, int mode) {
291 if (s == null) s = new SNode(e);
292 s.mode = mode;
293 s.next = next;
294 return s;
295 }
296
297 /**
298 * Puts or takes an item.
299 */
300 @SuppressWarnings("unchecked")
301 E transfer(E e, boolean timed, long nanos) {
302 /*
303 * Basic algorithm is to loop trying one of three actions:
304 *
305 * 1. If apparently empty or already containing nodes of same
306 * mode, try to push node on stack and wait for a match,
307 * returning it, or null if cancelled.
308 *
309 * 2. If apparently containing node of complementary mode,
310 * try to push a fulfilling node on to stack, match
311 * with corresponding waiting node, pop both from
312 * stack, and return matched item. The matching or
313 * unlinking might not actually be necessary because of
314 * other threads performing action 3:
315 *
316 * 3. If top of stack already holds another fulfilling node,
317 * help it out by doing its match and/or pop
318 * operations, and then continue. The code for helping
319 * is essentially the same as for fulfilling, except
320 * that it doesn't return the item.
321 */
322
323 SNode s = null; // constructed/reused as needed
324 int mode = (e == null) ? REQUEST : DATA;
325
326 for (;;) {
327 SNode h = head;
328 if (h == null || h.mode == mode) { // empty or same-mode
329 if (timed && nanos <= 0) { // can't wait
330 if (h != null && h.isCancelled())
331 casHead(h, h.next); // pop cancelled node
332 else
333 return null;
334 } else if (casHead(h, s = snode(s, e, h, mode))) {
335 SNode m = awaitFulfill(s, timed, nanos);
336 if (m == s) { // wait was cancelled
337 clean(s);
338 return null;
339 }
340 if ((h = head) != null && h.next == s)
341 casHead(h, s.next); // help s's fulfiller
342 return (E) ((mode == REQUEST) ? m.item : s.item);
343 }
344 } else if (!isFulfilling(h.mode)) { // try to fulfill
345 if (h.isCancelled()) // already cancelled
346 casHead(h, h.next); // pop and retry
347 else if (casHead(h, s=snode(s, e, h, FULFILLING|mode))) {
348 for (;;) { // loop until matched or waiters disappear
349 SNode m = s.next; // m is s's match
350 if (m == null) { // all waiters are gone
351 casHead(s, null); // pop fulfill node
352 s = null; // use new node next time
353 break; // restart main loop
354 }
355 SNode mn = m.next;
356 if (m.tryMatch(s)) {
357 casHead(s, mn); // pop both s and m
358 return (E) ((mode == REQUEST) ? m.item : s.item);
359 } else // lost match
360 s.casNext(m, mn); // help unlink
361 }
362 }
363 } else { // help a fulfiller
364 SNode m = h.next; // m is h's match
365 if (m == null) // waiter is gone
366 casHead(h, null); // pop fulfilling node
367 else {
368 SNode mn = m.next;
369 if (m.tryMatch(h)) // help match
370 casHead(h, mn); // pop both h and m
371 else // lost match
372 h.casNext(m, mn); // help unlink
373 }
374 }
375 }
376 }
377
378 /**
379 * Spins/blocks until node s is matched by a fulfill operation.
380 *
381 * @param s the waiting node
382 * @param timed true if timed wait
383 * @param nanos timeout value
384 * @return matched node, or s if cancelled
385 */
386 SNode awaitFulfill(SNode s, boolean timed, long nanos) {
387 /*
388 * When a node/thread is about to block, it sets its waiter
389 * field and then rechecks state at least one more time
390 * before actually parking, thus covering race vs
391 * fulfiller noticing that waiter is non-null so should be
392 * woken.
393 *
394 * When invoked by nodes that appear at the point of call
395 * to be at the head of the stack, calls to park are
396 * preceded by spins to avoid blocking when producers and
397 * consumers are arriving very close in time. This can
398 * happen enough to bother only on multiprocessors.
399 *
400 * The order of checks for returning out of main loop
401 * reflects fact that interrupts have precedence over
402 * normal returns, which have precedence over
403 * timeouts. (So, on timeout, one last check for match is
404 * done before giving up.) Except that calls from untimed
405 * SynchronousQueue.{poll/offer} don't check interrupts
406 * and don't wait at all, so are trapped in transfer
407 * method rather than calling awaitFulfill.
408 */
409 final long deadline = timed ? System.nanoTime() + nanos : 0L;
410 Thread w = Thread.currentThread();
411 int spins = (shouldSpin(s) ?
412 (timed ? maxTimedSpins : maxUntimedSpins) : 0);
413 for (;;) {
414 if (w.isInterrupted())
415 s.tryCancel();
416 SNode m = s.match;
417 if (m != null)
418 return m;
419 if (timed) {
420 nanos = deadline - System.nanoTime();
421 if (nanos <= 0L) {
422 s.tryCancel();
423 continue;
424 }
425 }
426 if (spins > 0)
427 spins = shouldSpin(s) ? (spins-1) : 0;
428 else if (s.waiter == null)
429 s.waiter = w; // establish waiter so can park next iter
430 else if (!timed)
431 LockSupport.park(this);
432 else if (nanos > spinForTimeoutThreshold)
433 LockSupport.parkNanos(this, nanos);
434 }
435 }
436
437 /**
438 * Returns true if node s is at head or there is an active
439 * fulfiller.
440 */
441 boolean shouldSpin(SNode s) {
442 SNode h = head;
443 return (h == s || h == null || isFulfilling(h.mode));
444 }
445
446 /**
447 * Unlinks s from the stack.
448 */
449 void clean(SNode s) {
450 s.item = null; // forget item
451 s.waiter = null; // forget thread
452
453 /*
454 * At worst we may need to traverse entire stack to unlink
455 * s. If there are multiple concurrent calls to clean, we
456 * might not see s if another thread has already removed
457 * it. But we can stop when we see any node known to
458 * follow s. We use s.next unless it too is cancelled, in
459 * which case we try the node one past. We don't check any
460 * further because we don't want to doubly traverse just to
461 * find sentinel.
462 */
463
464 SNode past = s.next;
465 if (past != null && past.isCancelled())
466 past = past.next;
467
468 // Absorb cancelled nodes at head
469 SNode p;
470 while ((p = head) != null && p != past && p.isCancelled())
471 casHead(p, p.next);
472
473 // Unsplice embedded nodes
474 while (p != null && p != past) {
475 SNode n = p.next;
476 if (n != null && n.isCancelled())
477 p.casNext(n, n.next);
478 else
479 p = n;
480 }
481 }
482
483 // Unsafe mechanics
484 private static final sun.misc.Unsafe UNSAFE;
485 private static final long headOffset;
486 static {
487 try {
488 UNSAFE = sun.misc.Unsafe.getUnsafe();
489 Class<?> k = TransferStack.class;
490 headOffset = UNSAFE.objectFieldOffset
491 (k.getDeclaredField("head"));
492 } catch (Exception e) {
493 throw new Error(e);
494 }
495 }
496 }
497
498 /** Dual Queue */
499 static final class TransferQueue<E> extends Transferer<E> {
500 /*
501 * This extends Scherer-Scott dual queue algorithm, differing,
502 * among other ways, by using modes within nodes rather than
503 * marked pointers. The algorithm is a little simpler than
504 * that for stacks because fulfillers do not need explicit
505 * nodes, and matching is done by CAS'ing QNode.item field
506 * from non-null to null (for put) or vice versa (for take).
507 */
508
509 /** Node class for TransferQueue. */
510 static final class QNode {
511 volatile QNode next; // next node in queue
512 volatile Object item; // CAS'ed to or from null
513 volatile Thread waiter; // to control park/unpark
514 final boolean isData;
515
516 QNode(Object item, boolean isData) {
517 this.item = item;
518 this.isData = isData;
519 }
520
521 boolean casNext(QNode cmp, QNode val) {
522 return next == cmp &&
523 UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
524 }
525
526 boolean casItem(Object cmp, Object val) {
527 return item == cmp &&
528 UNSAFE.compareAndSwapObject(this, itemOffset, cmp, val);
529 }
530
531 /**
532 * Tries to cancel by CAS'ing ref to this as item.
533 */
534 void tryCancel(Object cmp) {
535 UNSAFE.compareAndSwapObject(this, itemOffset, cmp, this);
536 }
537
538 boolean isCancelled() {
539 return item == this;
540 }
541
542 /**
543 * Returns true if this node is known to be off the queue
544 * because its next pointer has been forgotten due to
545 * an advanceHead operation.
546 */
547 boolean isOffList() {
548 return next == this;
549 }
550
551 // Unsafe mechanics
552 private static final sun.misc.Unsafe UNSAFE;
553 private static final long itemOffset;
554 private static final long nextOffset;
555
556 static {
557 try {
558 UNSAFE = sun.misc.Unsafe.getUnsafe();
559 Class<?> k = QNode.class;
560 itemOffset = UNSAFE.objectFieldOffset
561 (k.getDeclaredField("item"));
562 nextOffset = UNSAFE.objectFieldOffset
563 (k.getDeclaredField("next"));
564 } catch (Exception e) {
565 throw new Error(e);
566 }
567 }
568 }
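Matching in the dual queue is a single CAS on the item field, as the comments above describe: a take fulfills a data node by CASing item from the datum to null, and a put fulfills a request node by CASing null to the datum. A minimal illustration with bare AtomicReferences (not QNode itself):

```java
import java.util.concurrent.atomic.AtomicReference;

public class QNodeMatching {
    public static void main(String[] args) {
        // Data node: item non-null; a take claims it by CASing item -> null.
        AtomicReference<Object> data = new AtomicReference<>("payload");
        System.out.println(data.compareAndSet("payload", null)); // true: claimed
        System.out.println(data.compareAndSet("payload", null)); // false: already taken

        // Request node: item null; a put fulfills it by CASing null -> datum.
        AtomicReference<Object> request = new AtomicReference<>(null);
        System.out.println(request.compareAndSet(null, "payload")); // true
    }
}
```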
569
570 /** Head of queue */
571 transient volatile QNode head;
572 /** Tail of queue */
573 transient volatile QNode tail;
574 /**
575 * Reference to a cancelled node that might not yet have been
576 * unlinked from queue because it was the last inserted node
577 * when it was cancelled.
578 */
579 transient volatile QNode cleanMe;
580
581 TransferQueue() {
582 QNode h = new QNode(null, false); // initialize to dummy node.
583 head = h;
584 tail = h;
585 }
586
587 /**
588          * Tries to CAS nh as new head; if successful, unlink
589 * old head's next node to avoid garbage retention.
590 */
591 void advanceHead(QNode h, QNode nh) {
592 if (h == head &&
593 UNSAFE.compareAndSwapObject(this, headOffset, h, nh))
594 h.next = h; // forget old next
595 }
596
597 /**
598 * Tries to cas nt as new tail.
599 */
600 void advanceTail(QNode t, QNode nt) {
601 if (tail == t)
602 UNSAFE.compareAndSwapObject(this, tailOffset, t, nt);
603 }
604
605 /**
606 * Tries to CAS cleanMe slot.
607 */
608 boolean casCleanMe(QNode cmp, QNode val) {
609 return cleanMe == cmp &&
610 UNSAFE.compareAndSwapObject(this, cleanMeOffset, cmp, val);
611 }
612
613 /**
614 * Puts or takes an item.
615 */
616 @SuppressWarnings("unchecked")
617 E transfer(E e, boolean timed, long nanos) {
618 /* Basic algorithm is to loop trying to take either of
619 * two actions:
620 *
621 * 1. If queue apparently empty or holding same-mode nodes,
622 * try to add node to queue of waiters, wait to be
623 * fulfilled (or cancelled) and return matching item.
624 *
625 * 2. If queue apparently contains waiting items, and this
626 * call is of complementary mode, try to fulfill by CAS'ing
627 * item field of waiting node and dequeuing it, and then
628 * returning matching item.
629 *
630 * In each case, along the way, check for and try to help
631 * advance head and tail on behalf of other stalled/slow
632 * threads.
633 *
634 * The loop starts off with a null check guarding against
635 * seeing uninitialized head or tail values. This never
636 * happens in current SynchronousQueue, but could if
637 * callers held non-volatile/final ref to the
638 * transferer. The check is here anyway because it places
639 * null checks at top of loop, which is usually faster
640 * than having them implicitly interspersed.
641 */
642
643 QNode s = null; // constructed/reused as needed
644 boolean isData = (e != null);
645
646 for (;;) {
647 QNode t = tail;
648 QNode h = head;
649 if (t == null || h == null) // saw uninitialized value
650 continue; // spin
651
652 if (h == t || t.isData == isData) { // empty or same-mode
653 QNode tn = t.next;
654 if (t != tail) // inconsistent read
655 continue;
656 if (tn != null) { // lagging tail
657 advanceTail(t, tn);
658 continue;
659 }
660 if (timed && nanos <= 0) // can't wait
661 return null;
662 if (s == null)
663 s = new QNode(e, isData);
664 if (!t.casNext(null, s)) // failed to link in
665 continue;
666
667 advanceTail(t, s); // swing tail and wait
668 Object x = awaitFulfill(s, e, timed, nanos);
669 if (x == s) { // wait was cancelled
670 clean(t, s);
671 return null;
672 }
673
674 if (!s.isOffList()) { // not already unlinked
675 advanceHead(t, s); // unlink if head
676 if (x != null) // and forget fields
677 s.item = s;
678 s.waiter = null;
679 }
680 return (x != null) ? (E)x : e;
681
682 } else { // complementary-mode
683 QNode m = h.next; // node to fulfill
684 if (t != tail || m == null || h != head)
685 continue; // inconsistent read
686
687 Object x = m.item;
688 if (isData == (x != null) || // m already fulfilled
689 x == m || // m cancelled
690 !m.casItem(x, e)) { // lost CAS
691 advanceHead(h, m); // dequeue and retry
692 continue;
693 }
694
695 advanceHead(h, m); // successfully fulfilled
696 LockSupport.unpark(m.waiter);
697 return (x != null) ? (E)x : e;
698 }
699 }
700 }
701
702 /**
703 * Spins/blocks until node s is fulfilled.
704 *
705 * @param s the waiting node
706 * @param e the comparison value for checking match
707 * @param timed true if timed wait
708 * @param nanos timeout value
709 * @return matched item, or s if cancelled
710 */
711 Object awaitFulfill(QNode s, E e, boolean timed, long nanos) {
712 /* Same idea as TransferStack.awaitFulfill */
713 final long deadline = timed ? System.nanoTime() + nanos : 0L;
714 Thread w = Thread.currentThread();
715 int spins = ((head.next == s) ?
716 (timed ? maxTimedSpins : maxUntimedSpins) : 0);
717 for (;;) {
718 if (w.isInterrupted())
719 s.tryCancel(e);
720 Object x = s.item;
721 if (x != e)
722 return x;
723 if (timed) {
724 nanos = deadline - System.nanoTime();
725 if (nanos <= 0L) {
726 s.tryCancel(e);
727 continue;
728 }
729 }
730 if (spins > 0)
731 --spins;
732 else if (s.waiter == null)
733 s.waiter = w;
734 else if (!timed)
735 LockSupport.park(this);
736 else if (nanos > spinForTimeoutThreshold)
737 LockSupport.parkNanos(this, nanos);
738 }
739 }
740
741 /**
742 * Gets rid of cancelled node s with original predecessor pred.
743 */
744 void clean(QNode pred, QNode s) {
745 s.waiter = null; // forget thread
746 /*
747 * At any given time, exactly one node on list cannot be
748 * deleted -- the last inserted node. To accommodate this,
749 * if we cannot delete s, we save its predecessor as
750 * "cleanMe", deleting the previously saved version
751 * first. At least one of node s or the node previously
752 * saved can always be deleted, so this always terminates.
753 */
754 while (pred.next == s) { // Return early if already unlinked
755 QNode h = head;
756 QNode hn = h.next; // Absorb cancelled first node as head
757 if (hn != null && hn.isCancelled()) {
758 advanceHead(h, hn);
759 continue;
760 }
761 QNode t = tail; // Ensure consistent read for tail
762 if (t == h)
763 return;
764 QNode tn = t.next;
765 if (t != tail)
766 continue;
767 if (tn != null) {
768 advanceTail(t, tn);
769 continue;
770 }
771 if (s != t) { // If not tail, try to unsplice
772 QNode sn = s.next;
773 if (sn == s || pred.casNext(s, sn))
774 return;
775 }
776 QNode dp = cleanMe;
777 if (dp != null) { // Try unlinking previous cancelled node
778 QNode d = dp.next;
779 QNode dn;
780 if (d == null || // d is gone or
781 d == dp || // d is off list or
782 !d.isCancelled() || // d not cancelled or
783 (d != t && // d not tail and
784 (dn = d.next) != null && // has successor
785 dn != d && // that is on list
786 dp.casNext(d, dn))) // d unspliced
787 casCleanMe(dp, null);
788 if (dp == pred)
789 return; // s is already saved node
790 } else if (casCleanMe(null, pred))
791 return; // Postpone cleaning s
792 }
793 }
794
795 private static final sun.misc.Unsafe UNSAFE;
796 private static final long headOffset;
797 private static final long tailOffset;
798 private static final long cleanMeOffset;
799 static {
800 try {
801 UNSAFE = sun.misc.Unsafe.getUnsafe();
802 Class<?> k = TransferQueue.class;
803 headOffset = UNSAFE.objectFieldOffset
804 (k.getDeclaredField("head"));
805 tailOffset = UNSAFE.objectFieldOffset
806 (k.getDeclaredField("tail"));
807 cleanMeOffset = UNSAFE.objectFieldOffset
808 (k.getDeclaredField("cleanMe"));
809 } catch (Exception e) {
810 throw new Error(e);
811 }
812 }
813 }
814
815 /**
816 * The transferer. Set only in constructor, but cannot be declared
817 * as final without further complicating serialization. Since
818 * this is accessed only at most once per public method, there
819 * isn't a noticeable performance penalty for using volatile
820 * instead of final here.
821 */
822 private transient volatile Transferer<E> transferer;
823
824 /**
825 * Creates a {@code SynchronousQueue} with nonfair access policy.
826 */
827 public SynchronousQueue() {
828 this(false);
829 }
830
831 /**
832 * Creates a {@code SynchronousQueue} with the specified fairness policy.
833 *
834 * @param fair if true, waiting threads contend in FIFO order for
835 * access; otherwise the order is unspecified.
836 */
837 public SynchronousQueue(boolean fair) {
838 transferer = fair ? new TransferQueue<E>() : new TransferStack<E>();
839 }
840
841 /**
842 * Adds the specified element to this queue, waiting if necessary for
843 * another thread to receive it.
844 *
845 * @throws InterruptedException {@inheritDoc}
846 * @throws NullPointerException {@inheritDoc}
847 */
848 public void put(E e) throws InterruptedException {
849 if (e == null) throw new NullPointerException();
850 if (transferer.transfer(e, false, 0) == null) {
851 Thread.interrupted();
852 throw new InterruptedException();
853 }
854 }
855
856 /**
857 * Inserts the specified element into this queue, waiting if necessary
858 * up to the specified wait time for another thread to receive it.
859 *
860 * @return {@code true} if successful, or {@code false} if the
861 * specified waiting time elapses before a consumer appears
862 * @throws InterruptedException {@inheritDoc}
863 * @throws NullPointerException {@inheritDoc}
864 */
865 public boolean offer(E e, long timeout, TimeUnit unit)
866 throws InterruptedException {
867 if (e == null) throw new NullPointerException();
868 if (transferer.transfer(e, true, unit.toNanos(timeout)) != null)
869 return true;
870 if (!Thread.interrupted())
871 return false;
872 throw new InterruptedException();
873 }
874
875 /**
876 * Inserts the specified element into this queue, if another thread is
877 * waiting to receive it.
878 *
879 * @param e the element to add
880 * @return {@code true} if the element was added to this queue, else
881 * {@code false}
882 * @throws NullPointerException if the specified element is null
883 */
884 public boolean offer(E e) {
885 if (e == null) throw new NullPointerException();
886 return transferer.transfer(e, true, 0) != null;
887 }
888
889 /**
890 * Retrieves and removes the head of this queue, waiting if necessary
891 * for another thread to insert it.
892 *
893 * @return the head of this queue
894 * @throws InterruptedException {@inheritDoc}
895 */
896 public E take() throws InterruptedException {
897 E e = transferer.transfer(null, false, 0);
898 if (e != null)
899 return e;
900 Thread.interrupted();
901 throw new InterruptedException();
902 }
903
904 /**
905 * Retrieves and removes the head of this queue, waiting
906 * if necessary up to the specified wait time, for another thread
907 * to insert it.
908 *
909 * @return the head of this queue, or {@code null} if the
910 * specified waiting time elapses before an element is present
911 * @throws InterruptedException {@inheritDoc}
912 */
913 public E poll(long timeout, TimeUnit unit) throws InterruptedException {
914 E e = transferer.transfer(null, true, unit.toNanos(timeout));
915 if (e != null || !Thread.interrupted())
916 return e;
917 throw new InterruptedException();
918 }
919
920 /**
921 * Retrieves and removes the head of this queue, if another thread
922 * is currently making an element available.
923 *
924 * @return the head of this queue, or {@code null} if no
925 * element is available
926 */
927 public E poll() {
928 return transferer.transfer(null, true, 0);
929 }
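The zero-timeout transfer calls above make the untimed offer and poll strictly non-blocking, which is easy to observe:

```java
import java.util.concurrent.SynchronousQueue;

public class NonBlockingOps {
    public static void main(String[] args) {
        SynchronousQueue<Integer> q = new SynchronousQueue<>();
        // With no waiting consumer, offer fails immediately...
        System.out.println(q.offer(42)); // false
        // ...and with no waiting producer, poll returns null.
        System.out.println(q.poll());    // null
    }
}
```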
930
931 /**
932 * Always returns {@code true}.
933 * A {@code SynchronousQueue} has no internal capacity.
934 *
935 * @return {@code true}
936 */
937 public boolean isEmpty() {
938 return true;
939 }
940
941 /**
942 * Always returns zero.
943 * A {@code SynchronousQueue} has no internal capacity.
944 *
945 * @return zero
946 */
947 public int size() {
948 return 0;
949 }
950
951 /**
952 * Always returns zero.
953 * A {@code SynchronousQueue} has no internal capacity.
954 *
955 * @return zero
956 */
957 public int remainingCapacity() {
958 return 0;
959 }
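Because the queue holds no elements, the {@code Collection} view is trivially empty, as a quick sketch shows (the class name `EmptyViewDemo` is invented):

```java
import java.util.concurrent.SynchronousQueue;

// Illustrative sketch: the Collection view of a SynchronousQueue
// is always empty, regardless of any in-flight handoffs.
public class EmptyViewDemo {
    public static void main(String[] args) {
        SynchronousQueue<Integer> q = new SynchronousQueue<>();
        System.out.println(q.isEmpty());           // true
        System.out.println(q.size());              // 0
        System.out.println(q.remainingCapacity()); // 0
    }
}
```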

    /**
     * Does nothing.
     * A {@code SynchronousQueue} has no internal capacity.
     */
    public void clear() {
    }

    /**
     * Always returns {@code false}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param o the element
     * @return {@code false}
     */
    public boolean contains(Object o) {
        return false;
    }

    /**
     * Always returns {@code false}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param o the element to remove
     * @return {@code false}
     */
    public boolean remove(Object o) {
        return false;
    }

    /**
     * Returns {@code false} unless the given collection is empty.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param c the collection
     * @return {@code false} unless given collection is empty
     */
    public boolean containsAll(Collection<?> c) {
        return c.isEmpty();
    }

    /**
     * Always returns {@code false}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param c the collection
     * @return {@code false}
     */
    public boolean removeAll(Collection<?> c) {
        return false;
    }

    /**
     * Always returns {@code false}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param c the collection
     * @return {@code false}
     */
    public boolean retainAll(Collection<?> c) {
        return false;
    }

    /**
     * Always returns {@code null}.
     * A {@code SynchronousQueue} does not return elements
     * unless actively waited on.
     *
     * @return {@code null}
     */
    public E peek() {
        return null;
    }

    /**
     * Returns an empty iterator in which {@code hasNext} always returns
     * {@code false}.
     *
     * @return an empty iterator
     */
    public Iterator<E> iterator() {
        return Collections.emptyIterator();
    }

    /**
     * Returns an empty spliterator in which calls to
     * {@link java.util.Spliterator#trySplit()} always return {@code null}.
     *
     * @return an empty spliterator
     * @since 1.8
     */
    public Spliterator<E> spliterator() {
        return Spliterators.emptySpliterator();
    }

    /**
     * Returns a zero-length array.
     * @return a zero-length array
     */
    public Object[] toArray() {
        return new Object[0];
    }

    /**
     * Sets the zeroth element of the specified array to {@code null}
     * (if the array has non-zero length) and returns it.
     *
     * @param a the array
     * @return the specified array
     * @throws NullPointerException if the specified array is null
     */
    public <T> T[] toArray(T[] a) {
        if (a.length > 0)
            a[0] = null;
        return a;
    }

    /**
     * @throws UnsupportedOperationException {@inheritDoc}
     * @throws ClassCastException {@inheritDoc}
     * @throws NullPointerException {@inheritDoc}
     * @throws IllegalArgumentException {@inheritDoc}
     */
    public int drainTo(Collection<? super E> c) {
        if (c == null)
            throw new NullPointerException();
        if (c == this)
            throw new IllegalArgumentException();
        int n = 0;
        for (E e; (e = poll()) != null;) {
            c.add(e);
            ++n;
        }
        return n;
    }

    /**
     * @throws UnsupportedOperationException {@inheritDoc}
     * @throws ClassCastException {@inheritDoc}
     * @throws NullPointerException {@inheritDoc}
     * @throws IllegalArgumentException {@inheritDoc}
     */
    public int drainTo(Collection<? super E> c, int maxElements) {
        if (c == null)
            throw new NullPointerException();
        if (c == this)
            throw new IllegalArgumentException();
        int n = 0;
        for (E e; n < maxElements && (e = poll()) != null;) {
            c.add(e);
            ++n;
        }
        return n;
    }
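Since {@code drainTo} is implemented as repeated non-blocking {@code poll()} calls, it transfers only elements that producers are actively waiting to hand off at the moment of the call. A sketch (the class name `DrainDemo` is invented; the 100 ms sleep is a timing assumption, and the trailing poll loop covers the race where the drain runs before a producer has parked):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: drainTo collects only elements that waiting
// producers are currently offering; each transfer is one handoff.
public class DrainDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<Integer> q = new SynchronousQueue<>();
        List<Integer> sink = new ArrayList<>();

        // No waiting producers: poll() returns null at once, nothing drains.
        System.out.println(q.drainTo(sink)); // 0

        // Park two producers, then drain whatever has arrived so far.
        for (int i = 0; i < 2; i++) {
            int v = i;
            new Thread(() -> {
                try { q.put(v); } catch (InterruptedException ignored) { }
            }).start();
        }
        Thread.sleep(100); // give the producers time to park (timing-dependent)
        int n = q.drainTo(sink);
        // Pick up any producer the drain raced past, so both threads finish.
        while (n < 2 && q.poll(1, TimeUnit.SECONDS) != null)
            n++;
        System.out.println(n); // 2
    }
}
```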

    /*
     * To cope with serialization strategy in the 1.5 version of
     * SynchronousQueue, we declare some unused classes and fields
     * that exist solely to enable serializability across versions.
     * These fields are never used, so are initialized only if this
     * object is ever serialized or deserialized.
     */

    @SuppressWarnings("serial")
    static class WaitQueue implements java.io.Serializable { }
    static class LifoWaitQueue extends WaitQueue {
        private static final long serialVersionUID = -3633113410248163686L;
    }
    static class FifoWaitQueue extends WaitQueue {
        private static final long serialVersionUID = -3623113410248163686L;
    }
    private ReentrantLock qlock;
    private WaitQueue waitingProducers;
    private WaitQueue waitingConsumers;

    /**
     * Saves this queue to a stream (that is, serializes it).
     * @param s the stream
     * @throws java.io.IOException if an I/O error occurs
     */
    private void writeObject(java.io.ObjectOutputStream s)
        throws java.io.IOException {
        boolean fair = transferer instanceof TransferQueue;
        if (fair) {
            qlock = new ReentrantLock(true);
            waitingProducers = new FifoWaitQueue();
            waitingConsumers = new FifoWaitQueue();
        }
        else {
            qlock = new ReentrantLock();
            waitingProducers = new LifoWaitQueue();
            waitingConsumers = new LifoWaitQueue();
        }
        s.defaultWriteObject();
    }

    /**
     * Reconstitutes this queue from a stream (that is, deserializes it).
     * @param s the stream
     * @throws ClassNotFoundException if the class of a serialized object
     *         could not be found
     * @throws java.io.IOException if an I/O error occurs
     */
    private void readObject(java.io.ObjectInputStream s)
        throws java.io.IOException, ClassNotFoundException {
        s.defaultReadObject();
        if (waitingProducers instanceof FifoWaitQueue)
            transferer = new TransferQueue<E>();
        else
            transferer = new TransferStack<E>();
    }

}
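The serialization scheme above can be exercised with an ordinary round trip (the class name `SerializeDemo` is invented for illustration): `writeObject` records the fairness mode through the wait-queue marker classes, and `readObject` inspects them to rebuild the matching transferer.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.concurrent.SynchronousQueue;

// Illustrative sketch: a serialization round trip. writeObject installs
// Fifo/LifoWaitQueue markers; readObject inspects them to choose the
// matching transferer (fair -> TransferQueue, unfair -> TransferStack).
public class SerializeDemo {
    public static void main(String[] args) throws Exception {
        SynchronousQueue<String> fair = new SynchronousQueue<>(true);

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(fair);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                 new ByteArrayInputStream(bytes.toByteArray()))) {
            @SuppressWarnings("unchecked")
            SynchronousQueue<String> copy =
                (SynchronousQueue<String>) in.readObject();
            // The copy is a fresh, working queue: still empty, still synchronous.
            System.out.println(copy.offer("x")); // false
        }
    }
}
```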