root/jsr166/jsr166/src/main/java/util/concurrent/SynchronousQueue.java
Revision: 1.85
Committed: Mon Dec 19 19:58:00 2011 UTC by jsr166
Branch: MAIN
Changes since 1.84: +6 -10 lines
Log Message:
s/lastTime/deadline/g

File Contents

1 /*
2 * Written by Doug Lea, Bill Scherer, and Michael Scott with
3 * assistance from members of JCP JSR-166 Expert Group and released to
4 * the public domain, as explained at
5 * http://creativecommons.org/publicdomain/zero/1.0/
6 */
7
8 package java.util.concurrent;
9 import java.util.concurrent.locks.*;
10 import java.util.*;
11
12 /**
13 * A {@linkplain BlockingQueue blocking queue} in which each insert
14 * operation must wait for a corresponding remove operation by another
15 * thread, and vice versa. A synchronous queue does not have any
16 * internal capacity, not even a capacity of one. You cannot
17 * <tt>peek</tt> at a synchronous queue because an element is only
18 * present when you try to remove it; you cannot insert an element
19 * (using any method) unless another thread is trying to remove it;
20 * you cannot iterate as there is nothing to iterate. The
21 * <em>head</em> of the queue is the element that the first queued
22 * inserting thread is trying to add to the queue; if there is no such
23 * queued thread then no element is available for removal and
24 * <tt>poll()</tt> will return <tt>null</tt>. For purposes of other
25 * <tt>Collection</tt> methods (for example <tt>contains</tt>), a
26 * <tt>SynchronousQueue</tt> acts as an empty collection. This queue
27 * does not permit <tt>null</tt> elements.
28 *
29 * <p>Synchronous queues are similar to rendezvous channels used in
30 * CSP and Ada. They are well suited for handoff designs, in which an
31 * object running in one thread must sync up with an object running
32 * in another thread in order to hand it some information, event, or
33 * task.
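 *
 * <p>For example, the following sketch (the demo class and the names in
 * it are hypothetical, not part of this API) hands a single message from
 * a producer thread to a consumer thread:
 *
 * <pre> {@code
 * class HandoffDemo {
 *   public static void main(String[] args) throws InterruptedException {
 *     final SynchronousQueue<String> queue = new SynchronousQueue<String>();
 *     new Thread(new Runnable() {        // producer
 *       public void run() {
 *         try {
 *           queue.put("hello");          // blocks until a consumer takes it
 *         } catch (InterruptedException ie) {
 *           Thread.currentThread().interrupt();
 *         }
 *       }
 *     }).start();
 *     System.out.println(queue.take());  // blocks until the producer hands off
 *   }
 * }}</pre>
 *
 * Here <tt>take</tt> returns only after a <tt>put</tt> has arrived, and
 * that <tt>put</tt> returns only once its message has been claimed.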
34 *
35 * <p>This class supports an optional fairness policy for ordering
36 * waiting producer and consumer threads. By default, this ordering
37 * is not guaranteed. However, a queue constructed with fairness set
38 * to <tt>true</tt> grants threads access in FIFO order.
39 *
40 * <p>This class and its iterator implement all of the
41 * <em>optional</em> methods of the {@link Collection} and {@link
42 * Iterator} interfaces.
43 *
44 * <p>This class is a member of the
45 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
46 * Java Collections Framework</a>.
47 *
48 * @since 1.5
49 * @author Doug Lea and Bill Scherer and Michael Scott
50 * @param <E> the type of elements held in this collection
51 */
52 public class SynchronousQueue<E> extends AbstractQueue<E>
53 implements BlockingQueue<E>, java.io.Serializable {
54 private static final long serialVersionUID = -3223113410248163686L;
55
56 /*
57 * This class implements extensions of the dual stack and dual
58 * queue algorithms described in "Nonblocking Concurrent Objects
59 * with Condition Synchronization", by W. N. Scherer III and
60 * M. L. Scott. 18th Annual Conf. on Distributed Computing,
61 * Oct. 2004 (see also
62 * http://www.cs.rochester.edu/u/scott/synchronization/pseudocode/duals.html).
63 * The (Lifo) stack is used for non-fair mode, and the (Fifo)
64 * queue for fair mode. The performance of the two is generally
65 * similar. Fifo usually supports higher throughput under
66 * contention but Lifo maintains higher thread locality in common
67 * applications.
68 *
69 * A dual queue (and similarly stack) is one that at any given
70 * time either holds "data" -- items provided by put operations,
71 * or "requests" -- slots representing take operations, or is
72 * empty. A call to "fulfill" (i.e., a call requesting an item
73 * from a queue holding data or vice versa) dequeues a
74 * complementary node. The most interesting feature of these
75 * queues is that any operation can figure out which mode the
76 * queue is in, and act accordingly without needing locks.
77 *
78 * Both the queue and stack extend abstract class Transferer
79 * defining the single method transfer that does a put or a
80 * take. These are unified into a single method because in dual
81 * data structures, the put and take operations are symmetrical,
82 * so nearly all code can be combined. The resulting transfer
83 * methods are on the long side, but are easier to follow than
84 * they would be if broken up into nearly-duplicated parts.
85 *
86 * The queue and stack data structures share many conceptual
87 * similarities but very few concrete details. For simplicity,
88 * they are kept distinct so that they can later evolve
89 * separately.
90 *
91 * The algorithms here differ from the versions in the above paper
92 * in extending them for use in synchronous queues, as well as
93 * dealing with cancellation. The main differences include:
94 *
95 * 1. The original algorithms used bit-marked pointers, but
96 * the ones here use mode bits in nodes, leading to a number
97 * of further adaptations.
98 * 2. SynchronousQueues must block threads waiting to become
99 * fulfilled.
100 * 3. Support for cancellation via timeout and interrupts,
101 * including cleaning out cancelled nodes/threads
102 * from lists to avoid garbage retention and memory depletion.
103 *
104 * Blocking is mainly accomplished using LockSupport park/unpark,
105 * except that nodes that appear to be the next ones to become
106 * fulfilled first spin a bit (on multiprocessors only). On very
107 * busy synchronous queues, spinning can dramatically improve
108 * throughput. And on less busy ones, the amount of spinning is
109 * small enough not to be noticeable.
110 *
111 * Cleaning is done in different ways in queues vs stacks. For
112 * queues, we can almost always remove a node immediately in O(1)
113 * time (modulo retries for consistency checks) when it is
114 * cancelled. But if it may be pinned as the current tail, it must
115 * wait until some subsequent cancellation. For stacks, we need a
116 * potentially O(n) traversal to be sure that we can remove the
117 * node, but this can run concurrently with other threads
118 * accessing the stack.
119 *
120 * While garbage collection takes care of most node reclamation
121 * issues that otherwise complicate nonblocking algorithms, care
122 * is taken to "forget" references to data, other nodes, and
123 * threads that might be held on to long-term by blocked
124 * threads. In cases where setting to null would otherwise
125 * conflict with main algorithms, this is done by changing a
126 * node's link to now point to the node itself. This doesn't arise
127 * much for Stack nodes (because blocked threads do not hang on to
128 * old head pointers), but references in Queue nodes must be
129 * aggressively forgotten to avoid reachability of everything any
130 * node has ever referred to since arrival.
131 */
132
133 /**
134 * Shared internal API for dual stacks and queues.
135 */
136 abstract static class Transferer<E> {
137 /**
138 * Performs a put or take.
139 *
140 * @param e if non-null, the item to be handed to a consumer;
141 * if null, requests that transfer return an item
142 * offered by a producer.
143 * @param timed if this operation should timeout
144 * @param nanos the timeout, in nanoseconds
145 * @return if non-null, the item provided or received; if null,
146 * the operation failed due to timeout or interrupt --
147 * the caller can distinguish which of these occurred
148 * by checking Thread.interrupted.
149 */
150 abstract E transfer(E e, boolean timed, long nanos);
151 }
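
/*
 * Illustrative sketch: a caller of transfer can tell a timeout from an
 * interrupt after a null result by checking the interrupt status, much
 * as the public timed poll and offer methods later in this file do:
 *
 *   E e = transferer.transfer(null, true, unit.toNanos(timeout));
 *   if (e != null || !Thread.interrupted())
 *       return e;                         // item received, or clean timeout
 *   throw new InterruptedException();     // null result caused by interrupt
 */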
152
153 /** The number of CPUs, for spin control */
154 static final int NCPUS = Runtime.getRuntime().availableProcessors();
155
156 /**
157 * The number of times to spin before blocking in timed waits.
158 * The value is empirically derived -- it works well across a
159 * variety of processors and OSes. Empirically, the best value
160 * seems not to vary with number of CPUs (beyond 2) so is just
161 * a constant.
162 */
163 static final int maxTimedSpins = (NCPUS < 2) ? 0 : 32;
164
165 /**
166 * The number of times to spin before blocking in untimed waits.
167 * This is greater than timed value because untimed waits spin
168 * faster since they don't need to check times on each spin.
169 */
170 static final int maxUntimedSpins = maxTimedSpins * 16;
171
172 /**
173 * The number of nanoseconds for which it is faster to spin
174 * rather than to use timed park. A rough estimate suffices.
175 */
176 static final long spinForTimeoutThreshold = 1000L;
177
178 /** Dual stack */
179 static final class TransferStack<E> extends Transferer<E> {
180 /*
181 * This extends Scherer-Scott dual stack algorithm, differing,
182 * among other ways, by using "covering" nodes rather than
183 * bit-marked pointers: Fulfilling operations push on marker
184 * nodes (with FULFILLING bit set in mode) to reserve a spot
185 * to match a waiting node.
186 */
187
188 /* Modes for SNodes, ORed together in node fields */
189 /** Node represents an unfulfilled consumer */
190 static final int REQUEST = 0;
191 /** Node represents an unfulfilled producer */
192 static final int DATA = 1;
193 /** Node is fulfilling another unfulfilled DATA or REQUEST */
194 static final int FULFILLING = 2;
195
196 /** Returns true if m has the fulfilling bit set. */
197 static boolean isFulfilling(int m) { return (m & FULFILLING) != 0; }
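// Worked example of the mode encoding: a fulfilling producer pushes a
// node whose mode is (FULFILLING|DATA) == 3, for which isFulfilling
// returns true, while isFulfilling(DATA) and isFulfilling(REQUEST)
// (values 1 and 0) both return false.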
198
199 /** Node class for TransferStacks. */
200 static final class SNode {
201 volatile SNode next; // next node in stack
202 volatile SNode match; // the node matched to this
203 volatile Thread waiter; // to control park/unpark
204 Object item; // data; or null for REQUESTs
205 int mode;
206 // Note: item and mode fields don't need to be volatile
207 // since they are always written before, and read after,
208 // other volatile/atomic operations.
209
210 SNode(Object item) {
211 this.item = item;
212 }
213
214 boolean casNext(SNode cmp, SNode val) {
215 return cmp == next &&
216 UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
217 }
218
219 /**
220 * Tries to match node s to this node and, if so, wakes up its waiting thread.
221 * Fulfillers call tryMatch to identify their waiters.
222 * Waiters block until they have been matched.
223 *
224 * @param s the node to match
225 * @return true if successfully matched to s
226 */
227 boolean tryMatch(SNode s) {
228 if (match == null &&
229 UNSAFE.compareAndSwapObject(this, matchOffset, null, s)) {
230 Thread w = waiter;
231 if (w != null) { // waiters need at most one unpark
232 waiter = null;
233 LockSupport.unpark(w);
234 }
235 return true;
236 }
237 return match == s;
238 }
239
240 /**
241 * Tries to cancel a wait by matching node to itself.
242 */
243 void tryCancel() {
244 UNSAFE.compareAndSwapObject(this, matchOffset, null, this);
245 }
246
247 boolean isCancelled() {
248 return match == this;
249 }
250
251 // Unsafe mechanics
252 private static final sun.misc.Unsafe UNSAFE;
253 private static final long matchOffset;
254 private static final long nextOffset;
255
256 static {
257 try {
258 UNSAFE = sun.misc.Unsafe.getUnsafe();
259 Class<?> k = SNode.class;
260 matchOffset = UNSAFE.objectFieldOffset
261 (k.getDeclaredField("match"));
262 nextOffset = UNSAFE.objectFieldOffset
263 (k.getDeclaredField("next"));
264 } catch (Exception e) {
265 throw new Error(e);
266 }
267 }
268 }
269
270 /** The head (top) of the stack */
271 volatile SNode head;
272
273 boolean casHead(SNode h, SNode nh) {
274 return h == head &&
275 UNSAFE.compareAndSwapObject(this, headOffset, h, nh);
276 }
277
278 /**
279 * Creates or resets fields of a node. Called only from transfer
280 * where the node to push on stack is lazily created and
281 * reused when possible to help reduce intervals between reads
282 * and CASes of head and to avoid surges of garbage when CASes
283 * to push nodes fail due to contention.
284 */
285 static SNode snode(SNode s, Object e, SNode next, int mode) {
286 if (s == null) s = new SNode(e);
287 s.mode = mode;
288 s.next = next;
289 return s;
290 }
291
292 /**
293 * Puts or takes an item.
294 */
295 @SuppressWarnings("unchecked")
296 E transfer(E e, boolean timed, long nanos) {
297 /*
298 * Basic algorithm is to loop trying one of three actions:
299 *
300 * 1. If apparently empty or already containing nodes of same
301 * mode, try to push node on stack and wait for a match,
302 * returning it, or null if cancelled.
303 *
304 * 2. If apparently containing node of complementary mode,
305 * try to push a fulfilling node on to stack, match
306 * with corresponding waiting node, pop both from
307 * stack, and return matched item. The matching or
308 * unlinking might not actually be necessary because of
309 * other threads performing action 3:
310 *
311 * 3. If top of stack already holds another fulfilling node,
312 * help it out by doing its match and/or pop
313 * operations, and then continue. The code for helping
314 * is essentially the same as for fulfilling, except
315 * that it doesn't return the item.
316 */
317
318 SNode s = null; // constructed/reused as needed
319 int mode = (e == null) ? REQUEST : DATA;
320
321 for (;;) {
322 SNode h = head;
323 if (h == null || h.mode == mode) { // empty or same-mode
324 if (timed && nanos <= 0) { // can't wait
325 if (h != null && h.isCancelled())
326 casHead(h, h.next); // pop cancelled node
327 else
328 return null;
329 } else if (casHead(h, s = snode(s, e, h, mode))) {
330 SNode m = awaitFulfill(s, timed, nanos);
331 if (m == s) { // wait was cancelled
332 clean(s);
333 return null;
334 }
335 if ((h = head) != null && h.next == s)
336 casHead(h, s.next); // help s's fulfiller
337 return (E) ((mode == REQUEST) ? m.item : s.item);
338 }
339 } else if (!isFulfilling(h.mode)) { // try to fulfill
340 if (h.isCancelled()) // already cancelled
341 casHead(h, h.next); // pop and retry
342 else if (casHead(h, s=snode(s, e, h, FULFILLING|mode))) {
343 for (;;) { // loop until matched or waiters disappear
344 SNode m = s.next; // m is s's match
345 if (m == null) { // all waiters are gone
346 casHead(s, null); // pop fulfill node
347 s = null; // use new node next time
348 break; // restart main loop
349 }
350 SNode mn = m.next;
351 if (m.tryMatch(s)) {
352 casHead(s, mn); // pop both s and m
353 return (E) ((mode == REQUEST) ? m.item : s.item);
354 } else // lost match
355 s.casNext(m, mn); // help unlink
356 }
357 }
358 } else { // help a fulfiller
359 SNode m = h.next; // m is h's match
360 if (m == null) // waiter is gone
361 casHead(h, null); // pop fulfilling node
362 else {
363 SNode mn = m.next;
364 if (m.tryMatch(h)) // help match
365 casHead(h, mn); // pop both h and m
366 else // lost match
367 h.casNext(m, mn); // help unlink
368 }
369 }
370 }
371 }
372
373 /**
374 * Spins/blocks until node s is matched by a fulfill operation.
375 *
376 * @param s the waiting node
377 * @param timed true if timed wait
378 * @param nanos timeout value
379 * @return matched node, or s if cancelled
380 */
381 SNode awaitFulfill(SNode s, boolean timed, long nanos) {
382 /*
383 * When a node/thread is about to block, it sets its waiter
384 * field and then rechecks state at least one more time
385 * before actually parking, thus covering race vs
386 * fulfiller noticing that waiter is non-null so should be
387 * woken.
388 *
389 * When invoked by nodes that appear at the point of call
390 * to be at the head of the stack, calls to park are
391 * preceded by spins to avoid blocking when producers and
392 * consumers are arriving very close in time. This happens
393 * often enough to matter only on multiprocessors.
394 *
395 * The order of checks for returning out of the main loop
396 * reflects the fact that interrupts have precedence over
397 * normal returns, which have precedence over
398 * timeouts. (So, on timeout, one last check for match is
399 * done before giving up.) Except that calls from untimed
400 * SynchronousQueue.{poll/offer} don't check interrupts
401 * and don't wait at all, so are trapped in transfer
402 * method rather than calling awaitFulfill.
403 */
404 final long deadline = timed ? System.nanoTime() + nanos : 0L;
405 Thread w = Thread.currentThread();
406 int spins = (shouldSpin(s) ?
407 (timed ? maxTimedSpins : maxUntimedSpins) : 0);
408 for (;;) {
409 if (w.isInterrupted())
410 s.tryCancel();
411 SNode m = s.match;
412 if (m != null)
413 return m;
414 if (timed) {
415 nanos = deadline - System.nanoTime();
416 if (nanos <= 0L) {
417 s.tryCancel();
418 continue;
419 }
420 }
421 if (spins > 0)
422 spins = shouldSpin(s) ? (spins-1) : 0;
423 else if (s.waiter == null)
424 s.waiter = w; // establish waiter so can park next iter
425 else if (!timed)
426 LockSupport.park(this);
427 else if (nanos > spinForTimeoutThreshold)
428 LockSupport.parkNanos(this, nanos);
429 }
430 }
431
432 /**
433 * Returns true if node s is at head or there is an active
434 * fulfiller.
435 */
436 boolean shouldSpin(SNode s) {
437 SNode h = head;
438 return (h == s || h == null || isFulfilling(h.mode));
439 }
440
441 /**
442 * Unlinks s from the stack.
443 */
444 void clean(SNode s) {
445 s.item = null; // forget item
446 s.waiter = null; // forget thread
447
448 /*
449 * At worst we may need to traverse the entire stack to unlink
450 * s. If there are multiple concurrent calls to clean, we
451 * might not see s if another thread has already removed
452 * it. But we can stop when we see any node known to
453 * follow s. We use s.next unless it too is cancelled, in
454 * which case we try the node one past. We don't check any
455 * further because we don't want to doubly traverse just to
456 * find the sentinel.
457 */
458
459 SNode past = s.next;
460 if (past != null && past.isCancelled())
461 past = past.next;
462
463 // Absorb cancelled nodes at head
464 SNode p;
465 while ((p = head) != null && p != past && p.isCancelled())
466 casHead(p, p.next);
467
468 // Unsplice embedded nodes
469 while (p != null && p != past) {
470 SNode n = p.next;
471 if (n != null && n.isCancelled())
472 p.casNext(n, n.next);
473 else
474 p = n;
475 }
476 }
477
478 // Unsafe mechanics
479 private static final sun.misc.Unsafe UNSAFE;
480 private static final long headOffset;
481 static {
482 try {
483 UNSAFE = sun.misc.Unsafe.getUnsafe();
484 Class<?> k = TransferStack.class;
485 headOffset = UNSAFE.objectFieldOffset
486 (k.getDeclaredField("head"));
487 } catch (Exception e) {
488 throw new Error(e);
489 }
490 }
491 }
492
493 /** Dual Queue */
494 static final class TransferQueue<E> extends Transferer<E> {
495 /*
496 * This extends Scherer-Scott dual queue algorithm, differing,
497 * among other ways, by using modes within nodes rather than
498 * marked pointers. The algorithm is a little simpler than
499 * that for stacks because fulfillers do not need explicit
500 * nodes, and matching is done by CAS'ing QNode.item field
501 * from non-null to null (for put) or vice versa (for take).
502 */
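
/*
 * Worked example of the item-CAS protocol described above: a waiting
 * take() enqueues a REQUEST node (isData == false, item == null); a
 * later put(x) fulfills it by CASing that node's item from null to x,
 * and the waiter returns x. Symmetrically, a waiting put(x) enqueues a
 * DATA node (isData == true, item == x); a later take() CASes the item
 * from x back to null and returns x.
 */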
503
504 /** Node class for TransferQueue. */
505 static final class QNode {
506 volatile QNode next; // next node in queue
507 volatile Object item; // CAS'ed to or from null
508 volatile Thread waiter; // to control park/unpark
509 final boolean isData;
510
511 QNode(Object item, boolean isData) {
512 this.item = item;
513 this.isData = isData;
514 }
515
516 boolean casNext(QNode cmp, QNode val) {
517 return next == cmp &&
518 UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
519 }
520
521 boolean casItem(Object cmp, Object val) {
522 return item == cmp &&
523 UNSAFE.compareAndSwapObject(this, itemOffset, cmp, val);
524 }
525
526 /**
527 * Tries to cancel by CAS'ing ref to this as item.
528 */
529 void tryCancel(Object cmp) {
530 UNSAFE.compareAndSwapObject(this, itemOffset, cmp, this);
531 }
532
533 boolean isCancelled() {
534 return item == this;
535 }
536
537 /**
538 * Returns true if this node is known to be off the queue
539 * because its next pointer has been forgotten due to
540 * an advanceHead operation.
541 */
542 boolean isOffList() {
543 return next == this;
544 }
545
546 // Unsafe mechanics
547 private static final sun.misc.Unsafe UNSAFE;
548 private static final long itemOffset;
549 private static final long nextOffset;
550
551 static {
552 try {
553 UNSAFE = sun.misc.Unsafe.getUnsafe();
554 Class<?> k = QNode.class;
555 itemOffset = UNSAFE.objectFieldOffset
556 (k.getDeclaredField("item"));
557 nextOffset = UNSAFE.objectFieldOffset
558 (k.getDeclaredField("next"));
559 } catch (Exception e) {
560 throw new Error(e);
561 }
562 }
563 }
564
565 /** Head of queue */
566 transient volatile QNode head;
567 /** Tail of queue */
568 transient volatile QNode tail;
569 /**
570 * Reference to a cancelled node that might not yet have been
571 * unlinked from the queue because it was the last inserted node
572 * when it was cancelled.
573 */
574 transient volatile QNode cleanMe;
575
576 TransferQueue() {
577 QNode h = new QNode(null, false); // initialize to dummy node.
578 head = h;
579 tail = h;
580 }
581
582 /**
583 * Tries to cas nh as new head; if successful, unlink
584 * old head's next node to avoid garbage retention.
585 */
586 void advanceHead(QNode h, QNode nh) {
587 if (h == head &&
588 UNSAFE.compareAndSwapObject(this, headOffset, h, nh))
589 h.next = h; // forget old next
590 }
591
592 /**
593 * Tries to cas nt as new tail.
594 */
595 void advanceTail(QNode t, QNode nt) {
596 if (tail == t)
597 UNSAFE.compareAndSwapObject(this, tailOffset, t, nt);
598 }
599
600 /**
601 * Tries to CAS cleanMe slot.
602 */
603 boolean casCleanMe(QNode cmp, QNode val) {
604 return cleanMe == cmp &&
605 UNSAFE.compareAndSwapObject(this, cleanMeOffset, cmp, val);
606 }
607
608 /**
609 * Puts or takes an item.
610 */
611 @SuppressWarnings("unchecked")
612 E transfer(E e, boolean timed, long nanos) {
613 /* Basic algorithm is to loop trying to take either of
614 * two actions:
615 *
616 * 1. If queue apparently empty or holding same-mode nodes,
617 * try to add node to queue of waiters, wait to be
618 * fulfilled (or cancelled) and return matching item.
619 *
620 * 2. If queue apparently contains waiting items, and this
621 * call is of complementary mode, try to fulfill by CAS'ing
622 * item field of waiting node and dequeuing it, and then
623 * returning matching item.
624 *
625 * In each case, along the way, check for and try to help
626 * advance head and tail on behalf of other stalled/slow
627 * threads.
628 *
629 * The loop starts off with a null check guarding against
630 * seeing uninitialized head or tail values. This never
631 * happens in current SynchronousQueue, but could if
632 * callers held non-volatile/final ref to the
633 * transferer. The check is here anyway because it places
634 * null checks at the top of the loop, which is usually faster
635 * than having them implicitly interspersed.
636 */
637
638 QNode s = null; // constructed/reused as needed
639 boolean isData = (e != null);
640
641 for (;;) {
642 QNode t = tail;
643 QNode h = head;
644 if (t == null || h == null) // saw uninitialized value
645 continue; // spin
646
647 if (h == t || t.isData == isData) { // empty or same-mode
648 QNode tn = t.next;
649 if (t != tail) // inconsistent read
650 continue;
651 if (tn != null) { // lagging tail
652 advanceTail(t, tn);
653 continue;
654 }
655 if (timed && nanos <= 0) // can't wait
656 return null;
657 if (s == null)
658 s = new QNode(e, isData);
659 if (!t.casNext(null, s)) // failed to link in
660 continue;
661
662 advanceTail(t, s); // swing tail and wait
663 Object x = awaitFulfill(s, e, timed, nanos);
664 if (x == s) { // wait was cancelled
665 clean(t, s);
666 return null;
667 }
668
669 if (!s.isOffList()) { // not already unlinked
670 advanceHead(t, s); // unlink if head
671 if (x != null) // and forget fields
672 s.item = s;
673 s.waiter = null;
674 }
675 return (x != null) ? (E)x : e;
676
677 } else { // complementary-mode
678 QNode m = h.next; // node to fulfill
679 if (t != tail || m == null || h != head)
680 continue; // inconsistent read
681
682 Object x = m.item;
683 if (isData == (x != null) || // m already fulfilled
684 x == m || // m cancelled
685 !m.casItem(x, e)) { // lost CAS
686 advanceHead(h, m); // dequeue and retry
687 continue;
688 }
689
690 advanceHead(h, m); // successfully fulfilled
691 LockSupport.unpark(m.waiter);
692 return (x != null) ? (E)x : e;
693 }
694 }
695 }
696
697 /**
698 * Spins/blocks until node s is fulfilled.
699 *
700 * @param s the waiting node
701 * @param e the comparison value for checking match
702 * @param timed true if timed wait
703 * @param nanos timeout value
704 * @return matched item, or s if cancelled
705 */
706 Object awaitFulfill(QNode s, E e, boolean timed, long nanos) {
707 /* Same idea as TransferStack.awaitFulfill */
708 final long deadline = timed ? System.nanoTime() + nanos : 0L;
709 Thread w = Thread.currentThread();
710 int spins = ((head.next == s) ?
711 (timed ? maxTimedSpins : maxUntimedSpins) : 0);
712 for (;;) {
713 if (w.isInterrupted())
714 s.tryCancel(e);
715 Object x = s.item;
716 if (x != e)
717 return x;
718 if (timed) {
719 nanos = deadline - System.nanoTime();
720 if (nanos <= 0L) {
721 s.tryCancel(e);
722 continue;
723 }
724 }
725 if (spins > 0)
726 --spins;
727 else if (s.waiter == null)
728 s.waiter = w;
729 else if (!timed)
730 LockSupport.park(this);
731 else if (nanos > spinForTimeoutThreshold)
732 LockSupport.parkNanos(this, nanos);
733 }
734 }
735
736 /**
737 * Gets rid of cancelled node s with original predecessor pred.
738 */
739 void clean(QNode pred, QNode s) {
740 s.waiter = null; // forget thread
741 /*
742 * At any given time, exactly one node on the list cannot be
743 * deleted -- the last inserted node. To accommodate this,
744 * if we cannot delete s, we save its predecessor as
745 * "cleanMe", deleting the previously saved version
746 * first. At least one of node s or the node previously
747 * saved can always be deleted, so this always terminates.
748 */
749 while (pred.next == s) { // Return early if already unlinked
750 QNode h = head;
751 QNode hn = h.next; // Absorb cancelled first node as head
752 if (hn != null && hn.isCancelled()) {
753 advanceHead(h, hn);
754 continue;
755 }
756 QNode t = tail; // Ensure consistent read for tail
757 if (t == h)
758 return;
759 QNode tn = t.next;
760 if (t != tail)
761 continue;
762 if (tn != null) {
763 advanceTail(t, tn);
764 continue;
765 }
766 if (s != t) { // If not tail, try to unsplice
767 QNode sn = s.next;
768 if (sn == s || pred.casNext(s, sn))
769 return;
770 }
771 QNode dp = cleanMe;
772 if (dp != null) { // Try unlinking previous cancelled node
773 QNode d = dp.next;
774 QNode dn;
775 if (d == null || // d is gone or
776 d == dp || // d is off list or
777 !d.isCancelled() || // d not cancelled or
778 (d != t && // d not tail and
779 (dn = d.next) != null && // has successor
780 dn != d && // that is on list
781 dp.casNext(d, dn))) // d unspliced
782 casCleanMe(dp, null);
783 if (dp == pred)
784 return; // s is already saved node
785 } else if (casCleanMe(null, pred))
786 return; // Postpone cleaning s
787 }
788 }
789
790 private static final sun.misc.Unsafe UNSAFE;
791 private static final long headOffset;
792 private static final long tailOffset;
793 private static final long cleanMeOffset;
794 static {
795 try {
796 UNSAFE = sun.misc.Unsafe.getUnsafe();
797 Class<?> k = TransferQueue.class;
798 headOffset = UNSAFE.objectFieldOffset
799 (k.getDeclaredField("head"));
800 tailOffset = UNSAFE.objectFieldOffset
801 (k.getDeclaredField("tail"));
802 cleanMeOffset = UNSAFE.objectFieldOffset
803 (k.getDeclaredField("cleanMe"));
804 } catch (Exception e) {
805 throw new Error(e);
806 }
807 }
808 }
809
810 /**
811 * The transferer. Set only in constructor, but cannot be declared
812 * as final without further complicating serialization. Since
813 * this is accessed at most once per public method, there
814 * isn't a noticeable performance penalty for using volatile
815 * instead of final here.
816 */
817 private transient volatile Transferer<E> transferer;
818
819 /**
820 * Creates a <tt>SynchronousQueue</tt> with nonfair access policy.
821 */
822 public SynchronousQueue() {
823 this(false);
824 }
825
826 /**
827 * Creates a <tt>SynchronousQueue</tt> with the specified fairness policy.
828 *
829 * @param fair if true, waiting threads contend in FIFO order for
830 * access; otherwise the order is unspecified.
831 */
832 public SynchronousQueue(boolean fair) {
833 transferer = fair ? new TransferQueue<E>() : new TransferStack<E>();
834 }
835
836 /**
837 * Adds the specified element to this queue, waiting if necessary for
838 * another thread to receive it.
839 *
840 * @throws InterruptedException {@inheritDoc}
841 * @throws NullPointerException {@inheritDoc}
842 */
843 public void put(E e) throws InterruptedException {
844 if (e == null) throw new NullPointerException();
845 if (transferer.transfer(e, false, 0) == null) {
846 Thread.interrupted();
847 throw new InterruptedException();
848 }
849 }
850
851 /**
852 * Inserts the specified element into this queue, waiting if necessary
853 * up to the specified wait time for another thread to receive it.
854 *
855 * @return <tt>true</tt> if successful, or <tt>false</tt> if the
856 * specified waiting time elapses before a consumer appears.
857 * @throws InterruptedException {@inheritDoc}
858 * @throws NullPointerException {@inheritDoc}
859 */
860 public boolean offer(E e, long timeout, TimeUnit unit)
861 throws InterruptedException {
862 if (e == null) throw new NullPointerException();
863 if (transferer.transfer(e, true, unit.toNanos(timeout)) != null)
864 return true;
865 if (!Thread.interrupted())
866 return false;
867 throw new InterruptedException();
868 }
869
870 /**
871 * Inserts the specified element into this queue, if another thread is
872 * waiting to receive it.
873 *
874 * @param e the element to add
875 * @return <tt>true</tt> if the element was added to this queue, else
876 * <tt>false</tt>
877 * @throws NullPointerException if the specified element is null
878 */
879 public boolean offer(E e) {
880 if (e == null) throw new NullPointerException();
881 return transferer.transfer(e, true, 0) != null;
882 }
883
884 /**
885 * Retrieves and removes the head of this queue, waiting if necessary
886 * for another thread to insert it.
887 *
888 * @return the head of this queue
889 * @throws InterruptedException {@inheritDoc}
890 */
891 public E take() throws InterruptedException {
892 E e = transferer.transfer(null, false, 0);
893 if (e != null)
894 return e;
895 Thread.interrupted();
896 throw new InterruptedException();
897 }
898
899 /**
900 * Retrieves and removes the head of this queue, waiting
901 * if necessary up to the specified wait time, for another thread
902 * to insert it.
903 *
904 * @return the head of this queue, or <tt>null</tt> if the
905 * specified waiting time elapses before an element is present.
906 * @throws InterruptedException {@inheritDoc}
907 */
908 public E poll(long timeout, TimeUnit unit) throws InterruptedException {
909 E e = transferer.transfer(null, true, unit.toNanos(timeout));
910 if (e != null || !Thread.interrupted())
911 return e;
912 throw new InterruptedException();
913 }
914
915 /**
916 * Retrieves and removes the head of this queue, if another thread
917 * is currently making an element available.
918 *
919 * @return the head of this queue, or <tt>null</tt> if no
920 * element is available.
921 */
922 public E poll() {
923 return transferer.transfer(null, true, 0);
924 }
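
// For example, on a queue with no thread currently blocked in put or
// take, offer(x) returns false immediately and poll() returns null
// immediately: there is never a stored element to hand over, so the
// zero-timeout transfer simply fails.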
925
926 /**
927 * Always returns <tt>true</tt>.
928 * A <tt>SynchronousQueue</tt> has no internal capacity.
929 *
930 * @return <tt>true</tt>
931 */
932 public boolean isEmpty() {
933 return true;
934 }
935
936 /**
937 * Always returns zero.
938 * A <tt>SynchronousQueue</tt> has no internal capacity.
939 *
940 * @return zero.
941 */
942 public int size() {
943 return 0;
944 }
945
946 /**
947 * Always returns zero.
948 * A <tt>SynchronousQueue</tt> has no internal capacity.
949 *
950 * @return zero.
951 */
952 public int remainingCapacity() {
953 return 0;
954 }
955
956 /**
957 * Does nothing.
958 * A <tt>SynchronousQueue</tt> has no internal capacity.
959 */
960 public void clear() {
961 }
962
963 /**
964 * Always returns <tt>false</tt>.
965 * A <tt>SynchronousQueue</tt> has no internal capacity.
966 *
967 * @param o the element
968 * @return <tt>false</tt>
969 */
970 public boolean contains(Object o) {
971 return false;
972 }
973
974 /**
975 * Always returns <tt>false</tt>.
976 * A <tt>SynchronousQueue</tt> has no internal capacity.
977 *
978 * @param o the element to remove
979 * @return <tt>false</tt>
980 */
981 public boolean remove(Object o) {
982 return false;
983 }
984
985 /**
986 * Returns <tt>false</tt> unless the given collection is empty.
987 * A <tt>SynchronousQueue</tt> has no internal capacity.
988 *
989 * @param c the collection
990 * @return <tt>false</tt> unless given collection is empty
991 */
992 public boolean containsAll(Collection<?> c) {
993 return c.isEmpty();
994 }
995
996 /**
997 * Always returns <tt>false</tt>.
998 * A <tt>SynchronousQueue</tt> has no internal capacity.
999 *
1000 * @param c the collection
1001 * @return <tt>false</tt>
1002 */
1003 public boolean removeAll(Collection<?> c) {
1004 return false;
1005 }
1006
1007 /**
1008 * Always returns <tt>false</tt>.
1009 * A <tt>SynchronousQueue</tt> has no internal capacity.
1010 *
1011 * @param c the collection
1012 * @return <tt>false</tt>
1013 */
1014 public boolean retainAll(Collection<?> c) {
1015 return false;
1016 }
1017
1018 /**
1019 * Always returns <tt>null</tt>.
1020 * A <tt>SynchronousQueue</tt> does not return elements
1021 * unless actively waited on.
1022 *
1023 * @return <tt>null</tt>
1024 */
1025 public E peek() {
1026 return null;
1027 }
1028
1029 /**
1030 * Returns an empty iterator in which <tt>hasNext</tt> always returns
1031 * <tt>false</tt>.
1032 *
1033 * @return an empty iterator
1034 */
1035 @SuppressWarnings("unchecked")
1036 public Iterator<E> iterator() {
1037 return (Iterator<E>) EmptyIterator.EMPTY_ITERATOR;
1038 }
1039
1040 // Replicated from a previous version of Collections
1041 private static class EmptyIterator<E> implements Iterator<E> {
1042 static final EmptyIterator<Object> EMPTY_ITERATOR
1043 = new EmptyIterator<Object>();
1044
1045 public boolean hasNext() { return false; }
1046 public E next() { throw new NoSuchElementException(); }
1047 public void remove() { throw new IllegalStateException(); }
1048 }
1049
1050 /**
1051 * Returns a zero-length array.
1052 * @return a zero-length array
1053 */
1054 public Object[] toArray() {
1055 return new Object[0];
1056 }
1057
1058 /**
1059 * Sets the zeroth element of the specified array to <tt>null</tt>
1060 * (if the array has non-zero length) and returns it.
1061 *
1062 * @param a the array
1063 * @return the specified array
1064 * @throws NullPointerException if the specified array is null
1065 */
1066 public <T> T[] toArray(T[] a) {
1067 if (a.length > 0)
1068 a[0] = null;
1069 return a;
1070 }
1071
1072 /**
1073 * @throws UnsupportedOperationException {@inheritDoc}
1074 * @throws ClassCastException {@inheritDoc}
1075 * @throws NullPointerException {@inheritDoc}
1076 * @throws IllegalArgumentException {@inheritDoc}
1077 */
1078 public int drainTo(Collection<? super E> c) {
1079 if (c == null)
1080 throw new NullPointerException();
1081 if (c == this)
1082 throw new IllegalArgumentException();
1083 int n = 0;
1084 for (E e; (e = poll()) != null;) {
1085 c.add(e);
1086 ++n;
1087 }
1088 return n;
1089 }
1090
1091 /**
1092 * @throws UnsupportedOperationException {@inheritDoc}
1093 * @throws ClassCastException {@inheritDoc}
1094 * @throws NullPointerException {@inheritDoc}
1095 * @throws IllegalArgumentException {@inheritDoc}
1096 */
1097 public int drainTo(Collection<? super E> c, int maxElements) {
1098 if (c == null)
1099 throw new NullPointerException();
1100 if (c == this)
1101 throw new IllegalArgumentException();
1102 int n = 0;
1103 for (E e; n < maxElements && (e = poll()) != null;) {
1104 c.add(e);
1105 ++n;
1106 }
1107 return n;
1108 }
1109
1110 /*
1111 * To cope with the serialization strategy in the 1.5 version of
1112 * SynchronousQueue, we declare some unused classes and fields
1113 * that exist solely to enable serializability across versions.
1114 * These fields are never used, so are initialized only if this
1115 * object is ever serialized or deserialized.
1116 */
1117
1118 @SuppressWarnings("serial")
1119 static class WaitQueue implements java.io.Serializable { }
1120 static class LifoWaitQueue extends WaitQueue {
1121 private static final long serialVersionUID = -3633113410248163686L;
1122 }
1123 static class FifoWaitQueue extends WaitQueue {
1124 private static final long serialVersionUID = -3623113410248163686L;
1125 }
1126 private ReentrantLock qlock;
1127 private WaitQueue waitingProducers;
1128 private WaitQueue waitingConsumers;
1129
1130 /**
1131 * Saves this queue to a stream (that is, serializes it).
1132 */
1133 private void writeObject(java.io.ObjectOutputStream s)
1134 throws java.io.IOException {
1135 boolean fair = transferer instanceof TransferQueue;
1136 if (fair) {
1137 qlock = new ReentrantLock(true);
1138 waitingProducers = new FifoWaitQueue();
1139 waitingConsumers = new FifoWaitQueue();
1140 }
1141 else {
1142 qlock = new ReentrantLock();
1143 waitingProducers = new LifoWaitQueue();
1144 waitingConsumers = new LifoWaitQueue();
1145 }
1146 s.defaultWriteObject();
1147 }
1148
1149 /**
1150 * Reconstitutes this queue from a stream (that is, deserializes it).
1151 */
1152 private void readObject(final java.io.ObjectInputStream s)
1153 throws java.io.IOException, ClassNotFoundException {
1154 s.defaultReadObject();
1155 if (waitingProducers instanceof FifoWaitQueue)
1156 transferer = new TransferQueue<E>();
1157 else
1158 transferer = new TransferStack<E>();
1159 }
1160
1161 // Unsafe mechanics
1162 static long objectFieldOffset(sun.misc.Unsafe UNSAFE,
1163 String field, Class<?> klazz) {
1164 try {
1165 return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
1166 } catch (NoSuchFieldException e) {
1167 // Convert Exception to corresponding Error
1168 NoSuchFieldError error = new NoSuchFieldError(field);
1169 error.initCause(e);
1170 throw error;
1171 }
1172 }
1173
1174 }