root/jsr166/jsr166/src/main/java/util/concurrent/ReentrantLock.java
Revision: 1.2
Committed: Tue May 27 18:14:40 2003 UTC by dl
Branch: MAIN
CVS Tags: JSR166_PRERELEASE_0_1
Changes since 1.1: +965 -33 lines
Log Message:
re-check-in initial implementations

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain. Use, modify, and
4 * redistribute this code in any way without acknowledgement.
5 */
6
7 package java.util.concurrent;
8 import java.util.concurrent.atomic.*;
9 import java.util.Date;
10
11 /**
12 * A reentrant mutual exclusion lock.
13 * <p><tt>ReentrantLock</tt> defines a stand-alone {@link Lock} class with
14 * the same basic behavior and semantics as the implicit
15 * monitor lock accessed by the use of <tt>synchronized</tt> methods and
16 * statements, but without the forced block-structured locking and unlocking
17 * that occurs with <tt>synchronized</tt> methods and
18 * statements.
19 *
20 * <p>The order in which blocked threads are granted the lock is not
21 * specified.
22 *
23 * <p>If you want a non-reentrant mutual exclusion lock then it is a simple
24 * matter to use a reentrant lock in a non-reentrant way by ensuring that
25 * the lock is not held by the current thread prior to locking.
26 * See {@link #getHoldCount} for a way to check this.
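A sketch of that non-reentrant usage (illustrative only, not part of this file; the NonReentrantSection class below is hypothetical and is written against the released java.util.concurrent.locks API):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical wrapper: refuses to lock when the current thread
// already holds the lock, checked via getHoldCount as noted above.
class NonReentrantSection {
    private final ReentrantLock lock = new ReentrantLock();

    void enter() {
        if (lock.getHoldCount() != 0)
            throw new IllegalStateException("lock already held by this thread");
        lock.lock();
    }

    void exit() {
        lock.unlock();
    }
}
```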
27 *
28 *
29 * <p><code>ReentrantLock</code> instances are intended to be used primarily
30 * in before/after constructions such as:
31 *
32 * <pre>
33 * class X {
34 ReentrantLock lock = new ReentrantLock();
35 * // ...
36 *
37 * public void m() {
38 * lock.lock(); // block until condition holds
39 * try {
40 * // ... method body
41 * } finally {
42 lock.unlock();
43 * }
44 * }
45 * }
46 * </pre>
47 *
48 * <p>This class supports the interruption of lock acquisition and provides a
49 * {@link #newCondition Condition} implementation that supports the
50 * interruption of thread suspension.
51 *
52 * <p>Except where noted, passing a <tt>null</tt> value for any parameter
53 * will result in a {@link NullPointerException} being thrown.
54 *
55 * <h3>Implementation Notes</h3>
56 * <p> To-BE_DONE
57 *
58 *
59 * @since 1.5
60 * @spec JSR-166
61 * @revised $Date: 2003/02/06 06:41:13 $
62 * @editor $Author: dholmes $
63 * @author Doug Lea
64 *
65 * @fixme (1) We need a non-nested example to motivate this
66 * @fixme (2) Describe performance aspects of interruptible locking for the RI
67 **/
68 public class ReentrantLock implements Lock, java.io.Serializable {
69 /*
70 This is a fastpath/slowpath algorithm.
71
72 * The basic algorithm looks like this, ignoring reentrance,
73 cancellation, timeouts, error checking etc:
74 Lock:
75 if (!fair && casOwner(null, currentThread)) // fastpath
76 return;
77 node = create and enq a wait node;
78 for (;;) {
79 if (node is first on queue) {
80 if (casOwner(null, currentThread)) {
81 deq(node);
82 return;
83 }
84 }
85 park(currentThread);
86 }
87
88 Unlock:
89 owner = null; // volatile assignment
90 h = first node on queue;
91 if (h != null) unpark(h's successor's thread);
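This pseudocode can be approximated with the released java.util.concurrent API (a sketch only: LockSupport stands in for the internal park/unpark, a ConcurrentLinkedQueue for the CLH queue described below, and reentrance, cancellation, and timeouts are omitted):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

// Sketch of the fastpath/slowpath algorithm above; not the real implementation.
class SimpleBargingLock {
    private final AtomicReference<Thread> owner = new AtomicReference<Thread>();
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<Thread>();

    public void lock() {
        Thread current = Thread.currentThread();
        if (owner.compareAndSet(null, current))     // fastpath: casOwner(null, me)
            return;
        waiters.add(current);                       // enq a wait node
        while (!(waiters.peek() == current &&
                 owner.compareAndSet(null, current)))
            LockSupport.park();                     // loop re-checks after wakeup
        waiters.remove();                           // deq(node)
    }

    public void unlock() {
        owner.set(null);                            // volatile assignment
        Thread h = waiters.peek();                  // first node on queue
        if (h != null)
            LockSupport.unpark(h);
    }
}
```

Note the barging behavior: a newly arriving thread may win the fastpath CAS ahead of an already-queued waiter, which simply re-parks and retries.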
92
93 * The fast path uses one atomic CAS operation, plus one
94 StoreLoad barrier per lock/unlock pair. The "owner" field is
95 handled as a simple spinlock. To lock, the owner field is set
96 to current thread using conditional atomic update. To unlock,
97 the owner field is set to null, checking if anyone needs
98 waking up, if so doing so. Recursive locks/unlocks instead
99 increment/decrement recursion field.
100
101 * By default, contended locks use a kind of "greedy" /
102 "renouncement" / barging / convoy-avoidance strategy: When a
103 lock is released, a waiting thread is signalled so that it can
104 (re)contend for the lock. It might lose and thus need to
105 rewait. This strategy has much higher throughput than
106 "directed handoff" because it reduces blocking of running
107 threads, but poorer fairness. The wait queue is FIFO, but
108 newly entering threads can barge ahead and grab lock before
109 woken waiters, so acquires are not strictly FIFO, and transfer
110 is not deterministically fair. It is probabilistically fair
111 though. Earlier queued threads are allowed to recontend before
112 later queued threads, and each recontention has an unbiased
113 chance to succeed.
114
115 * The base class also sets up support for FairReentrantLock
116 subclass, that differs only in that barging is disabled when
117 there is contention, so locks proceed FIFO. There can be
118 some races in detecting contention, but it is still FIFO from
119 a definable (although complicated to describe) single point,
120 so qualifies as a FIFO lock.
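(In the released java.util.concurrent API this FairReentrantLock design surfaced as a constructor flag rather than a subclass; a small sketch:)

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // The default lock permits barging; the fair variant grants the
        // lock to waiting threads in FIFO order under contention.
        ReentrantLock barging = new ReentrantLock();
        ReentrantLock fifo = new ReentrantLock(true);
        System.out.println(barging.isFair() + " " + fifo.isFair());  // prints "false true"
    }
}
```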
121
122 * The wait queue is a variant of a "CLH" (Craig, Landin, and
123 Hagersten) lock. CLH locks are normally used for spinlocks.
124 We instead use them for blocking locks, but use the same basic
125 tactic of holding the information about whether a thread is
126 released (i.e., eligible to contend for lock ownership) in the
127 predecessor of each node. A node waits until its predecessor
128 says it is released. It is signalled when its predecessor
129 releases the lock. Each node of the queue otherwise serves as
130 a specific-notification-style monitor holding a single waiting
131 thread.
132
133 To enqueue into a CLH lock, you atomically splice it in as new
134 tail. To dequeue, you just set the head field.
135
136 +------+ prev +-----+ +-----+
137 head | | <---- | | <---- | | tail
138 +------+ +-----+ +-----+
139
140 The great thing about CLH Locks is that insertion into a CLH
141 queue requires only a single atomic operation on "tail", so
142 there is a simple atomic point of demarcation from unqueued to
143 queued. Similarly, dequeing involves only updating the
144 "head". However, it takes a bit more work for nodes to
145 determine who their successors are, in part to deal with
146 possible cancellation due to timeouts and interrupts.
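For reference, the spinlock form of CLH that this design adapts can be sketched as follows (illustrative names, not from this file; the class above uses the blocking variant instead):

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal CLH spinlock: enqueueing is a single atomic getAndSet on
// "tail", and each thread spins on a flag held in its predecessor.
class CLHSpinLock {
    static final class Node { volatile boolean locked; }

    private final AtomicReference<Node> tail =
        new AtomicReference<Node>(new Node());
    private final ThreadLocal<Node> myNode = ThreadLocal.withInitial(Node::new);
    private final ThreadLocal<Node> myPred = new ThreadLocal<Node>();

    public void lock() {
        Node node = myNode.get();
        node.locked = true;
        Node pred = tail.getAndSet(node);   // atomic splice as new tail
        myPred.set(pred);
        while (pred.locked) { }             // spin until predecessor releases
    }

    public void unlock() {
        myNode.get().locked = false;        // release our successor
        myNode.set(myPred.get());           // recycle predecessor's node
    }
}
```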
147
148 The "prev" links (not used in original CLH locks), are mainly
149 needed to handle cancellation. If a node is cancelled, its
150 successor must be relinked to a non-cancelled predecessor. For
151 explanation in the case of spin locks, see the papers by Scott
152 & Scherer at
153 http://www.cs.rochester.edu/u/scott/synchronization/
154
155 Being first in the queue does not mean that you have the lock,
156 only that you may contend for it (by CAS'ing owner field). So
157 the currently released contender thread may need to rewait.
158
159 We also use "next" links to implement blocking mechanics. The
160 thread id for each node is kept in its node, so a predecessor
161 signals the next node to wake up by traversing next link to
162 determine which thread it is. Determination of successor must
163 avoid races with newly queued nodes to set the "next" fields
164 of their predecessors. This is solved by checking backwards
165 from the atomically updated "tail" when a node's successor
166 appears to be null.
167
168 Cancellation introduces some conservatism to the basic
169 algorithm of these. Since we must poll for cancellation of
170 other nodes, we can miss noticing whether a cancelled node is
171 ahead or behind us. This is dealt with by always unparking
172 successors upon cancellation, and not letting them park again
173 (by saturating release counts) until they stabilize on a new
174 predecessor.
175
176 * Threads waiting on Conditions use the same kind of nodes, but
177 only need to link them in simple (non-concurrent) linked
178 queues because they are only accessed when lock is held. To
179 wait, a thread creates a node and inserts it into a condition queue.
180 Upon signal, the node is transferred to the lock queue.
181 Special values of count fields are used to mark which queue a
182 node is on.
183
184 * All suspension and resumption of threads uses the JSR166
185 native park/unpark API. These are safe versions of
186 suspend/resume (plus timeout support) that internally avoid
187 the intrinsic race problems with suspend/resume: Park suspends
188 if not preceded by an unpark. Unpark resumes if waiting, else
189 causes next park not to suspend. While safe and efficient,
190 these are not general-purpose public operations because we do
191 not want code outside this package to randomly call these
192 methods -- parks and unparks should be matched up. (It is OK
193 to have more unparks than parks, but it causes threads to
194 spuriously wake up. So minimizing excessive unparks is a
195 performance concern.)
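The released form of this internal API is java.util.concurrent.locks.LockSupport; a sketch of the race-free property described above (an unpark delivered before the park leaves a permit, so the park does not hang):

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            // Safe in either order: if the main thread's unpark ran
            // first, this park consumes the permit and returns at once.
            LockSupport.park();
        });
        t.start();
        LockSupport.unpark(t);
        t.join(2000);
        System.out.println(t.isAlive() ? "stuck" : "done");  // prints "done"
    }
}
```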
196 */
197
198 /*
199 Note that all fields are transient and defined in a way that
200 deserialized locks are in initial unlocked state.
201 */
202
203 /**
204 * Creates an instance of <tt>ReentrantLock</tt>.
205 */
206 public ReentrantLock() { }
207
208 /**
209 * Current owner of lock, or null iff the lock is free. Acquired
210 * only using CAS.
211 */
212 private transient volatile Thread owner;
213
214 /** Number of recursive acquires. Note: total holds = recursions+1 */
215 private transient int recursions;
216
217 /** Head of the wait queue, initialized to dummy node */
218 private transient volatile WaitNode head = new WaitNode(null);
219
220 /** Tail of the wait queue, initialized to be same node as head */
221 private transient volatile WaitNode tail = head;
222
223 // Atomics support
224
225 private final static AtomicReferenceFieldUpdater<ReentrantLock, Thread> ownerUpdater = new AtomicReferenceFieldUpdater<ReentrantLock, Thread>(new ReentrantLock[0], new Thread[0], "owner");
226 private final static AtomicReferenceFieldUpdater<ReentrantLock, WaitNode> tailUpdater = new AtomicReferenceFieldUpdater<ReentrantLock, WaitNode>(new ReentrantLock[0], new WaitNode[0], "tail");
227 private final static AtomicReferenceFieldUpdater<ReentrantLock, WaitNode> headUpdater = new AtomicReferenceFieldUpdater<ReentrantLock, WaitNode>(new ReentrantLock[0], new WaitNode[0], "head");
228 private final static AtomicIntegerFieldUpdater<WaitNode> countUpdater =
229 new AtomicIntegerFieldUpdater<WaitNode>(new WaitNode[0], "count");
230
231 private boolean acquireOwner(Thread current) {
232 return ownerUpdater.compareAndSet(this, null, current);
233 }
234
235 private boolean casTail(WaitNode cmp, WaitNode val) {
236 return tailUpdater.compareAndSet(this, cmp, val);
237 }
238
239 private boolean casHead(WaitNode cmp, WaitNode val) {
240 return headUpdater.compareAndSet(this, cmp, val);
241 }
242
243 // casCount non-private because also accessed by Conditions
244 final boolean casCount(WaitNode node, int cmp, int val) {
245 return countUpdater.compareAndSet(node, cmp, val);
246 }
247
248 /**
249 * Node class for threads waiting for locks.
250 */
251 static final class WaitNode {
252 /**
253 * Count controlling whether successor node is allowed to try
254 * to obtain ownership. Acts as a saturating (in both
255 * directions) counting semaphore: Upon each wait, the count
256 * is reduced to zero if positive, else reduced to negative,
257 * in which case the thread will park. The count is
258 * incremented on each unlock that would enable successor
259 * thread to obtain lock (succeeding if there is no
260 * contention). The special value of CANCELLED is used to mean
261 * that the count cannot be either incremented or decremented.
262 * The special value of ON_CONDITION_QUEUE is used when nodes
263 * are on conditions queues instead of lock queue, and the
264 * special value TRANSFERRING is used while signals are in
265 * progress.
266 */
267 volatile int count;
268
269 /**
270 * Link to predecessor node that current node/thread relies on
271 * for checking release count. Assigned once during enqueuing,
272 * and nulled out (for sake of GC) only upon dequeuing. Upon
273 * cancellation, we do NOT adjust this field, but simply
274 * traverse through prev's until we hit a non-cancelled node.
275 * A valid predecessor will always exist because the head node
276 * is never cancelled.
277 */
278 volatile WaitNode prev;
279
280 /**
281 * Link to the successor node that the current node/thread
282 * unparks upon lock release. Assigned once during enqueuing.
283 * Upon cancellation, we do NOT adjust this field, but simply
284 * traverse through next's until we hit a non-cancelled node,
285 * (or null if at end of queue). The enqueue operation does
286 * not assign next field of a predecessor until after
287 * attachment, so seeing a null next field does not necessarily
288 * mean that the node is at the end of the queue. However, if a next field
289 * appears to be null, we can scan prev's from the tail to
290 * double-check.
291 */
292 volatile WaitNode next;
293
294 /**
295 * The thread that enqueued this node. Initialized on
296 * construction and nulled out after use. Note that this need
297 * not be declared volatile since it is always accessed after
298 * traversing volatile links, and written before writing
299 * links.
300 */
301 Thread thread;
302
303 WaitNode(Thread t) { thread = t; }
304 }
305
306 /**
307 * Special value for release counts indicating that node is cancelled.
308 * Must be a large positive number.
309 */
310 static private final int CANCELLED = Integer.MAX_VALUE;
311
312 /**
313 * Special value for node counts indicating that node is on a
314 * condition queue. Must be large negative number.
315 */
316 static private final int ON_CONDITION_QUEUE = Integer.MIN_VALUE;
317
318 /**
319 * Special value for node counts indicating that node is in
320 * process of transfer. Must be negative and greater than
321 * ON_CONDITION_QUEUE.
322 */
323 static private final int TRANSFERRING = ON_CONDITION_QUEUE+1;
324
325
326 /**
327 * Utility to throw an exception if lock not held by given thread.
328 */
329 final void checkOwner(Thread current) {
330 if (current != owner)
331 throw new IllegalMonitorStateException();
332 }
333
334 /**
335 * Return whether the lock wait queue is empty
336 */
337 final boolean queueEmpty() {
338 WaitNode h = head; // force order of the volatile reads
339 return h == tail;
340 }
341
342 /**
343 * Insert node into queue. Return predecessor.
344 */
345 private WaitNode enq(WaitNode node) {
346 for (;;) {
347 WaitNode p = tail;
348 node.prev = p; // prev must be valid before/upon CAS
349 if (casTail(p, node)) {
350 p.next = node; // Note: next field assignment lags CAS
351 return p;
352 }
353 }
354 }
355
356 /**
357 * Return true if it is OK to take fast path to lock.
358 * Overridden in FairReentrantLock.
359 */
360 boolean canBarge() {
361 return true;
362 }
363
364 /*
365 * policy/status values for main lock routine. A bit ugly but
366 * worth it to factor the common lock code across the different
367 * policies for suppressing or throwing IE and indicating
368 * timeouts.
369 */
370
371 /**
372 * As param, suppress interrupts.
373 * As return, lock was successfully obtained.
374 */
375 static final int NO_INTERRUPT = 0;
376
377 /**
378 * As param, allow interrupts.
379 * As return, lock was not obtained because thread was interrupted.
380 */
381 static final int INTERRUPT = -1;
382
383
384 /**
385 * As param, use timed waits.
386 * As return, lock was not obtained because timeout was exceeded.
387 */
388 static final int TIMEOUT = -2;
389
390 /**
391 * Main locking code, parameterized across different policies.
392 * @param current current thread
393 * @param node its wait node, if it already exists; else null, in
394 * which case one is created
395 * @param policy one of NO_INTERRUPT, INTERRUPT, or TIMEOUT,
396 * controlling the response to interruption and timeout
397 * @param nanos time to wait, or zero if untimed
398 * @return NO_INTERRUPT on success, else INTERRUPT, or TIMEOUT
399 */
400 final int doLock(Thread current,
401 WaitNode node,
402 int policy,
403 long nanos) {
404
405 int status = NO_INTERRUPT;
406
407 /*
408 * Bypass queueing if a recursive lock
409 */
410 if (owner == current) {
411 ++recursions;
412 return status;
413 }
414
415
416 long lastTime = 0; // for adjusting timeouts, below
417
418 /*
419 * p is our predecessor node, that holds release count giving
420 * permission to try to obtain lock if we are first in queue.
421 */
422 WaitNode p;
423
424 /*
425 * Create and enqueue node if not already created. Nodes
426 * transferred from condition queues will already be created
427 * and queued.
428 */
429 if (node == null) {
430 node = new WaitNode(current);
431 p = enq(node);
432 }
433 else
434 p = node.prev;
435
436 /*
437 * Repeatedly try to get ownership if first in queue, else
438 * block.
439 */
440 for (;;) {
441 /*
442 * If we are the first thread in queue, try to get the
443 * lock. (We must not try to get lock if we are not
444 * first.) Note that if we become first after p == head
445 * check, all is well -- we can be sure an unlocking
446 * thread will signal us.
447 */
448
449 if (p == head && acquireOwner(current)) {
450 /*
451 * status can be INTERRUPT here if policy is
452 * NO_INTERRUPT but thread was interrupted while
453 * waiting, in which case we must reset status upon
454 * return.
455 */
456 if (status == INTERRUPT)
457 current.interrupt();
458
459 p.next = null; // clear for GC and to suppress signals
460 node.thread = null;
461 node.prev = null;
462 head = node;
463 return NO_INTERRUPT;
464 }
465
466 int count = p.count;
467
468 /*
469 * If our predecessor was cancelled, use its predecessor.
470 * There will always be a non-cancelled one somewhere
471 * because head node is never cancelled, so at worst we
472 * will hit it. (Note that because head is never
473 * cancelled, we can perform this check after trying to
474 * acquire ownership).
475 */
476 if (count == CANCELLED) {
477 node.prev = p = p.prev;
478 }
479
480 /*
481 * Wait if we are not first in queue, or if we are first
482 * but have tried and failed to acquire ownership since
483 * either entry or the last release. (Note that count can
484 * already be less than zero if we spuriously returned
485 * from a previous park or got a new predecessor due to
486 * cancellation.)
487 *
488 * We also don't wait if atomic decrement of release count
489 * fails. We just continue main loop on failure to
490 * atomically update release count because interference
491 * causing failure is almost surely due to someone
492 * releasing us anyway.
493 *
494 * Each wait consumes all available releases. Normally
495 * there is only one anyway because release doesn't bother
496 * incrementing if already positive.
497 *
498 */
499 else if (casCount(p, count, (count > 0)? 0 : -1) && count <= 0) {
500
501 // Update and check timeout value
502 if (nanos > 0) {
503 long now = TimeUnit.nanoTime();
504 if (lastTime != 0) {
505 nanos -= now - lastTime;
506 if (nanos <= 0)
507 status = TIMEOUT;
508 }
509 lastTime = now;
510 }
511
512 if (status != TIMEOUT)
513 JSR166Support.park(false, nanos);
514
515 if (Thread.interrupted())
516 status = INTERRUPT;
517
518 if (status != NO_INTERRUPT && policy != NO_INTERRUPT) {
519 node.thread = null; // disable signals
520 countUpdater.set(node, CANCELLED); // don't need CAS here
521 signalSuccessor(node);
522 return status;
523 }
524 }
525 }
526 }
527
528 /**
529 * Wake up node's successor, if one exists
530 */
531 private void signalSuccessor(WaitNode node) {
532 /*
533 * Find successor -- normally just node.next.
534 */
535 WaitNode s = node.next;
536
537 /*
538 * if s is cancelled, traverse through next's.
539 */
540
541 while (s != null && s.count == CANCELLED) {
542 node = s;
543 s = s.next;
544 }
545
546 /*
547 * If successor appears to be null, check to see if a newly
548 * queued node is successor by starting at tail and working
549 * backwards. If so, help out the enqueing thread by setting
550 * next field. We don't expect this loop to trigger often,
551 * and hardly ever to iterate.
552 */
553
554 if (s == null) {
555 WaitNode t = tail;
556 for (;;) {
557 /*
558 * If t == node, there is no successor.
559 */
560 if (t == node)
561 return;
562
563 WaitNode tp = t.prev;
564
565 /*
566 * t's predecessor is null if we are lagging so far
567 * behind the actions of other nodes/threads that an
568 * intervening head.prev was nulled by some
569 * non-cancelled successor of node. In which case,
570 * there's no live successor.
571 */
572
573 if (tp == null)
574 return;
575
576 /*
577 * If we find successor, we can do the assignment to
578 * next (don't even need CAS) on behalf of enqueuing
579 * thread. At worst we will stall now and lag behind
580 * both the setting and the later clearing of next
581 * field. But if so, we will reattach an internal link
582 * in soon-to-be unreachable set of nodes, so no harm
583 * done.
584 */
585
586 if (tp == node) {
587 node.next = s = t;
588 break;
589 }
590
591 t = tp;
592
593 /*
594 * Before iterating, check to see if link has
595 * appeared.
596 */
597 WaitNode n = node.next;
598 if (n != null) {
599 s = n;
600 break;
601 }
602 }
603 }
604
605 Thread thr = s.thread;
606 if (thr != null && thr != owner) // don't bother signalling if has lock
607 JSR166Support.unpark(thr);
608 }
609
610
611 /**
612 * Increment release count and signal next thread in queue if one
613 * exists and is waiting. Called only by unlock. This code is split
614 * out from unlock to encourage inlining of non-contended cases.
615 * @param h (nonnull) current head of lock queue (reread upon CAS
616 * failure).
617 */
618 private void releaseFirst(WaitNode h) {
619 for (;;) {
620 int c = h.count;
621 if (c > 0) // Don't need signal if already positive
622 return;
623 if (owner != null) // Don't bother if some thread got lock
624 return;
625 if (casCount(h, c, (c < 0)? 0 : 1)) { // saturate count at 1
626 if (c < 0)
627 signalSuccessor(h);
628 return;
629 }
630 h = head;
631 if (h == tail) // No successor
632 return;
633 }
634 }
635
636 /**
637 * Attempts to release this lock. <p>If the current thread is the
638 * holder of this lock then the hold count is decremented. If the
639 * hold count is now zero then the lock is released. <p>If the
640 * current thread is not the holder of this lock then {@link
641 * IllegalMonitorStateException} is thrown.
642 * @throws IllegalMonitorStateException if the current thread does not
643 * hold this lock.
644 */
645 public void unlock() {
646 checkOwner(Thread.currentThread());
647
648 if (recursions > 0) {
649 --recursions;
650 return;
651 }
652
653 ownerUpdater.set(this, null);
654 WaitNode h = head;
655 if (h != tail)
656 releaseFirst(h);
657 }
658
659 /**
660 * Acquire the lock.
661 * <p>Acquires the lock if it is not held by another thread and returns
662 * immediately, setting the lock hold count to one.
663 * <p>If the current thread
664 * already holds the lock then the hold count is incremented by one and
665 * the method returns immediately.
666 * <p>If the lock is held by another thread then the
667 * current thread becomes disabled for thread scheduling
668 * purposes and lies dormant until the lock has been acquired
669 * at which time the lock hold count is set to one.
670 */
671 public void lock() {
672 Thread current = Thread.currentThread();
673 if (canBarge() && acquireOwner(current))
674 return;
675 int stat = doLock(current, null, NO_INTERRUPT, 0);
676 // assert stat == NO_INTERRUPT;
677 }
678
679 /**
680 * Acquires the lock unless the current thread is
681 * {@link Thread#interrupt interrupted}.
682 * <p>Acquires the lock if it is not held by another thread and returns
683 * immediately, setting the lock hold count to one.
684 * <p>If the current thread already holds this lock then the hold count
685 * is incremented by one and the method returns immediately.
686 * <p>If the lock is held by another thread then the
687 * current thread becomes disabled for thread scheduling
688 * purposes and lies dormant until one of two things happens:
689 * <ul>
690 * <li> The lock is acquired by the current thread; or
691 * <li> Some other thread {@link Thread#interrupt interrupts} the current
692 * thread.
693 * </ul>
694 * <p>If the lock is acquired by the current thread then the lock hold
695 * count is set to one.
696 * <p>If the current thread:
697 * <ul>
698 * <li>has its interrupted status set on entry to this method; or
699 * <li>is {@link Thread#interrupt interrupted} while waiting to acquire
700 * the lock,
701 * </ul>
702 * then {@link InterruptedException} is thrown and the current thread's
703 * interrupted status is cleared. As this method is an explicit
704 * interruption point, preference is given to responding to the interrupt
705 * over reentrant acquisition of the lock.
706 *
707 * <h3>Implementation Notes</h3>
708 * <p> To-BE_DONE
709 *
710 * @throws InterruptedException if the current thread is interrupted
711 */
712 public void lockInterruptibly() throws InterruptedException {
713 Thread current = Thread.currentThread();
714 if (Thread.interrupted())
715 throw new InterruptedException();
716 if (canBarge() && acquireOwner(current))
717 return;
718 if (doLock(current, null, INTERRUPT, 0) == INTERRUPT)
719 throw new InterruptedException();
720 }
721
722 /**
723 * Acquires the lock only if it is not held by another thread at the time
724 * of invocation.
725 * <p>Acquires the lock if it is not held by another thread and returns
726 * immediately with the value <tt>true</tt>, setting the lock hold count
727 * to one.
728 * <p> If the current thread
729 * already holds this lock then the hold count is incremented by one and
730 * the method returns <tt>true</tt>.
731 * <p>If the lock is held by another thread then this method will return
732 * immediately with the value <tt>false</tt>.
733 *
734 * @return <tt>true</tt> if the lock was free and was acquired by the
735 * current thread, or the lock was already held by the current thread; and
736 * <tt>false</tt> otherwise.
737 */
738 public boolean tryLock() {
739 Thread current = Thread.currentThread();
740 if (acquireOwner(current))
741 return true;
742 if (owner == current) {
743 ++recursions;
744 return true;
745 }
746 return false;
747 }
748
749 /**
750 *
751 * Acquires the lock if it is not held by another thread within the given
752 * waiting time and the current thread has not been interrupted.
753 * <p>Acquires the lock if it is not held by another thread and returns
754 * immediately with the value <tt>true</tt>, setting the lock hold count
755 * to one.
756 * <p> If the current thread
757 * already holds this lock then the hold count is incremented by one and
758 * the method returns <tt>true</tt>.
759 * <p>If the lock is held by another thread then the
760 * current thread becomes disabled for thread scheduling
761 * purposes and lies dormant until one of three things happens:
762 * <ul>
763 * <li> The lock is acquired by the current thread; or
764 * <li> Some other thread {@link Thread#interrupt interrupts} the current
765 * thread; or
766 * <li> The specified waiting time elapses
767 * </ul>
768 * <p>If the lock is acquired then the value <tt>true</tt> is returned and
769 * the lock hold count is set to one.
770 * <p>If the current thread:
771 * <ul>
772 * <li>has its interrupted status set on entry to this method; or
773 * <li>is {@link Thread#interrupt interrupted} before acquiring
774 * the lock,
775 * </ul>
776 * then {@link InterruptedException} is thrown and the current thread's
777 * interrupted status is cleared. As this method is an explicit
778 * interruption point, preference is given to responding to the interrupt
779 * over reentrant acquisition of the lock.
780 * <p>If the specified waiting time elapses then the value <tt>false</tt>
781 * is returned.
782 * <p>The given waiting time is a best-effort lower bound. If the time is
783 * less than or equal to zero, the method will not wait at all.
784 *
785 * <h3>Implementation Notes</h3>
786 * <p> To-BE_DONE
787 *
788 * @return <tt>true</tt> if the lock was free and was acquired by the
789 * current thread, or the lock was already held by the current thread; and
790 * <tt>false</tt> if the waiting time elapsed before the lock could be
791 * acquired.
792 *
793 * @throws InterruptedException if the current thread is interrupted
794 */
795 public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {
796 if (unit == null)
797 throw new NullPointerException();
798 if (Thread.interrupted())
799 throw new InterruptedException();
800 Thread current = Thread.currentThread();
801 if (canBarge() && acquireOwner(current))
802 return true;
803 if (owner == current) {
804 ++recursions;
805 return true;
806 }
807 if (timeout <= 0)
808 return false;
809 int stat = doLock(current, null, TIMEOUT, unit.toNanos(timeout));
810 if (stat == NO_INTERRUPT)
811 return true;
812 if (stat == INTERRUPT)
813 throw new InterruptedException();
814 assert stat == TIMEOUT;
815 return false;
816 }
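A typical usage pattern for the timed form (a sketch written against the released API; the TimedWorker class and its timeout value are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class TimedWorker {
    private final ReentrantLock lock = new ReentrantLock();

    // Gives up rather than blocking indefinitely, per the javadoc above:
    // false means the waiting time elapsed before the lock was acquired.
    boolean tryUpdate() throws InterruptedException {
        if (!lock.tryLock(50, TimeUnit.MILLISECONDS))
            return false;
        try {
            // ... critical section ...
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```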
817
818
819 /**
820 * Queries the number of holds on this lock by the current thread.
821 * <p>A thread has a hold on a lock for each lock action that is not
822 * matched by an unlock action.
823 * <p>The hold count information is typically only used for testing and
824 * debugging purposes. For example, if a certain section of code should
825 * not be entered with the lock already held then we can assert that
826 * fact:
827 * <pre>
828 * class X {
829 * ReentrantLock lock = new ReentrantLock();
830 * // ...
831 *
832 * public void m() {
833 * assert lock.getHoldCount() == 0;
834 * lock.lock();
835 * try {
836 * // ... method body
837 * } finally {
838 lock.unlock();
839 * }
840 * }
841 * }
842 * </pre>
843 *
844 * @return the number of holds on this lock by current thread,
845 * or zero if this lock is not held by current thread.
846 **/
847 public int getHoldCount() {
848 return (owner == Thread.currentThread()) ? recursions + 1 : 0;
849 }
850
851 /**
852 * Queries if this lock is held by the current thread.
853 * <p>Analogous to the {@link Thread#holdsLock} method for built-in
854 * monitor locks, this method is typically used for debugging and
855 * testing. For example, a method that should only be called while
856 * a lock is held can assert that this is the case:
857 * <pre>
858 * class X {
859 * ReentrantLock lock = new ReentrantLock();
860 * // ...
861 *
862 * public void m() {
863 * assert lock.isHeldByCurrentThread();
864 * // ... method body
865 * }
866 * }
867 * </pre>
868 *
869 * @return <tt>true</tt> if current thread holds this lock and
870 * <tt>false</tt> otherwise.
871 **/
872 public boolean isHeldByCurrentThread() {
873 return (owner == Thread.currentThread());
874 }
875
876
877 /**
878 * Queries if this lock is held by any thread. This method is
879 * designed for use in monitoring, not for synchronization control.
880 * @return <tt>true</tt> if any thread holds this lock and
881 * <tt>false</tt> otherwise.
882 **/
883 public boolean isLocked() {
884 return owner != null;
885 }
886
887
888 /**
889 * Returns a {@link Condition} instance for use with this
890 * {@link Lock} instance.
891 *
892 * <p>The returned {@link Condition} instance has the same behavior and
893 * usage
894 * restrictions with this lock as the {@link Object} monitor methods
895 * ({@link Object#wait() wait}, {@link Object#notify notify}, and
896 * {@link Object#notifyAll notifyAll}) have with the built-in monitor
897 * lock:
898 * <ul>
899 * <li>If this lock is not held when any of the {@link Condition}
900 * {@link Condition#await() waiting} or {@link Condition#signal signalling}
901 * methods are called, then an {@link IllegalMonitorStateException} is
902 * thrown.
903 * <li>When the condition {@link Condition#await() waiting} methods are
904 * called the lock is released and before they return the lock is
905 * reacquired and the lock hold count restored to what it was when the
906 * method was called.
907 * <li>If a thread is {@link Thread#interrupt interrupted} while waiting
908 * then the wait will terminate, an {@link InterruptedException} will be
909 * thrown, and the thread's interrupted status will be cleared.
910 * <li>The order in which waiting threads are signalled is not specified.
911 * <li>The order in which threads returning from a wait, and threads trying
912 * to acquire the lock, are granted the lock, is not specified.
913 * </ul>
914 */
915 public Condition newCondition() {
916 return new ReentrantLockConditionObject();
917 }
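A typical use of the returned Condition is guarding wait-loops inside lock/unlock pairs, as in this minimal bounded buffer sketched against the released java.util.concurrent.locks API (awaitUninterruptibly is used here only to keep the sketch free of checked exceptions; real code would usually use await):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer<E> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Object[] items = new Object[16];
    private int putIdx, takeIdx, count;

    void put(E x) {
        lock.lock();
        try {
            while (count == items.length)
                notFull.awaitUninterruptibly(); // releases lock while waiting
            items[putIdx] = x;
            putIdx = (putIdx + 1) % items.length;
            ++count;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    E take() {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.awaitUninterruptibly();
            E x = (E) items[takeIdx];
            items[takeIdx] = null;
            takeIdx = (takeIdx + 1) % items.length;
            --count;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```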
918
919 // Helper methods for Conditions
920
921 /**
922 * Returns true if a node, always one that was initially placed on
923 * a condition queue, is now off the condition queue (and thus,
924 * normally, is now on the lock queue).
925 */
926 boolean isOffConditionQueue(WaitNode w) {
927 return w.count > TRANSFERRING;
928 }
929
930 /**
931 * Transfers a node from the condition queue onto the lock queue.
932 * Returns true if successful (i.e., the node was not cancelled).
933 */
934 final boolean transferToLockQueue(WaitNode node) {
935 /*
936 * Atomically change status to TRANSFERRING to avoid races
937 * with cancelling waiters. We use a special value that causes
938 * any waiters spuriously waking up to re-park until the node
939 * has been placed on lock queue.
940 */
941 if (!casCount(node, ON_CONDITION_QUEUE, TRANSFERRING))
942 return false;
943
944 /*
945 * Splice onto queue
946 */
947 WaitNode p = enq(node);
948
949 /*
950 * Establish normal lock-queue release count for node. The
951 * CAS can fail if node already was involved in a cancellation
952 * on lock-queue, in which case we signal to be sure.
953 */
954 if (!casCount(node, TRANSFERRING, 0))
955 signalSuccessor(node);
956
957 /*
958 * Ensure release count of predecessor is negative to indicate
959 * that its thread is (probably) waiting. If the attempt to set the
960 * count fails, or pred is/becomes cancelled, wake up the successor
961 * (which will ordinarily be "node") to resynch.
962 */
963
964 for (;;) {
965 int c = p.count;
966 if (c < 0 || (c != CANCELLED && casCount(p, c, -1)))
967 break;
968 signalSuccessor(p);
969 if (c == CANCELLED)
970 break;
971 }
972
973 return true;
974 }
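The correctness of transferToLockQueue rests on a single compare-and-set of the node's count deciding the race between a signaller and a cancelling waiter: whichever CAS away from ON_CONDITION_QUEUE succeeds first wins. The same pattern in isolation, sketched with AtomicInteger (the numeric state values here are illustrative stand-ins, not the constants this file uses):

```java
import java.util.concurrent.atomic.AtomicInteger;

class NodeStateRace {
    // Illustrative stand-ins for the node-state constants.
    static final int ON_CONDITION_QUEUE = -2;
    static final int TRANSFERRING      = -1;
    static final int CANCELLED         = 1;

    final AtomicInteger count = new AtomicInteger(ON_CONDITION_QUEUE);

    /** Signaller's side: claim the node for transfer to the lock queue. */
    boolean tryTransfer() {
        return count.compareAndSet(ON_CONDITION_QUEUE, TRANSFERRING);
    }

    /** Waiter's side: cancel the wait, unless a transfer already won. */
    boolean tryCancel() {
        return count.compareAndSet(ON_CONDITION_QUEUE, CANCELLED);
    }
}
```

Exactly one of the two CASes can succeed on a given node, so the two sides never both act on it.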
975
976 /**
977 * Hook method used by ReentrantReadWriteLock. Called
978 * before unlocking lock to enter wait.
979 */
980 void beforeWait() { }
981
982
983 /**
984 * Hook method used by ReentrantReadWriteLock. Called after
985 * reacquiring the lock upon exiting a wait.
986 */
987 void afterWait() { }
988
989 private class ReentrantLockConditionObject implements Condition, java.io.Serializable {
990 /*
991 * Because condition queues are accessed only when locks are
992 * already held, we just need a simple linked queue to hold
993 * nodes while they are waiting on conditions. They are then
994 * transferred to the lock queue to re-acquire locks.
995 */
996
997 /**
998 * First node of condition queue.
999 */
1000 private transient WaitNode firstWaiter;
1001
1002 /**
1003 * Last node of condition queue.
1004 */
1005 private transient WaitNode lastWaiter;
1006
1007 /**
1008 * Basic linked queue insertion.
1009 */
1010 private WaitNode addWaiter(Thread current) {
1011 WaitNode w = new WaitNode(current);
1012 w.count = ON_CONDITION_QUEUE;
1013 if (lastWaiter == null)
1014 firstWaiter = lastWaiter = w;
1015 else {
1016 WaitNode t = lastWaiter;
1017 lastWaiter = w;
1018 t.next = w;
1019 }
1020 return w;
1021 }
1022
1023 /**
1024 * Main code for signal. Dequeue and transfer nodes until hit
1025 * non-cancelled one or null. Split out from signal to
1026 * encourage compilers to inline the case of no waiters.
1027 */
1028 private void doSignal(WaitNode first) {
1029 do {
1030 if ( (firstWaiter = first.next) == null)
1031 lastWaiter = null;
1032 first.next = null;
1033 if (transferToLockQueue(first))
1034 return;
1035 first = firstWaiter;
1036 } while (first != null);
1037 }
1038
1039 public void signal() {
1040 checkOwner(Thread.currentThread());
1041 WaitNode w = firstWaiter;
1042 if (w != null)
1043 doSignal(w);
1044 }
1045
1046 public void signalAll() {
1047 checkOwner(Thread.currentThread());
1048 // Pull off list all at once and traverse.
1049 WaitNode w = firstWaiter;
1050 if (w != null) {
1051 lastWaiter = firstWaiter = null;
1052 do {
1053 WaitNode n = w.next;
1054 w.next = null;
1055 transferToLockQueue(w);
1056 w = n;
1057 } while (w != null);
1058 }
1059 }
1060
1061 /*
1062 * Various flavors of wait. Each is almost the same, but
1063 * annoyingly different, with no nice way to factor common code.
1064 */
1065
1066 public void await() throws InterruptedException {
1067 Thread current = Thread.currentThread();
1068 checkOwner(current);
1069
1070 WaitNode w = addWaiter(current);
1071 beforeWait();
1072 int recs = recursions;
1073 unlock();
1074
1075 boolean wasInterrupted = false;
1076
1077 while (!isOffConditionQueue(w)) {
1078 JSR166Support.park(false, 0);
1079 if (Thread.interrupted()) {
1080 wasInterrupted = true;
1081 if (casCount(w, ON_CONDITION_QUEUE, CANCELLED)) {
1082 w.thread = null;
1083 w = null;
1084 }
1085 break;
1086 }
1087 }
1088
1089 /*
1090 * If we exited above loop due to cancellation, then w is
1091 * null, and doLock will make a new lock node for
1092 * us. Otherwise, upon exit, our node is already in the
1093 * lock queue when doLock is called.
1094 */
1095 doLock(current, w, NO_INTERRUPT, 0);
1096
1097 recursions = recs;
1098 afterWait();
1099
1100 if (wasInterrupted)
1101 throw new InterruptedException();
1102 }
1103
1104 public void awaitUninterruptibly() {
1105 Thread current = Thread.currentThread();
1106 checkOwner(current);
1107
1108 WaitNode w = addWaiter(current);
1109 beforeWait();
1110 int recs = recursions;
1111 unlock();
1112
1113 boolean wasInterrupted = false;
1114 while (!isOffConditionQueue(w)) {
1115 JSR166Support.park(false, 0);
1116 if (Thread.interrupted())
1117 wasInterrupted = true;
1118 }
1119
1120 doLock(current, w, NO_INTERRUPT, 0);
1121 recursions = recs;
1122 afterWait();
1123 // avoid re-interrupts on exit
1124 if (wasInterrupted && !current.isInterrupted())
1125 current.interrupt();
1126 }
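Because awaitUninterruptibly cannot report interruption by throwing, it records any interrupt observed while parked and re-asserts the thread's interrupt status just before returning. That preserve-and-restore idiom in isolation (the helper name is illustrative):

```java
class InterruptPreserving {
    /** Runs task with interrupt status cleared, restoring it afterwards. */
    static void runPreservingInterrupt(Runnable task) {
        boolean wasInterrupted = Thread.interrupted(); // reads and clears status
        try {
            task.run();
        } finally {
            // avoid re-interrupts on exit
            if (wasInterrupted && !Thread.currentThread().isInterrupted())
                Thread.currentThread().interrupt();
        }
    }
}
```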
1127
1128
1129 public long awaitNanos(long nanos) throws InterruptedException {
1130 Thread current = Thread.currentThread();
1131 checkOwner(current);
1132
1133 WaitNode w = addWaiter(current);
1134 beforeWait();
1135 int recs = recursions;
1136 unlock();
1137
1138 if (nanos <= 0) nanos = 1; // park arg must be positive
1139 long timeLeft = nanos;
1140 long startTime = TimeUnit.nanoTime();
1141 boolean wasInterrupted = false;
1142
1143
1144 if (!isOffConditionQueue(w)) {
1145 for (;;) {
1146 JSR166Support.park(false, timeLeft);
1147 if (Thread.interrupted())
1148 wasInterrupted = true;
1149 else if (isOffConditionQueue(w))
1150 break;
1151 else
1152 timeLeft = nanos - (TimeUnit.nanoTime() - startTime);
1153
1154 if (wasInterrupted || timeLeft <= 0) {
1155 if (casCount(w, ON_CONDITION_QUEUE, CANCELLED)) {
1156 w.thread = null;
1157 w = null;
1158 }
1159 break;
1160 }
1161 }
1162 }
1163
1164 doLock(current, w, NO_INTERRUPT, 0);
1165 recursions = recs;
1166 afterWait();
1167
1168 if (wasInterrupted)
1169 throw new InterruptedException();
1170 else if (timeLeft <= 0)
1171 return timeLeft;
1172 else
1173 return nanos - (TimeUnit.nanoTime() - startTime);
1174 }
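The value awaitNanos returns (the remaining wait time, zero or negative on timeout) lets callers re-wait after spurious wakeups without recomputing a deadline. A typical caller, sketched against the released java.util.concurrent.locks API (the class and method names are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class TimedWait {
    final ReentrantLock lock = new ReentrantLock();
    final Condition ready = lock.newCondition();
    boolean done;

    /** Waits up to timeoutNanos for done to become true. */
    boolean awaitDone(long timeoutNanos) throws InterruptedException {
        lock.lock();
        try {
            long nanosLeft = timeoutNanos;
            while (!done) {
                if (nanosLeft <= 0)
                    return false;                        // timed out
                nanosLeft = ready.awaitNanos(nanosLeft); // carries the remainder
            }
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```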
1175
1176 public boolean awaitUntil(Date deadline) throws InterruptedException {
1177 Thread current = Thread.currentThread();
1178 checkOwner(current);
1179
1180 WaitNode w = addWaiter(current);
1181 beforeWait();
1182 int recs = recursions;
1183 unlock();
1184
1185 boolean wasInterrupted = false;
1186 boolean timedOut = false;
1187 long abstime = deadline.getTime();
1188
1189 if (!isOffConditionQueue(w)) {
1190 for (;;) {
1191 JSR166Support.park(true, abstime);
1192
1193 if (Thread.interrupted())
1194 wasInterrupted = true;
1195 else if (isOffConditionQueue(w))
1196 break;
1197 else if (System.currentTimeMillis() >= abstime)
1198 timedOut = true;
1199
1200 if (wasInterrupted || timedOut) {
1201 if (casCount(w, ON_CONDITION_QUEUE, CANCELLED)) {
1202 w.thread = null;
1203 w = null;
1204 } else
1205 timedOut = false; // a signal won the race; node is being transferred
1206 break;
1207 }
1208 }
1209 }
1210
1211 doLock(current, w, NO_INTERRUPT, 0);
1212 recursions = recs;
1213 afterWait();
1214
1215 if (wasInterrupted)
1216 throw new InterruptedException();
1217 return !timedOut;
1218 }
1219
1220 public boolean await(long time, TimeUnit unit) throws InterruptedException {
1221 return awaitNanos(unit.toNanos(time)) > 0;
1222 }
1223
1224 }
1225
1226 }