  49     * and a significantly lower value can lead to thread contention. But
  50     * overestimates and underestimates within an order of magnitude do
  51     * not usually have much noticeable impact. A value of one is
  52 <   * appropriate when it is known that only one thread will modify
  53 <   * and all others will only read.
  52 >   * appropriate when it is known that only one thread will modify and
  53 >   * all others will only read. Also, resizing this or any other kind of
  54 >   * hash table is a relatively slow operation, so, when possible, it is
  55 >   * a good idea to provide estimates of expected table sizes in
  56 >   * constructors.
  57     *
  58 <   * <p>This class implements all of the <em>optional</em> methods
  59 <   * of the {@link Map} and {@link Iterator} interfaces.
  58 >   * <p>This class and its views and iterators implement all of the
  59 >   * <em>optional</em> methods of the {@link Map} and {@link Iterator}
  60 >   * interfaces.
  61     *
  62     * <p> Like {@link java.util.Hashtable} but unlike {@link
  63     * java.util.HashMap}, this class does NOT allow <tt>null</tt> to be
  73     * @param <V> the type of mapped values
  74     */
  75     public class ConcurrentHashMap<K, V> extends AbstractMap<K, V>
  76 <       implements ConcurrentMap<K, V>, Cloneable, Serializable {
  76 >       implements ConcurrentMap<K, V>, Serializable {
  77     private static final long serialVersionUID = 7249069246763182397L;
  78
  79     /*
 114     */
 115    static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
 116
 117 +  /**
 118 +   * Number of unsynchronized retries in size and containsValue
 119 +   * methods before resorting to locking. This is used to avoid
 120 +   * unbounded retries if tables undergo continuous modification
 121 +   * which would make it impossible to obtain an accurate result.
 122 +   */
 123 +  static final int RETRIES_BEFORE_LOCK = 2;
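The size() and containsValue() methods further down in this diff apply the constant above: a few optimistic passes over the segments, validated by comparing per-segment modCounts, and a fallback that locks every segment only if the table keeps changing. A minimal, self-contained sketch of that pattern (not part of the patch; the Counter class is a hypothetical stand-in for a Segment):

    import java.util.concurrent.locks.ReentrantLock;

    final class Counter extends ReentrantLock {
        volatile int count;   // number of elements, written last by updaters
        int modCount;         // bumped on every structural change
    }

    final class SnapshotSum {
        static final int RETRIES_BEFORE_LOCK = 2;

        static long sum(Counter[] parts) {
            int[] mc = new int[parts.length];
            // Optimistic passes: no locks, only volatile reads.
            for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
                long sum = 0;
                for (int i = 0; i < parts.length; ++i) {
                    sum += parts[i].count;
                    mc[i] = parts[i].modCount;
                }
                boolean clean = true;
                for (int i = 0; i < parts.length; ++i) {
                    if (mc[i] != parts[i].modCount) { clean = false; break; }
                }
                if (clean)
                    return sum;          // consistent snapshot, no locking needed
            }
            // Continuous modification: lock everything for an exact answer.
            for (int i = 0; i < parts.length; ++i) parts[i].lock();
            try {
                long sum = 0;
                for (int i = 0; i < parts.length; ++i) sum += parts[i].count;
                return sum;
            } finally {
                for (int i = 0; i < parts.length; ++i) parts[i].unlock();
            }
        }
    }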
 124 +
 125    /* ---------------- Fields -------------- */
 126
 127    /**
 173    /* ---------------- Inner Classes -------------- */
 174
 175    /**
 176 +   * ConcurrentHashMap list entry. Note that this is never exported
 177 +   * out as a user-visible Map.Entry.
 178 +   *
 179 +   * Because the value field is volatile, not final, it is legal wrt
 180 +   * the Java Memory Model for an unsynchronized reader to see null
 181 +   * instead of initial value when read via a data race. Although a
 182 +   * reordering leading to this is not likely to ever actually
 183 +   * occur, the Segment.readValueUnderLock method is used as a
 184 +   * backup in case a null (pre-initialized) value is ever seen in
 185 +   * an unsynchronized access method.
 186 +   */
 187 +  static final class HashEntry<K,V> {
 188 +  final K key;
 189 +  final int hash;
 190 +  volatile V value;
 191 +  final HashEntry<K,V> next;
 192 +
 193 +  HashEntry(K key, int hash, HashEntry<K,V> next, V value) {
 194 +  this.key = key;
 195 +  this.hash = hash;
 196 +  this.next = next;
 197 +  this.value = value;
 198 +  }
 199 +  }
 200 +
 201 +  /**
 202     * Segments are specialized versions of hash tables. This
 203     * subclasses from ReentrantLock opportunistically, just to
 204     * simplify some locking and avoid separate construction.
 216     * is less than two for the default load factor threshold.)
 217     *
 218     * Read operations can thus proceed without locking, but rely
 219 <   * on a memory barrier to ensure that completed write
 220 <   * operations performed by other threads are
 221 <   * noticed. Conveniently, the "count" field, tracking the
 222 <   * number of elements, can also serve as the volatile variable
 223 <   * providing proper read/write barriers. This is convenient
 224 <   * because this field needs to be read in many read operations
 187 <   * anyway.
 188 <   *
 189 <   * Implementors note. The basic rules for all this are:
 219 >   * on selected uses of volatiles to ensure that completed
 220 >   * write operations performed by other threads are
 221 >   * noticed. For most purposes, the "count" field, tracking the
 222 >   * number of elements, serves as that volatile variable
 223 >   * ensuring visibility. This is convenient because this field
 224 >   * needs to be read in many read operations anyway:
 225     *
 226 <   * - All unsynchronized read operations must first read the
 226 >   * - All (unsynchronized) read operations must first read the
 227     *   "count" field, and should not look at table entries if
 228     *   it is 0.
 229     *
 230 <   * - All synchronized write operations should write to
 231 <   *   the "count" field after updating. The operations must not
 232 <   *   take any action that could even momentarily cause
 233 <   *   a concurrent read operation to see inconsistent
 234 <   *   data. This is made easier by the nature of the read
 235 <   *   operations in Map. For example, no operation
 230 >   * - All (synchronized) write operations should write to
 231 >   *   the "count" field after structurally changing any bin.
 232 >   *   The operations must not take any action that could even
 233 >   *   momentarily cause a concurrent read operation to see
 234 >   *   inconsistent data. This is made easier by the nature of
 235 >   *   the read operations in Map. For example, no operation
 236     *   can reveal that the table has grown but the threshold
 237     *   has not yet been updated, so there are no atomicity
 238     *   requirements for this with respect to reads.
 239     *
 240 <   * As a guide, all critical volatile reads and writes are marked
 241 <   * in code comments.
 240 >   * As a guide, all critical volatile reads and writes to the
 241 >   * count field are marked in code comments.
 242     */
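The rules above amount to a publication idiom: a writer makes its structural change and then writes the volatile count; a reader reads count first and only then touches the table. A minimal sketch of that idiom in isolation (not part of the patch; Box and Node are hypothetical names):

    final class Box<T> {
        static final class Node<E> {
            final E item;
            Node(E item) { this.item = item; }
        }

        private final Node<T>[] table;
        private volatile int count;          // the volatile that carries visibility

        @SuppressWarnings("unchecked")
        Box(int capacity) { table = (Node<T>[]) new Node[capacity]; }

        synchronized void add(int index, T item) {
            table[index] = new Node<T>(item); // structural change first...
            count = count + 1;                // ...then write-volatile publishes it
        }

        T peek(int index) {
            if (count != 0) {                 // read-volatile before touching entries
                Node<T> n = table[index];
                if (n != null)
                    return n.item;
            }
            return null;
        }
    }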
 243
 244    private static final long serialVersionUID = 2249069246763182397L;
 249    transient volatile int count;
 250
 251    /**
 252 <   * Number of updates; used for checking lack of modifications
 253 <   * in bulk-read methods.
 252 >   * Number of updates that alter the size of the table. This is
 253 >   * used during bulk-read methods to make sure they see a
 254 >   * consistent snapshot: If modCounts change during a traversal
 255 >   * of segments computing size or checking containsValue, then
 256 >   * we might have an inconsistent view of state so (usually)
 257 >   * must retry.
 258     */
 259    transient int modCount;
 260
 266    transient int threshold;
 267
 268    /**
 269 <   * The per-segment table
 269 >   * The per-segment table. Declared as a raw type, casted
 270 >   * to HashEntry<K,V> on each use.
 271     */
 272 <  transient HashEntry[] table;
 272 >  transient volatile HashEntry[] table;
 273
 274    /**
 275     * The load factor for the hash table. Even though this value
 289     * Call only while holding lock or in constructor.
 290     **/
 291    void setTable(HashEntry[] newTable) {
 252 -  table = newTable;
 292    threshold = (int)(newTable.length * loadFactor);
 293 <  count = count; // write-volatile
 293 >  table = newTable;
 294 >  }
 295 >
 296 >  /**
 297 >   * Return properly casted first entry of bin for given hash
 298 >   */
 299 >  HashEntry<K,V> getFirst(int hash) {
 300 >  HashEntry[] tab = table;
 301 >  return (HashEntry<K,V>) tab[hash & (tab.length - 1)];
 302 >  }
 303 >
 304 >  /**
 305 >   * Read value field of an entry under lock. Called if value
 306 >   * field ever appears to be null. This is possible only if a
 307 >   * compiler happens to reorder a HashEntry initialization with
 308 >   * its table assignment, which is legal under memory model
 309 >   * but is not known to ever occur.
 310 >   */
 311 >  V readValueUnderLock(HashEntry<K,V> e) {
 312 >  lock();
 313 >  try {
 314 >  return e.value;
 315 >  } finally {
 316 >  unlock();
 317 >  }
 318    }
 319
 320    /* Specialized implementations of map methods */
 321
 322    V get(Object key, int hash) {
 323    if (count != 0) { // read-volatile
 324 <  HashEntry[] tab = table;
 262 <  int index = hash & (tab.length - 1);
 263 <  HashEntry<K,V> e = (HashEntry<K,V>) tab[index];
 324 >  HashEntry<K,V> e = getFirst(hash);
 325    while (e != null) {
 326 <  if (e.hash == hash && key.equals(e.key))
 327 <  return e.value;
 326 >  if (e.hash == hash && key.equals(e.key)) {
 327 >  V v = e.value;
 328 >  if (v != null)
 329 >  return v;
 330 >  return readValueUnderLock(e); // recheck
 331 >  }
 332    e = e.next;
 333    }
 334    }
 337
 338    boolean containsKey(Object key, int hash) {
 339    if (count != 0) { // read-volatile
 340 <  HashEntry[] tab = table;
 276 <  int index = hash & (tab.length - 1);
 277 <  HashEntry<K,V> e = (HashEntry<K,V>) tab[index];
 340 >  HashEntry<K,V> e = getFirst(hash);
 341    while (e != null) {
 342    if (e.hash == hash && key.equals(e.key))
 343    return true;
 351    if (count != 0) { // read-volatile
 352    HashEntry[] tab = table;
 353    int len = tab.length;
 354 <  for (int i = 0 ; i < len; i++)
 355 <  for (HashEntry<K,V> e = (HashEntry<K,V>)tab[i] ; e != null ; e = e.next)
 356 <  if (value.equals(e.value))
 354 >  for (int i = 0 ; i < len; i++) {
 355 >  for (HashEntry<K,V> e = (HashEntry<K,V>)tab[i];
 356 >  e != null ;
 357 >  e = e.next) {
 358 >  V v = e.value;
 359 >  if (v == null) // recheck
 360 >  v = readValueUnderLock(e);
 361 >  if (value.equals(v))
 362    return true;
 363 +  }
 364 +  }
 365    }
 366    return false;
 367    }
 369    boolean replace(K key, int hash, V oldValue, V newValue) {
 370    lock();
 371    try {
 372 <  int c = count;
 373 <  HashEntry[] tab = table;
 304 <  int index = hash & (tab.length - 1);
 305 <  HashEntry<K,V> first = (HashEntry<K,V>) tab[index];
 306 <  HashEntry<K,V> e = first;
 307 <  for (;;) {
 308 <  if (e == null)
 309 <  return false;
 310 <  if (e.hash == hash && key.equals(e.key))
 311 <  break;
 372 >  HashEntry<K,V> e = getFirst(hash);
 373 >  while (e != null && (e.hash != hash || !key.equals(e.key)))
 374    e = e.next;
 313 -  }
 314 -
 315 -  V v = e.value;
 316 -  if (v == null || !oldValue.equals(v))
 317 -  return false;
 375
 376 <  e.value = newValue;
 377 <  count = c; // write-volatile
 378 <  return true;
 379 <
 376 >  boolean replaced = false;
 377 >  if (e != null && oldValue.equals(e.value)) {
 378 >  replaced = true;
 379 >  e.value = newValue;
 380 >  }
 381 >  return replaced;
 382    } finally {
 383    unlock();
 384    }
 387    V replace(K key, int hash, V newValue) {
 388    lock();
 389    try {
 390 <  int c = count;
 391 <  HashEntry[] tab = table;
 333 <  int index = hash & (tab.length - 1);
 334 <  HashEntry<K,V> first = (HashEntry<K,V>) tab[index];
 335 <  HashEntry<K,V> e = first;
 336 <  for (;;) {
 337 <  if (e == null)
 338 <  return null;
 339 <  if (e.hash == hash && key.equals(e.key))
 340 <  break;
 390 >  HashEntry<K,V> e = getFirst(hash);
 391 >  while (e != null && (e.hash != hash || !key.equals(e.key)))
 392    e = e.next;
 342 -  }
 393
 394 <  V v = e.value;
 395 <  e.value = newValue;
 396 <  count = c; // write-volatile
 397 <  return v;
 398 <
 394 >  V oldValue = null;
 395 >  if (e != null) {
 396 >  oldValue = e.value;
 397 >  e.value = newValue;
 398 >  }
 399 >  return oldValue;
 400    } finally {
 401    unlock();
 402    }
 407    lock();
 408    try {
 409    int c = count;
 410 +  if (c++ > threshold) // ensure capacity
 411 +  rehash();
 412    HashEntry[] tab = table;
 413    int index = hash & (tab.length - 1);
 414    HashEntry<K,V> first = (HashEntry<K,V>) tab[index];
 415 +  HashEntry<K,V> e = first;
 416 +  while (e != null && (e.hash != hash || !key.equals(e.key)))
 417 +  e = e.next;
 418
 419 <  for (HashEntry<K,V> e = first; e != null; e = (HashEntry<K,V>) e.next) {
 420 <  if (e.hash == hash && key.equals(e.key)) {
 421 <  V oldValue = e.value;
 422 <  if (!onlyIfAbsent)
 423 <  e.value = value;
 368 <  ++modCount;
 369 <  count = c; // write-volatile
 370 <  return oldValue;
 371 <  }
 419 >  V oldValue;
 420 >  if (e != null) {
 421 >  oldValue = e.value;
 422 >  if (!onlyIfAbsent)
 423 >  e.value = value;
 424    }
 425 <
 426 <  tab[index] = new HashEntry<K,V>(hash, key, value, first);
 427 <  ++modCount;
 428 <  ++c;
 429 <  count = c; // write-volatile
 430 <  if (c > threshold)
 431 <  setTable(rehash(tab));
 380 <  return null;
 425 >  else {
 426 >  oldValue = null;
 427 >  ++modCount;
 428 >  tab[index] = new HashEntry<K,V>(key, hash, first, value);
 429 >  count = c; // write-volatile
 430 >  }
 431 >  return oldValue;
 432    } finally {
 433    unlock();
 434    }
 435    }
 436
 437 <  HashEntry[] rehash(HashEntry[] oldTable) {
 437 >  void rehash() {
 438 >  HashEntry[] oldTable = table;
 439    int oldCapacity = oldTable.length;
 440    if (oldCapacity >= MAXIMUM_CAPACITY)
 441 <  return oldTable;
 441 >  return;
 442
 443    /*
 444     * Reclassify nodes in each list to new Map. Because we are
 455     */
 456
 457    HashEntry[] newTable = new HashEntry[oldCapacity << 1];
 458 +  threshold = (int)(newTable.length * loadFactor);
 459    int sizeMask = newTable.length - 1;
 460    for (int i = 0; i < oldCapacity ; i++) {
 461    // We need to guarantee that any existing reads of old Map can
 488    // Clone all remaining nodes
 489    for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
 490    int k = p.hash & sizeMask;
 491 <  newTable[k] = new HashEntry<K,V>(p.hash,
 492 <  p.key,
 493 <  p.value,
 441 <  (HashEntry<K,V>) newTable[k]);
 491 >  HashEntry<K,V> n = (HashEntry<K,V>)newTable[k];
 492 >  newTable[k] = new HashEntry<K,V>(p.key, p.hash,
 493 >  n, p.value);
 494    }
 495    }
 496    }
 497    }
 498 <  return newTable;
 498 >  table = newTable;
 499    }
 500
 501    /**
 504    V remove(Object key, int hash, Object value) {
 505    lock();
 506    try {
 507 <  int c = count;
 507 >  int c = count - 1;
 508    HashEntry[] tab = table;
 509    int index = hash & (tab.length - 1);
 510    HashEntry<K,V> first = (HashEntry<K,V>)tab[index];
 459 -
 511    HashEntry<K,V> e = first;
 512 <  for (;;) {
 462 <  if (e == null)
 463 <  return null;
 464 <  if (e.hash == hash && key.equals(e.key))
 465 <  break;
 512 >  while (e != null && (e.hash != hash || !key.equals(e.key)))
 513    e = e.next;
 467 -  }
 514
 515 <  V oldValue = e.value;
 516 <  if (value != null && !value.equals(oldValue))
 517 <  return null;
 518 <
 519 <  // All entries following removed node can stay in list, but
 520 <  // all preceding ones need to be cloned.
 521 <  HashEntry<K,V> newFirst = e.next;
 522 <  for (HashEntry<K,V> p = first; p != e; p = p.next)
 523 <  newFirst = new HashEntry<K,V>(p.hash, p.key,
 524 <  p.value, newFirst);
 525 <  tab[index] = newFirst;
 526 <  ++modCount;
 527 <  count = c-1; // write-volatile
 515 >  V oldValue = null;
 516 >  if (e != null) {
 517 >  V v = e.value;
 518 >  if (value == null || value.equals(v)) {
 519 >  oldValue = v;
 520 >  // All entries following removed node can stay
 521 >  // in list, but all preceding ones need to be
 522 >  // cloned.
 523 >  ++modCount;
 524 >  HashEntry<K,V> newFirst = e.next;
 525 >  for (HashEntry<K,V> p = first; p != e; p = p.next)
 526 >  newFirst = new HashEntry<K,V>(p.key, p.hash,
 527 >  newFirst, p.value);
 528 >  tab[index] = newFirst;
 529 >  count = c; // write-volatile
 530 >  }
 531 >  }
 532    return oldValue;
 533    } finally {
 534    unlock();
 536    }
 537
 538    void clear() {
 539 <  lock();
 540 <  try {
 541 <  HashEntry[] tab = table;
 542 <  for (int i = 0; i < tab.length ; i++)
 543 <  tab[i] = null;
 544 <  ++modCount;
 545 <  count = 0; // write-volatile
 546 <  } finally {
 547 <  unlock();
 539 >  if (count != 0) {
 540 >  lock();
 541 >  try {
 542 >  HashEntry[] tab = table;
 543 >  for (int i = 0; i < tab.length ; i++)
 544 >  tab[i] = null;
 545 >  ++modCount;
 546 >  count = 0; // write-volatile
 547 >  } finally {
 548 >  unlock();
 549 >  }
 550    }
 551    }
 552    }
 553
 502 -  /**
 503 -   * ConcurrentHashMap list entry. Note that this is never exported
 504 -   * out as a user-visible Map.Entry
 505 -   */
 506 -  static final class HashEntry<K,V> {
 507 -  final K key;
 508 -  V value;
 509 -  final int hash;
 510 -  final HashEntry<K,V> next;
 511 -
 512 -  HashEntry(int hash, K key, V value, HashEntry<K,V> next) {
 513 -  this.value = value;
 514 -  this.hash = hash;
 515 -  this.key = key;
 516 -  this.next = next;
 517 -  }
 518 -  }
 554
 555
 556    /* ---------------- Public operations -------------- */
 557
 558    /**
 559     * Creates a new, empty map with the specified initial
 560 <   * capacity and the specified load factor.
 560 >   * capacity, load factor, and concurrency level.
 561     *
 562     * @param initialCapacity the initial capacity. The implementation
 563     * performs internal sizing to accommodate this many elements.
 564     * @param loadFactor the load factor threshold, used to control resizing.
 565 +   * Resizing may be performed when the average number of elements per
 566 +   * bin exceeds this threshold.
 567     * @param concurrencyLevel the estimated number of concurrently
 568     * updating threads. The implementation performs internal sizing
 569     * to try to accommodate this many threads.
 605
 606    /**
 607     * Creates a new, empty map with the specified initial
 608 <   * capacity, and with default load factor and concurrencyLevel.
 608 >   * capacity, and with default load factor (<tt>0.75f</tt>)
 609 >   * and concurrencyLevel (<tt>16</tt>).
 610     *
 611 <   * @param initialCapacity The implementation performs internal
 612 <   * sizing to accommodate this many elements.
 611 >   * @param initialCapacity the initial capacity. The implementation
 612 >   * performs internal sizing to accommodate this many elements.
 613     * @throws IllegalArgumentException if the initial capacity of
 614     * elements is negative.
 615     */
 618    }
 619
 620    /**
 621 <   * Creates a new, empty map with a default initial capacity,
 622 <   * load factor, and concurrencyLevel.
 621 >   * Creates a new, empty map with a default initial capacity
 622 >   * (<tt>16</tt>), load factor (<tt>0.75f</tt>) and
 623 >   * concurrencyLevel (<tt>16</tt>).
 624     */
 625    public ConcurrentHashMap() {
 626    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_SEGMENTS);
 628
 629    /**
 630     * Creates a new map with the same mappings as the given map. The
 631 <   * map is created with a capacity of twice the number of mappings in
 632 <   * the given map or 11 (whichever is greater), and a default load factor.
 631 >   * map is created with a capacity consistent with the default load
 632 >   * factor (<tt>0.75f</tt>) and uses the default concurrencyLevel
 633 >   * (<tt>16</tt>).
 634     * @param t the map
 635     */
 636    public ConcurrentHashMap(Map<? extends K, ? extends V> t) {
 637    this(Math.max((int) (t.size() / DEFAULT_LOAD_FACTOR) + 1,
 638 <  11),
 638 >  DEFAULT_INITIAL_CAPACITY),
 639    DEFAULT_LOAD_FACTOR, DEFAULT_SEGMENTS);
 640    putAll(t);
 641    }
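The class javadoc near the top of this diff recommends supplying estimates of expected size and concurrency to the constructors above, since resizing is slow. A hedged usage sketch (the figures are arbitrary examples, not part of the patch):

    import java.util.concurrent.ConcurrentHashMap;

    public class SizingExample {
        public static void main(String[] args) {
            // Roughly 10000 entries expected, default load factor,
            // about 8 threads updating concurrently.
            ConcurrentHashMap<String, Integer> hits =
                new ConcurrentHashMap<String, Integer>(10000, 0.75f, 8);

            // One writer, many readers: a concurrency level of 1 is enough.
            ConcurrentHashMap<String, String> settings =
                new ConcurrentHashMap<String, String>(64, 0.75f, 1);

            hits.put("home", Integer.valueOf(1));
            settings.put("mode", "fast");
            System.out.println(hits.size() + " " + settings.size());
        }
    }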
 644    public boolean isEmpty() {
 645    final Segment[] segments = this.segments;
 646    /*
 647 <   * We need to keep track of per-segment modCounts to avoid ABA
 647 >   * We keep track of per-segment modCounts to avoid ABA
 648     * problems in which an element in one segment was added and
 649     * in another removed during traversal, in which case the
 650     * table was never actually empty at any point. Note the
 676    // inherit Map javadoc
 677    public int size() {
 678    final Segment[] segments = this.segments;
 679 +  long sum = 0;
 680 +  long check = 0;
 681    int[] mc = new int[segments.length];
 682 <  for (;;) {
 683 <  long sum = 0;
 682 >  // Try a few times to get accurate count. On failure due to
 683 >  // continuous async changes in table, resort to locking.
 684 >  for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
 685 >  check = 0;
 686 >  sum = 0;
 687    int mcsum = 0;
 688    for (int i = 0; i < segments.length; ++i) {
 689    sum += segments[i].count;
 690    mcsum += mc[i] = segments[i].modCount;
 691    }
 647 -  int check = 0;
 692    if (mcsum != 0) {
 693    for (int i = 0; i < segments.length; ++i) {
 694    check += segments[i].count;
 698    }
 699    }
 700    }
 701 <  if (check == sum) {
 702 <  if (sum > Integer.MAX_VALUE)
 703 <  return Integer.MAX_VALUE;
 704 <  else
 705 <  return (int)sum;
 706 <  }
 701 >  if (check == sum)
 702 >  break;
 703 >  }
 704 >  if (check != sum) { // Resort to locking all segments
 705 >  sum = 0;
 706 >  for (int i = 0; i < segments.length; ++i)
 707 >  segments[i].lock();
 708 >  for (int i = 0; i < segments.length; ++i)
 709 >  sum += segments[i].count;
 710 >  for (int i = 0; i < segments.length; ++i)
 711 >  segments[i].unlock();
 712    }
 713 +  if (sum > Integer.MAX_VALUE)
 714 +  return Integer.MAX_VALUE;
 715 +  else
 716 +  return (int)sum;
 717    }
 718
 719
 761    public boolean containsValue(Object value) {
 762    if (value == null)
 763    throw new NullPointerException();
 764 +
 765 +  // See explanation of modCount use above
 766
 767    final Segment[] segments = this.segments;
 768    int[] mc = new int[segments.length];
 769 <  for (;;) {
 769 >
 770 >  // Try a few times without locking
 771 >  for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
 772    int sum = 0;
 773    int mcsum = 0;
 774    for (int i = 0; i < segments.length; ++i) {
 790    if (cleanSweep)
 791    return false;
 792    }
 793 +  // Resort to locking all segments
 794 +  for (int i = 0; i < segments.length; ++i)
 795 +  segments[i].lock();
 796 +  boolean found = false;
 797 +  try {
 798 +  for (int i = 0; i < segments.length; ++i) {
 799 +  if (segments[i].containsValue(value)) {
 800 +  found = true;
 801 +  break;
 802 +  }
 803 +  }
 804 +  } finally {
 805 +  for (int i = 0; i < segments.length; ++i)
 806 +  segments[i].unlock();
 807 +  }
 808 +  return found;
 809    }
 810
 811    /**
 863     * @param key key with which the specified value is to be associated.
 864     * @param value value to be associated with the specified key.
 865     * @return previous value associated with specified key, or <tt>null</tt>
 866 <   * if there was no mapping for key. A <tt>null</tt> return can
 794 <   * also indicate that the map previously associated <tt>null</tt>
 795 <   * with the specified key, if the implementation supports
 796 <   * <tt>null</tt> values.
 797 <   *
 798 <   * @throws UnsupportedOperationException if the <tt>put</tt> operation is
 799 <   * not supported by this map.
 800 <   * @throws ClassCastException if the class of the specified key or value
 801 <   * prevents it from being stored in this map.
 866 >   * if there was no mapping for key.
 867     * @throws NullPointerException if the specified key or value is
 868     * <tt>null</tt>.
 869 <   *
 805 <   **/
 869 >   */
 870    public V putIfAbsent(K key, V value) {
 871    if (value == null)
 872    throw new NullPointerException();
 884     * @param t Mappings to be stored in this map.
 885     */
 886    public void putAll(Map<? extends K, ? extends V> t) {
 887 <  for (Iterator<Map.Entry<? extends K, ? extends V>> it = (Iterator<Map.Entry<? extends K, ? extends V>>) t.entrySet().iterator(); it.hasNext(); ) {
 887 >  for (Iterator<? extends Map.Entry<? extends K, ? extends V>> it = (Iterator<? extends Map.Entry<? extends K, ? extends V>>) t.entrySet().iterator(); it.hasNext(); ) {
 888    Entry<? extends K, ? extends V> e = it.next();
 889    put(e.getKey(), e.getValue());
 890    }
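putIfAbsent, documented above, inserts atomically only when the key is absent, which makes it suitable for canonicalizing caches. A hedged usage sketch (the Canonicalizer name is illustrative only, not part of the patch):

    import java.util.concurrent.ConcurrentHashMap;

    class Canonicalizer {
        private final ConcurrentHashMap<String, Object> cache =
            new ConcurrentHashMap<String, Object>();

        Object intern(String key) {
            Object existing = cache.get(key);
            if (existing != null)
                return existing;
            Object candidate = new Object();            // stand-in for an expensive value
            Object raced = cache.putIfAbsent(key, candidate);
            // If another thread inserted first, putIfAbsent returns its value.
            return (raced != null) ? raced : candidate;
        }
    }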
 983    segments[i].clear();
 984    }
 985
 922 -
 923 -  /**
 924 -   * Returns a shallow copy of this
 925 -   * <tt>ConcurrentHashMap</tt> instance: the keys and
 926 -   * values themselves are not cloned.
 927 -   *
 928 -   * @return a shallow copy of this map.
 929 -   */
 930 -  public Object clone() {
 931 -  // We cannot call super.clone, since it would share final
 932 -  // segments array, and there's no way to reassign finals.
 933 -
 934 -  float lf = segments[0].loadFactor;
 935 -  int segs = segments.length;
 936 -  int cap = (int)(size() / lf);
 937 -  if (cap < segs) cap = segs;
 938 -  ConcurrentHashMap<K,V> t = new ConcurrentHashMap<K,V>(cap, lf, segs);
 939 -  t.putAll(this);
 940 -  return t;
 941 -  }
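This revision drops Cloneable and the clone() method removed above; a snapshot copy can still be made through the copy constructor shown earlier in the file. A minimal sketch (not part of the patch):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class CopyExample {
        static <K, V> ConcurrentHashMap<K, V> copyOf(Map<K, V> source) {
            // The copy constructor sizes the new map from source.size().
            return new ConcurrentHashMap<K, V>(source);
        }
    }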
 942 -
 986    /**
 987     * Returns a set view of the keys contained in this map. The set is
 988     * backed by the map, so changes to the map are reflected in the set, and
 991     * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt>, and
 992     * <tt>clear</tt> operations. It does not support the <tt>add</tt> or
 993     * <tt>addAll</tt> operations.
 994 <   * The returned <tt>iterator</tt> is a "weakly consistent" iterator that
 994 >   * The view's returned <tt>iterator</tt> is a "weakly consistent" iterator that
 995     * will never throw {@link java.util.ConcurrentModificationException},
 996     * and guarantees to traverse elements as they existed upon
 997     * construction of the iterator, and may (but is not guaranteed to)
1013     * <tt>Iterator.remove</tt>, <tt>Collection.remove</tt>,
1014     * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt> operations.
1015     * It does not support the <tt>add</tt> or <tt>addAll</tt> operations.
1016 <   * The returned <tt>iterator</tt> is a "weakly consistent" iterator that
1016 >   * The view's returned <tt>iterator</tt> is a "weakly consistent" iterator that
1017     * will never throw {@link java.util.ConcurrentModificationException},
1018     * and guarantees to traverse elements as they existed upon
1019     * construction of the iterator, and may (but is not guaranteed to)
1036     * <tt>Iterator.remove</tt>, <tt>Collection.remove</tt>,
1037     * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt> operations.
1038     * It does not support the <tt>add</tt> or <tt>addAll</tt> operations.
1039 <   * The returned <tt>iterator</tt> is a "weakly consistent" iterator that
1039 >   * The view's returned <tt>iterator</tt> is a "weakly consistent" iterator that
1040     * will never throw {@link java.util.ConcurrentModificationException},
1041     * and guarantees to traverse elements as they existed upon
1042     * construction of the iterator, and may (but is not guaranteed to)
1221    public void clear() {
1222    ConcurrentHashMap.this.clear();
1223    }
1224 +  public Object[] toArray() {
1225 +  Collection<K> c = new ArrayList<K>();
1226 +  for (Iterator<K> i = iterator(); i.hasNext(); )
1227 +  c.add(i.next());
1228 +  return c.toArray();
1229 +  }
1230 +  public <T> T[] toArray(T[] a) {
1231 +  Collection<K> c = new ArrayList<K>();
1232 +  for (Iterator<K> i = iterator(); i.hasNext(); )
1233 +  c.add(i.next());
1234 +  return c.toArray(a);
1235 +  }
1236    }
1237
1238    final class Values extends AbstractCollection<V> {
1248    public void clear() {
1249    ConcurrentHashMap.this.clear();
1250    }
1251 +  public Object[] toArray() {
1252 +  Collection<V> c = new ArrayList<V>();
1253 +  for (Iterator<V> i = iterator(); i.hasNext(); )
1254 +  c.add(i.next());
1255 +  return c.toArray();
1256 +  }
1257 +  public <T> T[] toArray(T[] a) {
1258 +  Collection<V> c = new ArrayList<V>();
1259 +  for (Iterator<V> i = iterator(); i.hasNext(); )
1260 +  c.add(i.next());
1261 +  return c.toArray(a);
1262 +  }
1263    }
1264
1265    final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
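The view javadoc above describes the keySet, values, and entrySet iterators as weakly consistent: they never throw ConcurrentModificationException and tolerate concurrent updates during traversal. A hedged usage sketch (not part of the patch):

    import java.util.Iterator;
    import java.util.concurrent.ConcurrentHashMap;

    class WeaklyConsistentDemo {
        public static void main(String[] args) throws InterruptedException {
            final ConcurrentHashMap<Integer, String> map =
                new ConcurrentHashMap<Integer, String>();
            for (int i = 0; i < 1000; i++)
                map.put(Integer.valueOf(i), "v" + i);

            Thread writer = new Thread(new Runnable() {
                public void run() {
                    for (int i = 1000; i < 2000; i++)
                        map.put(Integer.valueOf(i), "v" + i);  // mutate during iteration
                }
            });
            writer.start();

            int seen = 0;
            for (Iterator<Integer> it = map.keySet().iterator(); it.hasNext(); ) {
                it.next();
                seen++;    // never throws ConcurrentModificationException
            }
            writer.join();
            System.out.println("saw " + seen + " keys during concurrent puts");
        }
    }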