java – Is it a good practice to wrap ConcurrentHashMap read and write operations with ReentrantLock? – Education Career Blog

I think ReentrantLock is already used in the implementation of ConcurrentHashMap, so there should be no need to use a ReentrantLock when accessing a ConcurrentHashMap object; doing so would only add synchronization overhead. Any comments?


What would you (or anyone) like to achieve with that? ConcurrentHashMap is already thread-safe as it is. Wrapping it with extra locking code would just slow it down significantly, since

  1. it does not lock on reads per se,
  2. even for writes, you can hardly mimic its internal lock partitioning behaviour externally.

In other words, adding extra locking would significantly increase the chance of thread contention (and, for the record, it would also impose stricter guarantees on read operations than ConcurrentHashMap's weakly consistent semantics, which is rarely what you actually need).
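The usual reason people reach for an external lock is a compound check-then-act, such as "read a count, then write it back". ConcurrentHashMap already provides atomic methods for exactly this, so no external ReentrantLock is needed. A minimal sketch (the class and map names are illustrative, and `merge` assumes Java 8+):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CounterExample {
    // Illustrative word-count map; no external lock required.
    static final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

    // The tempting but unnecessary pattern is: lock, get, put, unlock.
    // Instead, merge() performs the read-modify-write atomically inside the map.
    static void increment(String word) {
        counts.merge(word, 1, Integer::sum);
    }

    public static void main(String[] args) {
        increment("foo");
        increment("foo");
        System.out.println(counts.get("foo")); // prints 2
    }
}
```

Even under many concurrent callers, `merge` (like `putIfAbsent` and `computeIfAbsent`) lets the map apply its own fine-grained internal locking, which external code cannot replicate.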

ConcurrentHashMap provides an implementation of ConcurrentMap and offers a highly effective solution to the problem of reconciling throughput with thread safety. It is optimized for reading, so retrievals do not block even while the table is being updated (to allow for this, the contract states that the results of retrievals will reflect the latest update operations completed before the start of the retrieval). Updates also can often proceed without blocking, because a ConcurrentHashMap consists of not one but a set of tables, called segments, each of which can be independently locked. If the number of segments is large enough relative to the number of threads accessing the table, there will often be no more than one update in progress per segment at any time.

From Java Generics and Collections, chapter 16.4.
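The non-blocking read behaviour described above is also visible in iteration: ConcurrentHashMap's iterators are weakly consistent rather than fail-fast, so reading while the table is being updated does not throw ConcurrentModificationException. A small sketch (names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentDemo {
    static int demo() {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        // With a plain HashMap this loop would throw
        // ConcurrentModificationException. ConcurrentHashMap's iterator is
        // weakly consistent: it tolerates concurrent updates and reflects
        // some state of the map at or after the iterator's creation.
        for (String key : map.keySet()) {
            map.put("c", 3); // update while iterating
        }
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 3
    }
}
```

This is the looser read contract the quote refers to: retrievals reflect the most recently completed updates, without blocking writers.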


The whole point of ConcurrentHashMap is that you do not lock around accesses and modifications to it. Extra locking just adds overhead.
