I have implemented an LRU cache using ConcurrentLinkedHashMap. In the same map, I am purging events once the map reaches a particular limit, as shown below. I have a MAX_SIZE variable that is supposed to be equivalent to 3.7 GB; once the map reaches that limit, I purge events from it.

Here is the code:
```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentMap;

import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.EvictionListener;

// Is this equal to 3.7 GB? Can anyone explain this?
public static final int MAX_SIZE = 20000000; // equates to ~3.7 GB, assuming each event is 200 bytes on average

public static EvictionListener<String, DataObject> listener = new EvictionListener<String, DataObject>() {
    public void onEviction(String key, DataObject value) {
        deleteEvents();
    }
};

public static final ConcurrentMap<String, DataObject> holder =
        new ConcurrentLinkedHashMap.Builder<String, DataObject>()
                .maximumWeightedCapacity(MAX_SIZE)
                .listener(listener)
                .build();

private static void deleteEvents() {
    int capacity = MAX_SIZE - (MAX_SIZE * (20 / 100));
    if (holder.size() >= capacity) {
        int numEventsToEvict = (MAX_SIZE * 20) / 100;
        int counter = 0;
        Iterator<String> iter = holder.keySet().iterator();
        while (iter.hasNext() && counter < numEventsToEvict) {
            String address = iter.next();
            holder.remove(address);
            System.out.println("Purging elements: " + address);
            counter++;
        }
    }
}

// This method is called every 30 seconds by a single background thread
// to send data to our queue.
public void submit() {
    if (holder.isEmpty()) {
        return;
    }
    // other code here
    int sizeOfMsg = 0;
    Iterator<String> iter = holder.keySet().iterator();
    int allowedBytes = MAX_ALLOWED_SIZE - ALLOWED_BUFFER;
    while (iter.hasNext() && sizeOfMsg < allowedBytes) {
        String key = iter.next();
        DataObject temp = holder.get(key);
        // code here
        holder.remove(key);
        // code here to send data to the queue
    }
}

// The holder map is used in the method below to add events to it.
// The method below is being called from another place.
public void addToHolderRequest(String key, DataObject stream) {
    holder.put(key, stream);
}
```
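One detail worth double-checking in `deleteEvents()` above: in Java, `20 / 100` is integer division and evaluates to `0`, so `MAX_SIZE * (20 / 100)` is `0` and `capacity` ends up equal to `MAX_SIZE` rather than 80% of it. A quick sketch of the arithmetic (the class and method names here are only for illustration):

```java
public class IntegerDivisionDemo {
    // Mirrors the capacity expression from the question's deleteEvents():
    // integer division truncates, so 20 / 100 == 0 and nothing is subtracted.
    static int brokenCapacity(int maxSize) {
        return maxSize - (maxSize * (20 / 100));
    }

    // Multiplying before dividing keeps the intended 20% threshold.
    static int fixedCapacity(int maxSize) {
        return maxSize - ((maxSize * 20) / 100);
    }

    public static void main(String[] args) {
        System.out.println(brokenCapacity(20000000)); // prints 20000000, not 16000000
        System.out.println(fixedCapacity(20000000));  // prints 16000000
    }
}
```

With the broken expression, the `holder.size() >= capacity` check only passes once the map is already at `MAX_SIZE`, which would be consistent with the map appearing to grow without being purged.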
Below is the Maven dependency I am using for this:
```xml
<dependency>
    <groupId>com.googlecode.concurrentlinkedhashmap</groupId>
    <artifactId>concurrentlinkedhashmap-lru</artifactId>
    <version>1.4</version>
</dependency>
```
I am not sure whether this is the right way to do it. Does MAX_SIZE really equate to 3.7 GB if events are 200 bytes on average? Is there a better way to do this? I have a background thread call the deleteEvents() method every 30 seconds, and the same background thread calls the submit() method to extract data from the holder map and send it to the queue.
So the idea is: add events to the holder map in the addToHolderRequest() method; every 30 seconds, have the background thread call the submit() method, which sends data to our queue by iterating over the map; and after submit() finishes, have the same background thread call the deleteEvents() method to purge elements. I am running this code in production, and it looks like it is not purging events: the holder map size keeps growing. I have the min/max heap memory set to 6 GB.
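On the "does MAX_SIZE equate to 3.7 GB?" question: the arithmetic checks out only as a payload estimate. 20,000,000 entries at 200 bytes each is 4,000,000,000 bytes, which is about 3.73 GiB, but that ignores per-entry map overhead, key Strings, and object headers. Also note that without a custom Weigher, maximumWeightedCapacity bounds the entry count (each entry weighs 1), not bytes. A back-of-the-envelope sketch (class and method names are only for illustration):

```java
public class CapacityEstimate {
    // Rough payload-only estimate: entries x average bytes per entry, in GiB.
    // Ignores per-entry map overhead, key Strings, and object headers.
    static double estimatedGiB(long entries, long bytesPerEntry) {
        return (double) (entries * bytesPerEntry) / (1024L * 1024L * 1024L);
    }

    public static void main(String[] args) {
        // 20,000,000 events x 200 bytes = 4,000,000,000 bytes ~= 3.73 GiB.
        System.out.println(estimatedGiB(20000000L, 200L));
    }
}
```

So the ~3.7 GB comment holds only under the 200-bytes-average assumption, and the real footprint on the heap will be noticeably larger.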
- In lieu of estimating the size of objects in the JVM and referencing them with strong references, you can use soft references, which are "most often used to implement memory-sensitive caches" (SoftReference), e.g. CacheBuilder.softValues() from google/guava: Google Core Libraries for Java 6+: "Softly-referenced objects will be garbage-collected in a globally least-recently-used manner, in response to memory demand." However, I'd recommend first familiarizing yourself with CachesExplained · google/guava Wiki (specifically the Reference-based Eviction section).
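To make the soft-reference idea concrete, here is a minimal sketch using only the JDK's java.lang.ref.SoftReference (the SoftValueCache class is hypothetical, not part of any library): values are held softly, so the GC may reclaim them under memory pressure instead of the application trying to guess sizes.

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a map whose values are soft references, so the GC
// can reclaim them in response to memory demand.
public class SoftValueCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        if (ref == null) {
            return null;
        }
        V value = ref.get();  // null if the GC has cleared the referent
        if (value == null) {
            map.remove(key);  // drop the stale entry so the map does not grow
        }
        return value;
    }
}
```

Guava's CacheBuilder.softValues() gives you this behavior plus automatic cleanup of cleared entries, which is why the answer recommends it over hand-rolling.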
- As a tweak to using soft references, you can try the "victim caching approach" described here, which uses "a normal cache that evicts [entries] to a soft cache, and recovers entries on a miss if possible".
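A rough JDK-only sketch of that victim-caching idea (the VictimCache class is hypothetical): a small strongly-referenced LRU cache whose evictees fall into a softly-referenced victim map, from which entries can be recovered on a miss if the GC has not reclaimed them yet.

```java
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of victim caching: evictions from the strong LRU tier
// are demoted to soft references rather than dropped outright.
public class VictimCache<K, V> {
    private final int maxEntries;
    private final Map<K, SoftReference<V>> victims = new ConcurrentHashMap<>();
    private final LinkedHashMap<K, V> primary;

    public VictimCache(int maxEntries) {
        this.maxEntries = maxEntries;
        // accessOrder=true gives least-recently-used iteration order.
        this.primary = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > VictimCache.this.maxEntries) {
                    // Demote the evictee into the soft victim map.
                    victims.put(eldest.getKey(), new SoftReference<>(eldest.getValue()));
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized void put(K key, V value) {
        primary.put(key, value);
    }

    public synchronized V get(K key) {
        V value = primary.get(key);
        if (value != null) {
            return value;
        }
        SoftReference<V> ref = victims.remove(key);
        if (ref != null) {
            value = ref.get();            // null if the GC reclaimed it
            if (value != null) {
                primary.put(key, value);  // recovered on miss: promote back
            }
        }
        return value;
    }
}
```

This keeps a hard bound on strongly-referenced entries while letting recently evicted data survive as long as heap pressure allows.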
- If you do want to estimate the size of your objects, take a look at Ehcache and Sizing Storage Tiers. It has built-in sizing computation and enforcement for memory-limited caches.
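For a sense of what that looks like, here is a minimal ehcache.xml sketch, assuming Ehcache 2.x byte-based sizing (the cache name is made up; check the current Ehcache documentation for the exact attributes in your version). Ehcache computes entry sizes itself and enforces the byte limit:

```xml
<!-- Hypothetical ehcache.xml fragment: cap the on-heap tier by bytes, not entry count. -->
<ehcache>
  <cache name="eventHolder"
         maxBytesLocalHeap="3700M"
         memoryStoreEvictionPolicy="LRU"/>
</ehcache>
```

With a byte-based limit like this, you no longer need the 200-bytes-per-event assumption at all.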