Working with jRQL is based entirely on content class names and content class element names, so in the background it needs to access content classes and elements very often. This was the main reason for introducing caches within the objects.
I introduced caches wherever possible, following the policy of caching the results of RQL commands for mostly static information. Even page details are cached, because I assumed they will not change while a jRQL process is running.
There are several exceptions, however. The page state within the workflow (draft, in correction, released), for instance, is never cached, because I consider it to change quite often and to be too dangerous to cache. Whenever caching seems too risky, I leave it out: correctness takes precedence over performance.
If you change a page, content class, or any other object via jRQL, the affected caches are invalidated automatically. This forces a re-read of the object's data into the cache at the next access. As long as you use jRQL functions, this is handled completely transparently.
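The pattern described above, lazy caching with invalidation on write, can be sketched as follows. This is a self-contained illustration of the idea only; the class and method names (CachedPage, fetchHeadlineFromServer, etc.) are invented for this example and are not jRQL's actual internals.

```java
// Illustrative sketch of lazy caching with invalidation on write.
// All names here are hypothetical, not jRQL's real implementation.
public class CachedPage {
    private final String guid;
    private String headlineCache; // lazily filled on first access

    public CachedPage(String guid) {
        this.guid = guid;
    }

    // Stands in for an RQL round trip to the CMS server.
    private String fetchHeadlineFromServer() {
        return "headline of " + guid;
    }

    public String getHeadline() {
        if (headlineCache == null) {
            headlineCache = fetchHeadlineFromServer(); // read once, then served from cache
        }
        return headlineCache;
    }

    public void setHeadline(String newHeadline) {
        // ... send the change to the server via RQL ...
        headlineCache = null; // invalidate: the next getHeadline() re-reads
    }
}
```

The caller never sees the cache: reads after a write simply take one extra round trip, which is the transparency the text describes.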
Because these caches can occupy a considerable amount of memory, especially the caches within page objects, the class Page offers the method freeOccupiedMemory(). High memory usage typically occurs when you loop through hundreds of pages, because all pages remain referenced from the PageArrayList and therefore cannot be released by the Garbage Collector.
Call page.freeOccupiedMemory() at the end of your loop body to keep memory usage low. This method deletes all cached data and page details, so memory consumption does not grow until the limit of the Java VM is reached.
The page object can still be used afterwards, because the page GUID is not removed; the emptied instance variables and caches are simply filled again on the next access.
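The freeOccupiedMemory() behaviour can be sketched like this: only the cached data is dropped, the GUID survives, and the next accessor call transparently re-reads from the server. Again, this is a hedged, self-contained illustration; PageSketch and loadDetails are hypothetical names, not jRQL code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the freeOccupiedMemory() idea: clearing caches while
// keeping the GUID, so the object stays usable. Names are invented
// for this example and do not mirror jRQL's internals.
public class PageSketch {
    private final String guid;                 // never cleared
    private Map<String, String> detailsCache;  // page details, lazily loaded

    public PageSketch(String guid) {
        this.guid = guid;
    }

    // Stands in for the RQL call that reads the page details.
    private Map<String, String> loadDetails() {
        Map<String, String> details = new HashMap<>();
        details.put("headline", "headline of " + guid);
        return details;
    }

    public String getHeadline() {
        if (detailsCache == null) {
            detailsCache = loadDetails();
        }
        return detailsCache.get("headline");
    }

    // Drop all cached data; only the GUID remains in memory.
    public void freeOccupiedMemory() {
        detailsCache = null;
    }
}
```

In a loop over many pages the call would sit at the end of each iteration, roughly `for (PageSketch page : pages) { use(page); page.freeOccupiedMemory(); }`, so each page's caches are released while the list itself still holds the lightweight objects.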