com.gemstone.gnu.trove ChangeLog
v 1.1b3
Fixed 918045 -- bug in *Decorator classes made it impossible to subclass
the decorators and make those subclasses cloneable. Thanks to Steve
Lunt for the bug report.
v 1.1b2
Fixed 901135 -- bug in T*Hash.insertionIndex() methods that prevented
us from reclaiming the very first REMOVED slot if that's what the
first hash landed upon. In applications that do lots of adds/removes,
this would have led to unnecessary rehashing. With this fix, you
can add(1), remove(1), and then re-add(1) and the same slot will be
used. Thanks to matlun for reporting the problem.
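For illustration, a minimal sketch of the reclaimed-slot behavior (assuming the
Trove 1.x TIntHashSet API, where add() and remove() return booleans):
    TIntHashSet set = new TIntHashSet();
    set.add(1);     // occupies a slot
    set.remove(1);  // marks that slot REMOVED
    set.add(1);     // with this fix, the same REMOVED slot is reclaimed
                    // instead of forcing further probing or a rehash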
v 1.1b1
fixed a bug in decorator equals methods (845890)
fixed a memory leak for certain usage patterns (843772)
corrected some javadoc defects (846286)
minor tuning of T*ArrayList toString()
added clone() methods to decorator classes. Thanks to Steve Lunt
for the bug report.
implemented equals()/hashCode() on THashMap.KeyView and (by extension) subclasses.
This allows the test in THashMapTests.testKeySetEqualsEquivalentSet to pass.
fixed 787515 -- bug in T*ArrayList.set(int, *[], int, int)
v 1.0.2
revamped versioning scheme
added hashCode implementation to collections so that they can themselves be
stored in other collections.
added check+exception to detect violations of the equals() <-> hashCode() contract
specified in java.lang.Object's API.
v 0.1.8
Added gnu.trove.decorator package, with Decorator classes that wrap trove's
primitive maps and sets for conformance with the java.util.{Map,Set}
APIs.
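A hedged sketch of decorator usage; the class name TIntIntHashMapDecorator
follows the Trove 1.x naming convention and may differ in this repackaged build:
    TIntIntHashMap primitiveMap = new TIntIntHashMap();
    primitiveMap.put(1, 42);
    // wrap the primitive map so it can be handed to code expecting java.util.Map;
    // keys and values are boxed to Integer on the way in and out
    java.util.Map boxedView = new TIntIntHashMapDecorator(primitiveMap);
    Integer value = (Integer) boxedView.get(new Integer(1)); // 42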
v 0.1.7
Added iterators to the primitive maps and sets. Note that semantics
differ from those of java.util.Iterator, so RTFM.
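For example, iteration over a primitive map uses advance() plus key()/value()
accessors rather than next(); this sketch assumes the Trove 1.x TIntIntIterator
names:
    TIntIntHashMap map = new TIntIntHashMap();
    // load up map
    for (TIntIntIterator it = map.iterator(); it.hasNext();) {
        it.advance();            // move to the next entry; no boxed Map.Entry is created
        int key = it.key();
        int value = it.value();
        // ... use key and value ...
    }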
Added hashing strategy interfaces to allow users to implement custom
hashing schemes for optimal distribution for specific data sets.
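A minimal sketch, assuming the Trove 1.x TObjectHashingStrategy interface and
the strategy-accepting THashSet constructor:
    // hash Strings case-insensitively; equal keys must produce equal hash codes
    TObjectHashingStrategy strategy = new TObjectHashingStrategy() {
        public int computeHashCode(Object o) {
            return ((String) o).toLowerCase().hashCode();
        }
        public boolean equals(Object a, Object b) {
            return ((String) a).equalsIgnoreCase((String) b);
        }
    };
    THashSet names = new THashSet(strategy);
    names.add("Trove");
    boolean present = names.contains("TROVE"); // true under this strategy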
Fixed bug 602376 -- ClassCastException in THashMap.putAll
Made all collections implement Cloneable. Primitive collections clone deeply;
Object collections produce shallow clones.
v 0.1.6
Minor bug fix release.
Two bugs in TIntArrayList have been fixed. Thanks to Jessica P.
Hekman for reporting them.
One of these prevented toNativeArray from working correctly in
certain circumstances; the other problem was with the depth of
cloning operations.
One enhancement to TIntArrayList has been made -- serialized
instances are now compact -- previous versions relied on serialization
behavior of the backing array, which included empty slots and so
wasted space.
Serialization of sets/maps has been modified as follows: previously
all of the writeObject methods used a local implementation of the
TXXXProcedure for writing out the data in a particular collection.
These have been replaced by a single class (SerializationProcedure)
which implements all of the appropriate interfaces. This reduces
the number of .class files in the trove jar.
v 0.1.5
added retainEntries methods to all Map classes. These methods accept
procedure objects of the appropriate sort and use the return value of
those procedures to determine whether or not a particular entry in the
map should be retained.
This is useful for applying a cutoff in a map without copying data:
TIntIntHashMap map = new TIntIntHashMap();
// load up map
map.retainEntries(new TIntIntProcedure() {
    public boolean execute(int key, int val) {
        return val > 3; // retain only those mappings with val > 3
    }
});
It can also be used if you want to reduce one map to the intersection
between it and another map:
THashMap map1 = new THashMap();
final THashMap map2 = new THashMap(); // final so it can be referenced from the anonymous procedure
// load up both maps
map1.retainEntries(new TObjectObjectProcedure() {
    public boolean execute(Object key, Object val) {
        return map2.containsKey(key); // retain the intersection with map2
    }
});
v 0.1.4
added increment() and adjustValue() methods to maps with primitive values.
These are useful in the all-too-common case where you need a map for the
purposes of counting the number of times a key is seen. These methods
are handy because you don't have to do a "containsKey" test prior to
every increment() call. Instead, you can check the return status (true
if an existing mapping was modified) and if it is false, then insert the
mapping with the initial value:
TIntIntHashMap map = new TIntIntHashMap();
int key, val;
key = keyFromSomeWhere();
val = valFromSomeWhere();
if (! map.increment(key)) map.put(key, 0);
increment is implemented in terms of adjustValue, which allows you
to specify the amount by which the primitive value associated with a
particular key should be adjusted.
Thanks to Jason Baldridge for the idea.
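A short sketch of adjustValue(), assuming the Trove 1.x signature
adjustValue(key, amount), which returns true only when an existing mapping was
modified:
    TIntIntHashMap totals = new TIntIntHashMap();
    totals.put(7, 10);
    totals.adjustValue(7, 5);                     // value for 7 becomes 15; returns true
    boolean adjusted = totals.adjustValue(8, 5);  // no mapping for 8; nothing happens, returns false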
v 0.1.3
bug fix in TLinkedList ListIterator implementation: fixed remove()
behavior so that it correctly removes the last element returned by
either next() or previous(). Added several tests to suite to verify
that list iterator does what it's supposed to do in accordance with the
collections API docs.
v 0.1.2
bug fix in primitive hash sets: toArray now produces a correct return
value of size set.size(). Previously it generated an
ArrayIndexOutOfBoundsException. Thanks to Tobias Henle for finding this.
revised class hierarchy so that all primitive hashing collections are
derived from TPrimitiveHash, which extends THash. Object hashing collections
are derived from TObjectHash. As part of this change, the byte[] flags
were pushed down to TPrimitiveHash, and TObjectHash was revised so that
it no longer needs a byte[] array to track the state of the table. This
has an appreciable impact on the total size of Object hashing collections:
a set of 1,000 Integers used to take 69% of the memory needed for a JDK
set; it now takes only 62%.
removed slots can now be re-used in all hashing collections. If the
search for an insertion index does not find that the key is already
present in the map, insertionIndex implementations will now return the
index of the first REMOVED or FREE slot. This means that tables which
undergo a pattern of insertions/deletions without radical changes in
size will not trigger as many rehashes as before.
revised hashing algorithm so that the second hash function is only executed
when necessary and so that FREE slots, or FULL slots with identical content, can be
found with a minimum of effort.
v 0.1.1
made the initial capacity of lists used for return values in grep/inverseGrep
be the default capacity rather than the size of the list-being-grepped.
This saves space when a small list is grepped from a larger list.
added reset and resetQuick methods to *ArrayList implementations so that
lists can be cleared while retaining their current capacity.
changed *ArrayList toNativeArray() method behavior so that a List of 0 length
can return a native array of 0 length (previously this would have thrown
an exception)
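A small sketch exercising the grep, reset, and toNativeArray behaviors described
above (assuming the Trove 1.x TIntArrayList/TIntProcedure API):
    TIntArrayList list = new TIntArrayList();
    for (int i = 0; i < 10; i++) list.add(i);
    // grep returns a new list holding only the values accepted by the procedure
    TIntArrayList evens = list.grep(new TIntProcedure() {
        public boolean execute(int value) {
            return value % 2 == 0;
        }
    });
    list.reset();                        // empties the list but keeps its current capacity
    int[] empty = list.toNativeArray();  // length 0; no exception is thrown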
revamped *ArrayList insert/remove implementations so that edits are done
in place instead of with a temporary array.
minor performance tweak in THashIterator
v 0.1.0
Added primitive ArrayList implementations.
v 0.0.9
Made all collections implement java.io.Serializable
Made all collections implement equals() so that they compare their contents
instead of doing collection object identity.
Made TLinkable extend java.io.Serializable
Changed secondary hash function to reflect Knuth's observation about the
desirability of using an odd value.
Added trimToSize() and ensureCapacity() methods
(finally) implemented loadFactor, with default of 0.5, per Knuth.
Note that load/capacity/size are handled differently than in the Javasoft
implementations. Specifically, if you ask for a collection of capacity
X, Trove will give you one that can hold X elements without rehashing;
Javasoft's implementation does not do this.
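For example (assuming the int-capacity constructors):
    THashSet troveSet = new THashSet(1000);                  // holds 1000 elements before any rehash
    java.util.HashSet jdkSet = new java.util.HashSet(1000);  // rehashes once size exceeds
                                                             // capacity * load factor (0.75 by default)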
v 0.0.8
Fixes for several user-reported bugs. Unit tests have been added to
demonstrate that each of these is actually fixed.
485440 Null in keys() from TObjectDoubleHashMap
size/free were being updated even when a map.put() was really a replacement
for an existing mapping.
485829 null values not handled correctly
485831 null values cause exceptions
485834 null values cause NullPointerException
made Maps that hold Object values behave correctly (no NPE) when doing
comparisons with null objects, since null values are legal.
485837 entrySet comparison semantics are wrong
made entrySet check both key and value when doing comparisons.
v 0.0.7
new package: gnu.trove.benchmark
replaced gnu.trove.Benchmark with benchmark package. This now produces
formatted reports that include OS/JVM specs.
Changed benchmarking approach so that timestamps are only taken at the
beginning/end of the full repetition count for an operation. This
reduces the variability caused by calling System.currentTimeMillis()
more than once in the same second.
Added memory profiler which produces a report with the memory requirements
for trove/javasoft collections of the same objects.
build.xml
modified jar task so that the benchmark package is not included in the
jar file or the javadoc set. Only the framework classes get jarred up;
developers can run the benchmarks by using the output/classes directory
instead.
TObjectHash
Based on profiling results, replaced calls to HashFunctions.hash(obj) with
direct invocation of obj.hashCode() to save a method call. This is
probably inlined by hotspot compilers, but my profiler doesn't work with
those.
TObjectHash.HashIterator
Based on bytecode examination, replaced a putfield/getfield combo with a
putfield/dup_x. This saves three opcodes in a method which gets called
a lot (moveToNextIndex()).
PrimeFinder/HashFunctions
finalized both classes and all methods.