java9.util.stream.Stream (Maven / Gradle / Ivy)

Backport of the Java 9 java.util.stream API for the Android Studio 3+ desugar toolchain, forked from http://sourceforge.net/projects/streamsupport
/*
 * Copyright (c) 2012, 2017, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */
package java9.util.stream;

import java.util.Comparator;

import java9.util.Objects;
import java9.util.Optional;
import java9.util.Spliterator;
import java9.util.Spliterators;
import java9.util.function.BiConsumer;
import java9.util.function.BiFunction;
import java9.util.function.BinaryOperator;
import java9.util.function.Consumer;
import java9.util.function.Function;
import java9.util.function.IntFunction;
import java9.util.function.Predicate;
import java9.util.function.Supplier;
import java9.util.function.ToDoubleFunction;
import java9.util.function.ToIntFunction;
import java9.util.function.ToLongFunction;
import java9.util.function.UnaryOperator;

/**
 * A sequence of elements supporting sequential and parallel aggregate
 * operations.  The following example illustrates an aggregate operation using
 * {@link Stream} and {@link IntStream}:
 *
 * <pre>{@code
 *     int sum = widgets.stream()
 *                      .filter(w -> w.getColor() == RED)
 *                      .mapToInt(w -> w.getWeight())
 *                      .sum();
 * }</pre>
 *
 * In this example, {@code widgets} is a {@code Collection<Widget>}.  We create
 * a stream of {@code Widget} objects via {@link java.util.Collection#stream Collection.stream()},
 * filter it to produce a stream containing only the red widgets, and then
 * transform it into a stream of {@code int} values representing the weight of
 * each red widget.  Then this stream is summed to produce a total weight.
 *
 * <p>In addition to {@code Stream}, which is a stream of object references,
 * there are primitive specializations for {@link IntStream}, {@link LongStream},
 * and {@link DoubleStream}, all of which are referred to as "streams" and
 * conform to the characteristics and restrictions described here.
 *
 * <p>To perform a computation, stream operations are composed into a
 * <em>stream pipeline</em>.  A stream pipeline consists of a source (which
 * might be an array, a collection, a generator function, an I/O channel,
 * etc), zero or more <em>intermediate operations</em> (which transform a
 * stream into another stream, such as {@link Stream#filter(Predicate)}), and a
 * <em>terminal operation</em> (which produces a result or side-effect, such
 * as {@link Stream#count()} or {@link Stream#forEach(Consumer)}).
 * Streams are lazy; computation on the source data is only performed when the
 * terminal operation is initiated, and source elements are consumed only
 * as needed.
 *
 * <p>A stream implementation is permitted significant latitude in optimizing
 * the computation of the result.  For example, a stream implementation is free
 * to elide operations (or entire stages) from a stream pipeline -- and
 * therefore elide invocation of behavioral parameters -- if it can prove that
 * it would not affect the result of the computation.  This means that
 * side-effects of behavioral parameters may not always be executed and should
 * not be relied upon, unless otherwise specified (such as by the terminal
 * operations {@code forEach} and {@code forEachOrdered}).  (For a specific
 * example of such an optimization, see the API note documented on the
 * {@link #count} operation.  For more detail, see the side-effects section of
 * the stream package documentation.)
 *
 * <p>Collections and streams, while bearing some superficial similarities,
 * have different goals.  Collections are primarily concerned with the efficient
 * management of, and access to, their elements.  By contrast, streams do not
 * provide a means to directly access or manipulate their elements, and are
 * instead concerned with declaratively describing their source and the
 * computational operations which will be performed in aggregate on that source.
 * However, if the provided stream operations do not offer the desired
 * functionality, the {@link #iterator()} and {@link #spliterator()} operations
 * can be used to perform a controlled traversal.
 *
 * <p>A stream pipeline, like the "widgets" example above, can be viewed as
 * a <em>query</em> on the stream source.  Unless the source was explicitly
 * designed for concurrent modification (such as a {@link java.util.concurrent.ConcurrentHashMap}),
 * unpredictable or erroneous behavior may result from modifying the stream
 * source while it is being queried.
 *
 * <p>Most stream operations accept parameters that describe user-specified
 * behavior, such as the lambda expression {@code w -> w.getWeight()} passed to
 * {@code mapToInt} in the example above.  To preserve correct behavior,
 * these <em>behavioral parameters</em>:
 * <ul>
 * <li>must be <em>non-interfering</em> (they do not modify the stream
 * source); and</li>
 * <li>in most cases must be <em>stateless</em> (their result should not depend
 * on any state that might change during execution of the stream pipeline).</li>
 * </ul>
 *
 * <p>Such parameters are always instances of a functional interface such
 * as {@link java9.util.function.Function}, and are often lambda expressions or
 * method references.  Unless otherwise specified these parameters must be
 * <em>non-null</em>.
 *
 * <p>A stream should be operated on (invoking an intermediate or terminal stream
 * operation) only once.  This rules out, for example, "forked" streams, where
 * the same source feeds two or more pipelines, or multiple traversals of the
 * same stream.  A stream implementation may throw {@link IllegalStateException}
 * if it detects that the stream is being reused.  However, since some stream
 * operations may return their receiver rather than a new stream object, it may
 * not be possible to detect reuse in all cases.
 *
 * <p>Streams have a {@link #close()} method and implement {@link AutoCloseable},
 * but nearly all stream instances do not actually need to be closed after use.
 * Generally, only streams whose source is an IO channel (such as those returned
 * by {@link java.nio.file.Files#lines(java.nio.file.Path, java.nio.charset.Charset)})
 * will require closing.  Most streams are backed by collections, arrays, or
 * generating functions, which require no special resource management.  (If a
 * stream does require closing, it can be declared as a resource in a
 * {@code try}-with-resources statement.)
 *
 * <p>Stream pipelines may execute either sequentially or in <em>parallel</em>.
 * This execution mode is a property of the stream.  Streams are created
 * with an initial choice of sequential or parallel execution.  (For example,
 * {@link java.util.Collection#stream() Collection.stream()} creates a sequential stream,
 * and {@link java.util.Collection#parallelStream() Collection.parallelStream()} creates
 * a parallel one.)  This choice of execution mode may be modified by the
 * {@link #sequential()} or {@link #parallel()} methods, and may be queried with
 * the {@link #isParallel()} method.
 *
 * @param <T> the type of the stream elements
 * @since 1.8
 * @see IntStream
 * @see LongStream
 * @see DoubleStream
 * @see java9.util.stream
 */
public interface Stream<T> extends BaseStream<T, Stream<T>> {

    /**
     * Returns a stream consisting of the elements of this stream that match
     * the given predicate.
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * @param predicate a <em>non-interfering</em>, <em>stateless</em>
     *                  predicate to apply to each element to determine if it
     *                  should be included
     * @return the new stream
     */
    Stream<T> filter(Predicate<? super T> predicate);

    /**
     * Returns a stream consisting of the results of applying the given
     * function to the elements of this stream.
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * @param <R> The element type of the new stream
     * @param mapper a <em>non-interfering</em>, <em>stateless</em>
     *               function to apply to each element
     * @return the new stream
     */
    <R> Stream<R> map(Function<? super T, ? extends R> mapper);

    /**
     * Returns an {@code IntStream} consisting of the results of applying the
     * given function to the elements of this stream.
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * @param mapper a <em>non-interfering</em>, <em>stateless</em>
     *               function to apply to each element
     * @return the new stream
     */
    IntStream mapToInt(ToIntFunction<? super T> mapper);

    /**
     * Returns a {@code LongStream} consisting of the results of applying the
     * given function to the elements of this stream.
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * @param mapper a <em>non-interfering</em>, <em>stateless</em>
     *               function to apply to each element
     * @return the new stream
     */
    LongStream mapToLong(ToLongFunction<? super T> mapper);

    /**
     * Returns a {@code DoubleStream} consisting of the results of applying the
     * given function to the elements of this stream.
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * @param mapper a <em>non-interfering</em>, <em>stateless</em>
     *               function to apply to each element
     * @return the new stream
     */
    DoubleStream mapToDouble(ToDoubleFunction<? super T> mapper);
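    // Illustrative usage (a sketch, not part of the original source): mapping
    // each string to its length with mapToInt and summing the resulting
    // IntStream, using the static Stream.of factory defined later in this file.
    //
    //     int total = Stream.of("a", "bb", "ccc")
    //                       .mapToInt(String::length)
    //                       .sum();                       // 6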

    /**
     * Returns a stream consisting of the results of replacing each element of
     * this stream with the contents of a mapped stream produced by applying
     * the provided mapping function to each element.  Each mapped stream is
     * {@link java9.util.stream.BaseStream#close() closed} after its contents
     * have been placed into this stream.  (If a mapped stream is {@code null}
     * an empty stream is used, instead.)
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * <p><b>API Note:</b><br>
     * The {@code flatMap()} operation has the effect of applying a one-to-many
     * transformation to the elements of the stream, and then flattening the
     * resulting elements into a new stream.
     *
     * <p><b>Examples.</b>
     *
     * <p>If {@code orders} is a stream of purchase orders, and each purchase
     * order contains a collection of line items, then the following produces a
     * stream containing all the line items in all the orders:
     * <pre>{@code
     *     orders.flatMap(order -> order.getLineItems().stream())...
     * }</pre>
     *
     * <p>If {@code path} is the path to a file, then the following produces a
     * stream of the {@code words} contained in that file:
     * <pre>{@code
     *     Stream<String> lines = Files.lines(path, StandardCharsets.UTF_8);
     *     Stream<String> words = lines.flatMap(line -> RefStreams.of(line.split(" +")));
     * }</pre>
     * The {@code mapper} function passed to {@code flatMap} splits a line,
     * using a simple regular expression, into an array of words, and then
     * creates a stream of words from that array.
     *
     * @param <R> The element type of the new stream
     * @param mapper a <em>non-interfering</em>, <em>stateless</em>
     *               function to apply to each element which produces a stream
     *               of new values
     * @return the new stream
     */
    <R> Stream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper);

    /**
     * Returns an {@code IntStream} consisting of the results of replacing each
     * element of this stream with the contents of a mapped stream produced by
     * applying the provided mapping function to each element.  Each mapped
     * stream is {@link java9.util.stream.BaseStream#close() closed} after its
     * contents have been placed into this stream.  (If a mapped stream is
     * {@code null} an empty stream is used, instead.)
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * @param mapper a <em>non-interfering</em>, <em>stateless</em>
     *               function to apply to each element which produces a stream
     *               of new values
     * @return the new stream
     * @see #flatMap(Function)
     */
    IntStream flatMapToInt(Function<? super T, ? extends IntStream> mapper);

    /**
     * Returns a {@code LongStream} consisting of the results of replacing each
     * element of this stream with the contents of a mapped stream produced by
     * applying the provided mapping function to each element.  Each mapped
     * stream is {@link java9.util.stream.BaseStream#close() closed} after its
     * contents have been placed into this stream.  (If a mapped stream is
     * {@code null} an empty stream is used, instead.)
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * @param mapper a <em>non-interfering</em>, <em>stateless</em>
     *               function to apply to each element which produces a stream
     *               of new values
     * @return the new stream
     * @see #flatMap(Function)
     */
    LongStream flatMapToLong(Function<? super T, ? extends LongStream> mapper);

    /**
     * Returns a {@code DoubleStream} consisting of the results of replacing
     * each element of this stream with the contents of a mapped stream produced
     * by applying the provided mapping function to each element.  Each mapped
     * stream is {@link java9.util.stream.BaseStream#close() closed} after its
     * contents have been placed into this stream.  (If a mapped stream is
     * {@code null} an empty stream is used, instead.)
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * @param mapper a <em>non-interfering</em>, <em>stateless</em>
     *               function to apply to each element which produces a stream
     *               of new values
     * @return the new stream
     * @see #flatMap(Function)
     */
    DoubleStream flatMapToDouble(Function<? super T, ? extends DoubleStream> mapper);

    /**
     * Returns a stream consisting of the distinct elements (according to
     * {@link Object#equals(Object)}) of this stream.
     *
     * <p>For ordered streams, the selection of distinct elements is stable
     * (for duplicated elements, the element appearing first in the encounter
     * order is preserved.)  For unordered streams, no stability guarantees
     * are made.
     *
     * <p>This is a <em>stateful intermediate operation</em>.
     *
     * <p><b>API Note:</b><br>
     * Preserving stability for {@code distinct()} in parallel pipelines is
     * relatively expensive (requires that the operation act as a full barrier,
     * with substantial buffering overhead), and stability is often not needed.
     * Using an unordered stream source (such as {@link Stream#generate(Supplier)})
     * or removing the ordering constraint with {@link #unordered()} may result
     * in significantly more efficient execution for {@code distinct()} in parallel
     * pipelines, if the semantics of your situation permit.  If consistency
     * with encounter order is required, and you are experiencing poor performance
     * or memory utilization with {@code distinct()} in parallel pipelines,
     * switching to sequential execution with {@link #sequential()} may improve
     * performance.
     *
     * @return the new stream
     */
    Stream<T> distinct();

    /**
     * Returns a stream consisting of the elements of this stream, sorted
     * according to natural order.  If the elements of this stream are not
     * {@code Comparable}, a {@code java.lang.ClassCastException} may be thrown
     * when the terminal operation is executed.
     *
     * <p>For ordered streams, the sort is stable.  For unordered streams, no
     * stability guarantees are made.
     *
     * <p>This is a <em>stateful intermediate operation</em>.
     *
     * @return the new stream
     */
    Stream<T> sorted();

    /**
     * Returns a stream consisting of the elements of this stream, sorted
     * according to the provided {@code Comparator}.
     *
     * <p>For ordered streams, the sort is stable.  For unordered streams, no
     * stability guarantees are made.
     *
     * <p>This is a <em>stateful intermediate operation</em>.
     *
     * @param comparator a <em>non-interfering</em>, <em>stateless</em>
     *                   {@code Comparator} to be used to compare stream elements
     * @return the new stream
     */
    Stream<T> sorted(Comparator<? super T> comparator);
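    // Illustrative usage (a sketch, not part of the original source):
    // distinct() followed by sorted() removes duplicates and orders the
    // survivors; assumes Collectors.toList() from this package.
    //
    //     List<Integer> result = Stream.of(3, 1, 3, 2)
    //                                  .distinct()
    //                                  .sorted()
    //                                  .collect(Collectors.toList());   // [1, 2, 3]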

    /**
     * Returns a stream consisting of the elements of this stream, additionally
     * performing the provided action on each element as elements are consumed
     * from the resulting stream.
     *
     * <p>This is an <em>intermediate operation</em>.
     *
     * <p>For parallel stream pipelines, the action may be called at
     * whatever time and in whatever thread the element is made available by the
     * upstream operation.  If the action modifies shared state,
     * it is responsible for providing the required synchronization.
     *
     * <p><b>API Note:</b><br>
     * This method exists mainly to support debugging, where you want
     * to see the elements as they flow past a certain point in a pipeline:
     * <pre>{@code
     *     RefStreams.of("one", "two", "three", "four")
     *         .filter(e -> e.length() > 3)
     *         .peek(e -> System.out.println("Filtered value: " + e))
     *         .map(String::toUpperCase)
     *         .peek(e -> System.out.println("Mapped value: " + e))
     *         .collect(Collectors.toList());
     * }</pre>
     *
     * <p>In cases where the stream implementation is able to optimize away the
     * production of some or all the elements (such as with short-circuiting
     * operations like {@code findFirst}, or in the example described in
     * {@link #count}), the action will not be invoked for those elements.
     *
     * @param action a <em>non-interfering</em> action to perform on the
     *               elements as they are consumed from the stream
     * @return the new stream
     */
    Stream<T> peek(Consumer<? super T> action);

    /**
     * Returns a stream consisting of the elements of this stream, truncated
     * to be no longer than {@code maxSize} in length.
     *
     * <p>This is a <em>short-circuiting stateful intermediate operation</em>.
     *
     * <p><b>API Note:</b><br>
     * While {@code limit()} is generally a cheap operation on sequential
     * stream pipelines, it can be quite expensive on ordered parallel pipelines,
     * especially for large values of {@code maxSize}, since {@code limit(n)}
     * is constrained to return not just any <em>n</em> elements, but the
     * <em>first n</em> elements in the encounter order.  Using an unordered
     * stream source (such as {@link Stream#generate(Supplier)}) or removing the
     * ordering constraint with {@link #unordered()} may result in significant
     * speedups of {@code limit()} in parallel pipelines, if the semantics of
     * your situation permit.  If consistency with encounter order is required,
     * and you are experiencing poor performance or memory utilization with
     * {@code limit()} in parallel pipelines, switching to sequential execution
     * with {@link #sequential()} may improve performance.
     *
     * @param maxSize the number of elements the stream should be limited to
     * @return the new stream
     * @throws IllegalArgumentException if {@code maxSize} is negative
     */
    Stream<T> limit(long maxSize);

    /**
     * Returns a stream consisting of the remaining elements of this stream
     * after discarding the first {@code n} elements of the stream.
     * If this stream contains fewer than {@code n} elements then an
     * empty stream will be returned.
     *
     * <p>This is a <em>stateful intermediate operation</em>.
     *
     * <p><b>API Note:</b><br>
     * While {@code skip()} is generally a cheap operation on sequential
     * stream pipelines, it can be quite expensive on ordered parallel pipelines,
     * especially for large values of {@code n}, since {@code skip(n)}
     * is constrained to skip not just any <em>n</em> elements, but the
     * <em>first n</em> elements in the encounter order.  Using an unordered
     * stream source (such as {@link Stream#generate(Supplier)}) or removing the
     * ordering constraint with {@link #unordered()} may result in significant
     * speedups of {@code skip()} in parallel pipelines, if the semantics of
     * your situation permit.  If consistency with encounter order is required,
     * and you are experiencing poor performance or memory utilization with
     * {@code skip()} in parallel pipelines, switching to sequential execution
     * with {@link #sequential()} may improve performance.
     *
     * @param n the number of leading elements to skip
     * @return the new stream
     * @throws IllegalArgumentException if {@code n} is negative
     */
    Stream<T> skip(long n);
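    // Illustrative usage (a sketch, not part of the original source): skip and
    // limit compose into simple pagination, here over the infinite stream
    // produced by the Stream.iterate factory defined later in this file.
    //
    //     List<Integer> page2 = Stream.iterate(1, n -> n + 1)
    //                                 .skip(10)
    //                                 .limit(10)
    //                                 .collect(Collectors.toList());   // 11..20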

    /**
     * Returns, if this stream is ordered, a stream consisting of the longest
     * prefix of elements taken from this stream that match the given predicate.
     * Otherwise returns, if this stream is unordered, a stream consisting of a
     * subset of elements taken from this stream that match the given predicate.
     *
     * <p>If this stream is ordered then the longest prefix is a contiguous
     * sequence of elements of this stream that match the given predicate.  The
     * first element of the sequence is the first element of this stream, and
     * the element immediately following the last element of the sequence does
     * not match the given predicate.
     *
     * <p>If this stream is unordered, and some (but not all) elements of this
     * stream match the given predicate, then the behavior of this operation is
     * nondeterministic; it is free to take any subset of matching elements
     * (which includes the empty set).
     *
     * <p>Independent of whether this stream is ordered or unordered, if all
     * elements of this stream match the given predicate then this operation
     * takes all elements (the result is the same as the input), or if no
     * elements of the stream match the given predicate then no elements are
     * taken (the result is an empty stream).
     *
     * <p>This is a <em>short-circuiting stateful intermediate operation</em>.
     *
     * <p><b>Implementation Requirements:</b><br>
     * The default implementation obtains the {@link #spliterator() spliterator}
     * of this stream, wraps that spliterator so as to support the semantics
     * of this operation on traversal, and returns a new stream associated with
     * the wrapped spliterator.  The returned stream preserves the execution
     * characteristics of this stream (namely parallel or sequential execution
     * as per {@link #isParallel()}) but the wrapped spliterator may choose to
     * not support splitting.  When the returned stream is closed, the close
     * handlers for both the returned and this stream are invoked.
     *
     * <p><b>API Note:</b><br>
     * While {@code takeWhile()} is generally a cheap operation on sequential
     * stream pipelines, it can be quite expensive on ordered parallel
     * pipelines, since the operation is constrained to return not just any
     * valid prefix, but the longest prefix of elements in the encounter order.
     * Using an unordered stream source (such as {@link Stream#generate(Supplier)}) or
     * removing the ordering constraint with {@link #unordered()} may result in
     * significant speedups of {@code takeWhile()} in parallel pipelines, if the
     * semantics of your situation permit.  If consistency with encounter order
     * is required, and you are experiencing poor performance or memory
     * utilization with {@code takeWhile()} in parallel pipelines, switching to
     * sequential execution with {@link #sequential()} may improve performance.
     *
     * <p>One use-case for {@code takeWhile} is executing a stream pipeline
     * for a certain duration.  The following example will calculate as many
     * probable primes as is possible, in parallel, during 5 seconds:
     * <pre>{@code
     *     long t = System.currentTimeMillis();
     *     List<BigInteger> pps = RefStreams
     *         .generate(() -> BigInteger.probablePrime(1024, ThreadLocalRandom.current()))
     *         .parallel()
     *         .takeWhile(e -> (System.currentTimeMillis() - t) < TimeUnit.SECONDS.toMillis(5))
     *         .collect(toList());
     * }</pre>
     *
     * @param predicate a <em>non-interfering</em>, <em>stateless</em>
     *                  predicate to apply to elements to determine the longest
     *                  prefix of elements.
     * @return the new stream
     * @since 9
     */
    default Stream<T> takeWhile(Predicate<? super T> predicate) {
        Objects.requireNonNull(predicate);
        // Reuses the unordered spliterator, which, when encounter order is
        // present, is safe to use as long as it is configured not to split
        return StreamSupport.stream(
                new WhileOps.UnorderedWhileSpliterator.OfRef.Taking<>(spliterator(), true, predicate),
                isParallel()).onClose(StreamSupport.closeHandler(this));
    }

    /**
     * Returns, if this stream is ordered, a stream consisting of the remaining
     * elements of this stream after dropping the longest prefix of elements
     * that match the given predicate.  Otherwise returns, if this stream is
     * unordered, a stream consisting of the remaining elements of this stream
     * after dropping a subset of elements that match the given predicate.
     *
     * <p>If this stream is ordered then the longest prefix is a contiguous
     * sequence of elements of this stream that match the given predicate.  The
     * first element of the sequence is the first element of this stream, and
     * the element immediately following the last element of the sequence does
     * not match the given predicate.
     *
     * <p>If this stream is unordered, and some (but not all) elements of this
     * stream match the given predicate, then the behavior of this operation is
     * nondeterministic; it is free to drop any subset of matching elements
     * (which includes the empty set).
     *
     * <p>Independent of whether this stream is ordered or unordered, if all
     * elements of this stream match the given predicate then this operation
     * drops all elements (the result is an empty stream), or if no elements of
     * the stream match the given predicate then no elements are dropped (the
     * result is the same as the input).
     *
     * <p>This is a <em>stateful intermediate operation</em>.
     *
     * <p><b>Implementation Requirements:</b><br>
     * The default implementation obtains the {@link #spliterator() spliterator}
     * of this stream, wraps that spliterator so as to support the semantics
     * of this operation on traversal, and returns a new stream associated with
     * the wrapped spliterator.  The returned stream preserves the execution
     * characteristics of this stream (namely parallel or sequential execution
     * as per {@link #isParallel()}) but the wrapped spliterator may choose to
     * not support splitting.  When the returned stream is closed, the close
     * handlers for both the returned and this stream are invoked.
     *
     * <p><b>API Note:</b><br>
     * While {@code dropWhile()} is generally a cheap operation on sequential
     * stream pipelines, it can be quite expensive on ordered parallel
     * pipelines, since the operation is constrained to return not just any
     * valid prefix, but the longest prefix of elements in the encounter order.
     * Using an unordered stream source (such as {@link Stream#generate(Supplier)}) or
     * removing the ordering constraint with {@link #unordered()} may result in
     * significant speedups of {@code dropWhile()} in parallel pipelines, if the
     * semantics of your situation permit.  If consistency with encounter order
     * is required, and you are experiencing poor performance or memory
     * utilization with {@code dropWhile()} in parallel pipelines, switching to
     * sequential execution with {@link #sequential()} may improve performance.
     *
     * @param predicate a <em>non-interfering</em>, <em>stateless</em>
     *                  predicate to apply to elements to determine the longest
     *                  prefix of elements.
     * @return the new stream
     * @since 9
     */
    default Stream<T> dropWhile(Predicate<? super T> predicate) {
        Objects.requireNonNull(predicate);
        // Reuses the unordered spliterator, which, when encounter order is
        // present, is safe to use as long as it is configured not to split
        return StreamSupport.stream(
                new WhileOps.UnorderedWhileSpliterator.OfRef.Dropping<>(spliterator(), true, predicate),
                isParallel()).onClose(StreamSupport.closeHandler(this));
    }
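    // Illustrative usage (a sketch, not part of the original source): on an
    // ordered stream, dropWhile discards only the leading run of matching
    // elements; later matching elements survive.
    //
    //     Stream.of(1, 2, 3, 4, 1).dropWhile(i -> i < 3)   // 3, 4, 1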

    /**
     * Performs an action for each element of this stream.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * <p>The behavior of this operation is explicitly nondeterministic.
     * For parallel stream pipelines, this operation does <em>not</em>
     * guarantee to respect the encounter order of the stream, as doing so
     * would sacrifice the benefit of parallelism.  For any given element, the
     * action may be performed at whatever time and in whatever thread the
     * library chooses.  If the action accesses shared state, it is
     * responsible for providing the required synchronization.
     *
     * @param action a <em>non-interfering</em> action to perform on the elements
     */
    void forEach(Consumer<? super T> action);

    /**
     * Performs an action for each element of this stream, in the encounter
     * order of the stream if the stream has a defined encounter order.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * <p>This operation processes the elements one at a time, in encounter
     * order if one exists.  Performing the action for one element
     * <em>happens-before</em> performing the action for subsequent elements,
     * but for any given element, the action may be performed in whatever
     * thread the library chooses.
     *
     * @param action a <em>non-interfering</em> action to perform on the elements
     * @see #forEach(Consumer)
     */
    void forEachOrdered(Consumer<? super T> action);

    /**
     * Returns an array containing the elements of this stream.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * @return an array, whose {@linkplain Class#getComponentType runtime component
     *         type} is {@code Object}, containing the elements of this stream
     */
    Object[] toArray();

    /**
     * Returns an array containing the elements of this stream, using the
     * provided {@code generator} function to allocate the returned array, as
     * well as any additional arrays that might be required for a partitioned
     * execution or for resizing.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * <p><b>API Note:</b><br>
     * The generator function takes an integer, which is the size of the
     * desired array, and produces an array of the desired size.  This can be
     * concisely expressed with an array constructor reference:
     * <pre>{@code
     *     Person[] men = people.stream()
     *                          .filter(p -> p.getGender() == MALE)
     *                          .toArray(Person[]::new);
     * }</pre>
     *
     * @param <A> the component type of the resulting array
     * @param generator a function which produces a new array of the desired
     *                  type and the provided length
     * @return an array containing the elements in this stream
     * @throws ArrayStoreException if the runtime type of any element of this
     *         stream is not assignable to the {@linkplain Class#getComponentType
     *         runtime component type} of the generated array
     */
    <A> A[] toArray(IntFunction<A[]> generator);
    /**
     * Performs a <em>reduction</em> on the elements of this stream, using the
     * provided identity value and an <em>associative</em> accumulation
     * function, and returns the reduced value.  This is equivalent to:
     * <pre>{@code
     *     T result = identity;
     *     for (T element : this stream)
     *         result = accumulator.apply(result, element)
     *     return result;
     * }</pre>
     *
     * but is not constrained to execute sequentially.
     *
     * <p>The {@code identity} value must be an identity for the accumulator
     * function.  This means that for all {@code t},
     * {@code accumulator.apply(identity, t)} is equal to {@code t}.
     * The {@code accumulator} function must be an <em>associative</em> function.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * <p><b>API Note:</b><br>
     * Sum, min, max, average, and string concatenation are all special
     * cases of reduction.  Summing a stream of numbers can be expressed as:
     *
     * <pre>{@code
     *     Integer sum = integers.reduce(0, (a, b) -> a+b);
     * }</pre>
     *
     * or:
     *
     * <pre>{@code
     *     Integer sum = integers.reduce(0, Integer::sum);
     * }</pre>
     *
     * <p>While this may seem a more roundabout way to perform an aggregation
     * compared to simply mutating a running total in a loop, reduction
     * operations parallelize more gracefully, without needing additional
     * synchronization and with greatly reduced risk of data races.
     *
     * @param identity the identity value for the accumulating function
     * @param accumulator an <em>associative</em>, <em>non-interfering</em>,
     *                    <em>stateless</em> function for combining two values
     * @return the result of the reduction
     */
    T reduce(T identity, BinaryOperator<T> accumulator);

    /**
     * Performs a <em>reduction</em> on the elements of this stream, using an
     * <em>associative</em> accumulation function, and returns an
     * {@code Optional} describing the reduced value, if any.  This is
     * equivalent to:
     * <pre>{@code
     *     boolean foundAny = false;
     *     T result = null;
     *     for (T element : this stream) {
     *         if (!foundAny) {
     *             foundAny = true;
     *             result = element;
     *         }
     *         else
     *             result = accumulator.apply(result, element);
     *     }
     *     return foundAny ? Optional.of(result) : Optional.empty();
     * }</pre>
     *
     * but is not constrained to execute sequentially.
     *
     * <p>The {@code accumulator} function must be an <em>associative</em>
     * function.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * @param accumulator an <em>associative</em>, <em>non-interfering</em>,
     *                    <em>stateless</em> function for combining two values
     * @return an {@link Optional} describing the result of the reduction
     * @throws NullPointerException if the result of the reduction is null
     * @see #reduce(Object, BinaryOperator)
     * @see #min(Comparator)
     * @see #max(Comparator)
     */
    Optional<T> reduce(BinaryOperator<T> accumulator);
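    // Illustrative usage (a sketch, not part of the original source): the
    // identity-free reduce returns an Optional, empty when the stream is empty.
    //
    //     Optional<Integer> max = Stream.of(5, 9, 2)
    //                                   .reduce((a, b) -> a >= b ? a : b);   // Optional[9]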

    /**
     * Performs a <em>reduction</em> on the elements of this stream, using the
     * provided identity, accumulation and combining functions.  This is
     * equivalent to:
     * <pre>{@code
     *     U result = identity;
     *     for (T element : this stream)
     *         result = accumulator.apply(result, element)
     *     return result;
     * }</pre>
     *
     * but is not constrained to execute sequentially.
     *
     * <p>The {@code identity} value must be an identity for the combiner
     * function.  This means that for all {@code u}, {@code combiner(identity, u)}
     * is equal to {@code u}.  Additionally, the {@code combiner} function
     * must be compatible with the {@code accumulator} function; for all
     * {@code u} and {@code t}, the following must hold:
     * <pre>{@code
     *     combiner.apply(u, accumulator.apply(identity, t)) == accumulator.apply(u, t)
     * }</pre>
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * <p><b>API Note:</b><br>
     * Many reductions using this form can be represented more simply
     * by an explicit combination of {@code map} and {@code reduce} operations.
     * The {@code accumulator} function acts as a fused mapper and accumulator,
     * which can sometimes be more efficient than separate mapping and reduction,
     * such as when knowing the previously reduced value allows you to avoid
     * some computation.
     *
     * @param <U> The type of the result
     * @param identity the identity value for the combiner function
     * @param accumulator an <em>associative</em>, <em>non-interfering</em>,
     *                    <em>stateless</em> function for incorporating an
     *                    additional element into a result
     * @param combiner an <em>associative</em>, <em>non-interfering</em>,
     *                 <em>stateless</em> function for combining two values,
     *                 which must be compatible with the accumulator function
     * @return the result of the reduction
     * @see #reduce(BinaryOperator)
     * @see #reduce(Object, BinaryOperator)
     */
    <U> U reduce(U identity,
                 BiFunction<U, ? super T, U> accumulator,
                 BinaryOperator<U> combiner);
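    // Illustrative usage (a sketch, not part of the original source): the
    // three-argument reduce maps and accumulates in one pass; the combiner
    // merges partial sums in parallel runs.
    //
    //     int totalLength = Stream.of("a", "bb", "ccc")
    //                             .reduce(0,
    //                                     (len, s) -> len + s.length(),
    //                                     (x, y) -> x + y);              // 6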

    /**
     * Performs a <em>mutable reduction</em> operation on the elements of this
     * stream.  A mutable reduction is one in which the reduced value is a
     * mutable result container, such as an {@code ArrayList}, and elements are
     * incorporated by updating the state of the result rather than by replacing
     * the result.  This produces a result equivalent to:
     * <pre>{@code
     *     R result = supplier.get();
     *     for (T element : this stream)
     *         accumulator.accept(result, element);
     *     return result;
     * }</pre>
     *
     * <p>Like {@link #reduce(Object, BinaryOperator)}, {@code collect} operations
     * can be parallelized without requiring additional synchronization.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * <p><b>API Note:</b><br>
     * There are many existing classes in the JDK whose signatures are
     * well-suited for use with method references as arguments to {@code collect()}.
     * For example, the following will accumulate strings into an {@code ArrayList}:
     * <pre>{@code
     *     List<String> asList = stringStream.collect(ArrayList::new, ArrayList::add,
     *                                                ArrayList::addAll);
     * }</pre>
     *
     * <p>The following will take a stream of strings and concatenate them into a
     * single string:
     * <pre>{@code
     *     String concat = stringStream.collect(StringBuilder::new, StringBuilder::append,
     *                                          StringBuilder::append)
     *                                 .toString();
     * }</pre>
     *
     * @param <R> the type of the mutable result container
     * @param supplier a function that creates a new mutable result container.
     *                 For a parallel execution, this function may be called
     *                 multiple times and must return a fresh value each time
     * @param accumulator an <em>associative</em>, <em>non-interfering</em>,
     *                    <em>stateless</em> function that must fold an element
     *                    into a result container
     * @param combiner an <em>associative</em>, <em>non-interfering</em>,
     *                 <em>stateless</em> function that accepts two partial
     *                 result containers and merges them, which must be
     *                 compatible with the accumulator function.  The combiner
     *                 function must fold the elements from the second result
     *                 container into the first result container
     * @return the result of the reduction
     */
    <R> R collect(Supplier<R> supplier,
                  BiConsumer<R, ? super T> accumulator,
                  BiConsumer<R, R> combiner);

    /**
     * Performs a <em>mutable reduction</em> operation on the elements of this
     * stream using a {@code Collector}.  A {@code Collector}
     * encapsulates the functions used as arguments to
     * {@link #collect(Supplier, BiConsumer, BiConsumer)}, allowing for reuse of
     * collection strategies and composition of collect operations such as
     * multiple-level grouping or partitioning.
     *
     * <p>If the stream is parallel, and the {@code Collector}
     * is {@link Collector.Characteristics#CONCURRENT concurrent}, and
     * either the stream is unordered or the collector is
     * {@link Collector.Characteristics#UNORDERED unordered},
     * then a concurrent reduction will be performed (see {@link Collector} for
     * details on concurrent reduction.)
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * <p>When executed in parallel, multiple intermediate results may be
     * instantiated, populated, and merged so as to maintain isolation of
     * mutable data structures.  Therefore, even when executed in parallel
     * with non-thread-safe data structures (such as {@code ArrayList}), no
     * additional synchronization is needed for a parallel reduction.
     *
     * <p><b>API Note:</b><br>
     * The following will accumulate strings into an ArrayList:
     * <pre>{@code
     *     List<String> asList = stringStream.collect(Collectors.toList());
     * }</pre>
     *
     * <p>The following will classify {@code Person} objects by city:
     * <pre>{@code
     *     Map<String, List<Person>> peopleByCity
     *         = personStream.collect(Collectors.groupingBy(Person::getCity));
     * }</pre>
     *
     * <p>The following will classify {@code Person} objects by state and city,
     * cascading two {@code Collector}s together:
     * <pre>{@code
     *     Map<String, Map<String, List<Person>>> peopleByStateAndCity
     *         = personStream.collect(Collectors.groupingBy(Person::getState,
     *                                                      Collectors.groupingBy(Person::getCity)));
     * }</pre>
     *
     * @param <R> the type of the result
     * @param <A> the intermediate accumulation type of the {@code Collector}
     * @param collector the {@code Collector} describing the reduction
     * @return the result of the reduction
     * @see #collect(Supplier, BiConsumer, BiConsumer)
     * @see Collectors
     */
    <R, A> R collect(Collector<? super T, A, R> collector);

    /**
     * Returns the minimum element of this stream according to the provided
     * {@code Comparator}.  This is a special case of a <em>reduction</em>.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * @param comparator a <em>non-interfering</em>, <em>stateless</em>
     *                   {@code Comparator} to compare elements of this stream
     * @return an {@code Optional} describing the minimum element of this stream,
     *         or an empty {@code Optional} if the stream is empty
     * @throws NullPointerException if the minimum element is null
     */
    Optional<T> min(Comparator<? super T> comparator);

    /**
     * Returns the maximum element of this stream according to the provided
     * {@code Comparator}.  This is a special case of a <em>reduction</em>.
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * @param comparator a <em>non-interfering</em>, <em>stateless</em>
     *                   {@code Comparator} to compare elements of this stream
     * @return an {@code Optional} describing the maximum element of this stream,
     *         or an empty {@code Optional} if the stream is empty
     * @throws NullPointerException if the maximum element is null
     */
    Optional<T> max(Comparator<? super T> comparator);
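    // Illustrative usage (a sketch, not part of the original source): min with
    // an explicit Comparator; the result is a java9.util.Optional.
    //
    //     Optional<String> first = Stream.of("pear", "apple", "fig")
    //                                    .min((a, b) -> a.compareTo(b));  // Optional[apple]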

    /**
     * Returns the count of elements in this stream.  This is a special case of
     * a <em>reduction</em> and is (at least in the predominant case where the
     * count can't be directly obtained from the stream source) equivalent to:
     * <pre>{@code
     *     return mapToLong(e -> 1L).sum();
     * }</pre>
     *
     * <p>This is a <em>terminal operation</em>.
     *
     * <p><b>API Note:</b><br>
     * An implementation may choose to not execute the stream pipeline (either
     * sequentially or in parallel) if it is capable of computing the count
     * directly from the stream source.  In such cases no source elements will
     * be traversed and no intermediate operations will be evaluated.
     * Behavioral parameters with side-effects, which are strongly discouraged
     * except for harmless cases such as debugging, may be affected.  For
     * example, consider the following stream:
     * <pre>{@code
     *     List<String> l = Arrays.asList("A", "B", "C", "D");
     *     long count = l.stream().peek(System.out::println).count();
     * }</pre>
     * The number of elements covered by the stream source, a {@code List}, is
     * known and the intermediate operation, {@code peek}, does not inject into
     * or remove elements from the stream (as may be the case for
     * {@code flatMap} or {@code filter} operations).  Thus the count is the
     * size of the {@code List} and there is no need to execute the pipeline
     * and, as a side-effect, print out the list elements.
     *
     * @return the count of elements in this stream
     */
    long count();

    /**
     * Returns whether any elements of this stream match the provided
     * predicate.  May not evaluate the predicate on all elements if not
     * necessary for determining the result.  If the stream is empty then
     * {@code false} is returned and the predicate is not evaluated.
     *
     * <p>This is a <em>short-circuiting terminal operation</em>.
     *
     * <p><b>API Note:</b><br>
     * This method evaluates the <em>existential quantification</em> of the
     * predicate over the elements of the stream (for some x P(x)).
     *
     * @param predicate a <em>non-interfering</em>, <em>stateless</em>
     *                  predicate to apply to elements of this stream
     * @return {@code true} if any elements of the stream match the provided
     *         predicate, otherwise {@code false}
     */
    boolean anyMatch(Predicate<? super T> predicate);

    /**
     * Returns whether all elements of this stream match the provided predicate.
     * May not evaluate the predicate on all elements if not necessary for
     * determining the result.  If the stream is empty then {@code true} is
     * returned and the predicate is not evaluated.
     *
     * <p>This is a <em>short-circuiting terminal operation</em>.
     *
     * <p><b>API Note:</b><br>
     * This method evaluates the <em>universal quantification</em> of the
     * predicate over the elements of the stream (for all x P(x)).  If the
     * stream is empty, the quantification is said to be <em>vacuously
     * satisfied</em> and is always {@code true} (regardless of P(x)).
     *
     * @param predicate a <em>non-interfering</em>, <em>stateless</em>
     *                  predicate to apply to elements of this stream
     * @return {@code true} if either all elements of the stream match the
     *         provided predicate or the stream is empty, otherwise {@code false}
     */
    boolean allMatch(Predicate<? super T> predicate);

    /**
     * Returns whether no elements of this stream match the provided predicate.
     * May not evaluate the predicate on all elements if not necessary for
     * determining the result.  If the stream is empty then {@code true} is
     * returned and the predicate is not evaluated.
     *
     * <p>This is a <em>short-circuiting terminal operation</em>.
     *
     * <p><b>API Note:</b><br>
     * This method evaluates the <em>universal quantification</em> of the
     * negated predicate over the elements of the stream (for all x ~P(x)).  If
     * the stream is empty, the quantification is said to be vacuously satisfied
     * and is always {@code true}, regardless of P(x).
     *
     * @param predicate a <em>non-interfering</em>, <em>stateless</em>
     *                  predicate to apply to elements of this stream
     * @return {@code true} if either no elements of the stream match the
     *         provided predicate or the stream is empty, otherwise {@code false}
     */
    boolean noneMatch(Predicate<? super T> predicate);
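    // Illustrative usage (a sketch, not part of the original source): the three
    // match operations short-circuit as soon as the answer is known.
    //
    //     Stream.of(1, 2, 3).anyMatch(n -> n > 2);    // true
    //     Stream.of(1, 2, 3).allMatch(n -> n > 0);    // true
    //     Stream.of(1, 2, 3).noneMatch(n -> n > 3);   // true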

    /**
     * Returns an {@link Optional} describing the first element of this stream,
     * or an empty {@code Optional} if the stream is empty.  If the stream has
     * no encounter order, then any element may be returned.
     *
     * <p>This is a <em>short-circuiting terminal operation</em>.
     *
     * @return an {@code Optional} describing the first element of this stream,
     *         or an empty {@code Optional} if the stream is empty
     * @throws NullPointerException if the element selected is null
     */
    Optional<T> findFirst();

    /**
     * Returns an {@link Optional} describing some element of the stream, or an
     * empty {@code Optional} if the stream is empty.
     *
     * <p>This is a <em>short-circuiting terminal operation</em>.
     *
     * <p>The behavior of this operation is explicitly nondeterministic; it is
     * free to select any element in the stream.  This is to allow for maximal
     * performance in parallel operations; the cost is that multiple invocations
     * on the same source may not return the same result.  (If a stable result
     * is desired, use {@link #findFirst()} instead.)
     *
     * @return an {@code Optional} describing some element of this stream, or an
     *         empty {@code Optional} if the stream is empty
     * @throws NullPointerException if the element selected is null
     * @see #findFirst()
     */
    Optional<T> findAny();

    // Static factories

    /**
     * Returns a builder for a {@link Stream}.
     *
     * @param <T> type of elements
     * @return a stream builder
     */
    public static <T> Builder<T> builder() {
        return new Streams.StreamBuilderImpl<>();
    }

    /**
     * Returns an empty sequential {@link Stream}.
     *
     * @param <T> the type of stream elements
     * @return an empty sequential stream
     */
    public static <T> Stream<T> empty() {
        return StreamSupport.stream(Spliterators.emptySpliterator(), false);
    }

    /**
     * Returns a sequential {@link Stream} containing a single element.
     *
     * @param t the single element
     * @param <T> the type of stream elements
     * @return a singleton sequential stream
     */
    public static <T> Stream<T> of(T t) {
        return StreamSupport.stream(new Streams.StreamBuilderImpl<>(t), false);
    }

    /**
     * Returns a sequential {@link Stream} containing a single element, if
     * non-null, otherwise returns an empty {@code Stream}.
     *
     * @param t the single element
     * @param <T> the type of stream elements
     * @return a stream with a single element if the specified element
     *         is non-null, otherwise an empty stream
     * @since 9
     */
    public static <T> Stream<T> ofNullable(T t) {
        return t == null ? empty()
                         : StreamSupport.stream(new Streams.StreamBuilderImpl<>(t), false);
    }

    /**
     * Returns a sequential ordered {@link Stream} whose elements are the
     * specified values.
     *
     * @param <T> the type of stream elements
     * @param values the elements of the new stream
     * @return the new stream
     */
    public static <T> Stream<T> of(@SuppressWarnings("unchecked") T... values) {
        return java9.util.J8Arrays.stream(values); // Creating a stream from an array is safe
    }
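    // Illustrative usage (a sketch, not part of the original source):
    // ofNullable avoids an explicit null check when bridging nullable values
    // into a stream pipeline.
    //
    //     Stream.ofNullable(null).count();     // 0
    //     Stream.ofNullable("x").count();      // 1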

    /**
     * Returns an infinite sequential ordered {@link Stream} produced by iterative
     * application of a function {@code f} to an initial element {@code seed},
     * producing a {@code Stream} consisting of {@code seed}, {@code f(seed)},
     * {@code f(f(seed))}, etc.
     *
     * <p>The first element (position {@code 0}) in the {@code Stream} will be
     * the provided {@code seed}.  For {@code n > 0}, the element at position
     * {@code n}, will be the result of applying the function {@code f} to the
     * element at position {@code n - 1}.
     *
     * <p>The action of applying {@code f} for one element
     * <em>happens-before</em> the action of applying {@code f} for subsequent
     * elements.  For any given element the action may be performed in whatever
     * thread the library chooses.
     *
     * @param <S> the type of the operand and seed, a subtype of T
     * @param <T> the type of stream elements
     * @param seed the initial element
     * @param f a function to be applied to the previous element to produce
     *          a new element
     * @return a new sequential {@code Stream}
     */
    public static <T, S extends T> Stream<T> iterate(S seed, UnaryOperator<S> f) {
        Objects.requireNonNull(f);
        Spliterator<T> spliterator = new Spliterators.AbstractSpliterator<T>(Long.MAX_VALUE,
                Spliterator.ORDERED | Spliterator.IMMUTABLE) {
            S prev;
            boolean started;

            @Override
            public boolean tryAdvance(Consumer<? super T> action) {
                Objects.requireNonNull(action);
                S s;
                if (started) {
                    s = f.apply(prev);
                } else {
                    s = seed;
                    started = true;
                }
                action.accept(prev = s);
                return true;
            }
        };
        return StreamSupport.stream(spliterator, false);
    }
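    // Illustrative usage (a sketch, not part of the original source): iterate
    // produces an infinite ordered stream, so pair it with a short-circuiting
    // operation such as limit.
    //
    //     Stream.iterate(1, i -> i * 2).limit(4)   // 1, 2, 4, 8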

    /**
     * Returns a sequential ordered {@code Stream} produced by iterative
     * application of the given {@code next} function to an initial element,
     * conditioned on satisfying the given {@code hasNext} predicate.  The
     * stream terminates as soon as the {@code hasNext} predicate returns false.
     *
     * <p>{@code RefStreams.iterate} should produce the same sequence of elements
     * as produced by the corresponding for-loop:
     * <pre>{@code
     *     for (T index=seed; hasNext.test(index); index = next.apply(index)) {
     *         ...
     *     }
     * }</pre>
     *
     * <p>The resulting sequence may be empty if the {@code hasNext} predicate
     * does not hold on the seed value.  Otherwise the first element will be the
     * supplied {@code seed} value, the next element (if present) will be the
     * result of applying the {@code next} function to the {@code seed} value,
     * and so on iteratively until the {@code hasNext} predicate indicates that
     * the stream should terminate.
     *
     * <p>The action of applying the {@code hasNext} predicate to an element
     * <em>happens-before</em> the action of applying the {@code next} function
     * to that element.  The action of applying the {@code next} function for
     * one element <em>happens-before</em> the action of applying the
     * {@code hasNext} predicate for subsequent elements.  For any given element
     * an action may be performed in whatever thread the library chooses.
     *
     * @param <S> the type of the operand, predicate input and seed, a subtype of T
     * @param <T> the type of stream elements
     * @param seed the initial element
     * @param hasNext a predicate to apply to elements to determine when the
     *                stream must terminate
     * @param next a function to be applied to the previous element to produce
     *             a new element
     * @return a new sequential {@code Stream}
     * @since 9
     */
    public static <T, S extends T> Stream<T> iterate(S seed, Predicate<? super S> hasNext, UnaryOperator<S> next) {
        Objects.requireNonNull(next);
        Objects.requireNonNull(hasNext);
        Spliterator<T> spliterator = new Spliterators.AbstractSpliterator<T>(Long.MAX_VALUE,
                Spliterator.ORDERED | Spliterator.IMMUTABLE) {
            S prev;
            boolean started, finished;

            @Override
            public boolean tryAdvance(Consumer<? super T> action) {
                Objects.requireNonNull(action);
                if (finished) {
                    return false;
                }
                S s;
                if (started) {
                    s = next.apply(prev);
                } else {
                    s = seed;
                    started = true;
                }
                if (!hasNext.test(s)) {
                    prev = null;
                    finished = true;
                    return false;
                }
                action.accept(prev = s);
                return true;
            }

            @Override
            public void forEachRemaining(Consumer<? super T> action) {
                Objects.requireNonNull(action);
                if (finished) {
                    return;
                }
                finished = true;
                S s = started ? next.apply(prev) : seed;
                prev = null;
                while (hasNext.test(s)) {
                    action.accept(s);
                    s = next.apply(s);
                }
            }
        };
        return StreamSupport.stream(spliterator, false);
    }

    /**
     * Returns an infinite sequential unordered {@link Stream} where each
     * element is generated by the provided {@code Supplier}.  This is
     * suitable for generating constant streams, streams of random elements,
     * etc.
     *
     * @param <T> the type of stream elements
     * @param s the {@code Supplier} of generated elements
     * @return a new infinite sequential unordered {@code Stream}
     */
    public static <T> Stream<T> generate(Supplier<? extends T> s) {
        Objects.requireNonNull(s);
        return StreamSupport.stream(
                new StreamSpliterators.InfiniteSupplyingSpliterator.OfRef<>(Long.MAX_VALUE, s), false);
    }

    /**
     * Creates a lazily concatenated {@link Stream} whose elements are all the
     * elements of the first stream followed by all the elements of the
     * second stream.  The resulting stream is ordered if both
     * of the input streams are ordered, and parallel if either of the input
     * streams is parallel.  When the resulting stream is closed, the close
     * handlers for both input streams are invoked.
     *
     * <p>This method operates on the two input streams and binds each stream
     * to its source.  As a result subsequent modifications to an input stream
     * source may not be reflected in the concatenated stream result.
     *
     * <p><b>Implementation Note:</b><br>
     * Use caution when constructing streams from repeated concatenation.
     * Accessing an element of a deeply concatenated stream can result in deep
     * call chains, or even {@code StackOverflowError}.
     *
     * <p>Subsequent changes to the sequential/parallel execution mode of the
     * returned stream are not guaranteed to be propagated to the input streams.
     *
     * <p><b>API Note:</b><br>
     * To preserve optimization opportunities this method binds each stream to
     * its source and accepts only two streams as parameters.  For example, the
     * exact size of the concatenated stream source can be computed if the exact
     * size of each input stream source is known.
     * To concatenate more streams without binding, or without nested calls to
     * this method, try creating a stream of streams and flat-mapping with the
     * identity function, for example:
     * <pre>{@code
     *     Stream<T> concat = Stream.of(s1, s2, s3, s4).flatMap(s -> s);
     * }</pre>
     *
     * @param <T> The type of stream elements
     * @param a the first stream
     * @param b the second stream
     * @return the concatenation of the two input streams
     */
    public static <T> Stream<T> concat(Stream<? extends T> a, Stream<? extends T> b) {
        Objects.requireNonNull(a);
        Objects.requireNonNull(b);

        @SuppressWarnings("unchecked")
        Spliterator<T> split = new Streams.ConcatSpliterator.OfRef<>(
                (Spliterator<T>) a.spliterator(), (Spliterator<T>) b.spliterator());
        Stream<T> stream = StreamSupport.stream(split, a.isParallel() || b.isParallel());
        return stream.onClose(Streams.composedClose(a, b));
    }
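    // Illustrative usage (a sketch, not part of the original source): a direct
    // two-stream concatenation; ordering is preserved since both inputs are
    // ordered.
    //
    //     Stream<String> both = Stream.concat(Stream.of("a", "b"), Stream.of("c"));
    //     // "a", "b", "c"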

    /**
     * A mutable builder for a {@code Stream}.  This allows the creation of a
     * {@code Stream} by generating elements individually and adding them to the
     * {@code Builder} (without the copying overhead that comes from using
     * an {@code ArrayList} as a temporary buffer.)
     *
     * <p>A stream builder has a lifecycle, which starts in a <em>building</em>
     * phase, during which elements can be added, and then transitions to a
     * <em>built</em> phase, after which elements may not be added.  The built
     * phase begins when the {@link #build()} method is called, which creates an
     * ordered {@code Stream} whose elements are the elements that were added to
     * the stream builder, in the order they were added.
     *
     * @param <T> the type of stream elements
     * @see Stream#builder()
     * @since 1.8
     */
    public interface Builder<T> extends Consumer<T> {

        /**
         * Adds an element to the stream being built.
         *
         * @throws IllegalStateException if the builder has already transitioned to
         *         the built state
         */
        @Override
        void accept(T t);

        /**
         * Adds an element to the stream being built.
         *
         * <p><b>Implementation Requirements:</b><br>
         * The default implementation behaves as if:
         * <pre>{@code
         *     accept(t)
         *     return this;
         * }</pre>
         *
         * @param t the element to add
         * @return {@code this} builder
         * @throws IllegalStateException if the builder has already transitioned to
         *         the built state
         */
        default Builder<T> add(T t) {
            accept(t);
            return this;
        }

        /**
         * Builds the stream, transitioning this builder to the built state.
         * An {@code IllegalStateException} is thrown if there are further attempts
         * to operate on the builder after it has entered the built state.
         *
         * @return the built stream
         * @throws IllegalStateException if the builder has already transitioned to
         *         the built state
         */
        Stream<T> build();
    }
}