com.thesett.junit.extensions.example.ContinuousTestPerf
JUnit Toolkit enhances JUnit with performance testing, asymptotic behaviour analysis, and concurrency testing.
/*
* Copyright The Sett Ltd, 2005 to 2014.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.thesett.junit.extensions.example;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import junit.framework.TestCase;

import com.thesett.common.throttle.SleepThrottle;
import com.thesett.common.throttle.Throttle;
import com.thesett.junit.extensions.TestThreadAware;
import com.thesett.junit.extensions.TimingController;
import com.thesett.junit.extensions.TimingControllerAware;
/**
 * ContinuousTestPerf is an example of a self-timed test case that runs continuously until it is told to stop. This is
 * often useful when writing asynchronous test cases, where the number of iterations to complete is not set in
 * advance, that is, typically when a duration-based test will be used. The desired effect is to run some test
 * processes continuously, logging many timings, until the duration expires or the test framework is shut down by
 * pressing CTRL-C.
*
 * <p/>One alternative solution is to use an asymptotic test case, set its size parameter to a few
 * tens/hundreds/thousands of iterations (a large batch), run the test processes against that many iterations of the
 * test case, then exit the test method, running another batch only if there is time left to do so. This is a fairly
 * ugly solution, because it turns what should be a set of continuously running processes into an approximation of
 * that by running them in large batches.
*
 * <p/>To give a concrete example, suppose a test consists of a database process, one writer thread and one reader
 * thread. The writer thread continuously writes new records into the database at a controlled rate. The reader thread
 * queries the database, to see how quickly it can consume the results of a test query as the database grows; it also
 * cleans up the database as it goes, removing old records based on some criteria. The idea behind this example is
 * that it is a fairly complex producer/consumer test that may behave unpredictably enough to require analysis by
 * simulation. The reader thread runs continuously and asynchronously, logging results back to the test framework
 * through the {@link TimingController} interface. The aim is to plot a graph of the query duration as time goes by,
 * in order to understand how the system behaves and to resolve any issues with its performance. One way to simulate
 * the continuous behaviour of this set of processes would be to run batches of 1000 reads or writes, logging 1000
 * timing results, before terminating the test method. This would mean that the write thread would need to coordinate
 * with the read thread, waiting when it reaches 1000 writes and being unblocked by the read thread when it consumes
 * 1000 reads, whereupon that cycle of the test completes as a batch. The batchiness is undesirable for several
 * reasons: it may artificially prevent the depth of the producer/consumer event stream from growing beyond 1000,
 * which may be smaller than a real buffer limit that the code under test might be implementing; it may negate the
 * effect of a read or write bias that is affecting the code under test; it may provide an opportunity for the garbage
 * collector to run; and so on.
*
 * <p/>ContinuousTestPerf uses the {@link TimingController#completeTest} call-back to explicitly log timings with the
 * test framework. This call-back is documented to throw an InterruptedException if the test is to stop immediately.
 * The framework will do this if it is shutting down (because CTRL-C was invoked), or because the test is running for
 * a fixed duration and that duration has expired. The example uses a bounded producer/consumer queue under a
 * particular event arrival rate and processing time, and provides statistics on the throughput and latency of events
 * passing through this queueing system.
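 *
 * <p/>The general shape of such a continuous, self-timed loop is sketched below. The method and variable names are
 * illustrative only, and the three-argument {@link TimingController#completeTest} overload is assumed to take a pass
 * flag, a size parameter and an explicit timing in nanoseconds:
 * <pre>
 * try
 * {
 *     while (true)
 *     {
 *         long start = System.nanoTime();
 *         processOneEvent(); // Hypothetical unit of work being timed.
 *         timingController.completeTest(true, 1, System.nanoTime() - start);
 *     }
 * }
 * catch (InterruptedException e)
 * {
 *     // The framework is shutting down, or the test duration has expired; exit cleanly.
 * }
 * </pre>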
*
 * <p/>One thing to note is that the {@link TimingController} for the test thread is set in the per-thread test
 * fixture, during the {@link #threadSetUp()} method. This is because the timing call-backs are made by the reader
 * thread, which is a different thread from the one on which the test framework invokes the test method. So long as
 * the correct timing controller is used, the framework will be able to identify which test thread the timings are
 * for. A sketch of such a set-up method follows.
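 *
 * <p/>A minimal sketch, assuming that {@link TimingController} exposes a {@code getControllerForCurrentThread()}
 * accessor; the fixture type and field names are illustrative, not part of the framework:
 * <pre>
 * public void threadSetUp()
 * {
 *     TestFixture fixture = new TestFixture();
 *     // Capture the controller for this test thread; the reader thread will log timings through it.
 *     fixture.timingController = tc.getControllerForCurrentThread();
 *     testFixture.set(fixture);
 * }
 * </pre>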
 *
 * <pre><p/><table id="crc"><caption>CRC Card</caption>
 * <tr><th> Responsibilities <th> Collaborations
 * <tr><td> Time the latency of an event over a bounded producer/consumer queue under varying arrival/processing rates.
 * </table></pre>
 *
* @author Rupert Smith
*/
public class ContinuousTestPerf extends TestCase implements TimingControllerAware, TestThreadAware
{
    /** Used for debugging. */
    // private static final Logger log = Logger.getLogger(ContinuousTestPerf.class);

    /** Defines the event arrival rate. */
    private static final float ARRIVAL_RATE = 10f;

    /** Defines the event processing rate, which is 1/(processing time). */
    private static final float PROCESSING_RATE = 11f;

    /** Defines the maximum size of the event buffer. */
    private static final int BUFFER_SIZE = 10;

    /** The timing controller. */
    private TimingController tc;

    /** Thread local to hold the per-thread test fixtures. */
    // The generic type parameter was lost from the original listing; 'TestFixture' is illustrative.
    final ThreadLocal<TestFixture> testFixture = new ThreadLocal<TestFixture>();
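
    /*
     * The remainder of the class body is truncated in the original listing. The sketch below is a plausible
     * completion based on the Javadoc above, not the original code: the TestFixture type, all member names, and the
     * framework calls assumed here (getControllerForCurrentThread, completeTest(boolean, int, long), setRate,
     * throttle) are assumptions. TestThreadAware may declare further methods not implemented in this sketch.
     */

    /** Per-thread test fixture, holding the event queue and the timing controller for the test thread. */
    private static class TestFixture
    {
        /** The bounded producer/consumer queue; events are time-stamped with their enqueue time in nanoseconds. */
        BlockingQueue<Long> queue = new LinkedBlockingQueue<Long>(BUFFER_SIZE);

        /** The timing controller for the test thread that created this fixture. */
        TimingController timingController;
    }

    /** {@inheritDoc} Accepts the timing controller from the test framework. */
    public void setTimingController(TimingController controller)
    {
        tc = controller;
    }

    /** {@inheritDoc} Creates the per-thread fixture, capturing the timing controller for this test thread. */
    public void threadSetUp()
    {
        TestFixture fixture = new TestFixture();
        fixture.timingController = tc.getControllerForCurrentThread();
        testFixture.set(fixture);
    }

    /** {@inheritDoc} Discards the per-thread fixture. */
    public void threadTearDown()
    {
        testFixture.remove();
    }

    /**
     * Continuously feeds events through the bounded queue at ARRIVAL_RATE, consumes them at PROCESSING_RATE, and
     * logs the latency of each event until the framework signals a stop by interruption.
     *
     * @throws Exception Any exception is allowed to fall through, failing the test.
     */
    public void testProducerConsumerLatency() throws Exception
    {
        final TestFixture fixture = testFixture.get();

        // Throttles controlling the writer (arrival) and reader (processing) rates.
        Throttle arrivalThrottle = new SleepThrottle();
        arrivalThrottle.setRate(ARRIVAL_RATE);

        final Throttle processingThrottle = new SleepThrottle();
        processingThrottle.setRate(PROCESSING_RATE);

        // Reader thread: consumes events, logging one timing per event through the timing controller. The
        // completeTest call-back throws InterruptedException when the test must stop immediately.
        Thread reader = new Thread(new Runnable()
            {
                public void run()
                {
                    try
                    {
                        while (true)
                        {
                            processingThrottle.throttle();

                            long sentAtNanos = fixture.queue.take();
                            fixture.timingController.completeTest(true, 1, System.nanoTime() - sentAtNanos);
                        }
                    }
                    catch (InterruptedException e)
                    {
                        // The framework is shutting down, or the test duration has expired; exit cleanly.
                    }
                }
            });
        reader.start();

        // Writer loop: offers time-stamped events at the controlled arrival rate until interrupted.
        try
        {
            while (true)
            {
                arrivalThrottle.throttle();
                fixture.queue.put(System.nanoTime());
            }
        }
        catch (InterruptedException e)
        {
            // Stop writing when the framework interrupts the test method.
        }
        finally
        {
            reader.interrupt();
        }
    }
}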