// Copyright 2015-2022 by Carnegie Mellon University
// See license information in LICENSE.txt
package org.cert.netsa.mothra.tools
import org.cert.netsa.io.ipfix.{ExportStream, InfoModel}
import org.cert.netsa.mothra.packer.{
CorePacker, PackerConfig, PackerThreadFactory, PackingLogic,
PartitionerConfigurator, PartitionerPackLogic, RunTimeCodeLoader,
Version, WorkFile}
import com.typesafe.scalalogging.StrictLogging
import java.io.{PrintWriter, StringWriter}
import java.lang.management.ManagementFactory
import java.nio.channels.Channels
import java.nio.file.{Paths => JPaths}
import java.util.concurrent.{Executors, ConcurrentLinkedQueue, Future,
LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{LocatedFileStatus, RemoteIterator}
import org.apache.hadoop.fs.{Path => HPath}
import org.apache.hadoop.io.compress.{CodecPool, CompressionCodecFactory}
import resource.managed // see http://jsuereth.com/scala-arm/index.html
import scala.util.control.NonFatal
import scala.util.{Failure, Success, Try}
/**
* Object to implement the Repacker application.
*
* Typical Usage in a Spark environment:
*
* `spark-submit --class org.cert.netsa.mothra.packer.tools.RepackerMain mothra-tools.jar <partition-conf> <dest-dir> <work-dir> <s1> [<s2> ...]`
*
* where:
*
* partition-conf: Partitioning configuration file as Hadoop URI
*
* dest-dir: Root destination directory as Hadoop URI
*
* work-dir: Working directory on the local disk (not `file://`)
*
* s1..sn: Source directories as Hadoop URIs
*
* Makes a single recursive scan of the source directories `<s1>`,`<s2>`,...
* for IPFIX files. Splits the IPFIX records in the source files into output
* file(s) in a time-based directory structure based on the partitioning
* rules in the partitioning configuration file `<partition-conf>`. The output
* files are initially created in the working directory `<work-dir>`, and,
* once ALL input files have been read, are moved to the destination directory
* `<dest-dir>` and the initial source files removed. The `<dest-dir>` may be
* a source directory.
*
* Repacker runs as a batch process; not as a daemon.
*
* Example/Intended uses for the Repacker include:
*
* (1) Changing how the records are packed---for example, packing by the
* silkAppLabel instead of the protocolIdentifier.
*
* (2) Combining multiple files for an hour into a single file for that hour,
* merging hourly files into a file that covers a longer duration, or
* splitting a longer-duration file into smaller files.
*
* (3) Changing the compression algorithm used on the IPFIX files.
*
* Currently the repacker does NOT support modifying the records; it only
* moves the records into different files.
*
* Repacker uses multiple threads. By default, each source directory
* specified on the command line gets a dedicated thread for scanning that
* directory and its subdirectories recursively for IPFIX files, and another
* thread dedicated to reading those files and repacking them. The repacker
* does not support having multiple threads scan a directory, but it does
* allow multiple threads to process a single directory's files.
*
* The `<work-dir>` must NOT be a source directory or a subdirectory of a
* source directory. To repack the files in an existing working directory,
* use a different working directory. The repacker ignores any files in the
* `<work-dir>` that exist when the repacker is started, and it ignores files
* placed there by other programs.
*
* The property values that are used by the repacker are:
*
* `mothra.repacker.compression` -- the compression algorithm used for the
* new IPFIX files. Values typically supported by Hadoop include `bzip2`,
* `gzip`, `lz4`, `lzo`, `lzop`, `snappy`, and `default`. The empty string
* indicates no compression.
*
* `mothra.repacker.hoursPerFile` -- The number of hours covered by each file
* in the repository. The valid range is 1 (a file for each hour) to 24 (one
* file per day). The default is 1.
*
* `mothra.repacker.maxScanJobs` -- the maximum number of threads dedicated
* to scanning the source directories. The default (and maximum) value is
* the number of source directories.
*
* `mothra.repacker.readersPerScanner` -- the number of reader/repacker
* threads to create for each source directory. The default is 1.
*
* `mothra.repacker.maxThreads` -- the maximum number of worker (scanner and
* repacker) threads to create. The default value is computed using the
* formula: (maxScanJobs * (1 + readersPerScanner)).
*
* `mothra.repacker.maximumSize` -- the (approximate) maximum file size to
* create. When specified, a work-file that exceeds this size is closed and
* moved into the repository. NOTES: (1) This value uses the uncompressed
* file size, and does not consider any compression that may occur when the
* file is moved from the workDir to the tgtDir. In addition, a file's size
* tends to grow in large steps because of buffering by the Java stream code.
* (2) Specifying a `maximumSize` may temporarily cause duplicate records to
* appear in the repository, since some records are in the original files and
* some are in the new file. Once Repacker finishes scanning all files, the
* original files are removed and only the newly packed files are left. This
* issue of temporarily having duplicate records in the repository will be
* resolved in a future release.
*
* `mothra.repacker.archiveDirectory` -- the root directory into which
* working files are moved after the repacker has finished running, as a
* Hadoop URI. If not specified, the working files are deleted.
*
* `mothra.repacker.fileCacheSize` -- The maximum size of the open file
* cache. This is the maximum number of open files maintained by the file
* cache for writing to files in the work directory. The repacker does not
* limit the number of files in the work directory; this only limits the
* number of open files. Once the cache reaches this number of open files
* and the packer needs to (re-)open a file, the packer closes the
* least-recently-used file. This value does not include the file handles
* required when reading incoming files or when copying files from the work
* directory to the data directory. The default is 2000; the minimum
* permitted is 128.
*
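* As an illustration only (the paths and property values below are
* hypothetical), these properties are typically passed to the driver JVM
* via `-D` options, for example:
*
* {{{
* spark-submit \
*   --class org.cert.netsa.mothra.packer.tools.RepackerMain \
*   --driver-java-options "-Dmothra.repacker.compression=gzip -Dmothra.repacker.hoursPerFile=24" \
*   mothra-tools.jar \
*   hdfs:///conf/packing.scala hdfs:///data/repository /var/tmp/repacker-work hdfs:///data/incoming
* }}}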
*/
object RepackerMain extends App with StrictLogging {
def usage(full: Boolean = false): Unit = {
print("""
Usage: spark-submit --class org.cert.netsa.mothra.packer.tools.RepackerMain mothra-tools.jar <partition-conf> <dest-dir> <work-dir> <s1> [<s2> ...]
partition-conf: Partitioning configuration file as Hadoop URI
dest-dir: Root destination directory as Hadoop URI
work-dir: Working directory on the local disk (not file://)
s1..sn: Source directories as Hadoop URIs
""")
if ( full ) {
print(s"""
Makes a single recursive scan of the source directories <s1>,<s2>,... for
IPFIX files. Splits the IPFIX records in the source files into output file(s)
in a time-based directory structure based on the partitioning rules in the
partitioning configuration file <partition-conf>. The output files are
initially created in the working directory <work-dir>, and, once ALL input
files have been read, are moved to the destination directory <dest-dir> and
the initial source files removed.
The repacker runs as a batch process; not as a daemon.
Example/Intended uses for the repacker include:
(1) Changing how the records are packed---for example, packing by the
silkAppLabel instead of the protocolIdentifier.
(2) Combining multiple files for an hour into a single file for that hour,
merging hourly files into a file that covers a longer duration, or
splitting a longer-duration file into smaller files.
(3) Changing the compression algorithm used on the IPFIX files.
Currently the repacker does NOT support modifying the records; it only moves
the records into different files.
Repacker uses multiple threads. By default, each source directory specified
on the command line gets a dedicated thread for scanning that directory and
its subdirectories recursively for IPFIX files, and another thread dedicated
to reading those files and repacking them. The repacker does not support
having multiple threads scan a directory, but it does allow multiple threads
to process a single directory's files.
The <work-dir> must NOT be a source directory or a subdirectory of a source
directory. To repack the files in an existing working directory, use a
different working directory. The repacker ignores any files in the <work-dir>
that exist when the repacker is started, and it ignores files placed there by
other programs.
The property values that are used by the repacker are:
mothra.repacker.compression -- the compression algorithm used for the new
IPFIX files. Values typically supported by Hadoop include bzip2, gzip,
lz4, lzo, lzop, snappy, and default. The empty string indicates no
compression. The default compression is '${CorePacker.DEFAULT_COMPRESSION}'.
mothra.repacker.hoursPerFile -- The number of hours covered by each file
in the repository. The valid range is 1 (a file for each hour) to 24 (one
file per day). The default is ${CorePacker.DEFAULT_HOURS_PER_FILE}.
mothra.repacker.maxScanJobs -- the maximum number of threads dedicated to
scanning the source directories. The default (and maximum) value is the
number of source directories.
mothra.repacker.readersPerScanner -- the number of reader/repacker threads to
create for each source directory. The default is 1.
mothra.repacker.maxThreads -- the maximum number of worker (scanner and
repacker) threads to create. The default value is computed using the formula:
(maxScanJobs * (1 + readersPerScanner)).
mothra.repacker.maximumSize -- the (approximate) maximum file size to
create. When specified, a work-file that exceeds this size is closed and
moved into the repository. NOTES: (1) This value uses the uncompressed
file size, and does not consider any compression that may occur when the
file is moved from the workDir to the tgtDir. In addition, a file's size
tends to grow in large steps because of buffering by the Java stream code.
(2) Specifying a maximumSize may temporarily cause duplicate records to
appear in the repository, since some records are in the original files and
some are in the new file. Once Repacker finishes scanning all files, the
original files are removed and only the newly packed files are left. This
issue of temporarily having duplicate records in the repository will be
resolved in a future release. The default is no maximum.
mothra.repacker.archiveDirectory -- the root directory into which working
files are moved after the repacker has finished running, as a Hadoop URI. If
not specified, the working files are deleted.
""")
}
System.exit(if (full) { 0 } else { 1 })
}
def version(): Unit = {
println("Repacker " + Version.get())
System.exit(0)
}
// <<<<< PROCESS THE COMMAND LINE ARGUMENTS >>>>> //
val (switches, positionalArgs) = args.partition { _.startsWith("-") }
switches.collect {
case "-V" | "--version" => version()
case "-h" | "--help" => usage(true)
case unknown: String =>
println(s"Unknown argument '${unknown}'")
usage()
}
if ( positionalArgs.length < 4 ) {
logger.error(
s"Called with only ${positionalArgs.length} args; at least 4 required")
logger.debug(s"Args were ${positionalArgs}")
usage()
}
logger.info("\n=============================" +
" Repacker is starting =============================\n")
logger.info(s"This is Repacker ${Version.get()}")
/** Appends the names of files that currently exist in `dir` to `queue`. */
private final def scanDir(dir: HPath, queue: LinkedBlockingQueue[HPath]):
Unit =
{
logger.trace(s"Recursively scanning directory '${dir}'...")
var count = 0
val iter = try {
sourceFileSystem.listFiles(dir, true)
} catch {
case NonFatal(e) =>
// return an empty iterator
logger.warn(s"Unable to get status of '${dir}/': ${e.getMessage}")
new RemoteIterator[LocatedFileStatus](){
def hasNext: Boolean = false
def next(): LocatedFileStatus = throw new NoSuchElementException()
}
}
while ( iter.hasNext ) {
val status = iter.next()
// isFile can throw an exception
if ( Try(status.isFile()).getOrElse(false) ) {
logger.trace(s"Found file '${status.getPath()}'")
queue.put(status.getPath())
count += 1
}
}
val numFiles = if (count > 1) {
s"${count} files"
} else if (count == 1) {
s"${count} file"
} else {
"no files"
}
logger.info(
s"Found ${numFiles} to process in '${dir}' and its subdirectories")
}
/** A Runnable class to recursively scan a Hadoop directory for files and add
* those files (as hadoop.fs.Path objects) to a queue.
*/
private case class HadoopDirectoryScanner(
dir: HPath, queue: LinkedBlockingQueue[HPath])
extends Runnable
{
def run(): Unit = {
scanDir(dir, queue)
}
}
/** A Runnable class to repack the files found in a queue. Attempts to
* process files from the queue until the task represented by 'future'
* completes, where 'future' is expected to be the scanning task that is
* gathering the files that need to be repacked. */
private case class RepackFromQueue[T](
queue: LinkedBlockingQueue[HPath],
future: Future[T],
name: String = "Unnamed")
extends Runnable
{
def run(): Unit = {
logger.trace(s"Starting ${name}")
while ( !queue.isEmpty || !future.isDone ) {
Option(queue.poll(500, TimeUnit.MILLISECONDS)) match {
case None =>
case Some(f) => repack(f)
}
}
logger.trace(s"Ending ${name}")
// signal the main thread
signalQueue.add(0)
()
}
private[this] def repack(f: HPath): Unit = {
logger.trace(s"Repacking '${f}'...")
// get a decompressor for the source file 'f'
val codec = Try {
val factory = new CompressionCodecFactory(hadoopConf)
Option(factory.getCodec(f))
}.getOrElse(None)
val decompr = codec.map {c => CodecPool.getDecompressor(c)}
Try {
// pack the records found in f
for (channel <- managed({
val stream = sourceFileSystem.open(f)
codec match {
case None => Channels.newChannel(stream)
case Some(c) =>
Channels.newChannel(
c.createInputStream(stream, decompr.get))
}
})) {
packer.packStream(channel)
}
// finally, now that all that worked, add the source file to the
// global remove list
removeList.add(f)
} match {
case Failure(e) =>
logger.error(s"Failed to pack '${f}': ${e.toString}")
val sw = new StringWriter
e.printStackTrace(new PrintWriter(sw))
logger.debug(s"Failed to pack '${f}': ${sw.toString}")
case _ =>
logger.debug(s"Repacked '${f}'")
}
decompr.foreach {d => CodecPool.returnDecompressor(d) }
}
}
// ///// Repacker code begins here /////
// Hadoop configuration
implicit val hadoopConf = new Configuration()
/**
* The compression codec used for files written to HDFS. This may
* be set by setting the "mothra.repacker.compression" property. If
* that property is not set, CorePacker.DEFAULT_COMPRESSION is used.
*/
val compressCodec = {
val compressName = sys.props.get("mothra.repacker.compression").
getOrElse(CorePacker.DEFAULT_COMPRESSION)
if ( compressName == "" ) {
//logger.info("Using no compression for IPFIX files")
None
} else {
Try {
//logger.trace(s"have a name ${compressName}")
val factory = new CompressionCodecFactory(hadoopConf)
//logger.trace(s"have a factory ${factory}")
val codec = factory.getCodecByName(compressName)
//logger.trace(s"have a codec ${codec}")
// Make sure we can create a compressor, not using it here.
codec.createCompressor()
//logger.trace(s"have a compressor ${compressor}")
codec
} match {
case Success(ok) =>
//logger.info(s"Using ${compressName} compressor for IPFIX files")
Option(ok)
case Failure(e) =>
logger.error("Unable to initialize compressor" +
s" '${compressName}': ${e.toString}")
val sw = new StringWriter
e.printStackTrace(new PrintWriter(sw))
logger.info("Unable to initialize compressor" +
s" '${compressName}': ${sw.toString}")
logger.warn("Using no compression for IPFIX files")
None
}
}
}
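// For example, running the driver with -Dmothra.repacker.compression=gzip
// makes getCodecByName("gzip") above resolve to Hadoop's GzipCodec; an
// unrecognized name ends up in the Failure branch and the repacker falls
// back to writing uncompressed IPFIX files.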
//logger.trace(s"compressCodec is ${compressCodec}")
// the archive directory
val archiveDir = sys.props.get("mothra.repacker.archiveDirectory") map {
x: String => new HPath(x)
}
/** The number of hours covered by each file in the repository. This is
* determined by the "mothra.repacker.hoursPerFile" property, or
* CorePacker.DEFAULT_HOURS_PER_FILE when that property is not set.
*/
val hoursPerFile: Int = sys.props.get("mothra.repacker.hoursPerFile").
map { _.toInt }.getOrElse(CorePacker.DEFAULT_HOURS_PER_FILE)
require( hoursPerFile >= 1 && hoursPerFile <= 24 )
/**
* The (approximate) maximum file size to create. Typically a file's size
* will not exceed this value by more than the maximum size of an IPFIX
* message, 64k. The default is no maximum. When a file's size exceeds
* this value, the file is closed and a new file is started.
*/
val maximumSize = sys.props.get("mothra.repacker.maximumSize").map {_.toLong}
for ( ms <- maximumSize ) { require( ms >= 1 ) }
/** The maximum number of open files maintained by the file cache. This is
* determined by the `mothra.repacker.fileCacheSize` Java property, or by
* `CorePacker.DEFAULT_FILE_CACHE_SIZE` when the property is not set. This
* value must be no less than `CorePacker.MINIMUM_FILE_CACHE_SIZE`.
* @see CorePacker.DEFAULT_FILE_CACHE_SIZE for a full description of this
* value. */
val fileCacheSize: Int = sys.props.get("mothra.repacker.fileCacheSize").
map { _.toInt }.getOrElse(CorePacker.DEFAULT_FILE_CACHE_SIZE)
require( fileCacheSize >= CorePacker.MINIMUM_FILE_CACHE_SIZE )
// the information model
val infoModel = InfoModel.getCERTStandardInfoModel()
// the PackingLogic or PartitionerConfigurator (a .scala source file)
val runTimePackConf = new HPath(positionalArgs(0))
// the data repository (destination or output directory)
val rootDir = new HPath(positionalArgs(1))
// a local root working directory
val workDir = JPaths.get(positionalArgs(2))
// remaining argument(s) are the source directory(s)
val sourceDirs = positionalArgs.drop(3).map { new HPath(_) }
// ensure all source directories use the same file system
val sourceFileSystem = sourceDirs(0).getFileSystem(hadoopConf)
for (sd <- sourceDirs.drop(1)) {
if ( sd.getFileSystem(hadoopConf) != sourceFileSystem ) {
logger.error("source directories use different file systems")
throw new Exception("source directories use different file systems")
}
}
val packConf = PackerConfig(rootDir, workDir, None, compressCodec)
// open, load, parse, compile, and run the packing logic file
val packLogic = {
val stream = runTimePackConf.getFileSystem(hadoopConf).open(runTimePackConf)
val loader = RunTimeCodeLoader(stream)
val result = loader.load()
result match {
case pl: PackingLogic => pl
case pc: PartitionerConfigurator => PartitionerPackLogic(pc.partitioners)
case _ => throw new Exception(
s"Unexpected type returned from compiled code: result.getClass")
}
}
val packer = CorePacker(
packLogic, packConf, infoModel, hoursPerFile, fileCacheSize)
/** Implements a callback that is invoked when an IPFIX Message is
* written to an [[ExportStream]]. This callback is set by the
* EnableSizeChecker class.
* @param pk The [[CorePacker]] instance being used
* @param out The [[WorkFile]] that holds the [[ExportStream]].
*/
private[this] class CheckFileSize(pk: CorePacker, out: WorkFile)
extends ExportStream.DataWrittenCallback
{
def wrote(es: ExportStream, bytesWritten: Long): Unit = {
if ( running && out.size() >= maximumSize.get ) {
logger.trace(s"File size (${out.size()}) exceeds max (last write" +
s" ${bytesWritten}) for ${out.path}")
pk.closeWorkFile(out)
}
}
def closed(es: ExportStream, bytesWritten: Long): Unit = { }
}
/** Implements a callback that the [[CorePacker]] invokes when it creates
* a new file (which means it created a new [[WorkFile]]). This
* callback sets the CheckFileSize callback.
* @param pk The [[CorePacker]] instance being used
*/
private[this] class EnableSizeChecker(pk: CorePacker) extends CorePacker.FileEvent {
override def fileCreated(out: WorkFile): Unit = {
//logger.trace(s"Adding callback to ${out.path}")
out.dataWrittenCallback_=(new CheckFileSize(pk, out))
}
}
// enable the file-size check callbacks
if ( maximumSize.isDefined ) {
packer.addFileEvent(new EnableSizeChecker(packer))
}
/** readersPerScanner specifies the number of file reader/repacker threads
* that are invoked per scanning thread. The default is 1. This may be
* modified by setting the mothra.repacker.readersPerScanner property.
*/
var readersPerScanner = (sys.props.get("mothra.repacker.readersPerScanner").
map { _.toInt }).getOrElse(1)
require( readersPerScanner >= 1 )
/** maxScanJobs specifies the maximum number of scanning threads to start.
* Since at most one thread can scan a directory, the default is to create
* 1 scanner per srcDir. Setting this to a value larger than the number of
* source directories has no effect. This may be modified by setting the
* mothra.repacker.maxScanJobs property.
*/
val maxScanJobs = {
val msj = (sys.props.get("mothra.repacker.maxScanJobs").
map { _.toInt }).getOrElse(sourceDirs.size)
if ( msj > sourceDirs.size ) {
sourceDirs.size
} else {
msj
}
}
/** maxThreads specifies the maximum number of scanning and reader/repacker
* threads to start. By default this is
*
* (maxScanJobs * (1 + readersPerScanner))
*
* Setting it to a larger value increases readersPerScanner when that
* property was not set explicitly; otherwise it has no effect.
*
* This may be modified by setting the mothra.repacker.maxThreads
* property.
*/
val maxThreads = (sys.props.get("mothra.repacker.maxThreads").
map { _.toInt }).getOrElse(maxScanJobs * (1 + readersPerScanner))
require( maxThreads >= 1 )
// if maxThreads is larger than the computed maximum and if
// readersPerScanner was not specified, set readersPerScanner to the maximum
// possible value
if ( maxThreads > maxScanJobs * (1 + readersPerScanner)
&& sys.props.get("mothra.repacker.readersPerScanner").isEmpty )
{
val rps = maxThreads / maxScanJobs - 1
if ( rps > readersPerScanner ) {
readersPerScanner = rps
}
}
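// Worked example (illustrative numbers only): with 3 source directories and
// readersPerScanner left unset (so 1), the default maxThreads is
// 3 * (1 + 1) = 6. Setting mothra.repacker.maxThreads=12 instead raises
// readersPerScanner to 12 / 3 - 1 = 3, since 12 > 6 and readersPerScanner
// was not given explicitly.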
// log our settings
logger.info("Repacker settings::")
logger.info(s"Output compression: ${compressCodec.getOrElse("none")}")
logger.info(s"Hours covered by each file: ${hoursPerFile}")
logger.info(s"Number of top-level directories to scan: ${sourceDirs.size}")
logger.info(s"Number of scanning threads: ${maxScanJobs}")
logger.info(s"Number of repacker threads per scanner: ${readersPerScanner}")
logger.info(s"Total number or scanning and repacker threads: ${maxThreads}")
logger.info("Approximate maximum output file size: " +
maximumSize.map{ _.toString }.getOrElse("unlimited"))
logger.info(s"Maximum number of open files in the workDir: ${fileCacheSize}")
logger.info(archiveDir match {
case Some(dir) => s"Archive location for expired working files: ${dir}"
case None => "Do not archive expired working files"
})
logger.info(s"""JVM Parameters: ${ManagementFactory.getRuntimeMXBean.getInputArguments.toArray.mkString(",")}""")
// list of files that have been processed by Repacker and need to be removed
// from HDFS once all other processing is complete
val removeList = new ConcurrentLinkedQueue[HPath]()
// used by sub-threads to signal to the main thread that they have completed
private val signalQueue = new LinkedBlockingQueue[Int]()
private[this] val pool: ThreadPoolExecutor =
new ThreadPoolExecutor(
maxThreads, maxThreads, 0L, TimeUnit.SECONDS,
new LinkedBlockingQueue[Runnable](),
new PackerThreadFactory("RepackerThread-"))
/**
* How often to print log messages regarding the number of tasks, in
* seconds.
*/
val logTaskCountInterval = 5
// print task count every 5 seconds
private[this] val logTaskCountThread = Executors.newScheduledThreadPool(1,
new PackerThreadFactory("LogTaskCounts-"))
logTaskCountThread.scheduleAtFixedRate(
new Runnable() {
override def run(): Unit = {
val active = pool.getActiveCount()
val completed = pool.getCompletedTaskCount()
val total = pool.getTaskCount()
logger.info(s"Total tasks: ${total}," +
s" Completed tasks: ${completed}," +
s" Active tasks: ${active}," +
s" Queued tasks: ${total - active - completed}")
}
},
logTaskCountInterval, logTaskCountInterval, TimeUnit.SECONDS)
@volatile
var running = true
logger.info(s"Starting recursive scan of ${sourceDirs.size} director" +
(if ( 1 == sourceDirs.size ) { "y" } else { "ies" }))
// process each of the source directories
for (s <- sourceDirs) {
val files = new LinkedBlockingQueue[HPath]()
val scanner = pool.submit(HadoopDirectoryScanner(s, files))
for ( i <- 0 until readersPerScanner ) {
val name = s"Repacker #${i} for ${s}"
pool.execute(RepackFromQueue(files, scanner, name))
}
}
logger.info(
s"Waiting for ${pool.getTaskCount() - pool.getCompletedTaskCount()}" +
s" of ${pool.getTaskCount()} scanner and repacker tasks to complete...")
// all tasks are queued; shutdown the thread pool and allow the
// running/queued tasks to complete
pool.shutdown()
while ( !pool.isTerminated ) {
logger.trace(s"ActiveCount: ${pool.getActiveCount()}," +
s" CompletedCount: ${pool.getCompletedTaskCount()}," +
s" TaskCount: ${pool.getTaskCount()}")
signalQueue.poll(3, TimeUnit.SECONDS)
signalQueue.clear()
}
running = false
logger.debug("All repacking tasks have completed")
logTaskCountThread.shutdown()
logTaskCountThread.awaitTermination(1, TimeUnit.SECONDS)
// flush the output files
logger.trace("Flushing output files")
packer.flushAllWorkFiles()
// close all output files and move them into the repository
logger.debug("Moving new files into the repository")
packer.shutdown()
// delete all files in the removeList
if ( !removeList.isEmpty ) {
logger.debug("Removing files that were repacked")
do {
sourceFileSystem.delete(removeList.poll(), false)
} while ( !removeList.isEmpty )
}
logger.debug("Repacker is done")
}
// @LICENSE_FOOTER@
//
// Copyright 2015-2022 Carnegie Mellon University. All Rights Reserved.
//
// This material is based upon work funded and supported by the
// Department of Defense and Department of Homeland Security under
// Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the
// operation of the Software Engineering Institute, a federally funded
// research and development center sponsored by the United States
// Department of Defense. The U.S. Government has license rights in this
// software pursuant to DFARS 252.227.7014.
//
// NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING
// INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON
// UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR
// IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF
// FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS
// OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT
// MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT,
// TRADEMARK, OR COPYRIGHT INFRINGEMENT.
//
// Released under a GNU GPL 2.0-style license, please see LICENSE.txt or
// contact [email protected] for full terms.
//
// [DISTRIBUTION STATEMENT A] This material has been approved for public
// release and unlimited distribution. Please see Copyright notice for
// non-US Government use and distribution.
//
// Carnegie Mellon(R) and CERT(R) are registered in the U.S. Patent and
// Trademark Office by Carnegie Mellon University.
//
// This software includes and/or makes use of third party software each
// subject to its own license as detailed in LICENSE-thirdparty.txt
//
// DM20-1143