///////////////////////////////////////////////////////////////////////////////
Copyright (c) 2021, 2024 Oracle and/or its affiliates.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
///////////////////////////////////////////////////////////////////////////////
ifndef::rootdir[:rootdir: {docdir}/../..]
// tag::intro[]
This guide describes how to create a sample Helidon {intro-project-name} project
that can be used to run some basic examples using both built-in and custom {metrics} with Helidon.
== What You Need
For this 30-minute tutorial, you will need the following:
include::{rootdir}/includes/prerequisites.adoc[tag=prerequisites-helm]
// end::intro[]
// tag::create-sample-project[]
=== Create a Sample Helidon {h1-prefix} Project
Use the Helidon {h1-prefix} Maven archetype to create a simple project that can be used for the examples in this guide.
[source,bash,subs="attributes+"]
.Run the Maven archetype
----
mvn -U archetype:generate -DinteractiveMode=false \
-DarchetypeGroupId=io.helidon.archetypes \
-DarchetypeArtifactId=helidon-quickstart-{flavor-lc} \
-DarchetypeVersion={helidon-version} \
-DgroupId=io.helidon.examples \
-DartifactId=helidon-quickstart-{flavor-lc} \
-Dpackage=io.helidon.examples.quickstart.{flavor-lc}
----
// end::create-sample-project[]
// tag::using-built-in-metrics-intro[]
=== Using the Built-In {metrics_uc}
Helidon provides three built-in scopes of metrics: base, vendor, and application. Here are the metric endpoints:
1. `{metrics-endpoint}?scope=base` - Base {metrics}
ifdef::mp-flavor[as specified by the MicroProfile Metrics specification]
2. `{metrics-endpoint}?scope=vendor` - Helidon-specific {metrics}
3. `{metrics-endpoint}?scope=application` - Application-specific {metrics}
Applications can also add their own custom scopes simply by specifying a custom scope name when registering a {metric}.
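For example, once your application is running and registers {metrics} under a hypothetical custom scope named `myscope`, you could retrieve just those {metrics} with the scope query parameter (a sketch; substitute the scope name your application actually uses):
[source,bash,subs="attributes+"]
.Get metrics for a hypothetical custom scope:
----
curl 'http://localhost:8080{metrics-endpoint}?scope=myscope'
----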
NOTE: The `{metrics-endpoint}` endpoint returns data for all scopes.
The built-in {metrics} fall into these categories:
. JVM behavior (in the base scope), and
. basic key performance indicators for request handling (in the vendor scope).
A later section describes the <<basic-and-extended-kpi,key performance indicator (KPI) {metrics}>> in detail.
The following example demonstrates how to use the other built-in {metrics}. All examples are executed
from the root directory of your project (helidon-quickstart-{flavor-lc}).
// end::using-built-in-metrics-intro[]
// tag::metrics-dependency[]
[source,xml]
----
<dependency>
    <groupId>io.helidon.metrics</groupId>
    <artifactId>helidon-metrics</artifactId>
</dependency>
----
// end::metrics-dependency[]
// tag::build-and-run-intro[]
[source,bash,subs="attributes+"]
.Build the application and then run it:
----
mvn package
java -jar target/helidon-quickstart-{flavor-lc}.jar
----
NOTE: Metrics output can be returned in either text format (the default) or JSON. The text format uses the OpenMetrics (Prometheus) text format;
see https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-details.
[source,bash,subs="attributes+"]
.Verify the metrics endpoint in a new terminal window:
----
curl http://localhost:8080{metrics-endpoint}
----
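The command above returns the text format because it is the default; to request it explicitly, you can send a matching `Accept` header (a sketch showing the same request with explicit content negotiation):
[source,bash,subs="attributes+"]
.Request the text format explicitly:
----
curl -H "Accept: text/plain" http://localhost:8080{metrics-endpoint}
----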
// end::build-and-run-intro[]
///////////////////////////////////////////////////////////////////////////////
// tag::metrics-prometheus-output[]
# TYPE base:classloader_current_loaded_class_count counter
# HELP base:classloader_current_loaded_class_count Displays the number of classes that are currently loaded in the Java virtual machine.
base:classloader_current_loaded_class_count 7511
# TYPE base:classloader_total_loaded_class_count counter
# HELP base:classloader_total_loaded_class_count Displays the total number of classes that have been loaded since the Java virtual machine has started execution.
base:classloader_total_loaded_class_count 7512
// end::metrics-prometheus-output[]
///////////////////////////////////////////////////////////////////////////////
// tag::curl-metrics-json[]
You can get the same data in JSON format.
[source,bash,subs="attributes+"]
.Verify the metrics endpoint with an HTTP accept header:
----
curl -H "Accept: application/json" http://localhost:8080{metrics-endpoint}
----
// end::curl-metrics-json[]
// tag::base-metrics-json-output[]
"gc.total;name=G1 Young Generation": 1,
"cpu.systemLoadAverage": 4.451171875,
"classloader.loadedClasses.count": 3582,
"thread.count": 18,
"classloader.unloadedClasses.total": 0,
"jvm.uptime": 36.9478,
"gc.time;name=G1 Young Generation": 0,
"memory.committedHeap": 541065216,
"thread.max.count": 19,
"cpu.availableProcessors": 8,
"classloader.loadedClasses.total": 3582,
"thread.daemon.count": 16,
"memory.maxHeap": 8589934592,
"memory.usedHeap": 20491248
// end::base-metrics-json-output[]
// tag::vendor-metrics-json-output[]
"vendor": {
"requests.count": 3
}
// end::vendor-metrics-json-output[]
// tag::get-single-metric[]
You can get a single metric by specifying the scope and name as query parameters in the URL.
[source,bash,subs="attributes+"]
.Get the Helidon `requests.count` {metric}:
----
curl -H "Accept: application/json" 'http://localhost:8080{metrics-endpoint}?scope=vendor&name=requests.count'
----
[source,json]
.JSON response:
----
{
"requests.count": 6
}
----
// end::get-single-metric[]
// tag::built-in-metrics-discussion[]
The `base` {metrics} illustrated above provide some insight into the behavior of the JVM in which the server runs.
The `vendor` {metric} shown above gives an idea of the request traffic the server is handling. See the <<basic-and-extended-kpi,later section>> for more information on the basic and extended key performance indicator {metrics}.
// end::built-in-metrics-discussion[]
// tag::controlling[]
// tag::controlling-intro[]
// tag::controlling-intro-part-1[]
=== Controlling Metrics Behavior
By adding a `metrics` section to your application configuration, you can control how the Helidon metrics subsystem behaves in any of several ways.
* <<disabling-entirely,Disable the metrics subsystem entirely>>.
// end::controlling-intro-part-1[]
// tag::controlling-intro-part-2[]
* Select whether to collect <<basic-and-extended-kpi,basic and extended key performance indicator {metrics}>>.
// end::controlling-intro-part-2[]
// end::controlling-intro[]
// tag::disabling-whole[]
// tag::disabling-whole-intro[]
[[disabling-entirely]]
==== Disabling Metrics Subsystem Entirely
You can disable the metrics subsystem entirely using configuration:
ifdef::mp-flavor[]
[source,properties]
.Configuration properties file disabling metrics
----
metrics.enabled=false
----
endif::mp-flavor[]
ifdef::se-flavor[]
[source,yaml]
.Configuration properties file disabling metrics
----
server:
  features:
    observe:
      observers:
        metrics:
          enabled: false
----
endif::se-flavor[]
// end::disabling-whole-intro[]
// tag::disabling-whole-summary[]
With metrics processing disabled, Helidon never updates any {metrics} and the `{metrics-endpoint}` endpoints respond with `404`.
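One quick way to confirm that the setting took effect is to check the response status line (a sketch, assuming the application listens on port 8080); with the subsystem disabled, the first line of the response should report `404`:
[source,bash,subs="attributes+"]
.Check the HTTP status of the metrics endpoint:
----
curl -s -i http://localhost:8080{metrics-endpoint} | head -n 1
----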
// end::disabling-whole-summary[]
// end::disabling-whole[]
// tag::controlling-by-component[]
// tag::controlling-by-component-intro[]
[[enabling-disabling-by-component]]
==== Enabling and Disabling Metrics Usage by a Component
Helidon contains several components and integrations which register and update metrics.
Depending on how the component is written, you might be able to disable just that component's use of metrics:
[source,properties]
.Configuration properties file disabling a component's use of metrics
----
some-component.metrics.enabled=false
----
Check the documentation for a specific component to find out whether that component uses metrics and whether it allows you to disable that use.
// end::controlling-by-component-intro[]
// tag::controlling-by-component-summary[]
If you disable a component's use of metrics, Helidon does not register the component's metrics in the visible metrics registries, nor do those metrics ever update their values. The response from the `{metrics-endpoint}` endpoint excludes that component's metrics.
Note that if you disable metrics processing entirely, no component updates its metrics regardless of any component-level metrics settings.
// end::controlling-by-component-summary[]
// end::controlling-by-component[]
// tag::controlling-by-registry[]
// tag::controlling-by-registry-intro[]
[[enabling-disabling-by-scope]]
==== Controlling Metrics By Registry Type and Metric Name
You can control the collection and reporting of metrics by registry type and metric name within registry type.
===== Disabling All Metrics of a Given Registry Type
To disable all metrics in a given registry type (application, vendor, or base), add one or more groups to the configuration:
[source,properties]
.Disabling `base` and `vendor` metrics (properties format)
----
metrics.registries.0.type = base
metrics.registries.0.enabled = false
metrics.registries.1.type = vendor
metrics.registries.1.enabled = false
----
[source,yaml]
.Disabling `base` and `vendor` metrics (YAML format)
----
metrics:
  registries:
    - type: base
      enabled: false
    - type: vendor
      enabled: false
----
===== Controlling Metrics by Metric Name
You can be even more selective. Within a registry type you can configure up to two regular expression patterns:
* one matching metric names to _exclude_, and
* one matching metric names to _include_.
Helidon updates and reports a metric only if two conditions hold:
* the metric name _does not_ match the `exclude` regex pattern (if you define one), and
* either
** there is no `include` regex pattern, or
** the metric name matches the `include` pattern.
[CAUTION]
====
Make sure any `include` regex pattern you specify matches _all_ the metric names you want to capture.
====
Suppose your application creates and updates a group of metrics with names such as `myapp.xxx.queries`, `myapp.xxx.creates`, `myapp.xxx.updates`, and `myapp.xxx.deletes` where `xxx` can be either `supplier` or `customer`.
The following example gathers all metrics _except_ those from your application regarding suppliers:
[source,properties]
.Disabling metrics by name (properties format)
----
metrics.registries.0.type = application
metrics.registries.0.filter.exclude = myapp\.supplier\..*
----
The following settings select the particular subset of the metrics created in your application code representing updates of customers and suppliers:
[source,properties]
.Enabling metrics by name (properties format)
----
metrics.registries.0.type = application
metrics.registries.0.filter.include = myapp\..*\.updates
----
If you use the YAML configuration format, enclose the regex patterns in single-quote marks:
[source,yaml]
.Enabling metrics by name (YAML format)
----
metrics:
registries:
- type: application
filter:
include: 'myapp\..*\.updates'
----
The next example selects only your application's metrics while excluding those which refer to deletions:
[source,properties]
.Combining `include` and `exclude`
----
metrics.registries.0.type = application
metrics.registries.0.filter.include = myapp\..*
metrics.registries.0.filter.exclude = myapp\..*\.deletes
----
Helidon would not update or report the metric `myapp.supplier.deletes`, for example.
To include metrics from your application for both updates and queries (but not for other operations), you could change the settings in the previous example to this:
[source,properties]
----
metrics.registries.0.type = application
metrics.registries.0.filter.include = myapp\..*\.updates|myapp\..*\.queries
metrics.registries.0.filter.exclude = myapp\..*\.deletes
----
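After restarting with any of these filter settings, you can list the application-scope {metrics} to confirm that only the expected names appear (a quick check; the reported names depend on the {metrics} your code registers):
[source,bash,subs="attributes+"]
.List the application-scope metrics:
----
curl -H "Accept: application/json" 'http://localhost:8080{metrics-endpoint}?scope=application'
----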
// end::controlling-by-registry-intro[]
// end::controlling-by-registry[]
// tag::KPI[]
[[basic-and-extended-kpi]]
==== Collecting Basic and Extended Key Performance Indicator (KPI) Metrics
Any time you include the Helidon metrics module in your application, Helidon tracks a basic performance indicator {metric}: a `Counter` of all requests received (`requests.count`).
Helidon {h1-prefix} also includes additional, extended KPI metrics which are disabled by default:
* current number of requests in-flight - a `Gauge` (`requests.inFlight`) of requests currently being processed
* long-running requests - a `Counter` (`requests.longRunning`) measuring the total number of requests which take at least a given amount of time to complete; configurable, defaults to 10000 milliseconds (10 seconds)
* load - a `Counter` (`requests.load`) measuring the number of requests worked on (as opposed to received)
* deferred - a `Gauge` (`requests.deferred`) measuring delayed request processing (work on a request was delayed after Helidon received the request)
You can enable and control these {metrics} using configuration:
ifdef::mp-flavor[]
[source,properties]
.Configuration properties file controlling extended KPI {metrics}
----
metrics.key-performance-indicators.extended = true
metrics.key-performance-indicators.long-running.threshold-ms = 2000
----
endif::mp-flavor[]
ifdef::se-flavor[]
[source,yaml]
----
server:
  features:
    observe:
      observers:
        metrics:
          key-performance-indicators:
            extended: true
            long-running:
              threshold-ms: 2000
----
endif::se-flavor[]
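After restarting the application with this configuration, you can list the vendor-scope {metrics} to confirm that the extended KPI {metrics} (such as `requests.inFlight`) are being reported (a quick check; exact values vary from run to run):
[source,bash,subs="attributes+"]
.Check that the extended KPI metrics appear:
----
curl -H "Accept: application/json" 'http://localhost:8080{metrics-endpoint}?scope=vendor'
----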
// end::KPI[]
// end::controlling[]
// tag::metrics-metadata[]
=== Metrics Metadata
Each {metric} has associated metadata that includes:
1. name: The name of the {metric}.
2. units: The units of the {metric}, such as time (seconds, milliseconds) or size (bytes, megabytes).
3. description: A description of the {metric}.
You can get the metadata for any scope, such as `{metrics-endpoint}?scope=base`, as shown below:
[source,bash, subs="attributes+"]
.Get the metrics metadata using HTTP OPTIONS method:
----
curl -X OPTIONS -H "Accept: application/json" 'http://localhost:8080{metrics-endpoint}?scope=base'
----
[source,json]
.JSON response (truncated):
----
{
"classloader.loadedClasses.count": {
"type": "gauge",
"description": "Displays the number of classes that are currently loaded in the Java virtual machine."
},
"jvm.uptime": {
"type": "gauge",
"unit": "seconds",
"description": "Displays the start time of the Java virtual machine in milliseconds. This attribute displays the approximate time when the Java virtual machine started."
},
"memory.usedHeap": {
"type": "gauge",
"unit": "bytes",
"description": "Displays the amount of used heap memory in bytes."
}
}
----
// end::metrics-metadata[]
// tag::k8s-and-prometheus-integration[]
=== Integration with Kubernetes and Prometheus
==== Kubernetes Integration
The following example shows how to integrate the Helidon {h1-prefix} application with Kubernetes.
[source,bash,subs="attributes+"]
.Stop the application and build the docker image:
----
docker build -t helidon-metrics-{flavor-lc} .
----
[source,yaml,subs="attributes+"]
.Create the Kubernetes YAML specification, named `metrics.yaml`, with the following content:
----
kind: Service
apiVersion: v1
metadata:
  name: helidon-metrics # <1>
  labels:
    app: helidon-metrics
  annotations:
    prometheus.io/scrape: "true" # <2>
spec:
  type: NodePort
  selector:
    app: helidon-metrics
  ports:
    - port: 8080
      targetPort: 8080
      name: http
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: helidon-metrics
spec:
  replicas: 1 # <3>
  selector:
    matchLabels:
      app: helidon-metrics
  template:
    metadata:
      labels:
        app: helidon-metrics
        version: v1
    spec:
      containers:
        - name: helidon-metrics
          image: helidon-metrics-{flavor-lc}
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
----
<1> A service of type `NodePort` that serves the default routes on port `8080`.
<2> An annotation that will allow Prometheus to discover and scrape the application pod.
<3> A deployment with one replica of a pod.
[source,bash]
.Create and deploy the application into Kubernetes:
----
kubectl apply -f ./metrics.yaml
----
[source,bash]
.Get the service information:
----
kubectl get service/helidon-metrics
----
[source,bash]
----
NAME              TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
helidon-metrics   NodePort   10.99.159.2   <none>        8080:31143/TCP   8s # <1>
----
<1> A service of type `NodePort` that serves the default routes on port `31143`.
[source,bash]
.Verify the metrics endpoint using port `31143` (your port will likely be different):
----
curl http://localhost:31143/metrics
----
NOTE: Leave the application running in Kubernetes since it will be used for Prometheus integration.
==== Prometheus Integration
The metrics service that you just deployed into Kubernetes is already annotated with `prometheus.io/scrape: "true"`, which allows
Prometheus to discover the service and scrape its metrics. This example shows how to install Prometheus
into Kubernetes, then verify that it discovered the Helidon metrics in your application.
[source,bash]
.Install Prometheus and wait until the pod is ready:
----
helm install stable/prometheus --name metrics
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl get pod $POD_NAME
----
You will see output similar to the following. Repeat the `kubectl get pod` command until you see `2/2` and `Running`. This may take up to one minute.
[source,bash]
----
metrics-prometheus-server-5fc5dc86cb-79lk4 2/2 Running 0 46s
----
[source,bash]
.Create a port-forward, so you can access the server URL:
----
kubectl --namespace default port-forward $POD_NAME 7090:9090
----
Now open your browser and navigate to `http://localhost:7090/targets`. Search for helidon on the page, and you will see your
Helidon application as one of the Prometheus targets.
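If you prefer the command line, you can also query the Prometheus targets API through the same port-forward and look for the Helidon pod (a sketch; the exact labels in the output depend on your cluster):
[source,bash]
.Check the Prometheus targets from the command line:
----
curl -s http://localhost:7090/api/v1/targets | grep helidon
----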
==== Final Cleanup
You can now delete the Kubernetes resources that were just created during this example.
[source,bash]
.Delete the Prometheus Kubernetes resources:
----
helm delete --purge metrics
----
[source,bash]
.Delete the application Kubernetes resources:
----
kubectl delete -f ./metrics.yaml
----
// end::k8s-and-prometheus-integration[]