Friday, August 11, 2017

Hawkular Alerts with Prometheus, ElasticSearch, Kafka

Federated Alerts

Hawkular Alerts aims to be a federated alerting system. That is to say, it can fire alerts and send notifications that are triggered by data coming from a number of third-party external systems.
Thus, Hawkular Alerts is more than just an alerting system for use with Hawkular Metrics; it can be used independently of Hawkular Metrics, meaning you do not have to be using Hawkular Metrics at all to take advantage of the functionality Hawkular Alerts provides.
This is a key differentiator between Hawkular Alerts and other alerting systems. Most alerting systems only alert on data coming from their respective storage systems (e.g. the Prometheus Alert Engine alerts only on Prometheus data). Hawkular Alerts, on the other hand, can trigger alerts based on data from various systems.

Alerts vs. Events

Before we begin, a quick clarification is in order. When it is said that Hawkular Alerts fires an "alert," it means some data came into Hawkular Alerts that matched some conditions, which triggered the creation of an alert in the Hawkular Alerts backend storage (which can then trigger additional actions such as sending emails or calling a webhook). An "alert" typically refers to a problem that has been detected and that someone should take action to fix. An alert has a lifecycle attached to it - alerts are opened, then acknowledged by some user who will hopefully fix the problem, then resolved when the problem can be considered closed.
However, there can be conditions that occur that do not represent problems but nevertheless are events you want recorded. There is no lifecycle associated with events and no additional actions are triggered by events, but "events" are fired by Hawkular Alerts in the same general manner as "alerts" are.
In this document, when it is said that Hawkular Alerts can fire "alerts" based on data coming from external third-party systems such as Prometheus, ElasticSearch, and Kafka, this also means events can be fired as well as alerts. In other words, you can record any event (not just a "problem", aka "alert") that can be gleaned from the data coming from these external third-party systems.
See alerting philosophy for more.

Demo

There is a recorded demo found here that illustrates what this document describes. After you read this document, you should watch the demo to gain further clarity on what is being explained. The demo uses the multiple-sources example, which you can run yourself, found here (note: at the time of writing, this example is only found in the next branch, to be merged into master soon).

Prometheus

Hawkular Alerts can take the results of Prometheus metric queries and use the queried data for triggers that can fire alerts.
This Hawkular Alerts trigger will fire an alert (and send an email) when a Prometheus metric indicates our store’s inventory of widgets is consistently low (as defined by the Prometheus query you see in the "expression" field of the condition):
"trigger":{
   "id": "low-stock-prometheus-trigger",
   "name": "Low Stock",
   "description": "The number of widgets in stock is consistently low.",
   "severity": "MEDIUM",
   "enabled": true,
   "tags": {
      "prometheus": "Prometheus"
   },
   "actions":[
      {
      "actionPlugin": "email",
      "actionId": "email-notify-owner"
      }
   ]
},
"conditions":[
   {
      "type": "EXTERNAL",
      "alerterId": "prometheus",
      "dataId": "prometheus-dataid",
      "expression": "rate(products_in_inventory{product=\"widget\"}[30s])<2 class="pl-pds" span="" style="box-sizing: border-box; color: #032f62;">"
   }
 ]
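
For reference, a complete trigger definition like the one above can be loaded through the Hawkular Alerts REST API. Below is a minimal Go sketch that POSTs the definition; the port, the endpoint path, and the tenant name are assumptions based on a default local hAlerts install, so adjust them for your deployment:

   package main

   import (
      "bytes"
      "fmt"
      "net/http"
   )

   func main() {
      // The full trigger definition (trigger + conditions) shown above.
      triggerJSON := []byte(`{
         "trigger": {
            "id": "low-stock-prometheus-trigger",
            "name": "Low Stock",
            "enabled": true,
            "tags": { "prometheus": "Prometheus" }
         },
         "conditions": [{
            "type": "EXTERNAL",
            "alerterId": "prometheus",
            "dataId": "prometheus-dataid",
            "expression": "rate(products_in_inventory{product=\"widget\"}[30s])<2"
         }]
      }`)

      // Assumption: hAlerts listens on localhost:8080 and accepts a full
      // trigger import on this path; the tenant header is required.
      req, err := http.NewRequest("POST",
         "http://localhost:8080/hawkular/alerts/triggers/trigger",
         bytes.NewReader(triggerJSON))
      if err != nil {
         panic(err)
      }
      req.Header.Set("Content-Type", "application/json")
      req.Header.Set("Hawkular-Tenant", "my-organization")

      resp, err := http.DefaultClient.Do(req)
      if err != nil {
         panic(err)
      }
      defer resp.Body.Close()
      fmt.Println("create trigger status:", resp.Status)
   }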

Integration with Prometheus Alert Engine

As a side note, though not demonstrated in the example, Hawkular Alerts also has an integration with Prometheus' own Alert Engine. This means alerts generated by Prometheus itself can be forwarded to Hawkular Alerts, which can, in turn, use them for additional processing, perhaps combining them with data that is unavailable to Prometheus in order to fire other alerts. For example, Hawkular Alerts can take Prometheus alerts as input and feed them into other conditions that trigger on the Prometheus alert along with ElasticSearch logs.

ElasticSearch

Hawkular Alerts can examine logs stored in ElasticSearch and trigger alerts based on patterns that match within the ElasticSearch log messages.
This Hawkular Alerts trigger will fire an alert (and send an email) when ElasticSearch logs indicate sales are being lost due to inventory being out of stock (as defined by the condition, which looks for a log category of "FATAL" - which happens to mean a lost sale in the case of the store's logs). Notice dampening is enabled on this trigger - the alert will only fire after the logs indicate lost sales three consecutive times.
"trigger":{
   "id": "lost-sale-elasticsearch-trigger",
   "name": "Lost Sale",
   "description": "A sale was lost due to inventory out of stock.",
   "severity": "CRITICAL",
   "enabled": true,
   "tags": {
      "Elasticsearch": "Localhost instance"
   },
   "context": {
      "timestamp": "@timestamp",
      "filter": "{\"match\":{\"category\":\"inventory\"}}",
      "interval": "10s",
      "index": "store",
      "mapping": "level:category,@timestamp:ctime,message:text,category:dataId,index:tags"
   },
   "actions":[
      {
      "actionPlugin": "email",
      "actionId": "email-notify-owner"
      }
   ]
},
"dampenings": [
   {
      "triggerMode": "FIRING",
      "type":"STRICT",
      "evalTrueSetting": 3
   }
],
"conditions":[
   {
      "type": "EVENT",
      "dataId": "inventory",
      "expression": "category == 'FATAL'"
   }
]
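
To see how the "mapping" in the context above lines up with real documents, here is a minimal Go sketch that indexes one matching log document into ElasticSearch (the index URL, document type, and field values are assumptions for illustration; the "category" field becomes the dataId "inventory" and the "level" field becomes the event category that the condition tests):

   package main

   import (
      "bytes"
      "fmt"
      "net/http"
      "time"
   )

   func main() {
      // A log document shaped to match the trigger: level FATAL in the
      // inventory category means a lost sale in the store's logs.
      doc := fmt.Sprintf(`{
         "@timestamp": %q,
         "level": "FATAL",
         "category": "inventory",
         "message": "Lost sale: widget out of stock"
      }`, time.Now().Format(time.RFC3339))

      // Assumption: a local ElasticSearch with a "store" index and a
      // "logs" document type; adjust for your ES version and deployment.
      resp, err := http.Post("http://localhost:9200/store/logs",
         "application/json", bytes.NewBufferString(doc))
      if err != nil {
         panic(err)
      }
      defer resp.Body.Close()
      fmt.Println("index status:", resp.Status)
   }

Because of the STRICT dampening above, three such matching documents must be seen before the alert actually fires.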

Kafka

Hawkular Alerts can examine data retrieved from Kafka message streams and trigger alerts based on that Kafka data.
This Hawkular Alerts trigger will fire an alert when data on a Kafka topic indicates a large purchase was made to fill the store's inventory (as defined by the condition, which evaluates to true when any number over 17 is received on the Kafka topic):
"trigger":{
   "id": "large-inventory-purchase-kafka-trigger",
   "name": "Large Inventory Purchase",
   "description": "A large purchase was made to restock inventory.",
   "severity": "LOW",
   "enabled": true,
   "tags": {
      "Kafka": "Localhost instance"
   },
   "context": {
      "topic": "store",
      "kafka.bootstrap.servers": "localhost:9092",
      "kafka.group.id": "hawkular-alerting"
   },
   "actions":[ ]
},
"conditions":[
   {
      "type": "THRESHOLD",
      "dataId": "store",
      "operator": "GT",
      "threshold": 17
   }
]
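
To exercise this trigger, publish a qualifying value onto the topic. Below is a minimal Go sketch using the sarama Kafka client library (the library choice is an assumption; any producer that sends a number greater than 17 to the "store" topic on localhost:9092 will do):

   package main

   import (
      "fmt"
      "log"

      "github.com/Shopify/sarama"
   )

   func main() {
      config := sarama.NewConfig()
      config.Producer.Return.Successes = true // required for SyncProducer

      // The same broker the trigger context points at.
      producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
      if err != nil {
         log.Fatal(err)
      }
      defer producer.Close()

      // Any numeric value over 17 on the "store" topic satisfies the
      // THRESHOLD condition (operator GT, threshold 17).
      msg := &sarama.ProducerMessage{
         Topic: "store",
         Value: sarama.StringEncoder("18"),
      }
      if _, _, err := producer.SendMessage(msg); err != nil {
         log.Fatal(err)
      }
      fmt.Println("sent inventory purchase of 18 to topic 'store'")
   }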

But, Wait! There’s More!

The above only covers the different ways Hawkular Alerts retrieves data for use in determining what alerts to fire. What is not covered here is the fact that Hawkular Alerts can stream data in the other direction as well - Hawkular Alerts can send alert and event data to things like an ElasticSearch server or a Kafka broker. There are additional examples (mentioned below) that demonstrate this capability.
The point is that Hawkular Alerts should be seen as a common alerting engine shared by multiple third-party systems, usable as both a consumer and a producer - a consumer of data from external third-party systems (which is used to fire alerts and events) and a producer that sends notifications of alerts and events to external third-party systems.

More Examples

Take a look at the Hawkular Alerts examples for more demonstrations of using data from external systems to trigger alerts. (Note: at the time of writing, some examples, such as the Kafka ones, are currently in the next branch.)

Tuesday, August 1, 2017

Hawkular Alerts 2.0 UI WIP

Hawkular Alerts 2.0 UI

A quick 10-minute demo has been published to illustrate the progress the hAlerts team has made on the new UI.

This is a work-in-progress, and things will change, but the UI is actually functional now.

The video is best viewed in full-screen mode. The video link is: https://www.youtube.com/watch?v=bb9SaJudPlU



Monday, November 21, 2016

Hawkular OpenShift Demo - Running Outside OpenShift

Below is a quick 8-minute demo of the Hawkular OpenShift Agent.

For more information, see: https://github.com/hawkular/hawkular-openshift-agent


Monday, November 14, 2016

Hawkular OpenShift Agent - First Demo

Below is a quick 10-minute demo of the Hawkular OpenShift Agent.

For more information, see: https://github.com/hawkular/hawkular-openshift-agent




Thursday, October 20, 2016

Hawkular OpenShift Agent is Born

A new Hawkular agent has been published on github.com - Hawkular OpenShift Agent.

It is implemented in Go and the main use case for which it was created is to be able to collect metrics from OpenShift pods. The idea is you run Hawkular OpenShift Agent (HOSA) on an OpenShift node and HOSA will listen for pods to come up and down on the node. As pods come online, the pods will tell the agent what (if any) metrics should be collected. As pods go down, the agent will stop collecting metrics from all endpoints running on that pod.

Today, only Prometheus endpoints (using either the binary or text protocol) can be scraped, with Jolokia endpoints next on the list to be implemented. Once that is done, HOSA will be able to collect metrics from either type of endpoint.

For more information - how to build and configure it - refer to the Hawkular OpenShift Agent README.

Monday, October 17, 2016

Pulling in a Go Dependency From a Fork, Branch, or Github PR using Glide

While writing a Go app, I decided to use Glide as the dependency management system. (I tried Godep first, but even on the first day of using it my dependencies were getting screwed up or lost and builds would mysteriously break, so I decided to switch to Glide, which seems much better.)

I was using the Hawkular Go Client library because I needed to write metric data to Hawkular Metrics. So in my glide.yaml, I had this:
- package: github.com/hawkular/hawkular-client-go
  subpackages:
  - metrics
This simply tells Glide that I want to use the latest master of the client library. (I'm not using a versioned library yet. I guess I should start doing that.)
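
For context on what the client library is doing under the covers, writing a metric datapoint boils down to a simple REST call. Here is a minimal Go sketch that writes one gauge datapoint directly against the Hawkular Metrics REST API, without the client library (the port, endpoint path, metric ID, and tenant name are assumptions for a default local install):

   package main

   import (
      "bytes"
      "fmt"
      "net/http"
      "time"
   )

   func main() {
      // Hawkular Metrics expects timestamps in milliseconds since the epoch.
      body := fmt.Sprintf(`[{"timestamp": %d, "value": 42.0}]`,
         time.Now().UnixNano()/int64(time.Millisecond))

      req, err := http.NewRequest("POST",
         "http://localhost:8080/hawkular/metrics/gauges/my.metric/raw",
         bytes.NewBufferString(body))
      if err != nil {
         panic(err)
      }
      req.Header.Set("Content-Type", "application/json")
      req.Header.Set("Hawkular-Tenant", "my-tenant") // tenant is required

      resp, err := http.DefaultClient.Do(req)
      if err != nil {
         panic(err)
      }
      defer resp.Body.Close()
      fmt.Println("write status:", resp.Status)
   }
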

Anyway, I needed to add a feature to the Hawkular Go Client. So I forked the git repository, created a branch in my fork where I implemented the new feature, and submitted a Github pull request from my own branch. Rather than wait for the PR to be merged, I wanted Glide to pull in my own branch in my forked repo so I could immediately begin using the new feature. It was as simple as adding three lines to my glide.yaml and running "glide update":
- package: github.com/hawkular/hawkular-client-go
  repo: git@github.com:jmazzitelli/hawkular-client-go.git
  vcs: git
  ref: issue-8
  subpackages:
  - metrics
This tells Glide that the Hawkular Go client package is now located at a different repository (my fork located at github.com) under a branch called "issue-8".

Running "glide update" pulled in the Hawkuar Go client from my fork's branch and placed it in my vendors/ directory. I can now start using my new feature in my Go app without waiting for the PR to be merged. Once the PR is merged, I can remove those three lines, "glide update" again, and things should be back to normal.


Wednesday, October 5, 2016

Installing Open Shift Origin and Go for a Development Environment

The Hawkular team is doing some work to develop a Hawkular Agent for running within an Open Shift environment. Since the Go Programming Language ("Go") seems to be the language du jour and many things within the Open Shift infrastructure are developed in Go, the first attempt at this new Hawkular Agent will also be implemented in Go.

Because of this, I needed to get a development environment up and running that included both Open Shift and Go. This blog is simply my notes on how I did this.

INSTALL "GO"

Go isn’t necessary to install and run Open Shift, but because I want to write a Go application, I need it.

INSTALL CORE GO SYSTEM

* Create the directory where Go is to be installed

   mkdir $HOME/bin/go-install

* Create the Go workspace (GOPATH will end up pointing to here)

   mkdir $HOME/source/go
   mkdir $HOME/source/go/bin
   mkdir $HOME/source/go/pkg
   mkdir $HOME/source/go/src

* Download go package from https://golang.org/dl/

   cd /tmp
   wget https://storage.googleapis.com/golang/go1.7.1.linux-amd64.tar.gz

* Unpack the tar in the $HOME/bin/go-install - should end up inside a subdirectory "go"

   cd $HOME/bin/go-install
   tar xzvf /tmp/go*.tar.gz

* Set up your shell environment so you can run Go
** Put the following inside .bashrc

   export GOROOT=${HOME}/bin/go-install/go
   export PATH=${PATH}:${GOROOT}/bin
   export GOPATH=${HOME}/source/go

* Make sure Go is working before going on - run "go version" to confirm. I like to log completely out and then back in again so my .bashrc gets loaded for all my shells.

INSTALL ADDITIONAL GO TOOLS

* Download and install guru

   cd $HOME/bin/go-install/go/bin
   go get golang.org/x/tools/cmd/guru
   go build golang.org/x/tools/cmd/guru


** The above puts the "guru" executable in your current directory (which should be $HOME/bin/go-install/go/bin)

* Download and install gocode

   cd $HOME/bin/go-install/go/bin
   go get github.com/nsf/gocode
   go build github.com/nsf/gocode


** The above puts the "gocode" executable in your current directory (which should be $HOME/bin/go-install/go/bin)

* Download and install godef

   cd $HOME/bin/go-install/go/bin
   go get github.com/rogpeppe/godef
   go build github.com/rogpeppe/godef


** The above puts the "godef" executable in your current directory (which should be $HOME/bin/go-install/go/bin)

* Download and install godep

   cd $HOME/bin/go-install/go/bin
   go get github.com/tools/godep
   go build github.com/tools/godep


** The above puts the "godep" executable in your current directory (which should be $HOME/bin/go-install/go/bin)

* If you use Eclipse, install GoClipse:
** Plugin Site: http://goclipse.github.io/releases/
** Make sure the GoClipse configuration knows where your Go installation is along with your guru, gocode, and godef executables. See the Go configuration settings in Eclipse preferences.

INSTALL OPEN SHIFT

These instructions will install Open Shift in a virtual machine, so you need to get Vagrant and VirtualBox first. And of course Vagrant needs Ruby 2, so you need that before anything else. Then you install the Open Shift image.

Note that you may have to change your BIOS settings to enable virtualization. To avoid a VERR_VMX_MSR_ALL_VMX_DISABLED error, I had to do this on my Lenovo laptop in the Security->Virtualization section of the BIOS settings.

These notes follow the instructions found at https://www.openshift.org/vm

(October 5, 2016: Note that the instructions there tell you not to use Vagrant 1.8.5 nor VirtualBox 5.1 - unfortunately, they tell you this all the way at the bottom, after the instructions. So if you start at the top and work your way down, you will be frustrated beyond belief until you decide to skip all the way to the bottom and realize this. I emailed the folks maintaining that page to put those notices at the top so people can see the warnings before they start downloading the wrong stuff.)

INSTALL RUBY

If you already have Ruby 2+ installed, you can skip this. This will install RVM and then use that to install Ruby 2. If you skip this step, make sure you have a Ruby 2 installation available before going on to installing and running Vagrant.

* Download and install RVM and Ruby
** You need to accept the GPG key for RVM and install RVM

   gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
   curl -L https://get.rvm.io | bash -s stable
   rvm autolibs packages

** Now install Ruby 2

   rvm install 2.3.0
   rvm use --default 2.3.0
   ruby -v

INSTALL VAGRANT

* Download and install Vagrant .rpm
** https://releases.hashicorp.com/vagrant/

   cd /tmp
   wget https://releases.hashicorp.com/vagrant/1.8.4/vagrant_1.8.4_x86_64.rpm

   rpm -i vagrant_*.rpm
   vagrant version

INSTALL VIRTUAL BOX

* Download and install Virtual Box
** https://www.virtualbox.org/wiki/Downloads
** https://www.virtualbox.org/wiki/Download_Old_Builds
** You must grab it from the Oracle yum repo - we first must add that repo to our system:

   sudo wget -P /etc/yum.repos.d http://download.virtualbox.org/virtualbox/rpm/fedora/virtualbox.repo

** If anything goes wrong during the install, you'll want to update your system and reboot - so you may want to do this now:

    dnf update

** Make sure the kernel version is expected - these should output the same version string - otherwise, reboot

   rpm -qa kernel | sort -V | tail -n 1

   uname -r

** Install VirtualBox now

   dnf install VirtualBox-5.0

I ran into some problems when I tried to run VirtualBox before I realized I needed to download VirtualBox from the Oracle yum repo. If you download from, say, RPMFusion, these instructions may not work (they did not for me). If you run into problems, you might have to install some additional packages via dnf (such as kernel-devel, kernel-headers, dkms) and perform some additional magic which I do not know, which explains why I received a Dreadful grade on my O.W.L. exam.

INSTALL THE OPEN SHIFT IMAGE

* Go to an empty directory where you want to prepare your Vagrantfile

   mkdir ${HOME}/openshift

   cd ${HOME}/openshift

* Create a Vagrant file that initializes the OpenShift image.

   vagrant init openshift/origin-all-in-one

* Run Open Shift inside a VM within VirtualBox
 
   vagrant up --provider=virtualbox

At this point you should have an OpenShift environment running. If any errors occur, a log message should tell you how you can proceed.

You will want to download the Open Shift command line client "oc". Rather than regurgitate the instructions here, simply log into Open Shift using the default "admin" user (password "admin") and go to https://10.2.2.2:8443/console/command-line and follow the instructions there to download, install, and use "oc".

Note that you can SSH into your VM by executing "vagrant ssh"

The Open Shift self-signed certificate found at "/var/lib/origin/openshift.local.config/master/ca.crt" can be used to authenticate clients.
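
As an illustration, here is a minimal Go sketch that trusts that CA when calling the Open Shift API (it assumes you have copied the certificate out of the VM to a local file named ca.crt, and that the all-in-one VM answers at 10.2.2.2:8443 as noted above):

   package main

   import (
      "crypto/tls"
      "crypto/x509"
      "fmt"
      "io/ioutil"
      "net/http"
   )

   func main() {
      // Load the self-signed CA certificate copied out of the VM.
      caCert, err := ioutil.ReadFile("ca.crt")
      if err != nil {
         panic(err)
      }
      pool := x509.NewCertPool()
      pool.AppendCertsFromPEM(caCert)

      // An HTTP client that trusts the Open Shift CA.
      client := &http.Client{
         Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
         },
      }

      resp, err := client.Get("https://10.2.2.2:8443/version")
      if err != nil {
         panic(err)
      }
      defer resp.Body.Close()
      fmt.Println("Open Shift API status:", resp.Status)
   }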

UPGRADE OPEN SHIFT

Should you wish to upgrade the Open Shift VM in the future, the following steps should do it.

* Go to the directory where you ran the VM:

   cd ${HOME}/openshift

* Update the image

   vagrant box update --box openshift/origin-all-in-one

* Destroy and re-create the VM environment

   vagrant destroy --force
   vagrant up --provider=virtualbox

REMOVE OPEN SHIFT

Should you wish to remove the Open Shift VM in the future, the following steps should do it.

* Go to the directory where your Vagrantfile is

   cd ${HOME}/openshift

* Remove Open Shift VM

   vagrant halt
   vagrant destroy --force
   vagrant box remove --force openshift/origin-all-in-one