Thursday, September 12, 2013

Availability Updates in RHQ GUI

In older versions, the RHQ GUI showed you the availability status of your resources, but if the status changed while you were viewing a resource, the GUI did not update the icons unless you manually refreshed the screen.

In RHQ 4.9, this has changed. If you are currently viewing a resource and its availability status changes (say, it goes down, or it comes back up), the screen will quickly reflect the new availability status by changing the availability icon and the tree node icons.

To see what I mean, watch this quick 3-minute demo of the feature in action (view it in full-screen mode if you want a better look at the icons and tree node badges):


Wednesday, September 11, 2013

Fine-Grained Security Permissions In Bundle Provisioning

RHQ allows one to bundle up content and provision that bundle to remote machines managed by RHQ Agents. This is what we call the "Bundle" subsystem; the documentation actually titles it the "Provisioning" subsystem. I've blogged about it here and here if you want to read more about it.

RHQ 4.9 has just been released, and with it comes a new feature in the Bundle subsystem: admins can now place fine-grained security constraints around bundle functionality.

In older RHQ versions, it was an all-or-nothing proposition - a user could either do nothing with respect to bundles or do everything.

Now, users can be granted specific permissions surrounding bundle functionality. For example, a user could be given permission to create and delete bundles but denied permission to deploy those bundles anywhere. A user could be restricted in such a way that he can deploy bundles only to a certain group of resources but not others.

Along with the new permissions, RHQ has now introduced the concept of "bundle groups." Now you can organize your bundles into separate groups, while providing security constraints around those bundles so only a select set of users can access, manipulate, and deploy bundles in certain bundle groups.

If you want all the gory details, you can read the wiki documentation on this new security model for bundles.

I put together a quick, 15-minute demo that illustrates this fine-grained security model. It demonstrates using the bundle permissions to implement a typical use-case that separates the workflows for provisioning different applications to different environments:

Watch the demo to see how this can be done. The demo will illustrate how the user "HR Developer" will only be allowed to create bundles and put them in the "HR Applications" bundle group and the user "HR Deployer" will only be allowed to deploy those "HR Applications" bundles to the "HR Environment" resource group.

Again, read the wiki for more information. The RHQ 4.9 release notes also have information you'll want to read about this.

Monday, August 12, 2013

Moving from Eclipse to IntelliJ

Well, the other shoe dropped; the final straw broke the camel's back. I tried one more time and, once again, Eclipse still doesn't have good Maven integration - at least not for a project as large as RHQ.

Now, for some history: I've been using Eclipse for at least a decade. I like it. I know it. I'm comfortable with it. While I can't claim to know how to use everything in it, I can navigate around it pretty well and can pump out some code using it.

However, the Maven integration is just really bad in my experience. I've tried, I really have. In fact, it has been an annual ritual of mine to install the latest Maven plugin and see if it finally "just works" for me. I've done this for at least the last three years, if not longer. So it is not for lack of trying. Every year I keep hearing "try it again, it got better" (I really have heard this over the span of years). But every time I install it and load in the RHQ project, it doesn't "just work". I tried it again a few weeks ago and nothing has changed. What I expect is to import my root Maven module and have Eclipse load it in and let me get back to doing my work. Alas, it has never worked.

I hate to leave Eclipse because, like I said, I have at least a decade invested in using it. But I need a good Maven integration. I don't want to have tons of Eclipse projects in my workspace - but then again, if the Eclipse Maven plugin needs to create one project per Maven module so it "just works", so be it. I can deal with it (after all, IntelliJ has tons of modules, even if it places them under one main project). But I can't even get that far.

So, after hearing all the IntelliJ fanboys denigrate Eclipse and tell me that I should move to IntelliJ because "it's better", I finally decided to at least try it.

Well, I can at least report that IntelliJ's Maven integration actually does seem to "just work" - though that isn't to say I didn't have to spend 15 minutes or so figuring things out to get there (I had to make sure I imported the project properly and set some options). But spending 15 minutes and getting it to work is far better than what I've gone through with Eclipse (spending much more time over the years and never getting it to work). So, yes, I can confirm that the IntelliJ folks are correct that Maven integration "just works" - with that small caveat. It actually is very nice.

In addition, I really like IntelliJ's git integration - it works out of box and has some really nice features.

I also found that IntelliJ provides an Eclipse keymap - so, while I may not like all the keystrokes required to unlock all the features in IntelliJ (more on that below), I do like how I can use many of the Eclipse keystrokes I know and have it work in IntelliJ.

As I was typing up this blog, I was about to rail on IntelliJ about its "auto-save" feature. Reading their Migration FAQ, they make it sound like you can't turn off that auto-save feature (where, as soon as you type, it saves the file). I really hate that feature. But I just found out, to my surprise, that you can kinda turn it off. It still maintains the changes, though, in what I suppose is a cache of changed files. So if I close the editor with the changed file and open it back up again, my changes are still there. That's kinda annoying (but I can see how it might be useful, too!). But at least it doesn't change the source file. I'll presume there is a way to throw away these cached changes - at least a git revert appears to do it.


However, with all that said, as I use IntelliJ (and really, it's only been about a week), I'm seeing around the edges things that I do not like, where Eclipse is better. If you are an IntelliJ user and know how to do the following, feel free to point out my errors. Note: I'm using the community version of IntelliJ v12.14.

For one thing, where's the Problems View that Eclipse has? I mean, in Eclipse, I have a single view with all the compile errors within my project. I do not see anywhere in IntelliJ a single view that tells me about problems project-wide. Now, I was told that this is because Eclipse has its own compiler and IntelliJ does not. That's an issue for me. I like being able to change some code in a class and watch the Problems View report all the breakages that that change causes. I see that in the Project view you can limit the scope to problem files. That gets you kinda there - but I want to see it as a list (not a tree), and I want to see the error messages themselves, not just which files have errors in them.

Second, the Run/Debug Configuration feature doesn't appear to be as nice as Eclipse's. For example, I have some tool configurations in Eclipse that, when selected, prompt the user for parameter values, but apparently IntelliJ doesn't support this. In fact, Eclipse supports lots of parameter replacement variables (${x}) whereas it doesn't look like IntelliJ supports any.

Third, one nice feature in Eclipse is the ability to have the source code for a particular method pop up in a small window when you hover over a method call while holding down, say, the ALT key (this is configurable in Eclipse). But I can't see how this is done in IntelliJ. I can see that View->Quick Definition does what I want, but I just want to hold down, say, ALT or SHIFT and have the quick definition pop up where I hover. I have a feeling you can tell IntelliJ to do this, I just don't know how.

Another thing I am missing is an equivalent to Eclipse's "scrapbook" feature. This is something I use(d) all the time. In a scrapbook page, you can write and highlight any Java snippet and execute it; the Console View shows the output of the snippet. This is an excellent way to quickly run some small code snippet you want to try out to make sure you got it right (I can't tell you how many times I've used it to test regexes). The only way it appears you can do this in IntelliJ is if you are debugging something and are at a breakpoint. From there, you can execute arbitrary code snippets. But Eclipse has this too (the Display view). I want a way to run a Java snippet right from my editor without setting up a debug session.

I also don't want to see the "TODO" or "JetGradle" or other views that IntelliJ seems to insist I want - and you can't remove them from the UI entirely.

Finally, IntelliJ seems to be really keen on keyboard control. I am one of those developers that hates relying on keystrokes to do things. I am using a GUI IDE, I want to use the GUI :-) I like mouse/menu control over keystrokes. I just can't remember all the many different key combinations to do things, plus my fingers can't consistently reach all the F# function keys, but I can usually remember where in the menu structure a feature is. I'm sure as I use IntelliJ more that I'll remember more. And most everything does seem to have a main menu or popup-menu equivalent. So, this is probably just a gripe that I have to spend time on a learning curve to learn a new tool - can't really blame IntelliJ for that (and with the Eclipse keymap, lots of Eclipse keystrokes now map in IntelliJ). I guess I have to blame Eclipse for that since it's forcing me to make this move in the first place.

Some of those are nit-picky, others not. And I'm sure I'll run into more things that either IntelliJ doesn't have or is hiding from me. Maybe as I use IntelliJ more, and my ignorance of it recedes a bit, I'll post another blog entry to indicate my progress.

Wednesday, May 8, 2013

Creating Https Connection Without javax.net.ssl.trustStore Property

Question: How can you use the simple Java API call java.net.URL.openConnection() to obtain a secure HTTP connection without having to set or use the global system property "javax.net.ssl.trustStore"? How can you make a secure HTTP connection and not even need a truststore?

I will show you how you can do both below.

First, some background. Java has a basic API to make a simple HTTP connection to any URL via URL.openConnection(). If your URL uses the "http" protocol, it is very simple to use this to make basic HTTP connections.

Problems creep in when you want a secure connection over SSL (via the "https" protocol). You can still use that API - URL.openConnection() will return a HttpsURLConnection if the URL uses the https protocol - however, you must ensure your JVM can find and access your truststore in order to authenticate the remote server's certificate.

[note: I won't discuss how you get your trusted certificates and how you put them in your truststore - I'll assume you know, or can find out, how to do this.]

You tell your JVM where your truststore is by setting the system property "javax.net.ssl.trustStore" and you tell your JVM how to access your truststore by giving your JVM the password via the system property "javax.net.ssl.trustStorePassword".
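For example, the conventional JVM-global setup looks something like this (the path and password here are hypothetical placeholders; setting the properties in code is equivalent to passing -D flags on the command line):

```java
public class GlobalTruststoreConfig {
    public static void main(String[] args) {
        // Equivalent to starting the JVM with:
        //   -Djavax.net.ssl.trustStore=/path/to/truststore.jks
        //   -Djavax.net.ssl.trustStorePassword=changeit
        // These settings apply to every secure connection made by the JVM.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```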

The problem is these are global settings (you often see instructions telling you to set these values via the -D command line arguments when starting your Java process) so everything running in your JVM must use that truststore. And you can't alter those system properties during runtime and expect those changes to take effect. Once you ask the JVM to make a secure connection, those system property values appear to be cached in the JVM and are used thereafter for the life of the JVM (I don't know exactly where in the JRE code these values are cached, but my experience shows me that they are). Changing those system properties later on in the lifetime of the JVM has no effect; the original values are forever used.

Another problem that some people run into is having the need for a truststore in the first place. Sometimes you don't have a requirement to authenticate the server endpoint; however, you would still like to send your data encrypted over the wire. You can't do this readily since the connection you obtain from URL.openConnection() will, by default, expect to use your truststore located at the path pointed to by the system property javax.net.ssl.trustStore.

To allow me to use different truststores for different connections, or to allow me to encrypt a connection but not authenticate the endpoint, I wrote a Java utility object that allows you to do just this.

The main constructor is this:

public SecureConnector(String secureSocketProtocol,
                       File   truststoreFile,
                       String truststorePassword,
                       String truststoreType,
                       String truststoreAlgorithm)

You pass it a secure socket protocol (such as "TLS") and your truststore file location. If the truststore file is null, the SecureConnector object will assume you do not want to authenticate the remote server endpoint and you only want to encrypt your over-the-wire traffic. If you do provide a truststore file, you need to provide its password, its type (e.g. "JKS"), and its algorithm (e.g. "SunX509") - if you pass in null for type and/or algorithm, the JVM defaults are used.

Once you create the object, just obtain a secure connection to any URL via a call to SecureConnector.openSecureConnection(URL). This expects your URL to have a protocol of "https". If successful, an HttpsURLConnection object is returned and you can use it like any other connection object. You do not need to set javax.net.ssl.trustStore (or any other javax.net.ssl system property) and, as explained above, you don't even need to provide a truststore at all (assuming you don't need to do any authentication).

The code for this is found inside of RHQ's agent - you can read its javadoc and look through SecureConnector code here.

The core code is found in openSecureConnection and looks like the following; I'll break it down:

First, it simply obtains the HTTPS connection object from the URL itself:
HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
Then it prepares a custom SSLContext object using the given secure socket protocol:
TrustManager[] trustManagers;
SSLContext sslContext = SSLContext.getInstance(getSecureSocketProtocol());
If no truststore file was provided, it will build its own "no-op" trust manager and "no-op" hostname verifier. What these "no-op" objects will do is always accept all certificates and hostnames thus they will always allow the SSL communications to flow. This is how the authentication is by-passed:
if (getTruststoreFile() == null) {
    // configured to not care about authenticating server, encrypt but don't worry about certificates
    trustManagers = new TrustManager[] { NO_OP_TRUST_MANAGER };
    connection.setHostnameVerifier(NO_OP_HOSTNAME_VERIFIER);
If a truststore file was provided, then it will be loaded in memory and stored in a KeyStore instance:
} else {
    // need to configure SSL connection with truststore so we can authenticate the server.
    // First, create a KeyStore, but load it with our truststore entries.
    KeyStore keyStore = KeyStore.getInstance(getTruststoreType());
    keyStore.load(new FileInputStream(getTruststoreFile()), getTruststorePassword().toCharArray());
The truststore file's content (now stored in a KeyStore object) is used to initialize a trust manager. Unlike the "no-op" trust manager that was created above (if a truststore file was not provided), this trust manager really does perform authentication and it uses the provided truststore's certificates to authorize the server being communicated with. This is why we no longer need to worry about the system properties "javax.net.ssl.trustStore" and "javax.net.ssl.trustStorePassword" - this builds its own trust manager using the data provided by the caller:
    // create truststore manager and initialize it with KeyStore we created with all truststore entries
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(getTruststoreAlgorithm());
    tmf.init(keyStore);
    trustManagers = tmf.getTrustManagers();
}
Finally, the SSL context is initialized with the trust manager that was created earlier (either the "no-op" trust manager, or the trust manager that was initialized with the truststore's certificates). That SSL context is handed off to the SSL connection so the connection can use the context when it needs to perform authentication:
sslContext.init(null, trustManagers, null);
connection.setSSLSocketFactory(sslContext.getSocketFactory());
The connection is finally returned to the caller, fully configured and ready to be used.
return connection;
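As an aside, the "no-op" objects referenced above can be sketched in a few lines of plain JDK code. This is an illustrative sketch, not RHQ's exact implementation of NO_OP_TRUST_MANAGER and NO_OP_HOSTNAME_VERIFIER:

```java
import java.security.cert.X509Certificate;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class NoOpSslSketch {
    // Accepts every certificate chain: traffic is still encrypted, but the server
    // is NOT authenticated. Only use this when you explicitly don't need authentication.
    static final X509TrustManager NO_OP_TRUST_MANAGER = new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] chain, String authType) { }
        public void checkServerTrusted(X509Certificate[] chain, String authType) { }
        public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    };

    // Accepts any hostname, again bypassing the authentication checks.
    static final HostnameVerifier NO_OP_HOSTNAME_VERIFIER = new HostnameVerifier() {
        public boolean verify(String hostname, SSLSession session) { return true; }
    };

    public static void main(String[] args) throws Exception {
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, new TrustManager[] { NO_OP_TRUST_MANAGER }, null);
        // A real caller would now wire these into the connection:
        //   connection.setHostnameVerifier(NO_OP_HOSTNAME_VERIFIER);
        //   connection.setSSLSocketFactory(sslContext.getSocketFactory());
        System.out.println(sslContext.getProtocol());
    }
}
```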
This is helpful for certain use cases. First, it is helpful when you have multiple truststores that you need to choose from when connecting to different servers, as well as when you need to switch truststores at runtime (remember, the system property values of javax.net.ssl.trustStore, et al. are fixed for the lifetime of the JVM - this helps bypass that restriction). It is also helpful in local testing, debugging, and demo scenarios when you don't really need or care about setting up truststores and certificates but you do want to connect over https.
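To make the multiple-truststore use case concrete, here is a minimal, self-contained sketch of building trust managers at runtime without touching the javax.net.ssl system properties. To stay self-contained it falls back to the JDK's default truststore; in real use you would load your own truststore file, as shown in the comment:

```java
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;

public class PerConnectionTrust {
    public static void main(String[] args) throws Exception {
        // In real use, load your truststore file into a KeyStore here, e.g.:
        //   KeyStore keyStore = KeyStore.getInstance("JKS");
        //   keyStore.load(new FileInputStream(truststoreFile), password.toCharArray());
        // Passing null below simply falls back to the JDK's default truststore,
        // which keeps this sketch runnable without any external files.
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null);
        TrustManager[] trustManagers = tmf.getTrustManagers();

        // Each connection can get its own SSLContext, so different connections can
        // use different truststores in the same JVM, ignoring javax.net.ssl.trustStore.
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, trustManagers, null);

        System.out.println(trustManagers.length > 0);
    }
}
```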

Thursday, March 14, 2013

Deleting RHQ Agent Made Easier

In the past, if you wanted to remove an RHQ Agent from your RHQ environment, the simple answer was "just uninventory the platform" which would, under the covers, also remove the agent record completely.

However, in some cases, users found it difficult to remove their agent. Usually, what happens is they try to install the RHQ Agent, run into problems, and then get their RHQ system in a state that prevents their RHQ Agent from registering with the RHQ Server. For example, this could happen when a person runs the agent as a different user from before or with the -L command line option - both of which essentially purge the agent's security token and could cause the RHQ Server to reject future registration requests from the agent.

If a person does not understand the linkage between platform and agent, or if the agent's platform was never committed to inventory, it can be difficult to understand how to get out of the quagmire.

This has now been addressed with an enhancement requested in BZ 849711. Now the answer is simple: regardless of whether the platform is in inventory or not - and even if the agent's resources do not yet show up in the discovery queue - you have a way to quickly purge your agent from the system. This allows you to get back to a clean slate and attempt to re-install your agent.

You do this by going to the top Administration page and selecting the "Agents" item. From here, you see the list of all the agents currently registered in your RHQ environment. If you select one or more of them, you now have the option of pressing the new "Delete" button at the bottom. This will do a few things. First, if the agent's platform is already in inventory, it will uninventory that platform. This means the platform and all its child servers and services will be removed (so be careful and make sure you really want to do this - you will lose all manageability and all audit history for all resources previously managed by that agent). Once that is done, all resources will disappear from the inventory and you won't even see any resources for that agent in the Discovery Queue. Finally, the agent's record itself is removed - so the Administration>Agents page will show that the agent has disappeared.

 
With the agent and its resources completely removed, you have the option to attempt to re-install the agent if you wish to bring it back.

You can also use this feature if your managed infrastructure has changed and you no longer want to manage a machine. Just select the agent that was responsible for managing that machine and delete it.

Note that if your agent is still running, it will attempt to re-register itself! So if you no longer wish to manage a machine, make sure you shutdown the agent as well (you'll obviously want to do this anyway, since you won't want an RHQ Agent consuming resources on a machine that you no longer want managed by RHQ).

Friday, February 8, 2013

"Nested Transactions" and Timeouts

While coding up some EJB3 SLSB methods, the following question came up:

If a thread is already associated with a transaction, and that thread calls another EJB3 SLSB method annotated with "REQUIRES_NEW", the thread gets a new transaction (a "pseudo-nested" transaction - for ease of writing I'll simply call it the "nested" transaction, even though it is not really nested in the true sense of the term). But what happens to the parent transaction while the "nested" transaction is active? Does the parent transaction's timeout keep counting down, or is it suspended, resuming its countdown only when the "nested" transaction completes?

For example, suppose I enter a transaction context by calling method A, and this method has a transaction timeout of 1 minute. Method A then immediately calls a REQUIRES_NEW method B, which itself has a 5-minute timeout. Now suppose method B takes 3 minutes to complete. That is within B's allotted 5-minute timeout, so it returns normally to A. A then immediately returns.

But A's timeout is 1 minute! B took 3 minutes on its own. Even though the amount of time A took within itself was well below its allotted 1 minute timeout, its call to B took 3 minutes.

What happens? Does A's timer "suspend" while its "nested" transaction (created from B) is still active?  Or does A's timer keep counting down, regardless of whether or not B's "nested" transaction is being counted down at the same time (and hence A will abort with a timeout)?

Here's some code to illustrate the use-case (this is what I actually used to test this):

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@TransactionTimeout(value = 5, unit = TimeUnit.SECONDS)
public void testNewTransaction() throws InterruptedException {
   log.warn("~~~~~ Starting new transaction with 5s timeout...");
   LookupUtil.getTest().testNewTransaction2();
   log.warn("~~~~~ Finishing new transaction with 5s timeout...");
}

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@TransactionTimeout(value = 10, unit = TimeUnit.SECONDS)
public void testNewTransaction2() throws InterruptedException {
   log.warn("~~~~~ Starting new transaction with 10s timeout...sleeping for 8s");
   Thread.sleep(8000);
   log.warn("~~~~~ Finishing new transaction with 10s timeout...");
}

I don't know what any of the EE specs say about this, but it doesn't matter - all I need to know is how JBossAS7 behaves :) So I ran this test on JBossAS 7.1.1.Final and here's what the log messages say:
17:51:22,935 ~~~~~ Starting new transaction with 5s timeout...
17:51:22,947 ~~~~~ Starting new transaction with 10s timeout...sleeping for 8s
17:51:27,932 WARN  ARJUNA012117: TransactionReaper::check timeout for TX 0:ffffc0a80102:-751071d8:5115811c:449 in state  RUN
17:51:27,936 WARN  ARJUNA012121: TransactionReaper::doCancellations worker Thread[Transaction Reaper Worker 0,5,main] successfully canceled TX 0:ffffc0a80102:-751071d8:5115811c:449
17:51:30,948 ~~~~~ Finishing new transaction with 10s timeout...
17:51:30,949 ~~~~~ Finishing new transaction with 5s timeout...
17:51:30,950 WARN  ARJUNA012077: Abort called on already aborted atomic action 0:ffffc0a80102:-751071d8:5115811c:449
17:51:30,951 ERROR JBAS014134: EJB Invocation failed on component TestBean for method public abstract void org.rhq.enterprise.server.test.TestLocal.testNewTransaction() throws java.lang.InterruptedException: javax.ejb.EJBTransactionRolledbackException: Transaction rolled back
...
Caused by: javax.transaction.RollbackException: ARJUNA016063: The transaction is not active!
   at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1155) [jbossjts-4.16.2.Final.jar:]
   at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:117) [jbossjts-4.16.2.Final.jar:]
   at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75)
   at org.jboss.as.ejb3.tx.CMTTxInterceptor.endTransaction(CMTTxInterceptor.java:92) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
So it is clear that, at least for JBossAS7's transaction manager, the parent transaction's timer is not suspended, even if a "nested" transaction is activated. You see I enter the first method (which activates my first transaction) at 17:51:22, and immediately enter the second method (which activates my second, "nested", transaction at the same time of 17:51:22). My first transaction has a timeout of 5 seconds, my second "nested" transaction has a timeout of 10 seconds. My second method sleeps for 8 seconds, so it should finish at 17:51:30 (and it does if you look at the log messages at that time). BUT! Prior to that, my first transaction is aborted by the transaction manager at 17:51:27 - exactly 5 seconds after my first transaction was started. So, clearly my first transaction's timer was not suspended and was continually counting down even as my "nested" transaction was active.

So, in short, the answer is (for JBossAS7 at least) - a transaction timeout is always counting down and starts as soon as the transaction activates. It never pauses nor suspends due to "nested" transactions.
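The behavior is easy to picture as a plain wall-clock deadline. This little sketch (with millisecond stand-ins for the real timeouts) mimics what the transaction reaper does - it is an analogy, not actual transaction-manager code:

```java
public class WallClockTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // The outer deadline is fixed the moment the outer transaction starts...
        long outerTimeoutMs = 200;   // stands in for A's 5-second timeout
        long outerDeadline = System.currentTimeMillis() + outerTimeoutMs;

        // ...and "nested" work does not pause it.
        Thread.sleep(400);           // stands in for B's 8 seconds of work

        // By the time the nested work returns, the outer deadline has passed -
        // which is exactly when the transaction reaper cancels the outer tx.
        System.out.println(System.currentTimeMillis() > outerDeadline);
    }
}
```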

Wednesday, January 23, 2013

PostgreSQL Tool To Analyze DB Performance

I needed to do some fine-grained performance analysis of my PostgreSQL database setup and came across an interesting tool. It's called pgBadger, and it takes a snapshot of your PostgreSQL performance data. It is not a realtime monitoring system, but it is easy to install and run, and it outputs a nice HTML report that is a snapshot of your logged performance data.

I didn't have to do anything special to get it built and installed on my Fedora 15 box. I just followed their simple instructions and it built and ran fine.

I did have to configure PostgreSQL to spit out performance data in its logs, but again that was easy to do following their instructions. It just required simple changes to postgresql.conf and a restart of the DB. Once I did that, PostgreSQL started logging performance data in log files located in the PostgreSQL data/pg_log directory.
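The postgresql.conf changes are along these lines - a sketch based on pgBadger's documented recommendations at the time; check their instructions for the exact settings for your version (in particular, log_min_duration_statement = 0 logs every statement and can be expensive on a busy production database):

```
log_min_duration_statement = 0
log_line_prefix = '%t [%p]: '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
lc_messages = 'C'
```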

I then just ran pgbadger, passing in the names of the log files as command line arguments, and after a few seconds it spit out an HTML report. A typical snapshot report that it generates can be seen here.