Monday, February 2, 2015

Debugging Nashorn JavaScript with IntelliJ

I recently started experimenting with JavaScript running inside the Nashorn engine. I soon realized that I needed a debugging solution. A quick search on the Web found what I was looking for: IntelliJ added Nashorn debugging support in version 13.

Unfortunately my initial attempts at debugging with IntelliJ hit some roadblocks. The support worked only for the simplest of configurations. The main problems were:
  • Absolute file paths didn't work.
  • Multi-module projects didn't work. The breakpoints would be hit but the source would not be shown.
  • If my JavaScript came from another source, such as a Reader or a CompiledScript, debugging wasn't available. The only pattern that seemed to work was engine.eval("load('filename')"). Unfortunately this didn't match my use case.
After some experimenting I was able to overcome most of these problems. It would be nice if some enhancements were added to IntelliJ, but at least I was able to function.

Script Paths
IntelliJ only matches scripts by relative path name. If the JavaScript is located within the project tree then it is pretty easy to convert an absolute path to a relative path in your code. If, however, the JavaScript resides outside the project directory tree, you are pretty much hosed. It would really be nice if IntelliJ implemented a more robust matching algorithm. If the filename (without path) matched something in your project tree then IntelliJ could prompt you to confirm the match.
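
Here is a minimal sketch of the path conversion. It assumes the JVM's working directory is the project root (the IntelliJ run-configuration default); adjust the root lookup to match your setup:

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class RelativeScriptLoad
    {
        public static void main(String[] args) throws Exception
        {
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

            // Assumption: the working directory is the project root. Relativize
            // the absolute script path against it so the debugger can match the
            // source file.
            Path projectRoot = Paths.get(System.getProperty("user.dir"));
            Path script = projectRoot.relativize(Paths.get(args[0]).toAbsolutePath());

            // Forward slashes keep the path consistent across platforms.
            engine.eval("load('" + script.toString().replace('\\', '/') + "')");
        }
    }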

Multi-Module Projects
The trick to making multi-module projects work is to understand the algorithm IntelliJ uses to match the source with a debugging breakpoint. I discovered that IntelliJ always matches using a file path relative to the root of the project tree rather than the root of the module in which the source resides. Adjusting your relative path name to be relative to the project root (just as in the sketch above) works around this problem. Again, it would sure be nice if the IntelliJ matching algorithm was smarter.

JavaScript from another Source
IntelliJ needs to be able to match the JavaScript breakpoint with its source. If your code is using a Reader for script input then IntelliJ has no clue what to do. The solution I found is to add a hint about the JavaScript file name to the script itself. You can add a line at the front of your JavaScript like this:
//@ sourceURL=pathRelativeToProject/name.js
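
For example, here is a minimal sketch that prepends the hint before evaluating a script from a Reader. The "scripts/app.js" path is a hypothetical placeholder, and the file read just keeps the sketch self-contained; the script text could come from anywhere:

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import java.io.Reader;
    import java.io.StringReader;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SourceUrlHint
    {
        public static void main(String[] args) throws Exception
        {
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

            String script = new String(
                Files.readAllBytes(Paths.get(args[0])), StandardCharsets.UTF_8);

            // Prepend the sourceURL hint (path relative to the project root)
            // so the debugger can associate the evaluated text with a file.
            Reader reader = new StringReader("//@ sourceURL=scripts/app.js\n" + script);
            engine.eval(reader);
        }
    }
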
Dynamically Downloaded Source
A related problem I had was source being dynamically downloaded from another location. To solve that problem I had to do something special. When in debugging mode, I create a temp directory within the directory tree of my project and copy the dynamically downloaded JavaScript into that directory. While writing the JavaScript to the local file system, I prepend the "//@ sourceURL" line for its new temporary location. This makes everything work just fine.
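
A minimal sketch of the idea follows. The debug flag, directory name, and script name are assumptions; the important parts are the copy into the project tree and the prepended hint:

    import javax.script.ScriptEngine;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class DownloadedScriptSupport
    {
        // Assumption: debugging mode is signaled with a system property.
        private static final boolean DEBUG = Boolean.getBoolean("script.debug");

        public static void eval(ScriptEngine engine, String downloadedScript, String name)
            throws Exception
        {
            if (DEBUG)
            {
                // Copy the downloaded script into a temp directory inside the
                // project tree so IntelliJ has a real file to display.
                Path tempDir = Paths.get("temp-scripts"); // relative to project root
                Files.createDirectories(tempDir);

                // Prepend a sourceURL hint pointing at the temporary copy.
                String hinted = "//@ sourceURL=temp-scripts/" + name + "\n" + downloadedScript;
                Files.write(tempDir.resolve(name), hinted.getBytes(StandardCharsets.UTF_8));
                engine.eval(hinted);
            }
            else
            {
                engine.eval(downloadedScript);
            }
        }
    }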



Friday, September 2, 2011

Using Maven Profiles to Tune Heroku Java Builds


If you are new to Heroku you will soon learn that building a Heroku-based Java application is slightly different from building a traditional stand-alone Java application. They are very similar but a few small differences exist. You can learn a little more about the differences by checking out the Heroku Java tutorial.

It's likely that you have existing Maven patterns from your previous Java projects that aren't quite optimal for Heroku. If you are brave and want to jump in with both feet, you can go straight to creating a Heroku-specific Maven build. If, however, you want to straddle the fence for a while between the exciting new cloud world and the comfortable legacy way you've worked in the past, you can come up with a Maven build that works well for both traditional Java deployment and Heroku Java deployment.

Here is a sample pom.xml that shows using Maven profiles to choose between a traditional stand-alone Java application and a Heroku-deployed Java application. It detects the Heroku case and optimizes the build for that environment.
https://github.com/davidbuccola/force-jetty-runner/blob/master/pom.xml

The main difference is that when on Heroku you skip assembly building and ask appassembler to exclude the repo (since it is already provided by Heroku).

Check out the "source" profile, the "assemble" profile and the profile.assemble property.

No More Schema First

When working with XML in Java I used to be a strong proponent of "schema first" design. With "schema first" design you create an XML schema to describe your data and then generate the supporting Java artifacts from that schema definition.

Recent work that I've been doing, however, has changed my mind. I no longer think "schema first" is the best way to go. I now recommend "java first".

There are several reasons that I've come to that conclusion. Here is a brief summary:

  1. The reason that tipped the scale was the desire to annotate my Java beans with annotations beyond just JAXB. I wanted my model beans to be annotated for both JAXB and JPA. When starting with "schema first" I could not do this; I just got JAXB annotations and that was it. If I want to annotate with multiple types of annotations I need to start with "java first" (see the sketch after this list).
  2. Maven projects that specify schema compilation don't import well into common IDEs. Projects imported this way would not build correctly because the IDE wouldn't perform the schema compilation.
  3. I often struggled with the JAXB bindings generated by the schema compiler, spending a lot of time working around bindings that just weren't what I needed.
  4. JSON is starting to take over the network world, and generally JSON and dynamic languages like JavaScript don't care about schemas.
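
To illustrate the first point, here is a minimal sketch of a hand-written "java first" bean carrying both JAXB and JPA annotations (the class and fields are hypothetical):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlRootElement;

    @Entity                                   // JPA: maps to a database table
    @XmlRootElement                           // JAXB: maps to an XML element
    @XmlAccessorType(XmlAccessType.FIELD)
    public class Customer
    {
        @Id                                   // JPA primary key
        private Long id;

        private String name;

        public Customer()                     // no-arg constructor for JAXB and JPA
        {
        }

        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }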

Sunday, October 31, 2010

Maven Multi-module Assembly Building

My first attempt at getting a multi-module maven assembly to work gave me some difficulty. I finally figured it out though and wish to share the result of my thrashing.

To cut to the chase, I found that to get the assembly to build properly and pick up all the latest dependencies, you need to make sure the assembly module is not a parent of any of the modules upon which the assembly depends.

The reason is that Maven builds parent modules before their child modules. If an assembly module is a parent module then it is built before any of the submodules, so the assembly cannot pick up the newly built modules below it. Either the assembly build will fail (if you have never built before) or the assembly will include stale submodules from the previous build.

I found that a structure which puts the assembly module on a different branch of the multi-module tree works best. Something like the following worked for me:

/root
   +-- /assembly
   +-- /module
         +-- /submodule1
         +-- /submodule2

Package-private Access won't work with Documentum BOF

If you are putting together several Documentum BOF modules, such as TBOs and Aspects, that are located in the same Java package, you may be tempted to use Java package-private protection to share access to methods and data between the modules without exposing them to outside code.

With Documentum BOF, however, you cannot do this. The reason is that each TBO or Aspect gets its own private classloader. Even though the package name may be the same for two modules, and you might expect them to share package-private access, the reality is that because the packages are loaded by two different classloaders they are not in fact the same package, and Java rightfully prohibits the access.

This means that you can't use package-private access for BOF TBOs and Aspects. For similar reasons you also can't rely upon class static data.
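
Here is a minimal sketch of the trap (the package, class, and method names are hypothetical). It compiles cleanly because both classes declare the same package, but at runtime under BOF each class lives in its own classloader, so the call fails with an IllegalAccessError:

    // MyTbo.java -- deployed as one BOF module, loaded by classloader A
    package com.example.bof;

    public class MyTbo
    {
        void sharedHelper()   // package-private on purpose
        {
            // ...
        }
    }

    // MyAspect.java -- deployed as a separate BOF module, loaded by classloader B
    package com.example.bof;  // same package *name*, different runtime package

    public class MyAspect
    {
        public void doWork(MyTbo tbo)
        {
            tbo.sharedHelper(); // compiles, but throws IllegalAccessError at runtime
        }
    }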

Need to quote DQL UNIQUE keyword

While using Documentum DQL to create a new type and index I came across a problem that was not particularly obvious. I figured I'd document it just in case others had the same problem.

The problem has to do with the DQL "unique" keyword. When creating an index using DQL the documentation says to use a command something like the following:

EXECUTE MAKE_INDEX WITH TYPE_NAME='mytype',ATTRIBUTE='myattribute',unique=true

I found, however, that on the version of Content Server I was using this command did not work. After some research I found that a bug had been introduced into the DQL parser that treats the "UNIQUE" token as a special keyword. This special treatment messes up the parsing of the MAKE_INDEX command. To work around the problem you can quote the UNIQUE keyword. This gets the token through the parser so that everything works as expected. The new command is as follows:

EXECUTE MAKE_INDEX WITH TYPE_NAME='mytype',ATTRIBUTE='myattribute',"unique"=true
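
If you are issuing the command from Java through DFC, the same quoting applies. Here is a minimal sketch, assuming you already have an IDfSession; the type and attribute names are the placeholders from the command above:

    import com.documentum.fc.client.DfQuery;
    import com.documentum.fc.client.IDfCollection;
    import com.documentum.fc.client.IDfQuery;
    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.common.DfException;

    public class MakeIndex
    {
        public static void makeUniqueIndex(IDfSession session) throws DfException
        {
            IDfQuery query = new DfQuery();
            query.setDQL("EXECUTE MAKE_INDEX WITH TYPE_NAME='mytype',"
                + "ATTRIBUTE='myattribute',\"unique\"=true");

            // Run the query and close the collection when done.
            IDfCollection results = query.execute(session, IDfQuery.DF_EXEC_QUERY);
            try
            {
                while (results.next())
                {
                    // The result row carries the index creation status.
                }
            }
            finally
            {
                results.close();
            }
        }
    }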

Thursday, September 30, 2010

Protecting against LinkageErrors from DFC

Unfortunately I've discovered the hard way that DFC can throw more than just DfException and RuntimeException. It turns out that DFC can also generate nasty throwables like LinkageError. The problem with throwables like LinkageError is that the catch clauses people commonly put in their code to catch and handle unexpected conditions generally do not catch LinkageError, because it is derived from Error rather than Exception.

The reason DFC can throw LinkageError is its BOF activity. The dynamic classloading related to BOF can throw these errors when there are configuration problems with a particular BOF module.

To protect against these occurrences you need to add a "catch" clause for Throwable, like this:
    try
    {
        IDfFolder object = (IDfFolder) session.newObject(XdsxFolderType.TYPE_NAME);
        object.setObjectName(folder.getEntryUuid());
        object.save();
    }
    catch (DfException e)
    {
        // Do something with this exception
    }
    catch (Throwable e) // Because DFC can generate a couple nasty "Error" throwables
    {
        throw new RuntimeException(e);
    }