All posts by HJ

Testing exceptions in Java

Unit tests usually run a piece of code and verify the output or state to ensure it does what you expected. With exceptions, it gets trickier: once one is thrown, the test ends abruptly, so how can you make sure it was really triggered?

To demonstrate various strategies for testing exceptions, I’ve made a small example project in the form of a simple calculator. Most of the tests use plain JUnit 4, except one which takes advantage of AssertJ assertions. But before we look at that, we should clarify what we want to accomplish by testing for exceptions. As with all testing, the main goal is to verify the code does what it is supposed to. In this case: throw an exception given a certain state or input. So we want to verify three things:
1. An exception was thrown
2. It was triggered by the state or input we wish to test
3. It was the error we expected

The example calculator is capable of adding or subtracting numbers. We will ignore the implementation for now, assume it is sane and focus on the tests. It has a special rule though: it should only add positive numbers together. For negative numbers, the corresponding subtraction should be used instead. So if anything fails to follow this business rule, I want the add()-method to throw an exception.
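To make the business rule concrete, a hypothetical add() could look roughly like this. This is just a sketch of the expected behaviour, not the actual code from the example project; the error message is my own assumption, while the exception type matches what the tests below expect:

public int add(int a, int b) {
    // Business rule: only positive numbers may be added together
    if (a < 0 || b < 0) {
        throw new IllegalArgumentException("Cannot add negative numbers, use subtract() instead");
    }
    return a + b;
}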

The first approach is covered in CalculatorAnnotationTest.java. This suite contains some normal tests to ensure the calculator works as intended and one to verify it throws an exception when adding negative numbers. The latter is annotated with @Test(expected = IllegalArgumentException.class). This tells the test runner that the test should throw an exception of the specified type. The main problem here is that the annotation covers the whole test, which means that if any line throws such an exception the test will still pass. If we try to comment out the last line with calculator.add(1, -1); we might expect the test to fail since it’s no longer adding anything, but to our surprise it still passes! Sounds like something else in the test is triggering an exception, but it’s hard to tell, since it doesn’t seem possible to verify the error message we get with the annotation. Thus, it only succeeds on point 1, but fails on 2 and 3. As soon as you do more than one thing in a test, you can no longer be sure which of the statements triggered the exception.
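For reference, the annotated test might look something like this (a sketch rather than the exact contents of CalculatorAnnotationTest.java; the constructor argument is an assumption):

@Test(expected = IllegalArgumentException.class)
public void addShouldThrowForNegativeNumbers() {
    // Any statement in this method can satisfy the annotation, not just the one we care about
    Calculator calculator = new Calculator("my calculator");
    calculator.add(1, -1);
}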

Since the annotation seemed too broad, let’s try to focus more on what we are trying to test. Ultimately, we want to know if the statement calculator.add(1, -1); throws an exception. So how do we normally deal with exceptions? Try-catch, of course. On to CalculatorTryCatchTest.java. This is a quite common pattern which has several variations, but the core concept is that we do some setup, then call the statement we wish to test inside a try block and assert that we got the exception we wanted. Of course, we also need to keep an eye out for other possible outcomes, so there are two additional checks to mark the test as a failure if another exception, or no exception at all, is thrown. Without them, the test would still pass even though it didn’t trigger the exception we want.

When running this test it is easier to see why the annotated test was still passing earlier: the constructor is rather picky and expects the name to start with a capital letter. Once we’ve fixed that, the test works as expected. Actually, the constructor should be called outside the try block, since it is only part of the arrangement setting up the necessary prerequisites for the test. The core of the test is the add()-method, so we should have as little else as possible inside the try-catch. If an exception is thrown in the setup, the test should of course fail, because we didn’t achieve the necessary state to test our specification.
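Putting those pieces together, the try-catch version could look roughly like this (a sketch; the constructor argument and the exact message check are assumptions, and fail/assertTrue are assumed to be statically imported from org.junit.Assert):

@Test
public void addShouldThrowForNegativeNumbers() {
    // Arrange: setup stays outside the try block, so a failure here fails the test on its own
    Calculator calculator = new Calculator("My calculator");

    try {
        calculator.add(1, -1);
        fail("Expected an IllegalArgumentException, but nothing was thrown");
    } catch (IllegalArgumentException e) {
        // Verify it was the error we expected
        assertTrue(e.getMessage().contains("negative"));
    }
    // Any other exception type simply propagates and fails the test
}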

This way, we know the exception was thrown, we’ve limited the code in the try-block to just the method call we want to test and we inspect the error message to verify what we got. In other words, the try-catch pattern accomplishes all three goals we established at the start. However, it is a bit cumbersome to set up with all the try-catch boilerplate each time we want to test an exception. Worst case, we create an incomplete test which misses a case without reporting the test as failing.

For an alternative use of this pattern, see CalculatorTryCatchAlternativeTest.java. It uses a boolean flag to ensure that we don’t leave the test without asserting properly. I think this is somewhat better, but we still need to write a lot of boilerplate and end up introducing a new variable.
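The flag-based variant might look something like this (again a sketch with assumed names):

@Test
public void addShouldThrowForNegativeNumbers() {
    Calculator calculator = new Calculator("My calculator");
    boolean exceptionWasThrown = false;

    try {
        calculator.add(1, -1);
    } catch (IllegalArgumentException e) {
        exceptionWasThrown = true;
        assertTrue(e.getMessage().contains("negative"));
    }

    // The flag makes sure we cannot leave the test without asserting
    assertTrue("Expected add() to throw an IllegalArgumentException", exceptionWasThrown);
}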

While I think the try-catch pattern is a step in the right direction, I’m not too happy with the need to add the same extra lines as safeguards over and over. Luckily, I found that JUnit (version 4.7 and newer) comes with a built-in rule to make this easier. The rule is called ExpectedException and is used in CalculatorExpectedExceptionTest.java. The rule is defined at the top and basically says that, by default, no exceptions should be thrown by the tests. But where you want an exception to be triggered, the rule can be instructed to look for the exception type and error message which are expected. These instructions can be placed after all the setup, so that they are separated from the minimal section we wish to test.
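In outline, a test using the rule looks something like this (a sketch of the pattern rather than the exact contents of CalculatorExpectedExceptionTest.java; the message fragment is an assumption):

// ExpectedException lives in org.junit.rules
@Rule
public ExpectedException thrown = ExpectedException.none();

@Test
public void addShouldThrowForNegativeNumbers() {
    // Arrange
    Calculator calculator = new Calculator("My calculator");

    // From this point on, an IllegalArgumentException with a matching message is expected
    thrown.expect(IllegalArgumentException.class);
    thrown.expectMessage("negative");

    // Act: only this call is supposed to throw
    calculator.add(1, -1);
}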

This guarantees that the exception is triggered where we wanted it (goal #2), as well as specifying type and error message (goal #3). If it doesn’t encounter an exception matching the expected criteria, it will mark the test as a failure, thereby fulfilling goal #1. All in all, it does an excellent job of fulfilling all the requirements.

The examples above all use JUnit 4, but I also looked for other solutions. I found AssertJ, a fluent assertion framework which contains a lot of useful things. (It started out as a fork of FEST-Assert, for those more familiar with that.) CalculatorAssertJTest.java contains an example demonstrating how it can deal with exceptions.

The code which should throw the exception is placed inside a lambda expression, which makes it possible to observe and verify the result from the outside. In terms of separation, this is perhaps one step further than the other examples, since we know that the exception can only be triggered by the code we place inside the lambda. We can also inspect it, looking at the type and the error message. This gives us full control to verify the exception once it has been thrown, as well as clearly separating the section we expect to throw something from the other parts of the test.
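One way this can look is with AssertJ’s assertThatThrownBy (a sketch; the actual test in CalculatorAssertJTest.java may use a slightly different entry point, and the message fragment is an assumption):

// assertThatThrownBy is statically imported from org.assertj.core.api.Assertions
@Test
public void addShouldThrowForNegativeNumbers() {
    Calculator calculator = new Calculator("My calculator");

    assertThatThrownBy(() -> calculator.add(1, -1))
            .isInstanceOf(IllegalArgumentException.class)
            .hasMessageContaining("negative");
}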

In conclusion, I prefer ExpectedException because it gives you the greatest amount of control and readability when testing exceptions. The annotation can lead to brittle tests if they have more than one line or method call in them. Setting up try-catch each time seems too cumbersome, plus I fear it is far too easy to write a bad test if you forget to add one of the safeguards. I liked the AssertJ approach though, and I will consider using it for future projects.

And as a bonus at the end, there is an interesting proposal in the JUnit bug tracker on something similar to what AssertJ does, which means it might become available in JUnit someday.

apt-get update

A short little tip if you use apt-get a lot: if you are running a reasonably recent release of your distro, you can most likely use apt instead. Apt is four characters shorter to type and works as expected for all the most common options:
apt-get update → apt update
apt-get dist-upgrade → apt dist-upgrade
and so on.
As an extra bonus, apt displays its output in different colours to indicate progress.

So if you are on Ubuntu 14.04 or newer, it might be worth switching to apt instead of apt-get.

I am not sure whether apt will eventually take over/wrap all the functionality of apt-get, but for now there are still a couple of options which only exist in apt-get. The one I have noticed the most is that I still have to run apt-get autoremove, but that is not something I do often enough for it to be a nuisance.

Command of the week: df -i

A while back I ran into an odd problem. I tried to install something, but Ubuntu told me it was not possible because the hard drive was full. That sounded very strange, since I could see the drive had more than enough free space. I wondered whether something was reporting the wrong amount of free space, so I double-checked with nautilus and df -h, which showed the same numbers. (For those not familiar with it, the df command shows the size, used and available space on your drives. I use the -h flag to get human-readable units, e.g. M for megabyte and G for gigabyte, so I don’t have to add up the bytes myself.)

After pondering for a while what could be causing this, I remembered inodes. What is an inode? In short, an inode is metadata about an area on the disk. It will typically contain information such as file size, ownership, permissions and so on. Every file or directory has an associated inode which points to it and holds its metadata.

What I remembered hearing about inodes was that there is a fixed number of them. If you add enough (especially small) files, you will sooner or later run out of inodes to keep track of them. That situation sounded very theoretical, so I had never considered it a real concern. Still, it could possibly explain the error message I was getting.

After a bit of searching I found out how to see how many inodes are in use. It quickly turned out that df supports this as well; df -i shows the total, used and free inodes. The machine that reported a full disk turned out to have plenty of free space in gigabytes, but only a handful of inodes available for creating new files. Aha! So in this case the warning about a full hard drive really meant too few inodes to add new files (which, in a sense, does mean it is full).

Now that I knew what the problem was, I removed a number of older packages I no longer needed. This freed up more inodes, which I verified with df -i, and I could go ahead and install the programs I wanted. So if you get errors saying the hard drive is full, even though it appears to have plenty of free space, it might be worth checking whether free inodes are the problem.

PS: There are no guarantees this will become a weekly column, but I liked the name as I wrote the headline.

A comment on comments

As the observant reader might notice, there’s currently no way to add a comment at the end of this post. In fact, it’s not possible to add comments to any of the posts. What’s going on?

The comment section was initially enabled to facilitate feedback and discussion. In practice there hasn’t been any of either. There have been plenty of comments though, which I’ve had to mark as spam from time to time. Recently there has been a sharp increase, and since I don’t get any interesting comments, I found it preferable to just disable comments altogether.

Thus, the comment section has been disabled for the time being. It may return in the future, but I would need to find a better solution than the current one.

The First Law trilogy by Joe Abercrombie

I recently finished reading The First Law trilogy by Joe Abercrombie, which consists of “The Blade Itself”, “Before They Are Hanged” and “Last Argument of Kings”. While a fantasy series, it is also part of the grimdark subgenre. As can be guessed from the name, grimdark is darker and grittier than “normal” fantasy. Rather than a classical good-versus-evil story told with clear black and white characters, the characters come in varying shades of grey. It is comparable to George R.R. Martin’s A Song of Ice and Fire (also known as Game of Thrones), where there are really no clear-cut good guys.

This is evident in one of the main characters in the series, Inquisitor Glokta. He used to be an officer, but after being captured and tortured in a war, he has now turned to torturing others. I’ve seen him compared to Blackadder in other reviews, and while I don’t fully agree with this, I can certainly see the similarities. I would rather compare him with Dr. House, since he’s smart and capable at what he does, though constantly in pain. Glokta easily has some of the best lines in the books, and his inner monologues are a thrill to follow. He especially shines in the second book, “Before They Are Hanged”, where he is tasked with running a city while investigating why his predecessor vanished. Oh, and the city is besieged by an army much stronger than any defence they might be able to put up.

The two other main characters are Logen Ninefingers and Jezal dan Luthar. Luthar is a young officer who is training for the annual fencing contest, hoping to win fame and glory. While busy practicing and spending his evenings playing cards, he is eventually dragged into a quest for an object which might change the fate of the world. Logen is a barbarian from the North who has been a warrior for most of his life. In addition to his skill in battle, he is able to summon and talk to spirits. After being separated from his group of fighters and assuming they have perished, he heads south. Shortly after, he is called upon by Bayaz, the First Magus, who has use for someone who can talk to the spirits.

Bayaz is a powerful wizard who has played a vital part at several points throughout the history of the world. The backstory is presented through various means (including a play!), and helps both explain what has happened earlier and show how historical events affect the present. He is a wise old man, but can also be intimidating in his displays of magical power. I find it interesting how he fills a similar role to Gandalf, yet does things Gandalf would never do. This is one of the fun things about the books: how the author plays with our preconceptions of how the story will progress. An example of this is that the very first chapter literally ends with a cliffhanger.

All in all, quite interesting books. Abercrombie has also written some standalone novels which take place in the same universe, which I look forward to checking out.

Robert Jordan and Brandon Sanderson – A Memory of Light

By popular demand, my thoughts on the final volume in the Wheel of Time series: A Memory of Light. With 14 books and one prequel novel in total, it is one of the longest fantasy series, and it has now come to an end. When Robert Jordan passed away, a lot of us were worried we would never find out what happened to the group of friends who had to flee their home town all those years ago, but now the last book is out. And so, more than a decade after I picked up the first book, I’ve finished the last one.

Being the final book in a long series, there is a lot happening. Factions clash, prophecies are fulfilled and the fate of the world is determined. It all culminates in Tarmon Gai’don, the Final Battle, which gets a nearly 200-page-long chapter dedicated to it. Much of the series has been leading up to this event, and most of the plot threads are resolved, while some new mysteries are introduced. (Those who have read it know what, or rather who, I’m talking about.)

It was really nice to see the series finished, and I think Sanderson did a great job wrapping it up. It is one of the best series I have read, which is a bit ironic since I initially gave up on it merely a chapter or two in. When I picked it up a second time though, I couldn’t figure out why I had abandoned it. I really enjoyed the characters, the varied cultures they encounter in different parts of the world, the magic system and the glimpses into the lost glory and wonders of the Age of Legends.

Mystery is important, and so are stories

Most who played The Longest Journey will probably remember unexpectedly stumbling across the name of its writer inside the game. At the entrance of a movie theater, the player can look at a movie poster for “A Welsh Ghost Story, written and directed by Ragnar Tørnquist”. I always wondered whether this was a reference to something he had actually written, foreshadowing something to come later or simply his way of inserting his name into the story.

Then some years went by, the sequel Dreamfall was released, and while I eagerly await Dreamfall Chapters, which is due to be released this fall, I had mostly forgotten about this little cameo. Then, just the other day, Ragnar Tørnquist posted a link to a screenplay called “In the Dark Places”. And the interesting part is that its working title had been “A Welsh Ghost Story”! So not only does it exist, but we get to read it as well. He also posted another story, “Rules are Rules”, which he now considers a sort of precursor to The Longest Journey. I’ve only skimmed parts of them so far, but both look like interesting reads.

How to get nice colours in git

Many projects use git for version control, so most developers have used it at some point. Unfortunately (and to my annoyance), the default settings of the git client seem to use the same colour for all output. Luckily, it has built-in support for using multiple colours where appropriate, which is one of the first things I turn on when setting up git on a new system.

With colours we get, among other things, a clearer indication of which branch we are on, but the most important benefit is that diffs (both uncommitted changes and older ones) are colour-coded. In the example image we can clearly see which lines have been removed, added or left unchanged, since they are marked in red, green and white (the default) respectively. This makes it much clearer and easier to get an overview, compared to manually trying to figure out what is removed or added in a set of changes where everything is listed in the same colour.

[Example of git with colours]

So, as a reminder to myself and others, run:

git config --global color.ui true

to turn on colours in the git client. It is also possible to specify in more detail in which situations you want colours or not; see the documentation for details.
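For example, colours can also be toggled per area. A few of the standard settings, as a sketch (exact values are a matter of preference, and the defaults may already cover you):

git config --global color.branch auto
git config --global color.diff auto
git config --global color.status auto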


I’ve spent ages looking for a music video and I finally found it!

Those of you who know me might know I’ve been searching for a music video I remember seeing when I was younger. Over the years I have at least asked friends from time to time whether they might know it, in the hope that someone would recognize it. However, all I had to go on was vague, half-remembered details, since I did not remember anything regarding the name of the artist, the title of the song or any of the lyrics. This of course made it hard to search for, not to mention difficult to explain to others.

What I did remember was some key scenes from the music video and roughly the time period it was from. The description would go something like this:

  • The video was more than six minutes long, and due to its length only parts of it were shown on “Topp 20” (a TV show which presented the 20 best-selling singles in Norway each week).
  • It charted sometime around 1994-96. Which is a pretty wide range, but about all I could pin down. (Other songs I remember from that time period include U2’s “Hold me, thrill me, kiss me, kill me” and Nick Cave and Kylie Minogue’s “Where the wild roses grow”.)
  • It featured, among other things: some exploding barrels, a cave and a mask (some sort of treasure hunt?), and a man attempting to save a woman in a raft approaching a waterfall. The man has climbed a nearby tree and tries to catch her as the raft passes beneath an overhanging branch.

As I mentioned, I’ve asked various people about this over the years, and while some have said it sounded familiar, no one really knew what it could be. I have pondered various approaches, including going through each and every entry which charted in that time period. The obvious problem is that it would take an enormous amount of time to go through them all. We are talking about a list of twenty entries updated weekly, which effectively means 20×52 songs per year, and while songs staying on the chart for multiple weeks would probably bring the number down somewhat, it is still a large number. And disregarding the time it would take to watch them all, even though the lists are posted online along with plenty of the music videos these days, there is no guarantee the video I was looking for would be available anywhere.

So I haven’t really been searching all that actively lately, but then the other day I ran across a review listing the top 10 music videos from the ’90s. Since I didn’t want to get my hopes up, I mainly watched it for the fun of it, expecting to see some songs I had completely forgotten about. And then, when talking about one of the songs, the reviewer mentioned “Oh, and by the way, this video had a sequel” and showed some random clips which seemed eerily familiar. So I looked up the artist on Wikipedia, skimmed the list of singles released around that time period, and then saw if I could find the music videos for them. And indeed, after roughly 17 years, I had randomly stumbled across the music video I was looking for.

The video in question? Meat Loaf’s “I’d lie for you (and that’s the truth)”. In retrospect, maybe not the best song in the world, but I was really happy to finally solve an old puzzle and see it again.

Debian bug squashing party

Last weekend I attended a Debian bug squashing party, organized by NUUG, Skolelinux and Bitraf. In other words, roughly nine people gathered in front of their computers in the same room, trying to fix bugs and make Debian better.

First we were introduced to some of the tools and how to interact with the Debian BTS. Then we looked at the list of Release Critical bugs currently affecting Debian. At the time, there were more than 1000 bugs which would prevent a new release. Since this is too many (Debian requires the number to drop to zero before making a release), we took a look at some of them.

First we looked at a bug report about a program crashing at startup, while getting to know our way around the BTS. We all tested to see if we could reproduce the issue in various environments. I was the only one who got the crash, in my virtual machine running Sid (yay!). However, the exact same version of the package would not crash on Ubuntu Saucy, so the underlying issue was assumed to reside in one of the dependencies. We gathered a list of which versions/environments we had tested along with the results, and a diff of the changes in dependencies from a working version to the crashing one. We submitted this as a comment to the bug report.

Next up, we looked at various bugs which had been filed as a result of failing rebuilds. A lot of them had a common cause: compilers have become stricter about imports, so some programs need to explicitly import libraries the compiler would add automatically in the past. One bug was picked as an example, and we all looked into it in parallel, attempting to patch it and get it to build. Related to this, we went through the process of installing dependencies, building the package, generating a diff and adding it as a proper patch.

After getting acquainted with the various tools and parts, we were let loose, each tasked with finding a similar bug and hopefully fixing it by the end of the day. After some back-and-forth, I got a working patch for one of the bugs and submitted it. (It looks like another patch was used instead, but that one also looks better than mine. Anyway, the important thing is that the package is now working again.) For a full list of all the bugs we looked at, see here.

All in all, it was a fun and nice experience. I had looked at most of the tools previously, but it was nice to have people who were more familiar with them and could answer questions when someone ran into issues. I was also pleasantly surprised by how easy (relatively speaking) it was to fix an issue, even an FTBFS one in a package I had never heard of.