
How to convert a physical hard drive to a virtual machine

This post is based on various notes from a project I did a while back. It all began when my laptop’s hard drive started making funny clicking noises. I took this as a sign that it would stop working sooner or later, backed up the relevant things and replaced it. Since then, the old drive had been lying on a shelf, so I figured a fun project would be to turn it into a virtual machine. After all, a disk can be moved from one physical machine to another without any need for reinstalling or configuring identical settings. It sounded easy enough: take whatever is on the disk, convert it to a virtual hard drive and boot it inside VirtualBox. That way, I could continue to use the existing installation even if the hardware failed. Turns out this wasn’t as straightforward as first expected.

Let’s start with the easy part: writing the disk content to an image file. Not knowing the exact steps involved, it seemed like a good idea to have an exact copy. This makes it easier to experiment and go back if something fails or additional steps are needed. It also reduces the risk of the disk suddenly dying before I have grabbed everything I need from it.

I plugged the old hard drive into my Ubuntu machine to make a copy of the content while the drive was still working. To create an image, I used dd, which creates a byte-for-byte identical copy. For example, dd if=/dev/sdb2 of=image.iso status=progress will create an exact copy of the sdb2 partition and store it in image.iso. Make sure a) that you read from the correct disk (check gparted or similar tools to be sure) and b) that the target where you store the result has sufficient space. As we’ll get into later, you want a copy of the whole disk, not just individual partitions. Unfortunately this didn’t work in my case: attempting to copy the entire disk, it would stop halfway through. Repeated attempts failed at the exact same number of bytes, presumably related to the disk problems. I therefore grabbed only the main Windows partition, which I was able to copy without running into errors. That should be all I needed. Now that I had an exact copy of the disk (well, the parts I could get at least), I unplugged the physical one.
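For reference, here is roughly what those commands look like. This is a sketch, assuming the old drive shows up as /dev/sdb (double-check with gparted or lsblk first); the file names are placeholders. For a dying disk like mine, GNU ddrescue would be worth a try, since it retries bad sectors instead of giving up:

    # Copy the whole disk (not just one partition) to an image file;
    # conv=noerror,sync keeps going past read errors instead of aborting
    sudo dd if=/dev/sdb of=disk.img bs=4M conv=noerror,sync status=progress

    # Alternative for failing disks: ddrescue retries bad areas (-r3)
    # and keeps a log file so an interrupted copy can be resumed
    sudo ddrescue -d -r3 /dev/sdb disk.img rescue.log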

The next step was to convert the raw disk image to something the virtual machine can understand. In my case, I used VirtualBox’s tool for this conversion: vboxmanage convertfromraw image.iso virtualHd.vdi. Note that the raw image is a byte-for-byte exact copy and needs the full size of the source, empty space included, while the virtual hard drive only needs the space actually in use. My tip at this point is to create a backup of the virtual hard drive, to ensure you can start over if (when) something goes wrong. You could possibly do this with snapshots in the VM, but I found it easier to know that I could always return to the original state and start over without any earlier changes spilling over.
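In other words, something like this (file names follow the example above; --format is optional, since VDI is the default):

    # Convert the raw partition image to a VirtualBox disk
    vboxmanage convertfromraw image.iso virtualHd.vdi --format VDI

    # Keep an untouched copy to fall back on
    cp virtualHd.vdi virtualHd-backup.vdi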

Create a virtual machine in VirtualBox as normal and attach virtualHd.vdi to the new VM. This is where the problems started: it refused to boot. The disk was there, it was connected, and if I booted with a live CD I could see all the files. So why didn’t it work?

I tried multiple things here, and eventually took a look at it with a boot repair tool. The report told me the boot sector believed it should start reading from sector 200000 or so, while the disk in fact started at sector 0. This is where I should probably tell you that the original disk layout was a bit strange. The first partition was a rescue partition (for some reason), the second was Windows and the third was a dual boot setup for Ubuntu. Since I had failed to copy the complete disk, I had settled for the Windows partition. However, it had retained the offset caused by the first partition, so using only the second partition made the boot process really confused.

[Figure: Disk Management overview of the partitions]

Note that the boot repair tool was able to pinpoint the issue, but despite the examples and documentation I was looking at, it didn’t provide any solutions. I tried a couple of variations to re-create the MBR by overwriting it, but no matter how I tried, it always messed up the partition table so that no program could tell what partitions or file systems the disk contained anymore.

After banging my head against that wall for a while, it struck me: if it needed that partition layout, why not set it up that way? I had a recovery CD, created when the laptop was new. (It seemed more like a clean install than a recovery, but that suited me even better.) So the plan was: do a recovery install in the VM to get the same partition layout, then simply replace the second partition. This actually worked as expected. Replacing the content of the second partition was easy. I just booted the virtual machine with the newly installed hard drive, the copied hard drive and an Ubuntu live CD to move things from one to the other. As an experiment, I found it actually makes a difference whether you copy all the files or all the bytes of the partition with dd. The former worked and booted, but strange dialogs popped up when logging in. It should have replaced and included all files, so I don’t really understand the issue here. However, going back and overwriting everything with the raw bytes worked much better.
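A sketch of that final copy from inside the live CD session. The device names here are assumptions and depend on the order the disks are attached; since the salvaged VDI was converted from a bare partition image, it shows up as a whole device with no partition table:

    # Check which virtual disk is which before overwriting anything
    lsblk -o NAME,SIZE,FSTYPE,LABEL

    # Overwrite the freshly installed Windows partition with the salvaged bytes
    sudo dd if=/dev/sdb of=/dev/sda2 bs=4M status=progress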

So I now had a clean install with a fresh partition 1 and the salvaged partition 2 copied over. The VM booted, everything was loading and I got to the login screen. A little detour here, before we get to the end: I was unable to remember my original password. I tried the most likely variations and then some rather unlikely ones, without any luck. While I had successfully moved the content of the disk, I was unable to access it.

I considered password cracking for a while, but that would require taking the time to brute-force it, which I’d rather avoid. While looking around for how to extract the username and password hash, I found that you don’t need to crack it; you can simply blank it out. This guide (while written for an older Ubuntu version) goes through the details. In short: boot from an Ubuntu live CD, install chntpw, locate the SAM file and blank out the password for the account in question. After doing this and rebooting, I was automatically logged in and shown my glorious desktop with all previously installed programs.
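A minimal sketch of those steps from the live CD; the partition, mount point and account name are placeholders for whatever your setup uses:

    sudo apt-get install chntpw
    sudo mount /dev/sda2 /mnt

    # The SAM file lives in the registry directory of the Windows install
    cd /mnt/Windows/System32/config

    # List the accounts, then blank the password for the one in question
    sudo chntpw -l SAM
    sudo chntpw -u MyUser SAM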

This is also when I discovered that if you convert a physical hard drive to a virtual one in a VM, Windows will count that as a hardware change and require a license re-activation. This would be no different if I had done a clean install, but I had hoped it could be avoided by converting the existing install.

In conclusion:
Yes, this can be done: duplicate the disk content, convert it to a format the virtual machine program reads and you can plug it into a VM. Though apart from being a fun hobby project, it seems easier and less time-consuming to create a fresh install and then set up the same programs and configuration. If you do decide to convert a physical disk, make sure you create an image of the whole disk instead of a single partition, as this will save you lots of hassle in the long run. You can always clean up or wipe superfluous partitions in the VM afterwards.

Why do trolls regenerate?

For a while I’ve wondered why everyone agrees that trolls can heal their wounds by regenerating. As a monster type, trolls are recognizable by being big, strong and for the most part not particularly bright. Depending on the setting of a given book or game, the trolls may look or behave slightly differently, but they all share a single trait: regeneration. This spans different settings, whether it is Dungeons and Dragons, the Might and Magic series or a dozen other examples. In fact, D&D feels so strongly about this point that fire or acid damage is required to finish trolls off.

So where did this idea originate and how come it is so widespread? It’s not mentioned in the old fairy tales, at least not any I can remember. The closest would be the one about the troll who hid his heart someplace else and thus couldn’t be killed. They’re tough? Yes. Hard to kill? Certainly. Often have multiple heads? OK, most games have glossed over that aspect for some reason. The core attributes are the same, making trolls challenging opponents in whichever form they take. Add the ability to close their wounds and restore their health over time, and they’re also able to make a comeback even when you thought the fight was over.

I did some digging, and part of the answer is probably that Dungeons and Dragons included it. Most, if not all, computer role playing games are influenced by what D&D created, so it only makes sense that it would spread to other settings. So where did D&D pick it up? Turns out there’s a book named Three Hearts and Three Lions by Poul Anderson which features regenerating trolls. This seems to be where D&D got the inspiration. The same novel also contains the basis for the alignment system and the paladin class! I don’t know if the story explains why or how trolls gained this trait, and I didn’t find any more details. Still, it looks like Three Hearts and Three Lions was the original source, and that after it got added to D&D it spread further from there.

Testing exceptions in Java

Unit tests usually run a piece of code and verify the output or state to ensure it does what you expected. With exceptions, it gets trickier. Once one is thrown the test ends abruptly, so how can you make sure it was really triggered?

To demonstrate various strategies for testing exceptions, I’ve made a small example project in the form of a simple calculator. Most of the tests use plain JUnit4, except one which takes advantage of AssertJ assertions. But before we look at that, we should clarify what we want to accomplish by testing for exceptions. As with all testing, the main goal is to verify that the code does what it is supposed to. In this case: throw an exception given a certain state or input. So we want to verify three things:
1. An exception was thrown
2. It was triggered by the state or input we wish to test
3. It was the error we expected

The example calculator is capable of adding or subtracting numbers. We will ignore the implementation for now, assume it is sane and focus on the tests. It has a special rule though: it should only add positive numbers together. For negative numbers, the corresponding subtraction should be used instead. So if anything fails to follow this business rule, I want the add() method to throw an exception.
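The implementation isn’t the point here, but to make the rest of the post easier to follow, a minimal sketch of such a calculator could look like this (names and messages are my own placeholders, not necessarily those of the example project):

    public class Calculator {

        // The constructor is picky: the name must start with a capital
        // letter (this detail becomes important in the tests below)
        public Calculator(String name) {
            if (name == null || name.isEmpty() || !Character.isUpperCase(name.charAt(0))) {
                throw new IllegalArgumentException("Name must start with a capital letter");
            }
        }

        // Business rule: only positive numbers may be added
        public int add(int a, int b) {
            if (a < 0 || b < 0) {
                throw new IllegalArgumentException("Cannot add negative numbers");
            }
            return a + b;
        }

        public int subtract(int a, int b) {
            return a - b;
        }
    }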

The first approach is covered in CalculatorAnnotationTest.java. This suite contains some normal tests to ensure the calculator works as intended and one to verify it throws an exception when adding negative numbers. The latter is annotated with @Test(expected = IllegalArgumentException.class). This tells the test runner that the test should throw an exception of the specified type. The main problem here is that the annotation covers the whole test, which means that if any line throws such an exception, the test will still pass. If we comment out the last line, calculator.add(1, -1);, we might expect the test to fail since it’s no longer adding anything, but to our surprise it still passes! Sounds like something else in the test is triggering an exception, but it’s hard to tell, since it doesn’t seem possible to verify the error message with the annotation. Thus, it only succeeds on point 1, but fails on 2 and 3. As soon as you do more than one thing in a test, you can no longer be sure which of the statements triggered the exception.
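The test in question looks roughly like this (a sketch using my placeholder names; as we’ll see shortly, the setup line turns out to be the hidden culprit):

    import org.junit.Test;

    public class CalculatorAnnotationTest {

        // Passes if ANY statement throws an IllegalArgumentException
        @Test(expected = IllegalArgumentException.class)
        public void addShouldThrowOnNegativeNumbers() {
            Calculator calculator = new Calculator("calculator");

            // Commenting this out does not make the test fail,
            // since something above already threw the expected type
            calculator.add(1, -1);
        }
    }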

Since the annotation seemed too broad, let’s try to focus more on what we are trying to test. Ultimately, we want to know whether the statement calculator.add(1, -1); throws an exception. So how do we normally deal with exceptions? Try-catch, of course. On to CalculatorTryCatchTest.java. This is a quite normal pattern with several variations, but the core concept is that we do some setup, then call the statement we wish to test inside a try block and assert that we got the exception we wanted. Of course, we also need to keep an eye out for other possible outcomes, so there are two additional checks to mark the test as a failure if another exception, or no exception at all, is thrown. Without them, the test would still pass even though it didn’t trigger the exception we want.
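A sketch of the pattern, shown here in its final form with the setup outside the try block (more on that below) and the two extra checks included:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    import org.junit.Test;

    public class CalculatorTryCatchTest {

        @Test
        public void addShouldThrowOnNegativeNumbers() {
            Calculator calculator = new Calculator("Calculator");

            try {
                calculator.add(1, -1);
                fail("No exception was thrown");        // check 1: nothing thrown
            } catch (IllegalArgumentException e) {
                // The exception we wanted; verify the message as well
                assertEquals("Cannot add negative numbers", e.getMessage());
            } catch (Exception e) {
                fail("Unexpected exception: " + e);     // check 2: wrong type
            }
        }
    }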

When running this test, it is easier to see what tripped up the annotated test earlier: the constructor is rather picky and expects the name to start with a capital letter. Once we’ve fixed that, the test works as expected. Also, the constructor should be called outside the try block, since it is only part of the arrangement, setting up the necessary prerequisites for the test. The core of the test is the add() method, so we should have as little else as possible inside the try-catch. If an exception is thrown in the setup, the test should of course fail, because we didn’t achieve the necessary state to test our specification.

This way, we know the exception was thrown, we’ve limited the code in the try block to just the method call we want to test, and we inspect the error message to verify what we got. In other words, the try-catch pattern accomplishes all three goals we established at the start. However, it is a bit cumbersome to set up all the try-catch boilerplate each time we want to test an exception. Worst case, we create an incomplete test which misses a case without reporting the test as failing.

For an alternative use of this pattern, see CalculatorTryCatchAlternativeTest.java. It uses a boolean flag to ensure that we don’t leave the test without asserting properly. I think this is somewhat better, but we still need to write a lot of boilerplate and end up introducing a new variable.
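A sketch of the flag variant (same placeholder names as before):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class CalculatorTryCatchAlternativeTest {

        @Test
        public void addShouldThrowOnNegativeNumbers() {
            Calculator calculator = new Calculator("Calculator");
            boolean exceptionThrown = false;

            try {
                calculator.add(1, -1);
            } catch (IllegalArgumentException e) {
                exceptionThrown = true;
                assertEquals("Cannot add negative numbers", e.getMessage());
            }

            // The flag makes it impossible to leave the test without asserting
            assertTrue("Expected an IllegalArgumentException", exceptionThrown);
        }
    }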

While I think the try-catch pattern is a step in the right direction, I’m not too happy with the need to add the same extra lines as safeguards over and over. Luckily, I found that JUnit (version 4.7 and newer) comes with a built-in rule to make this easier. The rule is called ExpectedException and is used in CalculatorExpectedExceptionTest.java. The rule is defined at the top and basically says that, by default, no exceptions should be thrown by the tests. But where you want an exception to be triggered, the rule can be instructed to look for the expected exception type and error message. These instructions can be placed after all the setup, so that they are separated from the minimal section we wish to test.
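A sketch of how the rule is used (same placeholder names and message as before):

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.ExpectedException;

    public class CalculatorExpectedExceptionTest {

        // By default: no exceptions allowed in any test
        @Rule
        public ExpectedException thrown = ExpectedException.none();

        @Test
        public void addShouldThrowOnNegativeNumbers() {
            Calculator calculator = new Calculator("Calculator"); // setup first

            // Expectations go right before the statement under test
            thrown.expect(IllegalArgumentException.class);
            thrown.expectMessage("Cannot add negative numbers");

            calculator.add(1, -1);
        }
    }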

This guarantees that the exception is triggered where we wanted it (goal #2), as well as specifying the type and error message (goal #3). If it doesn’t encounter an exception matching the expected criteria, it will mark the test as a failure, thereby fulfilling goal #1. All in all, it does an excellent job of fulfilling all the requirements.

The examples above all use JUnit4, but I also looked for other solutions. I found AssertJ, a fluent assertion framework which contains a lot of useful things. (It started out as a fork of FEST Assert, for those more familiar with that.) CalculatorAssertJTest.java contains an example demonstrating how it can deal with exceptions.

The code which should throw the exception is placed inside a lambda expression, which makes it possible to observe and verify the result from the outside. In terms of separation, this goes perhaps one step further than the other examples, since we know that the exception can only be triggered by the code we place inside the lambda. We can also inspect it, looking at the type and the error message. This gives us full control to verify the exception once it has been thrown, as well as clearly separating the section we expect to throw something from the other parts of the test.
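A sketch of the AssertJ version (requires Java 8 for the lambda; names and message as before):

    import static org.assertj.core.api.Assertions.assertThatThrownBy;

    import org.junit.Test;

    public class CalculatorAssertJTest {

        @Test
        public void addShouldThrowOnNegativeNumbers() {
            Calculator calculator = new Calculator("Calculator");

            // Only the code inside the lambda can trigger the exception
            assertThatThrownBy(() -> calculator.add(1, -1))
                    .isInstanceOf(IllegalArgumentException.class)
                    .hasMessage("Cannot add negative numbers");
        }
    }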

In conclusion, I prefer ExpectedException, because it gives you the greatest amount of control and readability when testing exceptions. The annotation can lead to brittle tests if they have more than one line or method call in them. Setting up try-catch each time seems too cumbersome, plus I fear it is far too easy to write a bad test if you forget to add one of the safeguards. I liked the AssertJ approach though; I will consider using it for future projects.

And as a bonus at the end: there is an interesting proposal in the JUnit bug tracker for something similar to what AssertJ does, which means it might become available in JUnit someday.

A comment on comments

As the observant reader might notice, there’s currently no way to add a comment at the end of this post. In fact, it’s not possible to add comments to any of the posts. What’s going on?

The comment section was initially enabled to facilitate feedback and discussion. In practice, there hasn’t been much of either. There have been plenty of comments though, which I’ve had to mark as spam from time to time. Recently there’s been a sharp increase, and since I don’t get any interesting comments, I found it preferable to just disable comments altogether.

Thus, the comment section has been disabled for the time being. It may return in the future, but I would need to find a better solution than the current one.

The First Law trilogy by Joe Abercrombie

I recently finished reading The First Law trilogy by Joe Abercrombie, which consists of “The Blade Itself”, “Before They Are Hanged” and “Last Argument of Kings”. While a fantasy series, it is also part of the grimdark subgenre. As can be guessed from the name, grimdark is darker and grittier than “normal” fantasy. Rather than a classical good versus evil story told with clear black and white characters, the characters come in varying shades of grey. It is comparable to George R.R. Martin’s A Song of Ice and Fire (also known as Game of Thrones), where there are really no clear-cut good guys.

This is evident in one of the main characters of the series, Inquisitor Glokta. He used to be an officer, but after being captured and tortured in a war, he has now turned to torturing others. I’ve seen him compared to Blackadder in other reviews, and while I don’t fully agree with this, I can certainly see the similarities. I would rather compare him to Dr. House, since he’s smart and capable at what he does, though constantly in pain. Glokta easily has some of the best lines in the books, and his inner monologues are a thrill to follow. He especially shines in the second book, “Before They Are Hanged”, where he is tasked with running a city while investigating why his predecessor vanished. Oh, and the city is besieged by an army much stronger than any defence they might be able to put up.

The two other main characters are Logen Ninefingers and Jezal dan Luthar. Luthar is a young officer who is training for the annual fencing contest, hoping to win fame and glory. While busy practicing and spending his evenings playing cards, he is eventually dragged into a quest for an object which might change the fate of the world. Logen is a barbarian from the north who has been a warrior for most of his life. In addition to his skill in battle, he is able to summon and talk to spirits. After being separated from his group of fighters and assuming they have perished, he heads south. Shortly after, he is called upon by Bayaz, the First Magus, who has use for someone who can talk to the spirits.

Bayaz is a powerful wizard who has played a vital part at several points throughout the history of the world. The backstory is presented through various means (including a play!), and helps both explain what happened earlier and show how historical events affect the present. He is a wise old man, but can also be intimidating in his displays of magical power. I find it interesting how he fills a similar role to Gandalf, yet does things Gandalf would never do. This is one of the fun things about the books: how the author plays with preconceptions of how the story will progress. An example is that the very first chapter literally ends with a cliff-hanger.

All in all, quite interesting books. Abercrombie has also written some standalone novels which take place in the same universe, which I look forward to checking out.

Robert Jordan and Brandon Sanderson – A Memory of Light

By popular demand, my thoughts on the final volume in the Wheel of Time series: A Memory of Light. With 14 books and one prequel novel in total, it is one of the longest fantasy series, and it has now come to an end. When Robert Jordan passed away, a lot of us were worried we would never learn what happened to the group of friends who had to flee their home town all those years ago, but now the last book is out. And so, more than a decade after I picked up the first book, I’ve finished the last one.

Being the final book in a long series, there is a lot happening. Factions clash, prophecies are fulfilled and the fate of the world is determined. It all culminates in Tarmon Gai’don, the Final Battle, which gets a nearly 200-page-long chapter dedicated to it. Much of the series has been leading up to this event, and most of the plot threads are resolved, as well as some new mysteries introduced. (Those who have read it know what, or rather who, I’m talking about.)

It was really nice to see the series finished, and I think Sanderson did a great job wrapping it up. It is one of the best series I have read, which is a bit ironic since I initially gave up on it merely a chapter or two in. When I picked it up a second time though, I couldn’t figure out why I had abandoned it. I really enjoyed the characters, the varied cultures they encounter in different parts of the world, the magic system and the glimpses into the lost glory and wonders of the Age of Legends.

Mystery is important, and so are stories

Most who played The Longest Journey will probably remember unexpectedly stumbling across the name of its writer inside the game. At the entrance of a movie theater, the player can look at a movie poster for “A Welsh Ghost Story, written and directed by Ragnar Tørnquist”. I always wondered whether this was a reference to something he had actually written, foreshadowing something to come later or simply his way of inserting his name into the story.

Then some years went by, the sequel Dreamfall was released, and while I eagerly await Dreamfall Chapters, which is due to be released this fall, I had mostly forgotten about this little cameo. Then, the other day, Ragnar Tørnquist posted a link to a screenplay called “In the Dark Places”. The interesting part is that its working title had been “A Welsh Ghost Story”! So not only does it exist, but we get to read it as well. He also posted another story, “Rules are Rules”, which he now considers a sort of precursor to The Longest Journey. I’ve only skimmed parts of them so far, but both look like interesting reads.

I’ve spent ages looking for a music video and I finally found it!

Those of you who know me might know I’ve been searching for a music video I remember seeing when I was younger. Over the years, I’ve asked friends from time to time whether they might know it, in the hope that someone would recognize it. However, all I had to go on were vague, half-remembered details, since I did not remember the name of the artist, the title of the song or any of the lyrics. This of course made it hard to search for, not to mention difficult to explain to others.

What I did remember was some key scenes from the music video and roughly the time period it was from. The description would go something like this:

  • The video was more than six minutes long, and due to its length only parts of it were shown on “Topp 20” (a TV show which presented the 20 best-selling singles in Norway each week).
  • It charted sometime around 1994-96. Which is a pretty wide range, but about all I could pin down. (Other songs I remember from that time period include U2’s “Hold me, thrill me, kiss me, kill me” and Nick Cave and Kylie Minogue’s “Where the wild roses grow”.)
  • It featured, among other things: some exploding barrels, a cave and a mask (some sort of treasure hunt?) and a man attempting to save a woman in a raft approaching a waterfall. The man has climbed a nearby tree and tries to catch her when the raft passes beneath an overarching branch.

As I mentioned, I’ve asked various people about this over the years, and while some said it sounded familiar, no one really knew what it could be. I have pondered various approaches, including going through each and every entry which charted in that time period. The obvious problem is that it would take an enormous amount of time to go through them all. We are talking about a list of twenty entries updated weekly, which effectively means 20×52 songs per year; songs staying on the chart for multiple weeks would bring the number down somewhat, but it is still a large number. And disregarding the time it would take to watch them all, even though the lists are posted online these days along with plenty of the music videos, there was no guarantee the video I was looking for would be available anywhere.

So I hadn’t really searched all that actively lately, but then the other day I ran across a review listing the top 10 music videos of the ’90s. Since I didn’t want to get my hopes up, I mainly watched it for fun, expecting to see some songs I had completely forgotten about. And then, while talking about one of the songs, the reviewer mentioned “Oh, and by the way, this video had a sequel” and showed some clips which seemed eerily familiar. So I looked up the artist on Wikipedia, skimmed the list of singles released around that time period and checked whether I could find their music videos. And indeed, after roughly 17 years I had randomly stumbled across the music video I was looking for.

The video in question? Meat Loaf’s “I’d lie for you (and that’s the truth)”. In retrospect, maybe not the best song in the world, but I was really happy to finally solve an old puzzle and see it again.

Debian bug squashing party

Last weekend I attended a Debian bug squashing party, organized by NUUG, Skolelinux and Bitraf. In other words, roughly nine people gathered in front of their computers in the same room, trying to fix bugs and make Debian better.

First we were introduced to some of the tools and how to interact with the Debian BTS. Then we looked at the list of Release Critical bugs currently affecting Debian. At the time, there were more than 1000 bugs which would prevent a new release. Since this is too many (Debian requires the number to drop to zero before making a release), we took a look at some of them.

First we looked at a bug report about a program crashing at startup, while getting to know our way around the BTS. We all tested whether we could reproduce the issue in various environments. I was the only one who got the crash, in my virtual machine running Sid (yay!). However, the exact same version of the package would not crash on Ubuntu Saucy, so the underlying issue was assumed to reside in one of the dependencies. We gathered a list of the versions/environments we had tested along with the results, and a diff of the changes in dependencies from a working version to the crashing one. We submitted this as a comment to the bug report.

Next up, we looked at various bugs which had been filed as a result of failing rebuilds. A lot of them had a common cause: compilers have become stricter about imports, so some programs need to explicitly import libraries the compiler would add automatically in the past. One bug was picked as an example, and we all looked into it in parallel, attempting to patch it and get it to build. Related to this, we went through the process of installing dependencies, building the package, generating a diff and adding it as a proper patch.
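For reference, that workflow looked roughly like this (the package and file names are placeholders, and the details varied from bug to bug):

    # Fetch the source package and everything needed to build it
    apt-get source somepackage
    sudo apt-get build-dep somepackage

    # Try building it to reproduce the reported failure
    cd somepackage-1.0
    debuild -us -uc

    # After fixing the source, capture the change as a diff
    # to attach to the bug report
    diff -u src/main.c.orig src/main.c > fix.patch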

After getting acquainted with the various tools and parts, we were let loose, each tasked with finding a similar bug and hopefully fixing it by the end of the day. After some back-and-forth, I got a working patch for one of the bugs and submitted it. (Looks like another patch was used instead, but it also looks better than mine. Anyway, the important thing is that the package is now working again.) For a full list of all the bugs we looked at, see here.

All in all, it was a fun and nice experience. I had looked at most of the tools previously, but it was nice to have people who were more familiar with them and could answer questions when someone ran into issues. I was also pleasantly surprised by how easy (relatively speaking) it was to fix an issue, even an FTBFS one in packages I had never heard of.

My list of virtual machines

Thought I’d share the setup I have for virtual machines, and how I use them to triage bugs and experiment with various software.

First a small digression, since the observant reader will notice I am using VirtualBox. When I first discovered and started playing around with virtual machines, I had a computer incapable of hardware-supported virtualization. I discovered this rather quickly, since every virtualization solution I tried failed to work because they all required specific CPU features. After testing several solutions, I settled on VirtualBox because it also supported software-based virtualization. I’ve since replaced that machine, and while my current computer supports hardware-assisted virtualization, I’m still using VirtualBox as it is straightforward and I am familiar with it. I did briefly try a couple of other solutions when I got my new computer, but didn’t find any obvious advantages over sticking with my existing setup.

Now, the machines. I have a set of the currently supported Ubuntu releases, organized by their code names. (Yes, I’m aware 11.04 reached end of life a while back.) They come in handy when confirming bugs or trying to track down in which release something broke (or got fixed). My main use case is: load up the relevant release a bug was reported against, verify it is reproducible there, and then check whether it is also present in the latest development release.

All are kept more or less up to date, to make sure I have the latest versions of libraries and other software when attempting to reproduce bugs. When I started triaging bug reports, I used to simply install the software on my main system and check if the bug was reproducible there, but I quickly changed my approach for several reasons. Mainly because my main system wouldn’t easily allow me to test multiple releases, but also because my setup or set of installed packages might produce a different result than an out-of-the-box system. The latter may not always be relevant, but there are cases where it matters. For instance, say a program fails to run without a specific library which is not installed as a dependency; since I already have the library installed for other reasons, I wouldn’t be able to reproduce the issue. In cases like that, it makes more sense to check what happens on an out-of-the-box system.

In addition to the Ubuntu releases, I also run a couple of other systems. Arch Linux is nice, and since it is a rolling release distribution, it usually includes the latest versions of programs/libraries before most other distros. It’s ideal for testing whether projects still work as expected with the latest versions of their dependencies, or for trying out features in newer versions of programs. If a newer version of a library or compiler is released, it’s really convenient to be able to catch any issues early, before it ends up in the stable version of other distributions. In addition, Arch has a rather different philosophy and approach compared to Ubuntu, which is interesting to explore.

The Debian machine is running Sid (unstable), for most of the same reasons as Arch: being able to test the latest versions of projects, plus it will eventually turn into the next releases of Debian, Ubuntu and related derivatives. As Ubuntu is based on Debian, it is of course also relevant for checking whether bugs are reproducible in both places, in case they should be forwarded upstream. As Debian is currently in freeze for the upcoming Wheezy release, there aren’t many updates these days though.

Oh, and there’s a Windows 8 preview I was trying out when it became available. I used it a bit when it was announced, and I’m pretty sure it will expire soon.