2009-11-10

Text-Based Communication Considered Harmful

Discussing important matters is best done face-to-face, or at least over the phone. Text-based communication methods such as email and text messages are limited in expression, prone to misunderstandings, and often lack delivery guarantees.

I was having a discussion with another person over Facebook's chat, and there was visible lag in how his messages popped up on my screen. At the end of the conversation I had asked a question but did not receive an answer before he quit, which left me puzzled. It turned out that he had indeed written an answer, and apparently it had shown up on his screen, but it had never reached the server and so never appeared on mine. Luckily it was possible to resolve the situation through text messages, but given the limitations of text messaging, not everything could be resolved (140 characters is not enough for everyone) and there was plenty of room for misunderstandings (one word in his reply had two interpretations, although from the context it was possible to guess what it meant). A telephone call or a face-to-face discussion would have been needed.

Another case: There was an event where I was supposed to be. One evening, a couple of days before the event, I received a text message from a friend asking whether I would be coming. Knowing his habit of confirming things like this a couple of days beforehand, I replied "yes" without thinking anything special of it. But the event had actually been moved to the beginning of the week and I had not heard about it, so my friend was in reality asking whether I would be there in 15 minutes, because it was already starting. His text message mentioned neither the time, nor that the event was that same day, nor that the situation was urgent. So I was completely oblivious and missed the appointment that day. On the other hand, if he had made a phone call, the reality would have become apparent from his tone of voice and the background noises. Then I could at least have said right away that I would not be able to make it, and they could have prepared a plan B.

What is wrong with text-based communication

The biggest problem with text-based communication is that it conveys neither tone of voice nor facial expressions. This makes it hard for the reader to correctly interpret the intentions and motives of the writer. It also makes it easier to end up in arguments, especially when combined with relative anonymity. I can't even count how many misunderstandings and arguments I've seen happen over the internet during the last 10 years. It happens to even experienced people all the time.

High latency also makes misunderstandings more likely. When speaking face-to-face or over the phone, the latency is practically zero. But when writing a message, it takes many seconds, minutes, hours or even days before the writer receives a reply. This leads to people writing longer messages, so as to minimize the number of messages which need to be sent. But this has the negative effect of reducing the feedback per statement - the reply gives feedback about the message as a whole, but not about every statement, which in turn makes it possible for some of the statements to be misunderstood without anybody noticing.

And because many statements are made before receiving feedback about them, if there is a fundamental misunderstanding in the first statement of the message, the rest of the message only amplifies that misunderstanding, because the rest of the message relies on the same false assumptions. For example, if the writer criticizes the other person without justification, then a long message will amplify the critique and the other person will get offended. But if he spoke just one statement and then got a reply that the critique was unwarranted, it would be only a small displeasure which could be resolved quickly by apologizing, before the other person has time to get offended.

The overhead of writing hinders effective communication by reducing the amount of communication. When more effort is needed to communicate, people try to reduce the amount of communication and use fewer words. This is especially true for text messages, which are very hard to write using a phone's number keys. Also, if the cost of sending a message is non-zero, such as when sending a text message, people will try to send fewer messages, both by reducing the number of words they use and by squeezing more information into one message. But using fewer words has the negative effect of reducing the amount of detail in the communication, which in turn leads to communication which leaves out important things or makes the words subject to misinterpretation.

Still one more problem is unreliable delivery. Most text-based communication media do not guarantee that the recipient of the message will receive and read it. Email messages can disappear into thin air, get caught in a spam filter, or take many days to arrive. If you're lucky, you will get an "Undelivered Mail Returned to Sender" message, but even that is not guaranteed. Text messages can likewise just disappear or take a long time to arrive. The email and text message infrastructures are fundamentally unreliable, and there is no easy way to solve these problems within them.

Apparently Facebook's chat also does not guarantee delivery, and it gives the user misleading visual feedback. I haven't read the code, but it might be that when somebody sends a message, it is immediately processed on the client side and shown in the chat log, after which the message is sent asynchronously to the server. The right way would be to send the message to the server first and show it in the chat log only after the server notifies the client about a new message. (If Facebook already uses the latter approach, then they must have some buggy code, because otherwise the issue I mentioned above would not have happened.)

How spoken communication avoids many of these issues

As it is said, "the most efficient and effective method of conveying information [...] is face-to-face conversation." Face-to-face communication is superior to other forms of communication. If face-to-face conversation is 90% effective, then phone calls would be maybe 50% effective, emails about 30% effective and text messages some 5% effective (statistics based on the Stetson-Harrison method).

When speaking face-to-face, it's possible to see from the other person's facial expressions and tone of voice whether his intent was to give advice, to insult, to joke or something else. The words can be exactly the same, but the way they are said can completely change the meaning. Over the internet none of those cues exist, and people tend to misunderstand the writer's intent. Often the words come out very direct, even insulting. People are used to softening their words with their tone of voice and expressions, so in spoken communication the words don't come out that directly, but very few writers are good enough to express the same softness and feeling in their writing.

When one person says something that the other person does not understand, the speaker can notice it in a face-to-face discussion just by looking at the other person's facial expressions, and then he can refine his words and offer further explanation, so the misunderstanding is fixed before it even takes hold. The same error-correction mechanism works when speaking over the phone: a short pause, an interjection or a filled pause (uh, er, um) can signal that the other person did not understand something. It's also common for a person to repeat what the other said, thus confirming that they have understood each other. Because spoken communication has little overhead, people are inclined to talk things through until all apparent issues have been resolved.

In written communication these error-correction mechanisms don't exist. No facial expressions can be seen over the keyboard. It's not possible to detect half-second pauses in the other person's writing. People don't use filled pauses in their writing, because their use happens naturally without thinking, whereas written text always goes through some thinking (though rarely enough thinking). Because there is some overhead to writing, people are less inclined to ask clarifying questions and to repeat the other person's thoughts in their own words, to make sure that everything was understood correctly. Written communication drops such things as "unnecessary", which in turn makes it more error-prone.

Conclusions

The next time you need to communicate something important, first try to say it face-to-face, then make a phone call, then write a long email message or letter (it's easier to explain yourself through many words), and only as a last resort use a text message or some other space-limited text-based means of communication.

2009-10-18

Tidy rewritten histories with Git

I imported some of my old projects from CVS to Git. I had the CVS repository of an old student project as a tarball. That one repository contained the sources of two programs - the main project and one small utility. I was able to import them into two separate Git repositories and also rewrite their version history so that it would seem as if the utility program had always been a separate project and had always used Maven (neither of which was true).

Importing the CVS repository into Git did not succeed with git cvsimport (it failed with "fatal error - cmalloc would have returned NULL"), but cvs2git worked and was also orders of magnitude faster. It was necessary to edit the example options file provided with cvs2git - the CVS repository path and author names had to be configured. If some of the authors have non-ASCII characters in their names, it's best to save the options file in UTF-8 format and use the u'Námè' format for the author names. See cvs2git's usage instructions for details on how to do the conversion.

Now that I had a Git repository with the history of both of the programs, it was time to separate the utility program's version history with git filter-branch (the main project's history did not need to be modified). It's best to take a temporary clone of the original repository before messing with filter-branch. That way it's easier to revert all changes and try again by just deleting and recreating the temporary repository.

I made a clone of the repository, and in that clone I used --subdirectory-filter to remove everything except the source code of the utility program:

git filter-branch --subdirectory-filter src/hourparser -- --all

Originally the project did not use Maven, but I wanted to modify the history to look like it had always used Maven. So I then used --tree-filter to move all the source files into the right directory structure. I also removed the manifest file, because Maven generates it automatically. When removing files, it's best to use --prune-empty, or you may have problems later, for example during rebasing (I learned this the hard way). Also make sure that the last command in the filter will always exit successfully with exit code 0, or otherwise the whole filtering process will fail.

git filter-branch --prune-empty --tree-filter '
mkdir -p src/main/java/hourparser
mv *.java src/main/java/hourparser
rm -rf META-INF
' -- --all

After that was done, I had to insert the pom.xml and other Maven files into the version history. I did that by making multiple commits with the initial project files and all the version-number-incrementing changes to them (the version number in pom.xml needs to be changed whenever a release is made), so that those commits were last in the history. Then I used git rebase to reorder the commits, so that the changes to pom.xml would fall into the right places in the history. Changing the initial commit was more complicated, but I was able to do it by creating a new repository with that initial commit, and then rebasing the rest of the history from the other repository on top of it.

After this I had the right commits in place, but their dates were not consistent. The commits for the Maven files were dated in 2009, but everything else was dated 2005. I fixed that by exporting the repository into patches, editing the authors and author dates in the patches with a text editor, and finally importing the patches into a blank repository. Temporary patches are a powerful tool for editing history.

git format-patch -M -C -k --root master
[edit the patches and move them to a new directory]
git init
git am -k 00*

After all this the authors and author dates were fine, but the committer and commit date information still needed fixing. I was able to change the committers to be the same as the authors with the following command:

git filter-branch -f --env-filter '
export GIT_COMMITTER_NAME="$GIT_AUTHOR_NAME"
export GIT_COMMITTER_EMAIL="$GIT_AUTHOR_EMAIL"
export GIT_COMMITTER_DATE="$GIT_AUTHOR_DATE"
' -- --all

After this I could publish Git repositories of the main project and the utility project with nice clean histories.

2009-10-10

TDD is not test-first. TDD is specify-first and test-last.

Recently there has been some discussion about TDD at the Object Mentor blog. In one of my comments I brought forth the idea in this article's title. It was such a nice oxymoron that I decided to elaborate here on what I mean by saying that "TDD is not test-first".

The TDD Process

Because Test-Driven Development has the word "test" in its name, and the people doing TDD speak about "writing tests", there is much confusion about TDD, because frankly, the big benefits of TDD have very little to do with testing. That's what brought about Behaviour-Driven Development (BDD), which is the same as TDD done right, but without the word "test". Because BDD does not talk about testing, it helps many people focus on the things that TDD is really about.

Here is a diagram of how I have come to think about the TDD process:

When you look at that diagram, it probably seems quite similar to traditional software development methods, even quite waterfallish. Let's remind ourselves what a waterfall looks like:

The waterfall model is "Specify - Design - Implement - Verify - Maintenance". The TDD process is otherwise the same, except that it loops very quickly (one cycle usually takes a couple of minutes), it has a new "Cleanup" step, all of it is considered "Design", and all of it is also considered "Maintenance".

Step 1: Specify

The first step in TDD is to write a test - or rather, a specification of the desired behaviour. Here the developer thinks about what the system should do, before thinking about how it should be implemented. The developer focuses on just one thing at a time - separating the what from the how.

When the developer has decided what the next important behaviour is that the system does not yet do, he documents the specification of that behaviour. The specifications are documented in a very formal language (i.e. a programming language) - so formal that they can be executed and verified automatically (not to be confused with formal verification).

Writing this executable specification will save lots of time, because the developer does not need to do the verification manually. It will also communicate the original developer's intent to other developers, because anybody can have a look at the specification and see what the original developer had in mind when he wrote some code. It will even help the original developer remember, when he returns to code that he wrote a couple of weeks ago, what he was thinking at the time of writing it. And best of all, anybody can verify the specifications at any moment, so any change that breaks the system will be noticed early.
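
To make this concrete, here is a minimal sketch of what such an executable specification could look like, written for example as a JUnit test. The Stack class under test and its methods are hypothetical and exist only for illustration; the point is that the desired behaviour is written down, in code, before the implementation exists.

import org.junit.Test;
import static org.junit.Assert.*;

// An executable specification: each method documents one desired behaviour
// and can be verified automatically at any time.
public class StackSpec {

    @Test
    public void anEmptyStackIsEmpty() {
        Stack stack = new Stack();
        assertTrue(stack.isEmpty());
    }

    @Test
    public void pushingAnElementMakesItTheTopOfTheStack() {
        Stack stack = new Stack();
        stack.push("x");
        assertFalse(stack.isEmpty());
        assertEquals("x", stack.top());
    }
}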

Step 2: Implement

After the specification has been written, it's time to think about how to implement it, and then just implement it. The developer will focus on passing just one tiny specification at a time. This is the easiest step in the whole TDD process.

If this step isn't easy, then the developer tried to take too big a step and specified too much new behaviour. In that case he should go back and write a smaller specification. With experience, the developer will learn what kind of steps are neither too big (so that the step becomes hard) nor too small (so that progress becomes slow).

If this step isn't easy, it could also be that the code that needs to be changed is not maintainable enough for this change. In that case the developer should first clean up and reorganize the code, so that making the change will be easy. If the code is already very clean, then only a little reorganizing is needed. If the code is dirty, then it will take more time. Little by little, as the code is being changed, the codebase will get cleaner and stay clean, because otherwise the TDD process will soon grind to a halt.

Step 3: Verify

Now the developer has implemented a couple of lines of code, which he believes will match the specification. Then he needs to verify that the code fulfills its specification. Thanks to the executable specifications, he can just click a button and after a couple of seconds his IDE will report whether the specification has been met.

This step is so quick and easy that it totally changes the way code can be written. It makes developers fearless in changing code that they do not know, because they can trust that if they break something, they will find out within a couple of seconds. So whenever they see some bad code, they can clean it up right away, without fear of breaking something. This difference is so overwhelming that it even led Michael Feathers (in his book "Working Effectively with Legacy Code") to define "legacy code" as code without such executable specifications.

Step 4: Cleanup

When the code meets all its specifications, it's time to clean up the code. As Uncle Bob says, "the only way to go fast is to go well". We need to keep the code in top shape, so that making future changes will be easier. We can do this by following the boy scout rule: always check in code cleaner than when you checked it out.

So when the developer has written some code that works, he will spend a few seconds or minutes removing duplicated code, choosing more descriptive names, dividing big methods into smaller ones and so on. Every now and then the developer will notice new structures emerging from the code, so he adjusts his original plans about the design and extracts a new class or reorganizes some existing classes.

Steps 1-4: Design

The specification, implementation and cleanup steps all include designing the code, although each step focuses on designing slightly different aspects of it. As Kent Beck says in his book "Extreme Programming Explained" (2nd Ed. page 105), "far from design nothing, the XP strategy is design always."

In the specification step, the developer is first designing the behaviour of the system - what the system should do. While writing the specification, he is also designing how the API of the code being implemented will be used.

In the implementation step, the developer is designing the structure of the code, how the code should be structured so that it will do what it should do. In this step the amount of design is quite low, because the goal is to just make the simplest possible change that will achieve the desired behaviour. It is acceptable to write dirty code just to meet the specification, because the code will be cleaned immediately after writing it.

In the cleanup step, the developer is designing the right way to structure the code - how to make the code cleaner and more maintainable. This is where the majority of the design takes place, which also makes the cleanup step the hardest step in the whole TDD process. Thanks to the automatic verification of the specifications, it is possible to evolve the design and architecture of the system in small, safe steps. While the design of the system is being improved, the system keeps working at all times, so it is possible to make even big changes incrementally, without a grand redesign.

Steps 1-4: Maintenance

When using TDD, we are at all times in maintenance mode, because we are all the time changing existing code. Only the first cycle, the first couple of minutes, is purely greenfield code.

This continuous maintenance forces the system to be maintainable, because if it were not maintainable, the TDD process would very soon grind to a halt. Waterfall, on the other hand, does not force the system to be maintainable, because the maintenance mode comes only after everything else has been done, which means that with waterfall it's possible to write unmaintainable code.

Maybe this is one of the reasons why TDD produces better, more maintainable code. If some piece of code is not maintainable, it becomes apparent very quickly, even before that piece of code has been completed. This early feedback in turn drives the developer to change the code to be more maintainable, because he can feel the pain of changing unmaintainable code.


Updated 2009-10-15:

Somebody posted this at Reddit, and in the comments there appears to be some confusion about the kinds of specs that I'm referring to in this article and which are useful in TDD. To find out in what style my specs are written, have a look at the TDD tutorial which I have created. To see TDD in action in a non-trivial application, have a look at my current project.

And of course the executable specs are not the only kind of specification that a real-life project needs. Just as I said above, they are "a specification of the desired behaviour", not the only specification. TDD specs are written at the level of individual components, which makes them useful for driving the design of the code in those components. They are the lowest-level specifications that a system has. But before diving into the code, the project should first have high-level requirements and specifications describing, from a user's point of view, what the system should do. A high-level architectural description is also useful.

I'm also into user interface design, so whenever the system being built will be used by human users, the first thing I'll do in such a project is to gather the goals and tasks of the users, based on which I will design a user interface specification in the form of a paper prototype - but that would be the topic for a whole other article...

2009-06-11

New Architecture for Dimdwarf

In a previous post I gave an introduction to Dimdwarf - how the project got started and what its goals are. In this post I will explain the planned architecture for Dimdwarf, which should be scalable enough to evolve the system into a distributed application server.

Background

In January 2009 I was tipped off by dtrott at GitHub about a white paper called The End of an Architectural Era. It discusses how traditional RDBMSs are outdated and how it's time to create database systems which are designed for current needs and hardware. In the paper they describe how they built a distributed, high-availability, in-memory DBMS which beats a commercial RDBMS in the TPC-C benchmark by almost two orders of magnitude. It keeps all the data in main memory (today's servers have lots of it), thus avoiding the greatest bottleneck in RDBMSs - writing transaction logs to hard disk (today's HDDs are not significantly faster than in the past). It uses a single-threaded execution model, which makes the implementation simpler and avoids the need for locking, yielding a more reliable system with better performance. Failover is achieved by replicating the data on multiple servers. Scalability is achieved by partitioning the data over multiple servers.

I thought that some of these ideas could be used in Darkstar, so I posted a thread about it on the Darkstar forums. After thinking about it for a day, I came up with a proposal for how to apply the ideas to Darkstar's multi-node database. And after a couple more days, the architecture appeared to be so simple that I added making a multi-node version to Dimdwarf's roadmap. Using ideas from that paper, it should be relatively simple to implement a distributed application server.

Issues with the current Dimdwarf architecture

Currently Dimdwarf uses locking-based concurrency in its implementation. For example, its database and task scheduler contain shared mutable data and use locking for synchronization. As a result, their code is at times quite complex, especially in the database, which needs to keep track of consistent views of the data for all active transactions. Also, committing the transactions (a two-phase commit protocol is used) requires some careful coordination and locking.

There have been some concurrency bugs in the system (one way to find them is to start 20-50 test runs in parallel to force more thread context switches), both in the database [1][2] and the task scheduler [3]. While all found concurrency bugs have been fixed, their existence in the first place is a code smell that the system is too complex and needs to be simplified. As it is said in The Art of Agile Development, in the No Bugs chapter, one must "eliminate bug breeding grounds" and solve the underlying cause:

Don't congratulate yourself yet—you've fixed the problem, but you haven't solved the underlying cause. Why did that bug occur? Discuss the code with your pairing partner. Is there a design flaw that made this bug possible? Can you change an API to make such bugs more obvious? Is there some way to refactor the code that would make this kind of bug less likely? Improve your design.

Some of the tests for the concurrent code are long and complex, which in turn is a test smell that the system is too complex. Lots of effort had to be put into making the tests repeatable [4][5][6][7][8][9][10], for example using CountDownLatch instances to force concurrent threads to proceed in a predictable order. Some of the tests even need comments, because the test code is so complex and non-obvious.

All of this indicates that something is wrong with the current architecture. Even though Dimdwarf applications have a simple single-threaded programming model, the Dimdwarf server itself is far from being simple. Of course, the problem being solved by Dimdwarf is complex, but that does not mean that the solution also needs to be complex. It's just a matter of skill to create a simple solution to a complex problem.

Ideas for the new architecture

The paper The End of an Architectural Era gave me lots of ideas on how to simplify Dimdwarf's implementation. The database described in the paper, H-Store, is in many ways similar to Dimdwarf and Darkstar. For example, all its transactions are local, so as to avoid expensive two-phase commits over the network, and it executes the application logic inside the database itself. But H-Store also has some new ideas that could be applied to Dimdwarf, the main ones being:

  • The system is single-threaded, which makes its implementation simpler and avoids the need for locking.
  • All data is stored in memory, which avoids slow disk I/O. High-availability is achieved through replication on multiple servers.

Single-threadedness

Each H-Store server node is single-threaded, and to take advantage of multiple CPU cores, many server nodes need to be run on the same hardware. This results in simpler code and good performance, because it is possible to use simple non-thread-safe data structures and no locking. I liked the idea and thought about how to apply it to Dimdwarf.

I considered having only one thread per Dimdwarf server node, but it would not work because of one major difference between Dimdwarf and H-Store: data partitioning. In H-Store the data is partitioned over the server nodes so that each transaction has all the data it needs on one server node. Dimdwarf also partitions the data and strives to make it locally available, but in Dimdwarf the data will move around the cluster as the players of an MMO game move in the game, so the data partitioning needs to be changed all the time. In H-Store the data access patterns are stable, but in Dimdwarf they fluctuate.

What does data partitioning have to do with the server being single-threaded? When a transaction tries to read data that is not available locally, it needs to request the data from another server node. While waiting for the data, that server node is blocked and unable to proceed. Also, the other server node, which has the data, is already executing some transaction, so it will not be able to reply with the requested data until its current transaction has ended. If Dimdwarf were completely single-threaded, the latencies would be too high (and low latency is one of the primary goals). Because Dimdwarf cannot guarantee full data locality, it needs to have some internal concurrency to be able to respond quickly to requests from other servers.

But there is one way to make Dimdwarf's internals mostly single-threaded: one main thread and multiple worker threads. The main thread will do all database access, communication with other server nodes, committing of transactions and other core services. All actions in the main thread must execute quickly, on the order of thousands per second. The worker threads will execute the application logic. The application logic is divided into tasks, each task running in its own transaction. It is recommended that tasks be short, on the order of ten milliseconds or less, but much longer tasks will also be allowed (if they do not write data that is modified concurrently by other tasks).

The communication between the main thread and the worker threads, and also the communication between server nodes, will happen through message passing (as in Erlang). This will allow each component to be single-threaded, which will simplify the implementation and testing. It will also make low server-to-server response times possible, because each server node's main thread will execute only very short actions, so it will be able to respond quickly to incoming messages. It will also make it easier to take advantage of multiple cores by increasing the number of worker threads. Furthermore, no data needs to be copied when a worker thread requests data from the main thread, because inside the same JVM it's possible to pass just a reference to an immutable data structure instead of copying the whole data structure over a socket.
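
As an illustration of this main/worker split, here is a simplified sketch in Java (not Dimdwarf's actual code; all the names are made up). Each thread owns its own data, and the threads communicate only by putting messages on each other's queues, in this case java.util.concurrent blocking queues:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class MessagePassingSketch {

    // A request from a worker thread to the main thread: read one database
    // entry and put the result on the reply queue.
    static class ReadRequest {
        final String entryId;
        final BlockingQueue<String> replyTo;
        ReadRequest(String entryId, BlockingQueue<String> replyTo) {
            this.entryId = entryId;
            this.replyTo = replyTo;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<ReadRequest> mainThreadQueue = new LinkedBlockingQueue<ReadRequest>();

        // The main thread: the only thread that touches the (non-thread-safe)
        // in-memory database, handling one short message at a time.
        Thread mainThread = new Thread(new Runnable() {
            public void run() {
                Map<String, String> database = new HashMap<String, String>();
                database.put("entry-1", "hello from the database");
                try {
                    while (true) {
                        ReadRequest request = mainThreadQueue.take();
                        request.replyTo.put(database.get(request.entryId));
                    }
                } catch (InterruptedException e) {
                    // shutting down
                }
            }
        });
        mainThread.setDaemon(true);
        mainThread.start();

        // A worker thread would ask the main thread for an entry and wait for the reply.
        BlockingQueue<String> replyQueue = new SynchronousQueue<String>();
        mainThreadQueue.put(new ReadRequest("entry-1", replyQueue));
        System.out.println("worker received: " + replyQueue.take());
    }
}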

In-memory database

The second main idea, keeping all data in memory, requires the data to be replicated over multiple server nodes. H-Store implements its replication by relying on deterministic database queries. H-Store executes the same queries (actually "transaction classes" containing SQL statements and program logic) on multiple server nodes in the same order. It does not replicate the actual modified data over the network, but replicates the tasks that make the modifications, and trusts that the tasks execute deterministically, which results in the same data modifications being made on the master and backup server nodes.

Requiring tasks to be deterministic is too much for Dimdwarf, as it cannot trust that the application programmers are careful enough to write deterministic Java code. Determinism is much easier to reach with SQL queries and very little program logic than with untrusted imperative program code. So Dimdwarf will need to execute a task on one server node and replicate the modified data to a backup server node. Fortunately Dimdwarf's goals (an application server optimized for low latency, for the needs of online games) allow relaxing transaction durability, so the replication can be done asynchronously. This helps to minimize the latency from the user's point of view, but permits the loss of recent changes (within the last second) in case of a server failure.

Other ideas

The paper also has other good ideas, for example that the database should be "self-everything" - self-healing, self-maintaining, self-tuning etc. Computers are cheaper than people, so computers should do most of the work without the need for human intervention. The database should be able to optimize its performance automatically, without a DBA manually tuning the server parameters. The database should monitor its own state and heal itself automatically, without a server administrator needing to keep an eye on the system constantly.

I also read the paper Time, Clocks, and the Ordering of Events in a Distributed System, which I heard about from waldo on the Darkstar forums. That paper taught me how to maintain a global ordering of events in a distributed system using Lamport timestamps. Dimdwarf will apply it so that each server-to-server message carries a timestamp of when the message was sent, and the receiving server node will update its own clock to be equal to or greater than the message's send timestamp. The timestamp contains a sequentially increasing integer and a server node ID. This scheme may also be used to generate cluster-wide unique ID numbers for database entries.
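
Here is a small sketch (not actual Dimdwarf code) of that Lamport clock scheme: a counter paired with the node ID, advanced on every send and receive so that causally related messages get increasing timestamps.

public class LamportClock {

    public static class Timestamp {
        public final long counter;
        public final int nodeId;
        public Timestamp(long counter, int nodeId) {
            this.counter = counter;
            this.nodeId = nodeId;
        }
    }

    private final int nodeId;
    private long counter = 0;

    public LamportClock(int nodeId) {
        this.nodeId = nodeId;
    }

    // Called when sending a server-to-server message; the returned timestamp
    // is attached to the message. Pairing the counter with the node ID also
    // makes the timestamp usable as a cluster-wide unique ID.
    public synchronized Timestamp send() {
        counter++;
        return new Timestamp(counter, nodeId);
    }

    // Called when a message arrives: advance the local clock past the
    // send timestamp of the received message.
    public synchronized void receive(Timestamp sent) {
        counter = Math.max(counter, sent.counter) + 1;
    }
}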

Overview of Dimdwarf-HA

Dimdwarf will come in two editions - a single-node Dimdwarf and a multi-node Dimdwarf-HA. Here I will give an overview of the architecture for Dimdwarf-HA, but the same architecture will work for both editions. In the single-node version all components just run on the same server node and possibly some of the components may be disabled or changed.

An application will run on one server cluster. The server cluster will contain multiple server nodes (the expected cluster size is up to some tens of server nodes per application). There are a couple of different types of server nodes: gateway nodes, backend nodes, directory nodes and one coordinator node. A client connects to a gateway node, and the gateway forwards messages from the client to a backend node for processing (and sends the replies back to the client). The backend nodes contain the database and execute all application logic. The directory nodes keep track of which backend nodes hold each database entry, and they may also contain information needed by the system's database garbage collector. The coordinator node does things that are best done by a single authoritative entity, for example signaling all nodes when the garbage collection algorithm's stage changes.

The system will automatically decide which services run on which server nodes. Automatic load balancing will try to spread the load evenly over all server nodes in the cluster. When some server nodes fail, the other server nodes will do automatic failover and recover the data from backup copies.

A backend node contains one main thread and multiple worker threads. The threads and server nodes communicate through message passing. The main thread takes messages from an event queue one at a time, processes them, and sends messages to its worker threads and to other server nodes. The worker threads, which execute the application logic, exchange messages only with their own main thread. The same is true for all plugins and other components that run inside a server node - the main thread is the only one that can send messages to other server nodes, and all inter-component communication goes through the main thread.

The database is stored as an in-memory data structure in the main thread. Since it is the only thread that can access the database directly, the data structures don't need to be thread-safe and can be simpler. This makes the system much easier to implement and to test, which will result in more reliable software.

The main thread will do things like hand database entries to the worker threads for reading, request database entries from other server nodes, commit transactions to the database, ensure that each database entry is replicated on enough backup nodes, execute parts of the database garbage collection algorithm, and so on. All actions in the main thread should execute very quickly - thousands per second - so that the system stays responsive and has low latency at all times. All slow actions must be executed in the worker threads or in plugins that have their own threads. For example, the main thread will do no I/O; if the database needs to be persisted to a file, that will be done asynchronously in a background thread.

The worker threads do most of the work. When a task is given to a worker thread, the worker thread deserializes the task object and begins executing it. When the task tries to read objects that have not yet been loaded from the database, the worker thread requests the database entry from the main thread, and after receiving it the worker thread deserializes it and continues executing the task. When the task ends, the worker thread serializes all loaded objects and sends everything that needs to be committed (modified data, new tasks, messages to clients) to the main thread.

The system is crash-only software:

Crash-only software is software that crashes safely and recovers quickly. The only way to stop it is to crash it, and the only way to start it is to recover. A crash-only system is composed of crash-only components which communicate with retryable requests; faults are handled by crashing and restarting the faulty component and retrying any requests which have timed out. The resulting system is often more robust and reliable because crash recovery is a first-class citizen in the development process, rather than an afterthought, and you no longer need the extra code (and associated interfaces and bugs) for explicit shutdown. All software ought to be able to crash safely and recover quickly, but crash-only software must have these qualities, or their lack becomes quickly evident.

Dimdwarf will probably use a System.exit(0) call in a bootstrapper's shutdown hook and will fall back to using kill -9 if necessary. As one of Dimdwarf's goals is to be a reliable high-availability application server, it needs to survive crashes well. Creating it as crash-only software is a good way to make any deficiencies apparent, so that they can be noticed and fixed early.

Executing tasks

When a client sends a message to a gateway node, the gateway determines, based on the client's session, on which backend node the message should be processed. If the client sends multiple messages, they are guaranteed to be processed in the order in which they were sent. The gateway creates a task for processing the message and sends that task to a backend node for execution. The system will try to execute tasks on a node that has most of the data needed by the task locally available, and the layer of gateway nodes allows changing the backend node without the client knowing about it. (In Darkstar there are no gateway nodes; the tasks are executed on the node to which the client is connected, and changing the node requires co-operation from the clients.)

The backend node receives the task and begins executing it in one of its worker threads. As the worker thread executes the task, it asks the main thread for the database entries it needs to read. If a database entry is not available locally, it needs to be requested from another backend node over the network. When the worker thread finishes executing the task, it commits the transaction by sending a list of all modified data to the main thread. The main thread checks that there were no transaction conflicts, saves the changes to its database, and replicates the data by sending the transaction's modifications to another backend node for backup. If any messages to clients were created during the transaction, those messages are sent to the gateway nodes to which the clients are connected, and the gateway nodes forward them to the clients.

If committing the transaction fails due to a transaction conflict, the task will be retried until it passes. If a task fails due to a programming error that throws an exception, the task will be added to a list of failed tasks together with debug information (such as all database entries read and written by the task), so that a programmer can debug the reason for the failure. A failed task may then be cancelled, or retried after the bug has been fixed.
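
The retry policy could look roughly like the following sketch (the Task, Transaction and FailedTask types here are made up for illustration and are not Dimdwarf's real API):

import java.util.ArrayList;
import java.util.List;

public class RetryingTaskExecutor {

    public interface Transaction {
        /** Tries to commit; returns false on a transaction conflict. */
        boolean commit();
    }

    public interface Task {
        void run(Transaction tx);
    }

    public interface TransactionFactory {
        Transaction beginTransaction();
    }

    public static class FailedTask {
        public final Task task;
        public final Exception reason;
        public FailedTask(Task task, Exception reason) {
            this.task = task;
            this.reason = reason;
        }
    }

    private final List<FailedTask> failedTasks = new ArrayList<FailedTask>();

    public void execute(Task task, TransactionFactory transactions) {
        while (true) {
            Transaction tx = transactions.beginTransaction();
            try {
                task.run(tx);
                if (tx.commit()) {
                    return; // committed without conflicts
                }
                // transaction conflict: discard the changes and retry the task
            } catch (RuntimeException programmingError) {
                // a bug in the task: park it for debugging instead of retrying blindly
                failedTasks.add(new FailedTask(task, programmingError));
                return;
            }
        }
    }
}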

Tasks may schedule new tasks for later execution. When a task commits, the commit contains a list of newly scheduled tasks, in addition to the modified database entries and messages to clients. The system will analyze the parameters of a task and use heuristics to predict which database entries the task will modify. When the scheduled time for the task comes, it will be executed on a backend node that locally contains most of the data the task will access. The backend node will also try to ensure that concurrently executing worker threads do not modify the same database entries (tasks that modify the same entries will be run sequentially on the same worker thread). The decision of which backend node a task should be executed on is made per task, so each task originating from a particular user may be executed on a different backend node. (This is different from Darkstar, which has a notion of an "identity" that owns a task; the task is executed on the server node to which the task owner's identity is assigned. Darkstar also supports repeated tasks, but Dimdwarf will probably simplify this by implementing task repetition at the application code level, because then the system won't need native support for cancelling tasks - supporting one-time tasks will be enough.)

Database entries

Each database entry has the following: a unique ID, an owner, a modification timestamp and the data. Each database entry is owned by one server node, and only that server node is allowed to write the entry. The other server nodes may only read the entry. For some other node to write the entry, it first needs to request ownership of the entry, and only after becoming the new owner can it write the entry.

The database uses multiversion concurrency control, so each task works with a snapshot view of the database. When a task commits its modifications, the system checks the modification timestamps of the database entries to make sure that no other task modified them concurrently. This does not require locking, which may in some cases improve and in other cases lower performance (if there is much contention and the system's heuristics do not compensate for it well enough). The transaction isolation level is snapshot isolation.
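
A minimal sketch of that commit-time conflict check might look like this (illustration only; the data structures are made up, not Dimdwarf's real ones):

import java.util.Map;

public class OptimisticCommitCheck {

    public static class EntryVersion {
        public final byte[] data;
        public final long modifiedTimestamp;
        public EntryVersion(byte[] data, long modifiedTimestamp) {
            this.data = data;
            this.modifiedTimestamp = modifiedTimestamp;
        }
    }

    /**
     * Returns true if the task's writes may be committed: none of the entries
     * it wrote have been modified after the snapshot that the task read.
     */
    public static boolean canCommit(Map<String, EntryVersion> database,
                                    Map<String, byte[]> writesOfTask,
                                    long snapshotTimestamp) {
        for (String entryId : writesOfTask.keySet()) {
            EntryVersion current = database.get(entryId);
            if (current != null && current.modifiedTimestamp > snapshotTimestamp) {
                return false; // a concurrent task modified this entry
            }
        }
        return true;
    }
}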

When a task running in a worker thread needs to read a database entry, it sends a read request to the main thread. The main thread checks whether the requested entry is in its local database. If it is, the main thread responds to the worker thread with the requested data. If the entry is not in the local database or cache, the main thread asks a directory node which backend node is the current owner of the entry. Then the main thread asks that backend node to send it a copy of the database entry. When it receives the copy, it forwards it to the worker thread that originally requested it.

When a task running in a worker thread commits, it creates a list of all database entries that were modified during the task. This also includes tasks that were created, messages that were sent to clients, and whatever other data needs to be committed. When the main thread receives the commit request, it checks that none of the database entries were modified concurrently. This is done by comparing the last-modified timestamps of the database entries. The main thread also makes sure that the task read a consistent snapshot view of the database. If there is a transaction conflict, the commit request is discarded and the task is retried. If there are no transaction conflicts, the main thread stores the changes in its database, sends any messages for clients to the gateway nodes, and sends the modified database entries to the current server node's backup node for replication. It also sends the updated database entries to other server nodes that have previously requested a copy of that database entry, so that they have the latest version of the entry.

When committing, the current server node needs to be the owner of all modified database entries. If it is not, the main thread needs to request ownership of the entries from their current owners. First it needs to find out who the current owner is. Each database entry contains information about which server node owns that version of the entry. The information can also be obtained from the directory nodes. When the ownership of a database entry is transferred, the old owner tells all the other server nodes that it knows to have a copy of the database entry about the ownership transfer. Those server nodes can then decide to ask the new owner to send them updated versions of the database entry, in case it's an entry that they read often.

It is not possible to delete database entries manually. A database garbage collector will periodically check for unreachable database entries and delete entries that are no longer used. The garbage collection algorithm will probably be based on the paper An Efficient On-the-Fly Cycle Collection. A number of different algorithms can be implemented, to find out which of them suits Dimdwarf and different types of applications best.

Failover

Each backend node has one or more other backend nodes assigned as its backups. The server node that is the owner of a database entry is called the master node and it contains the master copy of the database entry. The server nodes that contain backup copies of the database entry are called backup nodes.

When the master node modifies some master copies, the master node sends to its backup nodes a list of all updates done during the transaction. Then the backup nodes update their backup copies to reflect the latest version from the master node. To ensure consistency, the updates of a transaction are always replicated as an atomic unit.

When a server node crashes, the first server node to notice it will signal the other server nodes about the crash, and they will coordinate the failover. One of the crashed node's backup nodes takes on the responsibility of replacing the crashed node and promotes its backup copies to master copies. The whole cluster is notified about the failover - which backup node replaced which master node - so that the other server nodes can update their cached information about where each master copy is.

If there are multiple backup nodes, they may coordinate with each other to determine which of them has the latest backup copies of the failed node's database entries. Also, because the owner of a master copy may change at any time, the backup nodes need to be notified about ownership transfers, so that they do not think they are still the backup node of some database entry whose ownership has already been transferred to a new master node with different backup nodes. A suitable failover algorithm still needs to be designed. It might be necessary to have additional checks of which node in the cluster has the latest backup copy, perhaps by collecting that information in the directory nodes.

Although server nodes other than the backup nodes may also contain copies of a database entry, those copies will not be promoted to master copies, because they are not guaranteed to contain a consistent view of the data that was committed. If a transaction modifies database entries X and Y, then at failover the same version of both of them needs to be recovered. The backup node is guaranteed to have the same version of both X and Y, because the master node always sends it a list of all updates within a transaction, but other nodes may have received an updated copy of either X or Y if they are interested in only one of them.

The other server node types (gateway, directory, coordinator) may also have backup nodes if they contain information that would be slow to rebuild.

Session and application contexts

When a client connects to a gateway node, a session is created for it. The sessions are decoupled from authentication and user accounts. The application will need to authenticate the users itself and decide how to handle cases where the same user connects to the server multiple times.

Each session has a map of objects associated with it. It can be used to bind objects to a session, for example to store information about whether the session has been authenticated. It will also be used by the dependency injection container (Guice) to implement a session scope. The whole application has a similar map of objects, which will be used to implement an application scope.

Objects in session and application scopes will be persisted in the database. It will also be possible to have non-persisted scopes, such as a server node specific singleton scope, in case the application code needs additional services that can not be implemented as normal tasks.

Session messages and multicast channels

When application code knows the session ID of a client, it can send messages to that client. As in Darkstar, there are two categories of messages: session messages for one client and multicast channels for multiple clients.

Messages from a session are guaranteed to be processed in the same order as they were sent. The cluster might use an algorithm similar to the Quake 3 networking model, so that the gateway forwards to the backend nodes a list of the messages from the client which have not yet been acknowledged as executed. On the backend side, the processing of session messages will update a variable in the session's database entry to acknowledge the last executed message. Transactions will make sure that all session messages are processed exactly once and in the right order.

Multicast channels operate the same way as session messages, except that the messages are sent to multiple sessions, and it will be possible to have channels with an unreliable transport. When application code sends a message to a channel, the system will list all sessions that are subscribed to that channel. It will partition the sessions based on the gateway nodes to which the clients are connected and forward the message to those gateway nodes. The gateway nodes in turn forward the messages to the individual clients.

Receiving messages from clients through session messages or channels is done using message listeners, similar to Darkstar. The application code will implement a message listener interface and register it to a session or channel. Then the method of that listener will be called in a new task when messages are received from clients.

Supporting services

A Dimdwarf cluster also requires some additional services: A tracker keeps a list of all server nodes in a cluster, so that it is possible to connect to a cluster without knowing the IPs and ports of the server nodes. A bootstrap process runs on each physical machine and has the power to start and kill server nodes on that machine. The trackers and bootstrappers can be used by management tools to control the cluster.

There will be command line tools for managing the cluster. There will be commands for installing an application in a new cluster, for adding and removing servers in the cluster, for upgrading the application version, for shutting down a cluster etc.

Application upgrades will happen on the fly. First, server nodes with the new application code are started alongside the existing server nodes. Then the new server nodes begin to mirror the data in the old server nodes, the same way as backup nodes do. Finally, in one cluster-wide move, the new server nodes take over and begin executing the tasks instead of the old server nodes. The serialized data in the database is upgraded on the fly as it is read by tasks on the new server nodes.

Dimdwarf may be extended by writing plugins. There will be a need for advanced management, monitoring and profiling tools. For example, I'm planning on creating a commercial profiler that will give detailed information about all tasks and server-to-server messages, so that it is possible to know exactly what is happening in the cluster and in which order. It will be possible to record all events in the cluster and then use the profiler to step through the recorded events, moving forwards and backwards in time.

2009-05-09

Converting confused SVN repositories into Git repositories

I've been spending this evening converting the repositories of my old projects from SVN to Git. I used to have the repositories hosted on my home server, but now I've moved them to GitHub (see my GitHub profile). Here I have outlined the procedures that I used to convert my source code repositories.

Preparations

First I installed svn2git, because it handles tags and branches much better than the basic git svn clone command. I run Git under Cygwin, so first I had to install the ruby package using Cygwin Setup. And since Cygwin's Ruby does not come with RubyGems, I downloaded and installed it manually using these instructions.

When RubyGems was installed, I was able to type the following commands to finally install svn2git from GitHub:

gem sources -a http://gems.github.com
gem install nirvdrum-svn2git

Most of my SVN repositories were already running on my server, so accessing them was easy. But for some projects I had just a tarballed copy of the repository. For those it was best to run svnserve locally, because git-svn was not able to connect to an SVN repository through the file system. So I unpacked the repository tarballs into a directory X (so that the individual repositories were subdirectories of X), after which I started svnserve with the command "svnserve --daemon --foreground --root X". Then I could access the repositories through "svn://localhost/name-of-repo" URLs.

You will also need to write an authors file which lists all usernames in the SVN repositories and what their corresponding Git author names should be. The format is as follows, one user per line:

loginname = Joe User <user@example.com>

I placed the authors.txt file into my working directory, where I could easily point to it when doing the conversions.

Simple conversions

When the SVN repository uses the standard layout and its version history does not have anything weird in it, the following commands can be used to convert the repository.

First make an empty directory and use svn2git to clone the SVN repository:

mkdir name-of-repo
cd name-of-repo
svn2git svn://localhost/name-of-repo --authors ../authors.txt --verbose

When that is finished, check that all branches, tags and version history were imported correctly:

git branch
git tag
gitk --all

You will probably want to publish the repository, so create a new repository (in this example I use GitHub) and push your repository there. Remember to include all branches and tags:

git remote add origin git@github.com:username/git-repo-name.git
git push --all
git push --tags

After that, it's best to clone the published repository from the central server the way you normally would (cd /my/projects ; git clone git@github.com:username/git-repo-name.git), and delete the original repository which was used when importing from SVN, to get rid of all the SVN-related files in the .git directory.

You might also want to add a .gitignore file to your project. For my projects I use the following to keep Maven's build artifacts and IntelliJ IDEA's workspace file out of version control:

/*.iws
/target/
/*/target/

Complex conversions

I had one SVN repository where the repository layout had been changed in the middle of the project. At first all the project files had been in the root of the repository ("/"), after which they had been moved into /trunk. As a result, when I imported the SVN repository using the standard layout options, the history stopped at the point where that move was made, because before that point in history there was no /trunk. I wanted to import a clean history, so that this mess would not be reflected in the resulting Git repository's history.

What I did was first import the latter part of the history, which used the standard layout:

mkdir messy-repo.2
cd messy-repo.2
svn2git svn://localhost/messy-repo/trunk --rootistrunk --authors ../authors.txt --verbose

Then I imported the first part of the history which used the trunkless layout. This also includes the latter part of the history, but with all files moved under a /trunk directory:

mkdir messy-repo.1
cd messy-repo.1
svn2git svn://localhost/messy-repo --rootistrunk --authors ../authors.txt --verbose

Then I created a new repository in which I would combine the history from those two repositories. I cloned it from the repository containing the first part of the clean history.

git clone file:///tmp/svn2git/messy-repo.1/.git messy-repo.combined
cd messy-repo.combined

Then I would start a branch "old_master" from the current master, just to be sure not to lose it. I would also make a tag "after_mess" for the commit that changed the SVN repository layout, and a tag "before_mess" for the commit just before that, where all project files were still cleanly in the repository root.
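
In command form that was roughly the following (the commit ids here are placeholders for the ones I looked up with gitk or git log):

git branch old_master master
git tag after_mess <id-of-the-layout-changing-commit>
git tag before_mess <id-of-the-commit-just-before-it>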

Did I mention that the layout-changing commit also added one file, in addition to changing the repository layout? So I had to recover that change from the otherwise pointless commit. First I had to get a patch with the desired changes. So I hand-copied the desired file from SVN, checked out the version in Git just before the mess, made the desired change to the working copy, committed it and tagged it so that it would not be lost.

cd messy-repo.combined
git checkout before_mess
git add path/to/the/DesiredFile.java
git commit -m "Recovered the desired file from the mess"
git tag desired_changes

Then I would make a patch with just that one change:

git format-patch -M -C -k -1 desired_changes

That created the file 0001-desired-changes.patch.

I also needed clean patches for the latter part of the version history, so I created patches for all changes in the messy-repo.2 repository.

cd messy-repo.2
git format-patch -M -C -k --root master

Then I would hand-edit the 0001-desired-changes.patch file to contain the same date and time as the original commit that messed up the repo. I would also remove the patch for that commit from the patches produced from messy-repo.2, and copy the remaining patches into a patches-from-repo-2 directory inside messy-repo.combined.

Then it was time to merge the patches into the first part of the history:

cd messy-repo.combined
git checkout before_mess
git am -k 0001-desired-changes.patch
git am -k patches-from-repo-2/00*
git branch fixed_master
git checkout fixed_master

That way all the history was saved and even the author dates were unchanged (the commit dates did however change to the current time when applying the patches - it is possible to rewrite the commit dates using git filter-branch). After that I could just clean up the branches and push everything to the central repository as normal.
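
If matching commit dates are important, something like the following filter-branch invocation should copy each commit's author date over its committer date (I did not need it here, so consider it an untested sketch):

git filter-branch --env-filter 'export GIT_COMMITTER_DATE="$GIT_AUTHOR_DATE"' -- --all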

2009-05-08

Version number management for multi-module Maven projects

I've been thinking about how to best organize the Maven modules in Dimdwarf. My requirements are that (1) the version number of the public API module must stay the same, unless there are changes to the public API, (2) opening and developing the project should be easy, so that I can open the whole project with all its modules by opening the one POM in IntelliJ IDEA, and (3) all code for the project should be stored in one Git repository, so that the version history for all modules is combined and checking out the whole project can be done with one command.

The project structure is currently as follows (these nice graphs were produced with yEd).

I have one POM module, "dimdwarf", at the root of the project directory. It is the parent of all other modules (that's where dependencyManagement and the common plugins are configured) and it also has all the other modules as its submodules. The "dimdwarf-api" module is what all users of my framework will depend on, so I want its version number to change very rarely - only when the API is changed, not every time that I release just a new version of the server implementation. The "dimdwarf-aop" and "dimdwarf-agent" modules handle the bytecode manipulation and they are needed as part of the bootstrap process. "dimdwarf-core" does not use the AOP classes directly, but it has a dependency on "dimdwarf-aop" for testing purposes. The module "dimdwarf-dist" assembles all the other modules together and builds a redistributable ZIP file.

Yesterday I was looking for a way to meet those requirements. StackOverflow did not have any existing questions that touched exactly this problem, but one of the answers had a link to Oliver's blog post which matched my situation perfectly (also read the follow-up). He proposed a solution that checks the project structure for consistency and fails the build if the modules have dependencies with a wrong version.

After thinking about that for a while, I came up with a possibly better way to manage the version numbers. It would be a tool (possibly implemented as a Maven plugin) that helps in updating the module version numbers. The tool would be called "module version bumper" or similar. Its commands should be run in the directory that contains the project's "workspace POM" (one that has all the project's modules as submodules, but none of the modules depend on it), so that the tool can find all modules that are part of the project.

For the version bumper to work with Dimdwarf, the project structure needs to be refactored:

All the common settings (dependencyManagement, plugins etc.) are in the "parent" POM file, which the other modules then extend. I decided to make "dimdwarf-api" independent of it, because I don't want library version upgrades to be reflected in the API's version number. (I could also have created "parent-common" and a "parent-deps" which extends "parent-common", but let's keep it simple for now and tolerate some duplication in the API's POM.) The workspace POM, "dimdwarf", no longer has the added responsibility of being also the parent POM, which helps the project get rid of cyclic dependencies between the POMs.

To explain how the version bumper would work, let's start with an example of the workflow of making changes to the project. In the beginning, version 1.0.0 of Dimdwarf has recently been released and all modules have "1.0.0" as their version number.

    parent 1.0.0
    dimdwarf-api 1.0.0
    dimdwarf-api-internal 1.0.0
    dimdwarf-core 1.0.0
    dimdwarf-aop 1.0.0
    dimdwarf-agent 1.0.0
    dimdwarf-dist 1.0.0
    dimdwarf 1.0.0

I notice a bug in the "dimdwarf-aop" module, so I need to make changes to it. Since "dimdwarf-aop" now has a release version (i.e. one that does not end with "-SNAPSHOT"), I need to bump its version to be the next development version (i.e. a "-SNAPSHOT" version higher than the previous release version).

In the project's root directory, I run the version bumper tool's command: "mvn version-bump dimdwarf-aop". This command reads the version numbers of all modules in the project and determines that "1.0.0" is the highest version number in use. Since it is a release version number, the tool prompts me for the next development version, offering "1.0.1-SNAPSHOT" as the default. I accept the default. Then the tool sets that as the version number of "dimdwarf-aop" and of all modules that depend on "dimdwarf-aop" at runtime ("dimdwarf-core" has only a test-time dependency, so it is not changed). So now the version numbers are as follows (the changed modules are the ones at 1.0.1-SNAPSHOT):

    parent 1.0.0
    dimdwarf-api 1.0.0
    dimdwarf-api-internal 1.0.0
    dimdwarf-core 1.0.0
    dimdwarf-aop 1.0.1-SNAPSHOT
    dimdwarf-agent 1.0.1-SNAPSHOT
    dimdwarf-dist 1.0.1-SNAPSHOT
    dimdwarf 1.0.1-SNAPSHOT

Then I make some changes in "dimdwarf-aop" to fix the bug and commit it to version control.

Some days after that, I begin making some bug fixes to the "dimdwarf-core" module. I change the code, but forget that I have not bumped that module's version to the next development version. I commit the changes to version control (I use Git), but thankfully I have a pre-commit hook that verifies that all changed modules use a development version (or a release version that is strictly higher than the version in the previous commit - otherwise you couldn't commit a new release). A sketch of such a hook follows the error message below. The commit fails with a message:

The following files were changed in module "dimdwarf-core" which has the release version "1.0.0". Update the module to use a development version with the command "mvn version-bump dimdwarf-core" or recommit with the --no-verify option to bypass this version check.
    dimdwarf-core/src/main/java/x/y/z/SomeFile.java
    dimdwarf-core/src/main/java/x/y/z/AnotherFile.java
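
Such a hook does not exist yet, but a simplified sketch could look something like this (it naively greps the first <version> element of each changed module's POM and only checks for the -SNAPSHOT suffix, so it is an illustration rather than a working implementation):

#!/bin/sh
# pre-commit (sketch): reject commits that touch a module whose pom.xml
# still declares a release version (one without the -SNAPSHOT suffix).
status=0
for module in $(git diff --cached --name-only | cut -d/ -f1 | sort -u); do
    [ -f "$module/pom.xml" ] || continue
    # naive: assumes the first <version> element is the module's own version
    version=$(grep -m 1 '<version>' "$module/pom.xml" | sed -e 's/<[^>]*>//g' -e 's/^[ \t]*//')
    case "$version" in
        *-SNAPSHOT) ;;
        *) echo "Module \"$module\" has the release version \"$version\"." >&2
           echo "Run \"mvn version-bump $module\" or commit with --no-verify." >&2
           status=1 ;;
    esac
done
exit $status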

I realize my mistake, so I run the command "mvn version-bump dimdwarf-core". This command reads the version numbers of all modules in the project and determines that "1.0.1-SNAPSHOT" is the highest version number in use. Since it is a development version number, the tool prompts me for the development version of the "dimdwarf-core" module, offering "1.0.1-SNAPSHOT" as the default. I accept the default. Then the tool sets that as the version number of "dimdwarf-core" and of all modules that depend on "dimdwarf-core" at runtime (only "dimdwarf-dist" and "dimdwarf" depend on it, but since they already have version "1.0.1-SNAPSHOT", they don't need to be updated). So now the version numbers are as follows:

    parent 1.0.0
    dimdwarf-api 1.0.0
    dimdwarf-api-internal 1.0.0
    dimdwarf-core 1.0.1-SNAPSHOT
    dimdwarf-aop 1.0.1-SNAPSHOT
    dimdwarf-agent 1.0.1-SNAPSHOT
    dimdwarf-dist 1.0.1-SNAPSHOT
    dimdwarf 1.0.1-SNAPSHOT

Now I want to publish a new release, so I run a tool that changes all the development versions to release versions (is there already a Maven plugin that does this?). After that the version numbers are:

    parent 1.0.0
    dimdwarf-api 1.0.0
    dimdwarf-api-internal 1.0.0
    dimdwarf-core 1.0.1
    dimdwarf-aop 1.0.1
    dimdwarf-agent 1.0.1
    dimdwarf-dist 1.0.1
    dimdwarf 1.0.1

I commit the changes to version control and tag the commit as "dimdwarf-1.0.1". I check out the tag to a clean directory, build it and deploy all the 1.0.1 artifacts to the central Maven repository (the already deployed 1.0.0 artifacts may not be redeployed). I also collect the newly built redistributable ZIP file from the /dimdwarf-dist/target directory and upload it to the web site for download.
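
In commands, the release steps would be roughly the following (a sketch; the repository URL is a placeholder and the deploy target is assumed to be configured in the POMs' distributionManagement):

git commit -a -m "Release Dimdwarf 1.0.1"
git tag dimdwarf-1.0.1
git push
git push --tags
git clone git@github.com:username/dimdwarf.git /tmp/dimdwarf-1.0.1
cd /tmp/dimdwarf-1.0.1
git checkout dimdwarf-1.0.1
mvn clean deploy

How to skip the modules that are still at version 1.0.0 during the deploy is one of the open questions below.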

So that is my idea for managing version numbers in multi-module Maven projects. What do you think, would a workflow such as this work in practice? Do you think that there would be problems with this version numbering scheme (mixed development and release versions) when using continuous integration or when deploying to a Maven repository (where overwriting previously deployed versions is not allowed)? Would somebody with experience in Maven plugin development be willing to help implement this?

2009-05-07

Introduction to Dimdwarf

My current hobby project, Dimdwarf Application Server, will be a scalable high-availability application server and a distributed object database. It lets the application programmer write single-threaded event-driven code, which the application server then executes on multiple threads. The concurrency issues are hidden from the application programmer using STM and DSM. The programming model is the same as in Project Darkstar (being involved with Darkstar is where I got the idea), but the architecture of the implementation has some differences. Of other similar application servers there is for example Terracotta, but beyond that I don't know of similar systems - mostly just distributed caches and databases. My primary motivation for creating Dimdwarf is the intellectual challenge, as it will be the most complicated application I have written this far.

Background

In January 2008 I got involved in Project Darkstar, which is an open source application server designed for the needs of MMO games and is developed by Sun Microsystems. I liked its simple programming model - how the objects are automatically persisted and executed transactionally, so that the programmer can concentrate more on the application logic than on concurrency issues. There were some things that I felt could be improved about Darkstar, so I invented transparent references and implemented them in Darkstar (they should be included in the main codebase in the near future). I also wrote some other utilities.

Then in June 2008 I got the idea for Dimdwarf and sent a mail about it to a couple of other Project Darkstar community members, Emanuel Greisen and Martin Eisengardt, with whom I had been discussing making development on Darkstar easier. My initial goal was to solve the GPL license and testability issues that Darkstar has: Since Darkstar Server is GPL'd, you cannot embed it in a commercial game, for example to make a single-player mode for a multiplayer game, or distribute the server side of your application without publishing it under the GPL. Testing Darkstar applications was hard and you had to use MockSGS for running unit tests, because Darkstar could not be easily decoupled from the code that uses it. Also debugging Darkstar applications was hard, because you would have to deal with transaction timeouts and multiple threads.

My idea with Dimdwarf was to create a light version of Darkstar Server, one that uses an in-memory database, is single-threaded (at least initially), doesn't use timeouts and has no clustering support (thus keeping it simple), but you could anyway use a Dimdwarf-to-Darkstar adapter library to run Dimdwarf applications on Darkstar (thus getting the scalability benefits without being infected by the GPL, as Dimdwarf uses the BSD license). Even Dimdwarf's name reflects this goal: dim = not smart / a synonym for dark, dwarf = small / one kind of star. Dimdwarf would be light, unobtrusive and testing-friendly, so that Dimdwarf applications are decoupled from Dimdwarf and you won't need an extensive testing environment and mocking framework to test the applications.

In August 2008 I opened a project page for Dimdwarf at SourceForge and began writing some code. I was able to reuse the code that I wrote for transparent references, but otherwise I started from scratch.

Extended Project Goals

Originally I was aiming to keep Dimdwarf as simple as possible and not make it a scalable high-availability application server. But in January 2009 I read a paper called The End of an Architectural Era and it gave me some ideas about the distributed database design for Darkstar, so I started a thread about it on the Darkstar forums. After thinking about it for a couple of days, making a scalable high-availability database no longer seemed too hard. Extending Dimdwarf into a high-availability solution started to feel like it was within my reach, so I added it to Dimdwarf's long-term goals.

The high-availability version of Dimdwarf will go under the name Dimdwarf-HA, and I've been thinking about it passively for a couple of months now. I will first finish the single-node version of Dimdwarf, after which I'll expand it into a clustered multi-node version. The same architecture can be used for both the embedded stand-alone version of Dimdwarf and the clustered Dimdwarf-HA - in fact the new architecture will be much simpler and more testable than Dimdwarf's current development version, because it will have less concurrency-aware code.

A follow-up article will discuss Dimdwarf-HA's architecture in more detail.

2009-05-04

Random thoughts on "Random Thoughts"

Previously I've been writing down my thoughts and plans in plain text files, notebooks and on paper sheets, but I suppose it would be good to post some of them online as well. It would make referring to them much easier. Maybe someone might even read them by mistake.

When thinking about what to call this blog, I remembered a quote from an old AMV, Boogiepop Phantom - Butterfly by MindWarp. I was going to write here whatever random thoughts I happen to have, so I thought "Random Thoughts" would be a good name for a blog about random thoughts. It would be nice for them to be as psychedelic as that AMV, but I'm afraid that won't happen. ;)
Butterflies are random thoughts people have
They live, They die, They are pointless.
- Jonathan Watson

First I will probably be writing about my current hobby project, Dimdwarf Application Server, which will be a scalable high-availability application server and a distributed object database, optimized for low latency (for example MMO games). Then I might write about user interface design. I design UIs using the GUIDe+GDD method and right now I'm writing my master's thesis on the same topic. I may also write about TDD (next autumn I'll be lecturing a course about TDD at the University of Helsinki) as well as my thoughts on what Software Craftsmanship is about (on the SC mailing list there has not yet been a clear consensus on what makes craftsmen different from other developers).