Philosophical Multicore

Sometimes controversial, sometimes fallacious, sometimes thought-provoking, and always fun.

Archive for the ‘Keyboards’ Category

Last Post on Keyboard Layouts

Posted by Michael Dickens on June 21, 2010

This is my last post on keyboard layouts on this blog. In order to keep everything more organized, I have moved all my keyboard-related posts to my math blog. You can find all of them here. Any posts that I originally put on this blog will remain here, but all new posts will be on my other blog. If you want to look into the Keyboard Layout Project, I suggest you look over there.

Happy Keyboarding!

Posted in Keyboards | Leave a Comment »

Wanted: Typing Data

Posted by Michael Dickens on January 16, 2010

In order to improve my typing program, I am looking for good data on typing speed. Previously I had based the scoring system on my own estimation of how hard different movements would be, but now I want something concrete.

If you already use Amphetype, then all you need to do is send me your typing data file. If you have it, post a comment here and I will give you my email address.

If you don’t already use Amphetype, then now is a great time to start. It is a program that you type into while it records how fast and how accurately you type. Feel free to send me your data after you’ve accumulated a good amount.

Posted in Keyboards | 1 Comment »

Should a keyboard layout optimize for hand alternation or for rolls?

Posted by Michael Dickens on January 9, 2010

Thanks to a really nice typing program called Amphetype, I have recently been able to collect some good data on my typing habits. I compiled some data and did a rudimentary analysis of my fastest and my slowest trigraphs. I analyzed my 180 fastest trigraphs and my 156 slowest trigraphs, classifying each one into one of three categories: fully alternating, alternating and rolling, or fully rolling. If a trigraph is fully alternating, each key is typed on the opposite hand from the previous key. If it is fully rolling, each key is typed on the same hand. And if it is alternating and rolling, two consecutive keys are on one hand and the third key is on the other.
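The classification is easy to do mechanically. Here is a minimal Python sketch — the left/right hand split below assumes a standard QWERTY division, which is my illustrative choice, not part of the original analysis:

```python
# Classify a trigraph by which hand types each key.
# Assumes a standard QWERTY hand split (an illustrative choice).
LEFT_HAND = set("qwertasdfgzxcvb")

def hand(key):
    return "L" if key in LEFT_HAND else "R"

def classify(trigraph):
    h1, h2, h3 = (hand(k) for k in trigraph)
    if h1 != h2 and h2 != h3:
        return "fully alternating"       # e.g. L-R-L
    if h1 == h2 == h3:
        return "fully rolling"           # e.g. L-L-L
    return "alternating and rolling"     # two keys on one hand, one on the other
```

For example, `classify("the")` is fully alternating (L-R-L on QWERTY), while `classify("was")` is fully rolling (all left hand).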

Among the fastest trigraphs, 10% were fully alternating, 75% were alternating and rolling, and 15% were fully rolling.

Among the slowest trigraphs, 21% were fully alternating, 38% were alternating and rolling, and 40% were fully rolling.

So what does this mean? First, remember that there are twice as many ways for a trigraph to be alternating and rolling as to be fully alternating or fully rolling, so in a random sample we would expect a distribution of 25%, 50%, and 25%. The data I have isn’t totally accurate, but it should be pretty close. What’s clear from this data is that fully alternating and fully rolling trigraphs are rarely very fast. Not only that, but you have to count down to the 13th fastest trigraph before you find one that isn’t alternating and rolling. So alternating and rolling is clearly the fastest possibility.
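The 25%/50%/25% baseline comes straight from counting the eight possible hand patterns for three keys, which can be checked in a few lines:

```python
from itertools import product
from collections import Counter

# Count the 2^3 = 8 possible left/right hand patterns for a trigraph.
counts = Counter()
for h1, h2, h3 in product("LR", repeat=3):
    if h1 != h2 and h2 != h3:
        counts["fully alternating"] += 1        # LRL, RLR
    elif h1 == h2 == h3:
        counts["fully rolling"] += 1            # LLL, RRR
    else:
        counts["alternating and rolling"] += 1  # the remaining four

# Result: 2 / 4 / 2 out of 8, i.e. 25% / 50% / 25%
```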

Now let’s look at the slowest trigraphs. These are more evenly distributed. But notice that there are fewer alternating and rolling trigraphs than you’d expect, and far more that are fully rolling. A lot of the very slowest trigraphs are fully rolling.

As simple as this data may be, it still gives us some useful information. To optimize a keyboard, we should try to maximize combinations where you type two keys on one hand and then switch to the other hand. Getting a computer to do this in practice, though, is tricky. My program is designed to use digraphs; it can use trigraphs with a small modification, but scoring with trigraphs is orders of magnitude slower. We may still be willing to sacrifice speed for accuracy; but is there any way to maximize our goal of two-keys-then-switch using digraphs rather than trigraphs? I certainly don’t see one.
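One rough way to see the cost gap (my numbers here are illustrative; the real slowdown also depends on how the corpus data is stored and scanned): with 30 keys, the n-graph table grows by a factor of 30 for each key added.

```python
KEYS = 30
digraph_table = KEYS ** 2    # 900 possible digraphs
trigraph_table = KEYS ** 3   # 27,000 possible trigraphs

# Every candidate layout must be scored against the whole table,
# so each evaluation does roughly 30x more work with trigraphs.
print(trigraph_table // digraph_table)  # 30
```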

Posted in Keyboarding Theory, Keyboards | 5 Comments »

Why Only 30?

Posted by Michael Dickens on December 18, 2009

As you may have noticed, my keyboard designs have been limited to only the central 30 characters — on a traditional QWERTY keyboard these keys include the alphabet, period, comma, semicolon and slash. Why have I not expanded my program to include other keys? It is certainly not because those keys are in optimal positions already. Many of the keys outside of the main 30 have the very worst placement. So why not try to optimize them as well?

1. They are too hard to re-learn.

I have tried to learn a layout where all of the keys were optimized, but it did not go well. I found myself completely unable to switch back and forth between it and QWERTY. The layout was simply too complicated, so I ended up putting all the outlying keys back into their original positions.

2. Many of them rely on aesthetics that a computer program won’t notice.

Look at the number keys. They are neatly lined up in an easy-to-remember fashion, but their order of frequency is not so simple, so a computer algorithm would end up completely jumbling them. It would also likely not put the open and close brackets next to each other, along with numerous other aesthetic touches. A computer program would simply miss these little nuances.

3. That program would be harder to write.

Yes, I admit it, I am somewhat driven by laziness. Optimizing the full keyboard would require modifying many parts of the program, and would make the keyboard’s score harder to evaluate. The set of digraphs used to score the keyboards would be larger, hurting both accuracy and efficiency. Evaluating the score would have to take into account all four (or even five) rows, plus the extra keys on the side. The whole evaluation process would be much more complicated, and therefore harder to get right. Overall, I didn’t see the benefits as worth the effort.

Posted in Keyboarding Theory, Keyboards | 1 Comment »

New Keyboard Layout Project: Fast Typing Combinations

Posted by Michael Dickens on December 13, 2009

It’s been a while since I posted anything about the New Keyboard Layout Project. But I recently downloaded Amphetype and have been analyzing my typing patterns, using MTGAP 2.0. So I now have some results, and will probably get more in the future.

Almost all of the fastest trigraphs are either one key on one hand followed by two keys on the other hand, or a roll on one hand in a single direction. Most of the slowest trigraphs alternate hands on every key, and a good number of them are all on one hand in awkward combinations. The fastest words have easy rolls on both hands. The current fastest word, “should”, averages 176 WPM (hint: my average typing speed is about 85 WPM) and uses a combination of hand alternations and easy rolls; typed on MTGAP 2.0, it uses the physical keys that spell “jeaior” in QWERTY. The “ul”/”io” combination is very fast; “od”/”ar” is also very fast, and the gap between the strokes for “o” and “d” is very brief because the two letters in between are typed so quickly. (Does that make sense?)

I will report more fast combinations after the program gets enough data for some better results.

Posted in Keyboarding Theory, Keyboards, New Keyboard Layout Project | Leave a Comment »

Typing Program Release

Posted by Michael Dickens on October 11, 2009

I have made some improvements to the typing program to make it more user-friendly. You can get it at

Posted in Keyboards, New Keyboard Layout Project, Software Release | Leave a Comment »

New Keyboard Layout Project: Program Release

Posted by Michael Dickens on September 12, 2009

You can find my source code at The new and faster algorithm was written by Chris Johnson, a.k.a. Phynnboi.

The algorithm repeatedly returns this result.

y p u c b  x l d , .
i n e s f  h r t a o
j v ' w z  k m g ; q

Fitness:       2263451098
Distance:      9003112
Inward rolls:  7.04%
Outward rolls: 4.48%
Same hand:     22.80%
Same finger:   0.68%
Row change:    9.01%
Home jump:     0.34%
To center:     4.17%

This is a very good layout. Strangely enough, though, if you run the algorithm for longer it comes up with this layout, even though it has a lower score:

y c o u ;  k m d p w
i s e a .  l h t n r
j z ' , x  v f g b q

Fitness:       2263597180
Distance:      9599916
Inward rolls:  7.20%
Outward rolls: 2.20%
Same hand:     16.85%
Same finger:   0.64%
Row change:    7.64%
Home jump:     0.28%
To center:     1.74%

Which is better and why? How can improvements be made?

Posted in Keyboard Release, Keyboards, New Keyboard Layout Project, Software Release | 9 Comments »

Biases of Genetic Algorithms and Simulated Annealing

Posted by Michael Dickens on September 7, 2009

Both genetic algorithms and simulated annealing have a serious problem: they get stuck. Some possible layouts could be very good, but the algorithm will never reach them because doing so requires getting past a certain hurdle first. So how can this be solved?

1. Let a layout live and mutate for a while before you kill it. The problem with this is one of memory. You’re going to need some really huge arrays to hold all of those layouts.

2. Make one or a few keys inactive. This was inspired by a bug which led to some interesting results. The idea here is that you make a particular letter cost nothing, and run the algorithm as normal. If this letter was acting as a “wall” preventing the algorithm from getting to a good layout, the wall will be taken down. Then, after a few iterations, insert the letter again. I tried this out, and it had minimal effect. On the other hand, it didn’t slow the program down by much, either.

3. Make one or a few scoring criteria inactive. This could break down potential walls more effectively than #2, but it is tricky to implement reliably. If you rotate through the scoring criteria, making each one inactive in turn, then the definition of “best” changes every generation and the program never comes to a stopping point. If instead you remove each criterion for a few generations just once, then you don’t know whether it adequately skipped the hurdles. And there’s the added problem that the actual fitness will be reduced, which has to be balanced somehow.
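Idea #2 is simple to sketch. The hill-climber and scoring function below are toy stand-ins I made up for illustration — the real evaluator is far more complex — but they show the mechanism: score with one letter muted for a while, then restore it.

```python
import random

random.seed(0)

# Toy stand-in for a layout scorer: letter frequency times position cost.
# Lower is better. The frequencies and costs here are arbitrary.
FREQ = {"e": 12, "t": 9, "a": 8, "o": 8, "i": 7, "n": 7}
COSTS = [5, 1, 3, 2, 6, 4]

def score(layout, muted=None):
    return sum(0 if ch == muted else FREQ[ch] * cost
               for ch, cost in zip(layout, COSTS))

def swap_two(layout):
    a, b = random.sample(range(len(layout)), 2)
    keys = list(layout)
    keys[a], keys[b] = keys[b], keys[a]
    return "".join(keys)

def optimize(layout, iters=500, mute="e", mute_until=100):
    for i in range(iters):
        muted = mute if i < mute_until else None  # wall down, then back up
        candidate = swap_two(layout)
        if score(candidate, muted) <= score(layout, muted):
            layout = candidate
    return layout

best = optimize("etaoin")
```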

Are there any other methods that could actually work?

Posted in Computer Science, Keyboarding Theory, Keyboards, Math, New Keyboard Layout Project, Programming | 2 Comments »

New Keyboard Layout Project: Keyboard Version 3.11

Posted by Michael Dickens on September 7, 2009

This layout performs significantly better than any other I’ve found. But how good is it really?

Hands: 50% 49%
Fingers: 7% 15% 11% 15% 18% 13% 9% 9% 

. l u c b  q h g y ,
o r e s d  p n t i a
' x ; w z  v f k j w

Fitness:       2084759
Distance:      7386.58
Inward rolls:  6.91%
Outward rolls: 6.88%
Same hand:     26.28%
Same finger:   0.58%
Row change:    12.34%
Home jump:     0.14%
To center:     3.50%

Distance and same finger are phenomenally low. Same hand and row changing could be better. By all measures here, it’s very good. But is it really?

One thing that jumps out at me here is the “ing” trigraph. It is just weird. I practiced with it, though, and it’s actually not too hard. There are some strange words that loop back on themselves like “thingy” or “resurrect”, but I don’t find that to be too hard either, just strange. In fact, MTGAP 2.0 (which I am using right now) has a pretty major loop in the word “themselves”, and that’s not too hard to type.

EDIT: This layout was getting a huge performance boost. Due to a small bug, there were two ‘w’s, only one of which was getting scored. So the layout was essentially 1/30th better than any other layout without the bug. In truth, this is the best layout given the criteria:

y p u c b  x l d , .
i n e s f  h r t a o
j v ' w z  k m g ; q

Posted in Keyboard Release, Keyboards | 2 Comments »

Simulated Annealing vs. Genetic Algorithm

Posted by Michael Dickens on September 7, 2009

When I read about simulated annealing, I considered implementing it instead of a genetic algorithm, but I decided not to. Why? Simulated annealing relies on the assumption that the best layout is next to the second best layout, which is next to the third best, and so on. But this is likely to be untrue: the second best keyboard probably looks nothing like the best keyboard. Genetic algorithms avoid this problem through repetition. The “all star” round contains many very good layouts, so it is more likely to converge on the best layout.

But when I saw some results, I had to reconsider. Simulated annealing is seemingly much faster. But how can we get it to converge on the best possible layout? Could we do something like simulated annealing, but then repeat it, pool all the best layouts, and evolve those using a genetic algorithm?

Chris Johnson sent me some source code, and my empirical tests confirm that simulated annealing is indeed many times faster than a comparable genetic algorithm.

Simulated annealing works sort of like a “smart” genetic algorithm with a pool size of only one. Instead of just making any old mutation, it looks for good mutations to make, which allows much faster convergence. But this strategy, like the genetic-algorithm strategy, can sometimes skip over very good layouts, or even the best one. I will explain in a later post, coming soon.
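The “smart mutation” detail varies by implementation, but the core simulated-annealing loop — occasionally accepting a worse layout so the search can climb out of local optima — looks roughly like this. The scorer is again a toy stand-in of my own, assumed for illustration:

```python
import math
import random

random.seed(0)

# Toy per-letter costs and scorer (stand-ins for a real evaluator):
# lower scores are better.
COSTS = {ch: i + 1 for i, ch in enumerate("etaoins")}

def score(layout):
    return sum((pos + 1) * COSTS[ch] for pos, ch in enumerate(layout))

def anneal(layout, temp=50.0, cooling=0.99, iters=2000):
    best = layout
    for _ in range(iters):
        a, b = random.sample(range(len(layout)), 2)
        keys = list(layout)
        keys[a], keys[b] = keys[b], keys[a]
        candidate = "".join(keys)
        delta = score(candidate) - score(layout)
        # Always accept improvements; accept a worse layout with a
        # probability that shrinks as the temperature cools. This is
        # what lets the search climb out of local optima.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            layout = candidate
        if score(layout) < score(best):
            best = layout
        temp *= cooling
    return best

result = anneal("etaoins")
```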

Posted in Computer Science, Keyboarding Theory, Keyboards, Math, New Keyboard Layout Project, Programming | 2 Comments »
