
To The Point Of Collapse


Saturday, April 18th, 2009
5:50 pm
Input Hotplug Guide
There have been a lot of questions about the new input system (referred to as "input-hotplug" or "i-h") that we've enabled in the latest X upload. Questions range from "how do I configure my keyboard now?" to "why are you making me install all this other stuff I didn't have before?" While the XSF has been dealing mainly with bug triage for the software itself, we've also wanted some documentation on the system so people can understand it.

Thus was born our Input Hotplug Guide. The first portion of the doc explains the rationale behind input-hotplug and how it works. The second section is a HOWTO, explaining how to configure it. Hopefully this will answer some of the questions that are out there right now. So if you're curious or just plain frustrated with the changes happening to the X server's input subsystem I recommend that you check it out.
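To give a flavor of the change for the keyboard question in particular: under input-hotplug, keyboard options move out of xorg.conf and into a hal policy file. Roughly something like this (a sketch from memory; the guide has the authoritative key names and file paths):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- e.g. /etc/hal/fdi/policy/10-keymap.fdi: set the XKB layout for
     every device that reports keyboard capabilities. Exact key names
     can vary between hal versions, so treat this as illustrative. -->
<deviceinfo version="0.2">
  <device>
    <match key="info.capabilities" contains="input.keys">
      <merge key="input.x11_options.XkbLayout" type="string">us</merge>
    </match>
  </device>
</deviceinfo>
```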
Sunday, April 12th, 2009
8:18 pm
How To Follow Major X Issues In Unstable
It's been a long time since I've done any reasonable work on X stuff, so it's also been a long time since I've blogged. But Julien and Brice* just finished off the major task of getting 7.4 into unstable, and I've been trying to help out with some of the inevitable fallout from that. Overall, the transition has been relatively smooth, although a few major classes of bugs have popped up. Since dealing with the BTS is rather complicated and doesn't lend itself to simple summaries, I created this wiki page for the team to track the major classes of bugs currently plaguing unstable. This isn't meant to track every bug (that's what the BTS is for), but for major regressions or problems with new features, we'll be trying to use this page to give ourselves and users a way to easily see what's going on with some of the more recent problems. And yes, this does include the infamous hal depends bug, which we are currently discussing, and will deal with accordingly.

* Edit: Removed by request
Friday, September 19th, 2008
12:52 pm
I Didn't Think I'd Ever Be Celebrating This Victory
This is a wonderful day. The various non-free bits of X donated by SGI have, thanks to the efforts of those at the FSF, been relicensed under the MIT/X11 license. Congratulations to the FSF for pushing this very difficult task through. I've been told numerous times whenever I tried to take a stab at the issue that it was "tilting at windmills," but the FSF persevered and made the impossible happen. This should be evidence for the naysayers that the FSF is out there doing the really hard, and all too often thankless, work that helps keep our software Free.
Thursday, August 14th, 2008
11:41 am
First Paper
It's out!

Current Mood: happy
Wednesday, July 16th, 2008
10:54 am
Goings On
I've been rather disconnected lately, trying to finish my PhD, find a job, etc. I got permission from my committee to start writing my thesis a few weeks ago, so I've been trying to get that in gear, as well as finishing up data for publication. This should all be done by November, if all goes well, so I can get back to spending more time on the things that I love.

I've tried to stay somewhat current with what's going on, and there's been a noticeable change over the past couple weeks in the tone of discussion around the community. I've personally been fascinated by the appearance of two things: the Linux Hater's Blog and the debate about Gtk 3.0. What's striking about both of these things is that they focus very much on the more consumer-oriented side of Linux. It's all about pleasing the independent vendors and grandma, and not about doing cool things. This is a huge shift from a few years ago. When I (and I assume many of us) got started with Free Software it wasn't really about these things, but more about getting your own work done and less about pleasing other people. Pleasing others was good of course, but it wasn't really expected. Just getting the system up and running was cool at the time, but using it exclusively for all your work? Only if you were in the right line of work!

We've come to a point where we expect a hell of a lot more though. We've got very vocal community members who want to spread Linux far and wide, and they want to do it today. And arguably, Linux is ready for it. We have good software that works rather well, can be easily installed and set up, and will run most of what people need. Yay us. On the other hand, after spending the better part of the decade using Linux on the desktop I'm finding that I agree with almost everything that the Linux Hater's Blog says. It's hard to argue with the truth, and the truth is that things are still difficult for people. I've spent the last few years trying to make X in Debian easier for people to deal with, and I've barely made a dent in just this one problem. And there's plenty more to pick and choose from. Sure, you can talk about how Windows and OSX have problems too, but we can't just be as good as them. We have to be better if we want to spread Linux and Free Software far and wide.

But do we really want to do that? Well, to be honest, I don't think it matters. Whether or not you care about grandma using Linux, we all want systems that work well and are easy to manage. Currently there are a lot of things in modern distros that could be a hell of a lot better, and many of them are directly related to fundamental assumptions we've made that don't hold up as well as they should. Those assumptions lead to lots of extra work and a sub-par product. We can do better, and we should. If we had the absolute best system, world domination would be a natural side effect.

That's why I think that things like this need to stop and that we need more things like this. Sure, one is a hell of a lot harder, but no one cares if you solve an easy problem. It's the hard ones that matter, and provide the real payoff in the end. We need a better system to stop the hemorrhage of developers to OSX. When Miguel talks about how the people pushing for gtk 3.0 are all using OSX, I get very worried. If we want to be in control of our own destiny then we need to face our problems head on, and solve them.
Wednesday, June 25th, 2008
8:28 pm
Things To Do When You're Busy Trying To Graduate
I've been too busy to do any serious free software work lately, but here's my "low time commitment TODO list":
  1. Give xconq some serious love. That includes bugfixes, updating to the new tk, and a total repackaging with more standardized methods.

  2. Write more docs for X. I think we badly need a simple well integrated user's guide, followed by better high and low level internals documentation.

Neither of these requires a consistent time commitment (and there are no mailing lists for me to follow), so I hope I'll be able to get some work in on them soon.

Current Mood: busy
Saturday, June 7th, 2008
2:25 pm
The Cognitive Load Of Web Development
The most dynamic segment of the software industry that I'm aware of is web programming. Flamewars rage about frameworks and browsers and standards and whatnot, and the openness of the whole thing recalls some of what made me fall in love with computing as a whole and Free Software in particular. The problem with all that openness is that it's rather difficult to navigate. Aside from the obvious issues with browser incompatibilities, there's an enormous number of software solutions on the server side, spanning languages, libraries, and frameworks. Each one is an ecosystem unto itself, often large and complicated.

This makes it rather difficult to do certain aspects of web programming well. You are constantly switching between tools and languages, as well as coding paradigms, in order to build a web app in full. You'll be writing HTML, CSS, and Javascript on the client side, each with its own peculiarities. On the server side you'll probably have settled on one language with its own very different varieties of coding style and patterns. Some, like Ruby on Rails, are their own unique brand of language that makes you work both in the DSL (Rails in this case) and the parent language (Ruby here). Template engines often will do the same thing, such as in the case of Django.

It's pretty obvious that having to juggle all these tools places a fairly large burden on the coder. Much to their credit, it's not difficult to work in any of these languages, but to work in them at a high level becomes more difficult the deeper and wider the software stack reaches. You get this a bit in systems programming, juggling C/C++ with a build system and a scripting language, but that's usually it. You can get that number of moving parts just on the client side of the web with HTML, CSS, and Javascript.

Something interesting that seems to be happening is that people are moving to decrease this cognitive burden. One good example is the Google Web Toolkit, which lets you write all your AJAXy client-side stuff in Java and have it compiled into Javascript. That way you cut down the number of languages you have to work in. A similar concept is behind Microsoft allowing Ruby (and Python, if I've heard correctly) in Silverlight. Alternately, you can bring Javascript (back) to the server side, which is the motivation behind Helma and Rhino on Rails (the latter of which I hope sees the light of day).

Personally, I'm rooting for Javascript to not become an assembly language, but to take over the server side again. It's a capable and powerful language (much better than Java in my opinion), and we're collectively leveraging it very poorly. Rhino is interesting and has enormous potential, but it needs library bindings in the worst way. Currently it's basically using Javascript to write Java, which is rather atrocious (COBOL in any language, etc) but with some Javascript-style wrappers around the Java classes it could be phenomenal. Alternately, there's Spidermonkey, which could be wrapped in a more capable shell with a good FFI, and we could easily have wrappers to our favorite Free Software libraries. This project seems far along here, but it's currently very Windows-centric.

There's a lot of potential here though. Unifying the development language for both web and system apps would benefit the Free Software desktop, and give us the ability to better integrate our stuff with the web. One of the best things about Free Software is that it dropped the barrier to entry in software development. I think we can repeat that success again here.

Current Mood: optimistic
Saturday, May 31st, 2008
10:39 am
Ruby Stuff
So Avi Bryant finally showed off the work he's been doing with the Gemstone folks at Railsconf, and it's made quite a splash. With performance improvements like that it shouldn't be a surprise. The most interesting thing about it to me though is that it's the first time in a very long while that we've seen a proprietary implementation of a major tool absolutely destroy all the Free implementations. We've had things like Intel's C compiler outperforming gcc before, but nothing on this level, especially because the main ruby implementation is so notoriously slow. Just another feather in the cap of Smalltalk's long legacy.

What's troubled me for some time about the post-Rails Ruby community is that it has a distinct bent away from its Free Software roots. I understand Matz actually used to use Debian Unstable (not sure about today), and Ruby traditionally displayed its roots quite strongly, with a Perl heritage and a community consisting largely of hardcore *NIX people. With the advent of Rails, the move has been towards things like TextMate and OSX. Software like Gems (no relation to Gemstone) fits in fine with one of these systems, but not so well with modern Free Software systems, and I think it's symptomatic of the change. Given this propensity in the Ruby community, and given the numbers Gemstone is posting, I'd be surprised if lots of Rubyists don't move that way as soon as it's available.

Given all this, I really have to wonder if the modern Ruby fits me any more. I generally think it's important for the Free Software community to support itself first and then try to grow out from there, and the Ruby community isn't really on this path right now. That's fine, there's nothing wrong with it, it's just not something that really interests me. It does make me wonder how something like this could happen, and it really comes down to the fact that a lot of smart people who might otherwise be really passionate about Linux systems are choosing OSX instead. Maybe it's hardware support (I hope the modernization of X will help here) and maybe it's just the whole package being nice and easy to use. Whatever it is, people are choosing it and we're the poorer for it.
Monday, May 26th, 2008
6:08 pm
Card Games
A few months back, keithp introduced me to Treehouse/Icehouse as a generic system for gaming, like playing cards. Recently I realized that I didn't actually know many card games, and most of those that I did know I'd forgotten long ago. So today I spent a good chunk of the afternoon learning a few new games and playing them. John McLeod's site was the first hit I found on Google, and it's a phenomenal resource full of games from all over the world. The games that I learned or re-learned today were Casino, 500 Rum (which is a Rummy/Gin Rummy variant), and German Whist.

It's striking how many games are generally unknown these days. We have so many other forms of entertainment available that we've collectively forgotten how to play most of these games. It's fun to be reminded what you can do with a simple deck or two of cards though.

Current Mood: content
Sunday, May 4th, 2008
7:23 pm
Upside Down
Things have been strange lately. I've been taking an official indefinite vacation from Debian because real-life priorities have made it impossible to participate in the project at the level that I want, which just led to frustration. Mainly, I just haven't had the time to work on X properly, so I'm leaving it to the rest of the XSF for a while. This makes me sad, but they've grown into a fantastic team so I know they'll keep doing a killer job while I'm away.

I have had some time though, just not enough to follow two major projects like X.org and Debian, so I've been trying to leverage it in productive ways. Perhaps the most notable is that I've been spending most of my time coding Smalltalk using Squeak now that it's in Debian. Squeak has its flaws, but it's very fun to work with and Smalltalk is probably my favorite language at this point, displacing Ruby. This shouldn't be too big a surprise, since Ruby consciously inherits a lot from Smalltalk. Squeak is a good environment, but I don't feel too compelled to write desktop apps in it because they feel so displaced from the rest of the system. GNU Smalltalk will hopefully fix this in time. As a result though, I've been doing my first bit of web coding in a very long time. The main reason for this is the incredible Seaside framework built on Smalltalk. It's been a lot of fun to play with, and although it's very underdocumented, there's enough out there (especially with the new book) to get going. Right now I'm trying to learn AJAX techniques as well, which is something I never thought I'd be doing.
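To show why Seaside is so much fun to play with, here's the canonical counter component, sketched roughly from memory (class-definition details may not match the current release exactly): callbacks are plain blocks, with no URL parsing or page-flow bookkeeping in sight.

```smalltalk
"A minimal Seaside component: the classic counter."
WAComponent subclass: #Counter
	instanceVariableNames: 'count'
	classVariableNames: ''
	category: 'Scratch'

Counter >> initialize
	super initialize.
	count := 0

Counter >> renderContentOn: html
	html heading: count.
	html anchor
		callback: [ count := count + 1 ];
		with: '++'.
	html space.
	html anchor
		callback: [ count := count - 1 ];
		with: '--'
```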

As a result of learning Smalltalk, combined with past experiences, I've been trying to learn emacs, and switch a lot of my text editing over there. I came to realize that the reasons I chose vim over emacs many years ago aren't really holding up any more, and now that I've got a lot more of both UNIX and coding under my belt, it's time to reevaluate the decision in a more intelligent way. Vim has been great to me over the years, but I found it constraining when trying to work on very large and difficult trees like the X server. I'm hoping that emacs will do a little better, since I like the way it handles multiple buffers better. Additionally, the purity of Smalltalk "all the way down" has made me appreciate the emacs architecture of lisp (almost) "all the way down" and I'm looking forward to making use of it. It's been a painful transition so far, given the years of muscle memory I'm trying to change. I've been avoiding viper mode to really try and learn emacs, which has made it even harder. I don't know if I'll end up using emacs after all is said and done, but at least I'll have a better idea of how the two major editors really compare for my own use.

With all these changes things have been a little strange lately. Debian was the rock that I've clung to over the past few years, and not being totally entrenched in it has felt unnerving. Combining that with very new and different ways of working has been a rather large change. One thing is for certain: it's been very good for me to take a break for a while and work on small things at my own pace rather than try to keep up with large projects. People burn out all the time in the free software community, and I think that disconnecting and working on small fun things is a great way to heal.

Current Mood: calm
Monday, March 24th, 2008
7:36 pm
Little Things
Still putting almost all of my brain and time into trying to graduate. This has left me shockingly little time for things like working on Debian.

Part of the reason why I haven't been working on more visible things (X, Debian) is that I haven't had the solid blocks of time to devote to reading email. I think I need to unsubscribe from all but two or three lists so I don't end up paralyzed by the email onslaught the way I am now. I need to get back to slinging code.

My iPod completely crapped out on me a week and a half ago. I plug it into my computer and it's totally dead. It was only a little over a year old. Massive fail. Fuck you, Apple. I've purchased a 4GB Cowon U3, which plays oggs natively and explicitly "supports" linux. We'll see how that goes, but after one day of use I'm pretty happy.

The entry of Squeak into Debian and my own realization that GNU Smalltalk exists for those of us who might not want to live inside the smalltalk environment have prompted me to look at the language for the first time. A half hour a night (about a third of the time it takes me to read my email on a very good night) has only made me more excited about this language. If you haven't taken a serious look at Smalltalk yet, it's worth your while. It's as if all sorts of things that we can only approximate and dream about are not just a reality, but were invented back in the 70's and have been here all along. It's made me seriously question some of the directions that Linux and Debian have gone in.

Current Mood: listless
Saturday, March 15th, 2008
1:48 pm
Random Bits
I've been extremely busy with labwork lately, trying to get a second paper ready for submission in the coming months. It's taken an obvious toll on my free software work, but that's life, I guess. The upside is that the other night I realized why a year and a half's worth of experiments kept turning up with negative results that seemed to contradict our other results. It turns out that this experiment was conceptually flawed, and even though I wasted a year and a half on it I did learn something even from those negative results that should go into the final paper. I'm actually relieved, because even though I don't have all the data yet I can really see how the paper will end up, so I can start writing soon. Oftentimes knowing why something is wrong so you can fix it is even more important than being right.

I set up an ikiwiki to help organize my lab data a few weeks ago, and it's been a huge win. Joey really did a fantastic job on the program and I can't imagine using a wiki that doesn't share a very similar design for any of my personal stuff. I showed my boss some of my data using it and now he wants me to set up a wiki for the lab. He actually wants me to show off my wiki to give my lab members a feel for the idea. Since ikiwiki doesn't seem to be appropriate for my lab members, I'm going to have to look into the giant list of wikis to figure out the exact right one for my lab's requirements. This is something I never expected to be doing in grad school.

Because there's no way my brain can go without doing a little bit of programming, I finally sat down and wrote myself a small shell script that lets me grep through pdf's, which is something I couldn't find googling around. If you have a lot of pdf's like I do and don't want something like beagle or tracker churning away in the background constantly just so you can search them, this is very helpful. It's obviously very crude, but it gets the job done nicely. It relies on the pdftotext program; hopefully it'll be of use to someone.
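The script is tiny; here's a sketch of the idea (my reconstruction rather than the original, and the function name is my own): print the name of each PDF whose extracted text matches a pattern.

```shell
#!/bin/sh
# Grep through PDFs by piping each one through pdftotext (from
# poppler-utils); "-" tells pdftotext to write the text to stdout.
# Prints the name of every file that matches the pattern.
pdfgrep() {
    pattern=$1
    shift
    for f in "$@"; do
        if pdftotext "$f" - 2>/dev/null | grep -q -- "$pattern"; then
            printf '%s\n' "$f"
        fi
    done
}

# usage: pdfgrep 'transcription factor' ~/papers/*.pdf
```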

I've been digging into my copy of UNIX Power Tools lately, trying to pick up new UNIX and vi arcana, which is something I haven't made a real serious effort to do in a long time. It's been surprisingly fun to try and replace old habits with things that I know are better but don't use. A good example is that I usually will fire off a new subshell in vim when I should really just suspend vim and use my original shell to save time. Re-examining the basics of how I use the system has been a lot of fun, and has made me a lot more comfortable with the simple day to day tasks.
Saturday, March 1st, 2008
9:41 am
Shell Scripting
I've never bothered to learn the gory details of shell scripting. This is somewhat embarrassing for me, although at the same time I don't feel like I've really suffered for the decision. I know enough of the basics to do some cool things (for loops, if tests and whatnot) but the syntax is fraught with such problems that it feels like a waste of time to do more. Obviously, others have felt similarly, which is why perl was invented.
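For the record, the basics I mean are about at this level, wart included (a made-up example, not anything from a real script of mine):

```shell
#!/bin/sh
# A for loop and an if test: the cool-things level of shell. The wart
# on display is word splitting: an unquoted variable expansion gets
# split into separate words, a quoted one stays a single argument.
count_words() {
    set -- $1          # $1 unquoted on purpose, so it splits on whitespace
    echo $#
}

phrase="to the point of collapse"
count_words "$phrase"   # word splitting turns the one argument into five

for f in alpha beta gamma; do
    if [ "$f" = "beta" ]; then
        echo "found $f"
    fi
done
```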

I've become more interested in it again recently though, mainly because I've been reading a lot of Kernighan. I don't want to deal with Bourne syntax though, which eliminates the only two shells I've ever spent serious time with in my life, bash and zsh, as well as ksh. I know csh scripting is considered harmful 'n shit, so that's right out.

Since I only want to shell script for local use, I don't need to worry about portability so I can try using more exotic shells. I'm also willing to script in a different language from the shell that I use, so the doors are really open. There's scsh if I want to write scheme, zoidberg if I want to write perl, rush if I want to write ruby.

What looked most appealing is rc though, which has a very nice Bourne-like syntax that's not nearly so warty. It was reimplemented before plan9 was released, and the reimplementation is in Debian, although I ran into some undocumented differences with the plan9 version very fast. It may be worth trying out the plan9 version at this point to see how that goes.
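For the curious, a taste of rc's syntax, written from memory of the plan9 documentation (so the Debian reimplementation may differ in details):

```rc
#!/usr/bin/rc
# Variables in rc are lists of words, so most of Bourne's quoting
# headaches disappear: $files expands to exactly three words here.
files=(notes.txt 'a name with spaces' draft.txt)
for(f in $files){
	echo found $f
}
# ~ is the match builtin; $#files is the length of the list.
if(~ $#files 3){
	echo three files
}
```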

Any other suggestions from the lazyweb are greatly appreciated. It'd be nice to have this aspect of my toolbox be more solid.
Thursday, February 21st, 2008
9:36 pm
Two Fun Things
1. After several years of it sitting on my shelf, I finally read Kernighan and Pike's "The Practice of Programming". I was always a little intimidated by this slender little book. I felt like it was full of wisdom that I couldn't really absorb at the right pace. Now that I've spent some time working on more challenging code and trying to come to grips with at least the basics of more advanced computing concepts, it was the right time to read it. I can't recommend it highly enough; it really is packed with wisdom. I've got Kernighan's "Software Tools" on the way, and I'll post a proper review of that one at some point since it's not as well known.

2. I decided to spend a little bit of time poking at the almost-but-not-quite-dead language Dylan. I remembered being curious about this language back when I was a serious Mac child in the mid 90's, since Apple was hinting that it would be the future. While the Gwydion Dylan implementation was removed from Debian, there are binaries for both it and the OpenDylan implementation that can be downloaded and run on both Sid and Ubuntu.

What I can grasp of the language is interesting. It's sorta like what little I know of scheme, except that it's got an object system that is apparently like CLOS from Common LISP, which I also don't know. Most notable, though, is that even though Dylan is considered a LISP, it doesn't look like one at all. There are no parentheses except what you'd expect, and things are infix rather than prefix, so it looks more like algol-derived languages like C. It's strikingly easy to read, and while the object system is very different from C++-style, it's fairly easy to grasp because it relies on familiar concepts like multiple dispatch to work. It's optionally typed, which is something I've been after (give me safety when I want it, but don't force me!) and provides all sorts of cool things like closures. Additionally, it does allow hygienic macros, which are apparently LISP's claim to power and fame, although I don't understand such things yet even in Dylan. Despite all this, the language appears to be relatively simple, which seems nice. It seems to be a very well designed language that was allowed to languish in obscurity because Apple abandoned it*.
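A small sketch from memory of what this looks like in practice (syntax details may be slightly off): infix, algol-looking code, but with CLOS-style generic functions underneath.

```dylan
// area is a generic function; each method is chosen by the class of
// its argument (multiple dispatch). The :: annotations are optional.
define class <shape> (<object>) end;

define class <circle> (<shape>)
  slot radius :: <double-float>, init-keyword: radius:;
end;

define class <square> (<shape>)
  slot side :: <double-float>, init-keyword: side:;
end;

define method area (c :: <circle>) => (a :: <double-float>)
  3.14159 * c.radius * c.radius
end;

define method area (s :: <square>) => (a :: <double-float>)
  s.side * s.side
end;
```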

Unfortunately the implementations aren't really all there yet. I ran into obscure compiler bugs in gwydion writing something as simple as quicksort, and gwydion doesn't really have much of a standard library yet, although it can apparently bind to C quite well and fairly easily (this is something I haven't tried). In addition, there's OpenDylan, which is even less there on Linux, because it was an open-sourced commercial compiler (derived from the original Apple one, I believe) and it's had fewer people working on it. It seems to have a very large library, but its documentation is sparse and I haven't had any time to really go after it. I've really been using gwydion exclusively, although I'd love to see OpenDylan get up to speed. While I have no time to really work on anything outside of X right now, this seems like a good language that deserves some attention.

* This is a familiar story to all of us who lived through Apple's mid-90's
9:20 pm
Where I've Been, Where I'm Going
After about two weeks I'm finally able to come back to Debian, which feels good. After my last package upload was botched (and gracefully recovered by my teammates) I realized that I needed to step away for a little bit.

I'm currently in my fifth year of my PhD program. For those who don't know how such things work in the US, I have a small committee of people (my thesis committee) who act as advisors and gatekeepers on my work. They're the ones who gauge my progress, and ultimately give me permission to write my thesis and receive my doctorate. I'm supposed to meet with them every 6 months, and it had been over a year since my last meeting. Because I'd like to actually graduate one day, I was rather nervous about having the meeting. I was more nervous than I realized, and the botched upload reflected that.

Anyway, to make a long story short, the meeting went as well as I could have hoped for and my committee seems to be happy with my progress. I'm looking to get a paper out in the spring, and they'd like to meet with me again in three months' time (which will absolutely fly by) to assess where that paper is at, and how things are going. At that point, I wouldn't be surprised if they tell me to write my thesis and graduate.

What I do after my dissertation work has been weighing on my mind for years and years, and it's obviously coming to a head. My boss will let me stay on as a post-doc in the lab for a while, which I may do depending on other job opportunities and progress on my projects. Ultimately though I need to make a decision that's familiar to a few others in my position. Do I stay in science or go try and work on free software for a living? I honestly don't have an answer yet, because I love them both.

I see tons of incredible opportunities in my field right now, and while I'm not the only coder interested in synthetic biology, I'm one who's positioned to get in on the ground floor of the science in a very real way and have a significant hand in shaping its future. On the other hand I spend my free time devouring computing literature, not biology literature, as well as producing software and things related to it. I do this because I enjoy it, and I know I enjoy it on a deeper level than the biology. Would I enjoy it this way if I did it for a living? I've got no idea, since I've never coded professionally.

I'm coming up to the fork quite fast now, and I'll have to choose soon. I know I'm lucky because I do have the chance to choose for myself, but that doesn't make it any easier.

Current Mood: contemplative
Saturday, January 19th, 2008
7:28 pm
On Visualizing Biological Data
The following is a brain dump of some of the things I've been thinking about lately.

One of the biggest changes in biology over the past several years has been the incredible deluge of information, and in response there's been a rise in bioinformatics to cope with it. While this has led to some major successes, where it's failed is in its ability to impart a greater understanding of the subject at hand. Biologists still learn primarily from reading papers, the same way we always have. There are massive databases full of wonderful information, but most of it is encoded with minimal or no context, so you're always forced to go back to the papers to understand what the database is actually telling you. In that sense, these databases are fantastic at indexing information but very poor at organizing it in a way that teaches people about the topic at hand. We're still forced to slog through papers for just about everything.

What's striking about this is that the most informative bits in any biological paper I've ever read are encapsulated in the figures. The images themselves, provided you have sufficient background knowledge, show the basic data and give you the most understanding for the smallest investment of time. You can skim an article's abstract, its figures, and its figure legends and gain a fair understanding of the topic before deciding whether or not to go further.

Now, there's a contradiction of sorts here. Many of these figures are generated via computers, usually an excel-made graph. The rest are actual photographs of things, such as blots, gels, or stained tissues, eventually inserted and processed via the computer. The contradiction is that the computer is used intensively to organize this data for publication, but we have a hard time extracting the essence of that visualization for indexing in "big picture" sorts of ways. That almost always has to be done by hand (and brain) by the biologist. This, of course, is suboptimal when you have thousands of individual genes.

The fundamental reason for all of this is that biological information depends wholly on context. For example, you can sequence the whole genome, but it's totally unclear what genes will be expressed at any given time unless you have much more information about the cell type, developmental stage, pathologies, and so forth. As far as I can see, all our bioinformatics tools have failed completely at providing any sort of context for their information. A common thing to see is so-called "wiring diagrams" that display molecular interactions. These diagrams are so full of nodes and edges that they look almost impossible to understand. While there is a great deal of complexity, contextual information provides us with a framework to understand what's actually working. Looking at these diagrams there's no sense of this, though; it looks instead like complexity overrun.

So that presents us with the challenge for the future. What's required of bioinformatics is to not only index the raw data, but also the context, and then present the data to us in a context-dependent manner. I am convinced that the key to presenting data this way is to come up with novel visualization methods because it's the visualizations in the papers that we use today to get the most out of our time. I believe that this problem is tractable and that there is a solution. More than likely we'll need several solutions, and I look forward to seeing them develop.
Wednesday, January 16th, 2008
7:02 pm
Maths Break
I've been mentoring a high school student in biology for the past few weeks. The other day she came to me with a math problem that she was having trouble with. Looking at this relatively simple logic problem, I realized that when faced with such a thing these days I'd rather write a program to solve it, because that's way more fun. So later on I wrote a little Python script to calculate all the possible correct values for this problem up to any given value. I'm not sure if the realization that I'd rather use a computer to solve even simple problems is a good or bad thing, but I bet I would have liked math class a lot more if this is what I had to do for it.
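The original problem isn't reproduced here, so as a purely hypothetical illustration (the constraints below are invented, not hers), a brute-force search of this kind tends to take roughly this shape:

```python
def satisfies(n):
    # Invented stand-in constraints: divisible by 3, not by 5,
    # and the digits sum to an even number.
    digit_sum = sum(int(d) for d in str(n))
    return n % 3 == 0 and n % 5 != 0 and digit_sum % 2 == 0

def solutions_up_to(limit):
    # Brute-force every candidate value up to the given limit.
    return [n for n in range(1, limit + 1) if satisfies(n)]

print(solutions_up_to(50))
```

Part of the fun is that the check reads almost exactly like the problem statement, and changing the limit or the constraints is a one-line edit.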

Current Mood: calm
Monday, January 7th, 2008
11:00 pm
A few years back I asked Branden Robinson for access to the X Strike Force SVN repository in order to improve the X server's autodetection, with the goal of stealing what I could from knoppix. At the time, users were constantly wandering into #debian saying that they had used knoppix to create an XF86Config-4 and then used that file on their Debian installations. They were also constantly whining about how Debian wasn't doing as good a job. So I decided to do something about it and took a look at what knoppix was actually doing so much better than us. I was surprised to find that they pretty much just wrote a skeleton configuration file and let the X server fill in the details. I had no idea the server was capable of such things. So Branden graciously gave me SVN access and I began comparing the knoppix method to the script called by "dpkg-reconfigure xserver-xfree86", and realized that we probably couldn't adopt the same method because we had tons of checks for portability. So I put the problem on the back burner for a few weeks and worked on something else for a while.

Well, somewhere in between then and now I got sucked into transitioning Debian to X.org (using the aforementioned SVN access) and then into working on all the things that went along with maintaining X in Debian, some well and others less so. Between transitioning to Xorg, and then transitioning to a modular Xorg, even with the ability to steal Ubuntu's packages it still took about two full years, with me being a rookie and all that. There were tons of people who worked on this with me, but it was just a damn big job. Eventually, though, etch released with modular X packages, and we were running at a pretty good pace with upstream, so it was time to revisit the problem of configuration after the two-year detour.

The problem of configuration had been reframed for me by looking at what knoppix was doing and by discussions with upstream. Upstream was starting to come to the conclusion that we shouldn't have a config file at all, and that the server should be smart enough to do everything by itself. It was already partially there; it just needed a push in the right direction. This was a major shift in how I thought about it. Coming from a Debian background, where the answer is always to regenerate or edit your config file, having the server work things out for you was a totally alien idea. But I have my deepest roots in the Macintosh world, so I immediately fell in love with it. The problem was that the server had a whole body of code to use if you had no config file at all, and another, with far less automagic goodness, if you had a config file, even if it was a 0-byte file. The goal became to have the server work really well with a minimal config file, so you could override what you don't like in the defaults and let the server figure the rest out for itself at boot. The way forward was to translate as much of the logic in our configure script into the X server itself as possible.

Ubuntu had put a lot of work into the configuration setup that we automatically benefited from, so given that most users were happy with things as they were, I was able to carve away at the problem without disrupting anything. And there was a lot to carve out, as the script that runs the config is an absolute mess that was slated by Branden for a rewrite all those years ago. Early on I picked off the low-hanging fruit like the font path and modules. Similarly, Red Hat's Adam Jackson had also been working on this problem, and he killed off the ServerLayout section as well as putting in lots of critical fixes elsewhere. More recently, I've gone and cut explicit modesetting out of the configure script. Hardcoding this information is generally a bad idea in the RandR 1.2 world, and most of the drivers will do as good a job or better of figuring out the modes than our config script would. This let us jettison the xresprobe program, which we used to ask the monitor for the settings to use. This cut out a lot of the code that we had to deal with, simplifying the configure script and letting us all benefit from upstream's work. It leaves a big gaping hole in user configuration, which is something I'm looking to address in a few weeks, but for now there are workarounds, like editing your xorg.conf manually to make things work.

Finally, yesterday I was able to upload a version of the script that no longer uses the discover program to figure out what driver to load. I've patched the server to do this at runtime: if there's no driver listed, it simply scans the PCI bus, picks out your primary video card, and loads the first driver that claims to support that PCI ID. This let us jettison the last external dependency that the configure script had, so now we have a relatively small chunk of shell script with no external C code and a simplified setup. At this point we're shipping a more skeletal config file than knoppix ever did, simply because it wasn't possible to ship something like this before and have the server work at all. This is a huge milestone for me because finally, after over three years, the problem I originally set out to work on with X is done. There are still bugs to uncover and fix with all of this, but I'm convinced that what we have now is superior to our old method. Eventually, with any luck, xorg.conf will just fade away for most people. There's a lot to work on before that happens, but I'm happy to have finally gotten here.
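To make "skeletal" concrete: in this setup an xorg.conf can be little more than empty section stubs, with everything omitted being autodetected at runtime. The exact file we ship isn't reproduced here, but a sketch in this spirit (the identifiers below are illustrative, not the shipped defaults) looks something like:

```
Section "Device"
	Identifier	"Configured Video Device"
EndSection

Section "Monitor"
	Identifier	"Configured Monitor"
EndSection

Section "Screen"
	Identifier	"Default Screen"
	Monitor		"Configured Monitor"
	Device		"Configured Video Device"
EndSection
```

If you do need to override something, say force a particular driver, you add just that one line (e.g. a Driver entry in the Device section) and leave everything else to autodetection.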

Current Mood: accomplished
Sunday, January 6th, 2008
11:28 am
First In A Totally Disconnected Series of Posts
I want to echo what Brice has said so well already. X is full of interesting problems and important things to work on. When you consider the free software that's out there, very little of it comes close to the importance of X in your day-to-day life. When we've done our job well, you don't even think about it because it just works, but when X fails it's like a minor catastrophe. Working on the kernel is largely considered to be something glorious and grand, but people tend to ignore X even though it plays very much in the same domain as the kernel. So working on X, you not only get to play with interesting problems similar to the ones that the kernel folks get to play with, but you also get to work on software that's critically important and makes a significant difference in people's day-to-day lives.

Furthermore, X.org is a great organization to work in. The number of core contributors is small (maybe 20 all told), so everyone knows who everyone else is, and people are willing to help others out by answering questions about how things work. Reflecting this, the XSF has turned into a fantastic team over the past few years, and it's one I couldn't be prouder to be a part of. Julien and Brice are doing insanely fantastic work, and we're constantly pushing to do a better job at just about everything, from collaborating with Ubuntu to using the best cutting-edge tools out there for our work.

Despite all this, though, as Brice said, we're overwhelmed by everything. We're still carrying hundreds of bugs in the Debian BTS alone, most of which we'll never have a chance to look at again. As mentioned elsewhere on Planet, upstream is totally swamped as well. There are just too many critical projects that need doing (new drivers, fixed drivers, adding capabilities to the server, and that's ignoring day-to-day needs like documentation and patch review) and too few people to do them effectively. We have a wonderful set of things to do and a great group of people to mentor motivated newcomers, and yet we're still lacking contributors to this critically important set of software.

So as Brice said, if you want to get involved in something that's essential to Linux and Free Software, something with great problems and really fantastic opportunities to make a difference, you can't do much better than getting involved in X. Brice's method of getting involved by processing bug reports is a perfect way to start. Another is to write much-needed documentation. Another is to simply help the XSF maintain some small piece of the Xorg stack, be it a driver for some hardware that you own (the XSF only really runs two or three video drivers collectively, so we badly need driver maintainers), or just start poking at something you're curious but mystified about, like how mesa or the X server works. There's no magic to any of it, and while it can get complicated, I promise that you will become a better coder or maintainer by working on something as challenging as X. You don't have to understand it all from the start; just picking a small thing to work on will be greatly appreciated. So drop us a note on the debian-x list or drop into #debian-x on OFTC and we'll help you get going.

Current Mood: awake
Saturday, December 29th, 2007
6:18 pm
I've been taking a little bit of time away from Debian and X to see what sort of fun stuff is out there that I've been putting off learning. Aside from finding what looks to be a good fitness training plan for the next year (and hopefully beyond), I've been looking at different programming languages. One that I found most recently is Processing.

This program is absurd amounts of fun to play with. I've long been interested in how to display complicated data sets visually, and Processing is really built to let you do that relatively easily. More and more often in my work lately I've been generating fairly large data sets, and I'm not really satisfied with the standard graphs that come with a spreadsheet. They work for basic data, but they completely fall down when you need something bigger. You can either add more graphs or just have a large table, both of which are suboptimal. Processing is centered around the concept of sketches, where you quickly type some Java code into the IDE, hit the play button, and have it run. The API is very simple and should be intuitive to anyone who's done any coding before. This lets you play around with how to visualize your data at very low cost.
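To give a flavor of what a sketch looks like (this is a toy example of my own, not from the Processing docs): the whole program is just a setup() and a draw() function, and the data-to-picture mapping is a couple of lines.

```
// Toy example sketch: draw a tiny bubble plot of a small data set.
int[] data = {3, 7, 2, 9, 5};

void setup() {
  size(400, 100);  // open a 400x100 pixel window
}

void draw() {
  background(255);  // white background
  for (int i = 0; i < data.length; i++) {
    // one circle per value, sized proportionally to the value
    ellipse(50 + i * 75, 50, data[i] * 8, data[i] * 8);
  }
}
```

Paste that into the IDE, hit play, and you get a window of circles. Swapping in your own data or mapping values to color instead of size is a one-line change, which is exactly the kind of cheap experimentation I mean.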

The software is GPL'ed, and while it relies on Java, this shouldn't really be a problem in the near future. Unfortunately, Java is Java, and to be honest I'd rather be writing something like Python or Ruby. Options for those seem to be forthcoming (NodeBox is apparently being ported to Linux, and scribble is out there in some form now), but Processing has an impressive and easy setup as it is. I'll definitely be picking up one of the new books when it's available.