So my ZaReason notebook decided to break (actually it had been breaking for a while; the case is made of really terrible material). I’d been looking to buy a Linux-preinstalled laptop, but finally saw a sale on a Lenovo U460 and decided to just get it. The machine is very nice and essentially everything works. I installed the newest Fedora alpha and updated to the latest bits, so I have GNOME 3 here.
The experience is not entirely positive. GNOME 3 is a solution in search of a problem. The things that GNOME 3 makes easier weren’t really all that difficult before. It doesn’t make anything important any easier. Basically it improved on one part of the desktop experience that was already “good enough.” There is nothing a user can do with GNOME 3 that they couldn’t have done before. But there are things that were possible with GNOME 2 that aren’t with GNOME 3. So this improvement comes at the cost of making lots of more rarely done things much harder. If there are 100 such things, each one affecting only 1% of the users, it is entirely possible that 100% of the users are affected. I am sure that everyone will find a couple of things they need to do (not just want to do, but NEED to do) that will be very hard if not almost impossible in GNOME 3. For example, for me, linking two computers temporarily with an ethernet cable was no longer possible with a GUI, and I could no longer figure out how to change the MAC address the network card uses in the new dialogs. Both were things I needed to do. It doesn’t help if someone tells me I shouldn’t have to do them if, say, the network setup (which is beyond my control) were done better.
A good UI gets out of the way. GNOME 3 more often gets in the way by making things that I needed to do harder, or impossible to find or do. So while much of GNOME Shell is nice, there are many places where it makes life harder on purpose, for whatever reason. GNOME 2.0 had the same philosophical problem.
There are many places where the linux desktop is still very deficient in a way that keeps people from using it. GNOME 3 does nothing to improve that in my opinion. It’s all nice in a perfect world, but we do not live in a perfect world where all hardware looks the same, all 3d drivers work, all people work the same way and all necessary software for linux is already written.
Someone should try to fund a study to find out “why are you not using linux” or more specifically “what does linux not do that you need it to do before you will use it”. Surely it is not fixed workspaces and starting applications from a menu.
I figured I ought to make a Genius release before a year passes since the last one, so 1.0.10 is out. A bunch of minor updates have accumulated but nothing major. The biggest change was adding the ability to rename variables in the plotting interface, so that I can set variable names to the ones I am talking about in class. That reminds me to check on the availability of computer projectors in the classrooms I’m teaching in at UCSD.
Bitten again … so I’ve now finally noticed that a ChangeLog file seems to be out of favour in the GNOME git. People just commit stuff (translations, it seems) without one. Plus when I do git pull, it just spits out a lot of jargon nonsense but doesn’t tell me the important thing: which files have changed. So I don’t actually notice what was changed. I have to go hunt down that information.
I DON’T CARE HOW WONDERFULLY YOU HAVE COMPRESSED THINGS AND HOW MANY “OBJECTS” YOU ARE TRANSFERRING. TELL ME WHAT FILES YOU ARE CHANGING.
Even the git browser at git.gnome.org is useless. I wish I had CVS back.
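For the record, git can be made to cough up a file list with the right incantations (a sketch; I’m writing the flags from memory, so double-check them):

```shell
git log --stat ORIG_HEAD..HEAD   # files touched by what the last pull brought in
git log --name-status -1         # CVS-style file list for the latest commit
git diff --stat HEAD~1 HEAD      # per-file change summary between two commits
```

Of course, having to remember three different incantations instead of being told up front is exactly the problem.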
I’ve looked through the GNOME Census: apparently, in the 6 or 7 years that I’ve not worked on GNOME, I still have not managed to drop out of the top 20, at least by number of commits. By a rough estimate based on the time I was employed by Eazel, I guess about 1/3 or 1/4 of my commits were made as an Eazel employee. Meaning that I probably account for 1/4 or 1/5 of all Eazel commits to GNOME (which sounds kind of freaky).
What’s even more freaky is that I single-handedly committed about 70% as much as Canonical (which has had more time).
Someone (I can’t remember who; I’m reading these blogs while moving halfway across the country) said something about how Canonical should have hired some people to just “hack on cool GNOME stuff.” Well, that was essentially my job description at Eazel. Over the 3–4 years of really being active on GNOME I managed about 0.7% of the “activity” on GNOME over its lifetime. So if Canonical had recruited me (though I was probably unrecruitable by that time) or someone like me, they could have more than doubled their “contribution” over the past 6 years. They would probably have a lot more say in the future direction of GNOME as well. A couple of dedicated engineers are not expensive in the overall scheme of things for a company.
Now number of commits is not the best way to count contribution. I think it’s probably hard to measure Canonical’s contribution to GNOME and it’s likely bigger than indicated by the number of commits.
Still … 18th still? They aren’t trying very hard these days. Must be that they’re all mucking around with git instead of coding!
For the past month or so I’ve been using Chrome to test it out. At first I thought it worked really well; then I started to discover many annoyances. Firstly, there is no way to “open” files like PDFs directly from the internet. Chrome forces you to click a whole bunch of times: you download the PDF to the tmp directory, then you open it by clicking on its name at the bottom of the screen. This is really, really, really annoying, especially for a mathematician (or, I assume, any scientist) who reads many PDFs, DJVUs, and PSs every day. This alone is enough to make me not want to use it. Whatever stress reduction comes from a slightly faster and smoother browsing experience is totally canceled out by this. I really don’t see how hard it is to save to /tmp and open automatically. I mean, browsers have been doing this forever.
Another thing that was worrying me is that saved passwords are not encrypted behind a master password.
The last straw was the fact that it can’t print right.
So the result of the fight: Chrome-Firefox is 0:1.
So Ubuntu tried (or is trying) completely changing the window title bar in beta 1 of Lucid. They moved the window controls to the left of the bar and reordered them, presumably to free up space on the right for “something.” What that something is, is unclear. But that “something” cannot possibly be anything critical, since apps should work in different themes and different distros.
I guess they may end up doing something cool with the freed-up right-hand side. But, and this is a big but, is it going to be worth all the annoyance to people switching? Given that the computers I use run different flavours of Linux, it is likely that this “new” UI will only be on my home machine. I still can’t get used to closing the window on the left, especially since at work, on my netbook, on Marketa’s machine, or on whatever random computer I get to use for a bit, it’s on the right. I think at some point I’ll be annoyed enough to change the window button order back (once I figure out which gconf key does that).
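For future reference, I believe the key lives under Metacity (I’m writing the key name and values from memory, so treat this as a guess to verify, not gospel):

```shell
# Assumed key; Lucid reportedly sets it to "close,minimize,maximize:".
# This puts the buttons back on the right in the traditional order.
gconftool-2 --type string \
  --set /apps/metacity/general/button_layout "menu:minimize,maximize,close"
```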
People who also have to use Windows will be annoyed to no end. Whatever minor usability improvement we may get from whatever cool thing ends up on the right-hand side of the window title bar will be more than wiped out by the annoyance of having to switch between two layouts on two machines. Using the UI is slower simply because I have to think about which machine I am on.
This is my favourite pet peeve with the direction GUIs are trending in. Consistency is usually flushed down the toilet for whatever “cool” experiment a certain designer is trying. MP3 players have suffered from this for years. Web pages have the same problem. Now many other applications are following suit.
GNOME Shell and friends, for example, come up with completely different-looking and different-working widgets for standard things such as scrollbars, buttons, and menus. The netbook launcher is almost unusable because of this. If it were built from standard widgets it would probably have working keyboard navigation, would arguably be easier to use, and would even have been easier to code up, with fewer bugs. GNOME has suffered from this from the beginning: we had widget themes way before we had a half-working desktop.
So with git, I have to commit before pulling the latest changes. Here is why this is braindead: I generally just go to my source tree and start hacking. At some point I want to commit, so I think “hey … maybe someone did something else,” so I do git pull and git yells at me. I have to do a commit first. Well, if I do a commit and the ChangeLog has changed, then the next pull will automatically have a conflict to resolve. This means SEVERAL extra unnecessary steps simply to commit something that has no a priori conflict with any commits other people made.
I am sure git is great for people who want to spend their days playing with git. But it sucks if you simply want to code. Oh CVS, where are you?! CVS also has lots of braindamage, but the braindamage only makes you work hard in exceptional situations. Git does things “correctly” apparently, but to do so, it makes you work harder in every situation.
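For what it’s worth, the dance can be shortened with git stash (a sketch; note that plain git stash only shelves changes to tracked files, so freshly created files still need git add first):

```shell
git stash        # shelve uncommitted local changes
git pull         # fetch and merge upstream work on a clean tree
git stash pop    # reapply the shelved changes on top
```

Which is still three commands where CVS needed zero, so the complaint stands.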
Karmic is not turning out to be a very successful Ubuntu release for me. I am hitting far more bugs than usual, and they are not being fixed in the updates either. The bugs have been reported but do not seem to be a priority. One, for example, is that udev/the kernel do something weird and then keep eating CPU/memory. This leads to the computer having swapped out everything useful at some point, so after a while the computer is slow as hell (unusably slow, especially coming out of screen lock). Restarting udev solves the problem, but 1) I always forget to do that and 2) it makes removable media not work. So I moved to the current Lucid on my main machine, which seems to be working fine, with me hitting no bugs yet (that’s rather odd; I generally hit many a landmine running a development release).
The other bug I’m hitting is on my netbook. The standard netbook interface is flashy but 1) slow and 2) unusable with the keyboard (there is keyboard navigation, but it is so incredibly buggy it is useless). Standard GNOME is also too much for the small screen and the low memory with no swap. So I am using Fluxbox on it (actually I almost started using Fluxbox on my main machine too …). It is spartan, but after you set things up, it is really fast. The issue that took me the longest, though, was the long delay after login before the desktop would appear. It seems that someone had the bright idea of making the xsplash thing the default for everything, with a timeout of 15s. There is no configuration, and no way I could find to easily kill the splash short of removing the xsplash binary. It is a hardcoded hack that gets automatically run for EVERY session, regardless of whether the session supports it or not.
Whose brainless idea was that? That’s why I had gdm sessions have a .desktop file, so that I can put easily readable properties there about what the session can do. So add something like X-GDM-Supports-xsplash=true, goddamn it! How hard is it to implement? Far easier than a flashy, pointless splash screen which should not exist in the first place.
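Something along these lines in the session’s .desktop file would do it (the X-GDM key here is the hypothetical one I’m proposing, not something gdm actually reads today):

```ini
[Desktop Entry]
Type=Application
Name=Fluxbox
Exec=startfluxbox
# Hypothetical key: tell gdm whether this session participates in xsplash,
# so the splash is simply skipped for sessions that don't support it.
X-GDM-Supports-xsplash=false
```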
And that’s the other thing: what is wrong with people who add splash screens to everything? Splash screens are generally annoying and make startup slower, especially if they are animated. You are competing for very scarce resources simply to move useless pixels around. The problem is that even if you time things to make sure resources are not being contended for, you can’t test every configuration (i.e. someone not using GNOME; obviously that configuration was not tested). The boot looks just fine with xsplash removed. I have not done timings with GNOME, but with Fluxbox the boot-to-desktop time goes down by about 10 seconds.
I’m just mad since I wasted so much time trying to solve this mess.
I have created a new gpg key (I lost my old one somewhere) and made that git tag. But thinking it through, I can’t understand the policy of requiring signed tags. If an attacker is able to commit code using the ssh account, he is able to create bogus gpg keys. Unless I am incredibly diligent in maintaining my gpg keys, the signatures are close to worthless. Making a gpg key doesn’t even require owning the email account. At best the whole setup gives some false sense of security. Unless you are willing to enforce some draconian measure and only allow trusted signatures, the whole thing is nonsense. Actually, the whole thing is nonsense to begin with. I understand the idea of allowing someone to “sign” a tag in a repository (I understand it, but I think it has little actual utility). But requiring signatures (and thus generating a flood of bogus signatures in the repository) is stupid.
This is the general problem with computer security. The vast majority of users and software ignore security, and then a small percentage of users overdo it with paranoia. In fact, this paranoia is usually so great that it makes proper secure procedures too much of a bother for the vast majority of users, so the system has built-in feedback.
Example: if crappy (but easy to set up and use) encryption is available, it will likely result in higher, not lower, security. Setting up an SSL-enabled webserver is a hassle, hence many passwords are sent in the clear (because they are for websites with little interest in high security). The problem is that people hate remembering passwords, so the same passwords get reused on websites that do use encryption, and voilà. If setting up simple encryption on a webserver were as simple as tweaking a few parameters, it could be on by default, and most web traffic would be encrypted. You would not have authentication, but it is far harder to impersonate a site than it is to sniff for passwords sent in the clear. By tying encryption and authentication together, the bar was raised high enough that encryption is rare.
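The kind of no-hassle, unauthenticated encryption I mean is essentially a self-signed certificate, which is a one-liner (the hostname and parameters here are just illustrative):

```shell
# Generate a private key and a self-signed certificate in one go:
# encryption without any third-party authentication (browsers will warn).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=example.org" -keyout server.key -out server.crt
```

Point the webserver at server.key and server.crt and the traffic is encrypted; nobody vouches for who you are, but nobody can sniff the passwords either.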
Digitally signed git tags are even less useful. I would bet most people making such tags have unverified digital signatures, simply generating some warm feelings among the paranoid crowd.
So apparently I cannot make a tag without some gpg signature nonsense in the GNOME git. Given that I don’t use gpg, I now can’t make named tags for releases of genius and gob. At this point I am annoyed enough to just take genius and gob and move both to someplace like SourceForge.
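For reference, the tags I actually want to make need no gpg key at all; it is only the server-side policy that refuses them (tag names here are just examples):

```shell
git tag GENIUS_1_0_10                       # lightweight tag: just a name on a commit
git tag -a GENIUS_1_0_10_ann -m "1.0.10"    # annotated tag: carries a message, still unsigned
```

Only git tag -s drags gpg into it, which is exactly the part I see no point in requiring.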
I mean, really … WTF … I can add arbitrary code to the project without any sort of signing nonsense, but a simple tag requires my “signature”???? I want to know what the people running the GNOME git are smoking, because that stuff has to be GOOOOOOD.
I can’t fathom what damage someone could do by making tags on the code that they couldn’t do by changing the code itself.