Sunday, 28 October 2012
Signup for Linux Steam beta is live!
Valve sidestepped me! There I was, diligently checking their Linux blog on an intraday basis, but no further details of the beta had been posted.
It's lucky I'm a regular Phoronix reader, as Michael posted a link to the signup page on Friday.
Instead of their blog, Valve sneakily announced the signup on their Linux beta community group (worth subscribing to if you haven't already). From the number of members at present - almost 10,000 - it looks like competition for the beta will be fierce.
Anyway, here we go... get your beta testing hats on!
To reiterate, this beta is intended for seasoned Linux users who aren't strangers to filing and fixing bugs. Please hold on for the stable release if you just want to enjoy the fruits of Valve's labour.
Thursday, 18 October 2012
Xen VGA Passthrough: Have Your Say
As regular readers will know by now, I've been struggling with VGA passthrough for some time. And if the blog stats are anything to go by, I am far from alone.
Are you still struggling with passthrough? If so, you may as well let the Xen devs know via their poll on Uservoice.com. While you're there, take a look over the many other suggestions and vote for what you require.
The Xen development team have done a great job this year in interacting with their user base. First their request for comments around security vulnerability disclosure procedure, and now a fully open user poll on which aspects development should focus on.
Tuesday, 16 October 2012
Out with KDE, in with Cinnamon
I've been using KDE for the past couple of months. I've really enjoyed the experience; it certainly has a lot going for it. It's the oldest (widely used) Linux desktop environment, and it shows. KDE boasts an impressive featureset, polished animations, a consistent look and feel, well considered default settings, excellent customisability, and plenty more to boot.
Everything "just works": Dolphin remembers which tabs you had open in your previous session; panels can be customised per screen; window behaviour can be set per application. Just what can't KDE do?
Yet I've had enough, and thrown in the towel. There were just one too many bugs.
The biggest problem was the notifications widget refusing to behave itself at all. Then there was the multiple-row panel debacle (I think the icon scaling algorithm needs some TLC). There's a frustrating WM bug which you can catch when dragging a window entry from a panel, which leaves the cursor in a seemingly unrecoverable state. The list goes on.
There's also an eminently frustrating video overlay problem I'm battling with, which may or may not be KDE related, and an apparent Xorg heap leak - again, most likely unrelated, but to be sure I need to rule it out.
So, my KDE experiment comes to a close. I will be back; I've discovered I rather like the KDE world, but for the time being I'm off searching for my next DE. Spoilt for choice, I find myself trying out Cinnamon, the desktop from Mint (which happens to be available in the Fedora repos).
First impressions? It's sleek; relatively minimalist. Similar to Gnome 2 in style (as was its intention), yet still flaunting some of that Gnome Shell glitz and glamour.
It looks quite nice. Win-key opens the main menu and sets focus to the search bar, which is a nice touch. The Gnome 2 window list brings with it a sense of nostalgia, and reminds one of home; or a loyal dog lazing beside an open fire; a grandmother peacefully knitting a tapestry in her rocking chair; but I digress.
Notifications are well handled, menus carry a reasonable (albeit limited) selection of popular options, the main menu is acceptable. Quite nice in fact (it's growing on me).
As one might expect for a relative newcomer, settings and configuration options are a little thin on the ground. The basics are covered, but one must remember that this is no KDE; it's been built with a similar vision to Gnome and Unity: to provide simplicity above all else.
Yet you can see that the devs have tried to go that extra mile without compromising on their underlying vision. It's things like the "Effects" settings menu, enabling configuration of the visual window effects; much more configurable than most, but still eminently usable, hiding all the complexities we may remember from the likes of CCSM.
It still feels like a work in progress to some extent. Why don't I get a menu when I right click on the desktop? Why are the settings quite so barren? Where are the additional themes by default? Where is the option to move the panel to a different monitor? Yet it's pleasant; simple; bashful; Gnome 2-esque.
All in all, a friendly and usable DE from the team at Mint. I'll keep it for the time being, until the next bandwagon rides by.
Fedora: Unknown user 'jetty'
A minor irritation during Fedora startup cropped up recently - a failure in systemd-tmpfiles-setup.service.
[/etc/tmpfiles.d/jetty.conf:1] Unknown user 'jetty'.
It's caused by a bug in the post install script for jetty, documented in bug 857708. Either reinstall Jetty; or, if you installed it using the default settings & would rather take the quicker route, just finish things off yourself:
# groupadd -r -g 110 jetty
# useradd -r -u 110 -g jetty -d /usr/share/jetty -M -s /sbin/nologin jetty
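For context, the tmpfiles.d snippet that trips the error looks something like the following (a hypothetical reconstruction; the exact path and modes come from the jetty package). The user and group named in the fourth and fifth columns must exist, which is precisely what the broken post-install script fails to ensure:

```
# /etc/tmpfiles.d/jetty.conf - creates jetty's runtime directory at boot
# (hypothetical contents; columns are: type path mode user group age)
d /var/run/jetty 0755 jetty jetty -
```

Because systemd-tmpfiles cannot resolve the 'jetty' user when processing that line, the whole service reports a failure at startup.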
AMD Catalyst 12.9 Beta - Looking Good
I'm used to the inevitable sense of disappointment when trying new versions of AMD's proprietary Linux drivers.
Let's face it: they boast a poor installer, a limited & buggy graphical configuration tool (amdcccle), irritating "AMD Testing use only" overlays with their beta drivers, terrible default underscan settings, and essential configuration options hidden in the completely undocumented Persistent Configuration Store.
Oh, and painfully slow driver releases following kernel updates. Did I miss anything? (Probably plenty.)
Hence, I didn't have high hopes for the next kernel upgrade. I'd even stuck with 12.6 until now.
After moving to kernel 3.6.1, I found the latest stable Catalyst (12.8) failed to build its kernel module. It gave the same error as 12.6 did under kernel 3.5.4. I wasn't surprised; just disappointed, as usual.
firegl_public.c: In function 'KCL_MEM_AllocLinearAddrInterval':
firegl_public.c:2152:5: error: implicit declaration of function 'do_mmap'
This time, however, AMD have listed their next beta driver prominently on the Linux driver download page. Kudos; I hated having to dig around their website for it.
12.9 beta comes with a hefty 7% filesize increase over 12.8, and in my sceptical way, I was expecting a corresponding 7% increase in problems.
Not so.
The installer feels faster (but no, I didn't time it).
By a stroke of luck, the kernel module actually built without a hitch.
On reboot, it worked without further intervention.
Now here's a simple but helpful one. Previously, to start amdcccle, one had to run sudo amdcccle - a dodgy workaround at best. Now, they've fixed amdxdg-su, so that the menu entries now work, along with the direct command:
$ amdxdg-su -c amdcccle
Then, amdcccle started without error, the monitor configuration screen worked without a single frustration (is it just me, or have they improved that drag and drop interface?), and the settings were duly instantiated following a reboot.
The awful default underscan I'm used to on my plasma was no longer present (meaning I no longer have to interface directly with the undocumented PCS).
And, the cherry on the cake - no more "AMD Testing use only" watermark plastered all over my monitors.
Seriously?
This just isn't fair - there's hardly anything left for me to complain about!
For once: well done, AMD.
Update after the bedding-in period:
Alas, all is not perfect. Vsync, which was working perfectly under 12.6, is now broken (making the watching of videos rather unpleasant).
Additionally, I'm seeing screen corruption in a variety of cases during overlay transitions. Forcing a screen redraw clears it, but again, this is a frustrating problem which didn't exist before.
All things considered, I'd rather have a variety of irritations during installation & initial configuration than I would persistent issues affecting day-to-day usage. Here's hoping they're fixed in 12.9 stable.
One is left wondering just how comprehensive AMD's test packs and unit & integration tests are (presuming they actually perform testing). Further, with the rumours of staff cuts at AMD, is the situation going to get better... or worse?
Sunday, 14 October 2012
1and1 Git Repositories Leave A Lot To Be Desired
Every now and again we all come across things which appear too ridiculous to be true. So it was with 1and1's Git repos. Allow me to recount the tale.
I was looking for a primary Git repository to host a private project. This unfortunately ruled out GitHub, as their pricing model for private hosting was a little too steep.
I would have been happy to host it locally, but with high availability as a requirement, and the fact that the best connection I can get in my area is a poor ADSL with speeds of < 3Mbit dl (let's not even talk about ul!), that didn't seem ideal.
What I did have was a *nix hosting account with 1and1, and with shell access as standard, this seemed ideal. Digging around the 1and1 console, I even found a prominent section promoting the use of git, and with instructions for its use. Better still, git was already installed on the server; no need for the wget/make combo.
Unfortunately, the only protocol configured for git was SSH; but, that didn't pose any immediate problem. In minutes, I had a git repo configured, had cloned it locally, and everything was good.
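For anyone following along, the setup itself is the easy part. Below is a minimal local sketch of what it involved; the real remote would be reached via an ssh:// URL, and the host, account, and paths here are hypothetical stand-ins:

```shell
#!/bin/sh
# Local sketch of a bare-repo-over-SSH setup. Everything happens in
# /tmp/gitdemo; over SSH the bare repo would live on the hosting account.
set -e
rm -rf /tmp/gitdemo && mkdir -p /tmp/gitdemo && cd /tmp/gitdemo

# On the host this would be: ssh user@example.co.uk 'git init --bare ~/project.git'
git init --bare project.git

# Locally this would be: git clone ssh://user@example.co.uk/~/project.git
git clone project.git work
cd work
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"
git push origin HEAD
```

With the bare repository in place on the server, every clone can push and pull over plain SSH with no further server-side daemon required.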
Then came the obvious next move: securing the configuration by using an appropriately configured SSH account.
This aligns with the first rule* of computer security: grant only those permissions which are absolutely necessary. Obviously, nobody would use a fully permissioned** SSH account for simple pushes to a repo, let alone disseminate it to others... would they?
So I tried SSH-ing in with a secondary FTP account. This is what I saw:
This account is restricted by rssh.
Allowed commands: sftp
If you believe this is in error, please contact your system administrator.
Connection to XXXXX.co.uk closed.
And therein lies the catch: 1and1 supports only one single SSH account, which shares credentials with the master FTP account.
A lengthy call with technical support confirmed my bewildered suspicions.
The reason given? "Security".
Perhaps it's my own marbles rolling around on the floor, but this security policy seems to have been devised by the same people behind IEEE's webserver permissioning strategy.
At the very least, they should have a secondary ssh account restricted to a preset git repository directory and the git command. At most, they should allow proper ssh user account creation & permissioning. An alternative would be to configure the webserver for git over https; but they haven't done that either.
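Such a restriction is well-trodden ground, too. With stock OpenSSH, a single forced-command entry in authorized_keys confines a key to git operations via git-shell; a sketch follows (the key material is elided, and the comment label is hypothetical):

```
# ~/.ssh/authorized_keys (one line; key material elided)
command="git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... git-only-key
```

Any login with that key can run git-upload-pack and git-receive-pack (i.e. fetch and push), but gets no interactive shell, no tunnelling, and no pty; hardly rocket science for a hosting provider to offer.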
It could be marginally acceptable if you only intend to pull patches from submitters and push them to the remote yourself, but it's still barely suitable for that use case. The only place I'd feel comfortable storing the key would be on my own boxes (not mobile devices), which would make for some rather inconvenient git practices.
Another unrelated inconvenience with this route is 1and1's rather low configuration for the number of failed login attempts prior to temporary IP blocking. It's a good policy to have, but implemented in rather brutal fashion... no gradual backoffs, nothing. The block itself lasts quite a while, and their SLA upon calling and requesting an unblock is 4 hours.
I finished my call to 1and1 with a request to revise their one-login "security policy"... but until then, if you want a high-availability Git repo, look elsewhere.
Update 8th May 2013:
A great alternative, if all you want is a private hosted service with a handful of users, is Atlassian's Bitbucket. It comes with a rather agreeable interface, >=5 users, unlimited repos, and as good as instant account setup, all for free.
* There are lots of "first" rules, depending upon what got the author's gander up at the time of writing.
** Got r00t? Nope... but as fully permissioned as it gets on 1and1, meaning access to the entire webspace.