Saturday, 8 December 2012

Pink 650D RAW images under Ubuntu

Underwater photos are usually blue

The versions of libraw and dcraw currently in the Ubuntu repos have a slight problem. When accessing RAW files from Canon's latest 650D, they display them entirely in pink - which probably isn't what you want.

The problem is simple enough: the versions of dcraw/libraw used by all the RAW processing apps in Ubuntu 12.10 apart from eog (so Shotwell, UFRaw, RawTherapee, etc.) don't have the 650D listed yet, so the colour profiles aren't being properly interpreted.

One solution is to rebuild your applications, such as Shotwell, with appropriate versions of the underlying RAW libraries. That can be a pain, so I took the easier option and opted to convert the images to JPEG first - via an up-to-date dcraw, which GraphicsMagick delegates to for RAW formats - thereby circumventing the issue.

# first grab the latest dcraw.c from Dave Coffin's site
sudo apt-get install libjasper-dev libjpeg-dev liblcms1-dev graphicsmagick
gcc -o dcraw -O4 dcraw.c -lm -ljasper -ljpeg -llcms
sudo mv dcraw /usr/local/bin/dcraw
echo alias rawtojpg="\"find . -name '*.CR2' -print0 | xargs -0 -n 1 -P 8 -iIMG gm convert IMG -format jpg IMG.jpg\"" >> ~/.bashrc
source ~/.bashrc

At least it's better than pink
Thereafter, you should be able to use the primitive alias rawtojpg to convert all .CR2 files in the current directory to JPEGs (leaving the original CR2 files as-is). Note the -P 8 flag, which runs 8 conversion processes in parallel - adjust as appropriate. You can obviously change the alias to a proper function which takes some args too, but the above was sufficient for my purposes.
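If you do fancy the function route, a minimal sketch might look like the following (my own throwaway code, not from any package; it assumes GNU find/xargs and the GraphicsMagick install above):

```shell
# rawtojpg [dir] [procs] - convert every .CR2 under dir (default: .) to a
# JPEG alongside the original, running procs (default: 8) gm processes in
# parallel. -r stops xargs running gm at all when no .CR2 files are found.
rawtojpg() {
    local dir="${1:-.}" procs="${2:-8}"
    find "$dir" -name '*.CR2' -print0 |
        xargs -0 -r -P "$procs" -I IMG gm convert IMG -format jpg IMG.jpg
}
```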

Update, 8th August 2013

See my comment below for a complete solution to this problem.

Assuming you've already followed the preceding instructions in this post, all you need to do is install gimp-dcraw; then you'll be able to open your 650D CR2s with GIMP:

sudo apt-get install gimp-dcraw

Has the Apple bubble burst?

This week, the Apple share price tanked. In one day alone, it lost market capitalization of some 35bn USD.

Apple Inc. hit its all-time peak in September after stunning growth in the 1st quarter of 2012. Since then, it's just been sliding back downhill (barring a brief reversal in the last half of November, which has just been wiped off the board this week).

I'll leave the detailed analysis for others, but there are a few drivers from the tech side that seem worthy of comment.
  • The top of the hill: With such a large percentage of mobile phone and tablet sales, and with an ever increasing number & quality of players in the market, it's hard to see how they could make significantly greater inroads here. It's now more about maintaining customers than onboarding new ones. What Apple needed was a new market; a new product, and I don't see one on their horizon at present.
  • Litigation: Apple has been suing companies left, right and centre. These cases haven't been to uphold moral principles, or to protect complex designs from illegal duplication; they've simply been attempts at money-grabbing (from the largest US corporate by capitalization). Now the victims have started to fight back against their attacker, turning Apple's insatiable greed into unappetizing risk. Meanwhile, end users stuck in the middle are increasingly becoming disillusioned with Apple's aggressive tactics, and seeking alternative companies to support.
  • Google Maps: If you're going to sue them, you'd better not be reliant on their services. By ditching Google Maps and running with their homegrown solution, Apple removed one risk from its risk register, but in doing so created a certainty: an immature product. No matter how much Apple may improve Siri, or how much polish they put on the icon set, it isn't going to make up for taking away from users what they already had.
  • Gradual improvement: This sounds good; it's what we often see with each iteration of the Linux kernel: a number of small improvements, which may or may not impact users much, depending on their individual interests. The problem in terms of commercializing it is that few people need or want to upgrade for each and every small set of improvements. Yet, since Apple has reached near-saturation in the western markets, it is reliant on users upgrading frequently. It either has to persuade users that they really do need those extra pixels on the screen, or it has to continually innovate. This is where they've slowed down, and understandably so. How much more can and should be packed into devices in everybody's pockets?
  • Competition: Apple led the way with the all-round quality of their first music players, smartphones and tablets. Competition existed, but it was either inferior or small-scale. Now, that simply isn't the case: competitors have caught up, and in some instances actually surpassed Apple. For example, when asked, I presently recommend the S3 for smartphones, and the Galaxy Tab for tablets. Again, it comes back to a lack of innovation on Apple's behalf. The phrase "sitting on one's laurels" comes to mind.
  • Remember from whence you came: Before Apple's runaway success with the iPod and its succeeding foray into small form factor mobile devices, it was best known as a niche hardware/software provider. Its primary market was for digital design artists across audio and visual spaces. This was back in 2001. Roll forward eleven years to 2012, and the PC space has remained largely similar; Apple hasn't made any major inroads into mainstream computing (it has gained market share, as some buyers of Apple mobile devices buy into the Apple ecosystem, but the overall impact to the global PC market share is small - we're talking a few percentage points change). Without the success of the mobile device space, Apple is just a comparatively small, niche hardware/software provider. It is reliant on the mobile space's - and iOS's - continued growth. If, for example, desktop OSs were to become popular on mobile (Ubuntu with Unity springs to mind), would there still be room for iOS?
Does Apple still have some cards hidden up its sleeve, or is it floundering precariously on the proverbial ledge, mourning the loss of direction from its most inspirational and charismatic late leader?

Thursday, 8 November 2012

Linux Steam beta emerges

It's been an important day for the Linux community at large: Valve's first limited external beta of Steam has been released.

Much to my disappointment, I didn't make the first cut for beta testing. The odds weren't great - 60,000 people vying for just 1,000 places - and with just a primitive entry form to go by, selection of candidates must have been nigh on impossible. For what it's worth, I expect expansion of the programme will be relatively swift.

That aside, I did manage to get a copy of the beta anyway. Installation was obviously elementary, as one expects with superior package management systems. The Steam client itself runs smoothly (again, as one would expect), and looks very similar to Steam running under WINE. Sync with existing Steam accounts, push notifications, downloads, library, steam URIs - it all just works.

Running the beta outside of the official beta program seems to pose a problem in terms of downloading Steam beta's poster child, TF2, through the client. Other purchased titles aren't a problem however.

Some people have been impressively quick off the mark. directhex has composed an early attempt at a game compatibility list, to which the addition of more testing experiences is clearly welcomed.

To get my own testing ball rolling, I went ahead and purchased Amnesia: The Dark Descent.

I'm unable to "get into the game" as of yet, since the inevitable resolution switch is proving rather messy (to say the least) on my multi-monitor setup with Catalyst 12.11 beta drivers & 12.10/Cinnamon. It's going to take a little work before I get things running acceptably - and that I'll have to leave for another day.

From first impressions - it certainly starts quickly, and seems likely that it'll run smoothly once I've ironed out my resolution wrinkles.

So far, things look rosy. More to come.

Sunday, 28 October 2012

Signup for Linux Steam beta is live!

Valve sidestepped me! There I was, diligently checking their Linux blog on an intraday basis, but no further details of the beta had been posted.

It's lucky I'm a regular Phoronix reader, as Michael posted a link to the signup page on Friday.

Instead of their blog, Valve sneakily announced the signup on their Linux beta community group (worth subscribing to if you haven't already). From the number of members at present - almost 10,000 - it looks like competition for the beta will be fierce.

Anyway, here we go... get your beta testing hats on!

To reiterate, this beta is intended for seasoned Linux users who aren't strangers to filing and fixing bugs. Please hold on for the stable release if you just want to enjoy the fruits of Valve's labour.

Thursday, 18 October 2012

Xen VGA Passthrough: Have Your Say

As regular readers will know by now, I've been struggling with VGA passthrough for some time. And if the blog stats are anything to go by, I am far from alone.

Are you still struggling with passthrough? If so, you may as well let the Xen devs know via their poll. While you're there, take a look over the many other suggestions and vote for what you require.

The Xen development team have done a great job this year in interacting with their user base. First their request for comments around security vulnerability disclosure procedure, and now a fully open user poll on which aspects development should focus on.

Tuesday, 16 October 2012

Out with KDE, in with Cinnamon

I've been using KDE for the past couple of months. I've really enjoyed the experience; it certainly has a lot going for it. It's the oldest (widely used) Linux desktop environment, and it shows. KDE boasts an impressive featureset, polished animations, a consistent look and feel, well considered default settings, excellent customisability, and plenty more to boot.

Everything "just works": Dolphin remembers which tabs you had open in your previous session; panels can be customised per screen; window behaviour can be set per application. Just what can't KDE do?

Yet I've had enough, and thrown in the towel. There was one bug too many.

The biggest problem was the notifications widget refusing to behave itself at all. Then there was the multiple-row panel debacle (I think the icon scaling algorithm needs some TLC). There's a frustrating WM bug which you can catch when dragging a window entry from a panel, which leaves the cursor in a seemingly unrecoverable state. The list goes on.

There's also an eminently frustrating video overlay problem I'm battling with, which may or may not be KDE related, and an apparent Xorg heap leak - again, most likely unrelated, but to be sure I need to rule it out.

So, my KDE experiment comes to a close. I will be back; I've discovered I rather like the KDE world, but for the time being I'm off searching for my next DE. Struggling with options, I find myself trying out Cinnamon, the desktop from Mint (which happens to be available in the Fedora repos).

First impressions? It's sleek; relatively minimalist. Similar to Gnome 2 in style (as was its intention), yet still flaunting some of that Gnome Shell glitz and glamour.

It looks quite nice. Win-key opens the main menu and sets focus to the search bar, which is a nice touch. The Gnome 2 window list brings with it a sense of nostalgia, and reminds one of home; or a loyal dog lazing beside an open fire; a grandmother peacefully knitting a tapestry in her rocking chair; but I digress.

Notifications are well handled, menus carry a reasonable (albeit limited) selection of popular options, the main menu is acceptable. Quite nice in fact (it's growing on me).

As one might expect for a relative newcomer, settings and configuration options are a little thin on the ground. The basics are covered, but one must remember that this is no KDE; it's been built with a similar vision to Gnome and Unity: to provide simplicity above all else.

Yet you can see that the devs have tried to go that extra mile without compromising on their underlying vision. It's things like the "Effects" settings menu, enabling configuration of the visual window effects; much more configurable than most, but still eminently usable, hiding all the complexities we may remember from the likes of CCSM.

It still feels like a work in progress to some extent. Why don't I get a menu when I right click on the desktop? Why are the settings quite so barren? Where are the additional themes by default? Where is the option to move the panel to a different monitor? Yet it's pleasant; simple; bashful; Gnome 2-esque.

All in all, a friendly and usable DE from the team at Mint. I'll keep it for the time being, until the next bandwagon rides by.

Fedora: Unknown user 'jetty'

A minor irritation during Fedora startup cropped up recently - a failure in systemd-tmpfiles-setup.service.

[/etc/tmpfiles.d/jetty.conf:1] Unknown user 'jetty'.

It's caused by a bug in the post install script for jetty, documented in bug 857708. Either reinstall Jetty; or, if you installed it using the default settings & would rather take the quicker route, just finish things off yourself:

# groupadd -r -g 110 jetty
# useradd -r -u 110 -g jetty -d /usr/share/jetty -M -s /sbin/nologin jetty

AMD Catalyst 12.9 Beta - Looking Good

I'm used to the inevitable sense of disappointment when trying new versions of AMD's proprietary Linux drivers.

Let's face it: they boast a poor installer, a limited & buggy graphical configuration tool (amdcccle), irritating "AMD Testing use only" overlays with their beta drivers, terrible default underscan settings, and essential configuration options hidden in the completely undocumented Persistent Configuration Store.

Oh, and painfully slow driver releases following kernel updates. Did I miss anything? (Probably plenty.)

Hence, I didn't have high hopes for the next kernel upgrade. I'd even stuck with 12.6 until now.

Moving to kernel 3.6.1, the latest stable Catalyst (12.8) failed to build its kernel module. It actually gave the same error as 12.6 did under kernel 3.5.4. I wasn't surprised; just disappointed, as usual.

firegl_public.c: In function 'KCL_MEM_AllocLinearAddrInterval':
firegl_public.c:2152:5: error: implicit declaration of function 'do_mmap'

This time, however, AMD have listed their next beta driver prominently on the Linux driver download page. Kudos; I hated having to dig around their website for it.

12.9 beta comes with a hefty 7% filesize increase over 12.8, and in my sceptical way, I was expecting a corresponding 7% increase in problems.

Not so.

The installer feels faster (but no, I didn't time it).

By a stroke of luck, the kernel module actually built without a hitch.

On reboot, it worked without further intervention.

Now here's a simple but helpful one. Previously, to start amdcccle, one had to run sudo amdcccle - a dodgy workaround at best. Now, they've fixed amdxdg-su, so that the menu entries now work, along with the direct command:

$ amdxdg-su -c amdcccle

Then, amdcccle started without error, the monitor configuration screen worked without a single frustration (is it just me, or have they improved that drag and drop interface?), and the settings were duly instantiated following a reboot.

The awful default underscan I'm used to on my plasma was no longer present (meaning I no longer have to interface directly with the undocumented PCS).

And, the cherry on the cake - no more "AMD Testing use only" watermark plastered all over my monitors.


This just isn't fair - there's hardly anything left for me to complain about!

For once; well done, AMD.

Update after the bedding-in period:

Alas, all is not perfect. Vsync, which was working perfectly under 12.6, is now broken (making the watching of videos rather unpleasant).

Additionally, I'm seeing screen corruption in a variety of cases during overlay transitions. Forcing a screen redraw clears it, but again, this is a frustrating problem which didn't exist before.

All things considered, I'd rather have a variety of irritations during installation & initial configuration than I would persistent issues affecting day-to-day usage. Here's hoping they're fixed in 12.9 stable.

One is left wondering just how comprehensive AMD's test packs and unit & integration tests are (presuming they do actually perform testing). Further, with the rumours of staff cuts at AMD, is the situation going to get better... or worse?

Sunday, 14 October 2012

1and1 GIT Repositories Leave A Lot To Be Desired

Every now and again we all come across things which appear too ridiculous to be true. Such was the case with 1and1's Git repos. Allow me to recount the tale.

I was looking for a primary GIT repository to host a private project. This unfortunately ruled out Github, as their pricing model for private hosting was a little too steep.

I would have been happy to host it locally, but with high availability as a requirement, and the fact that the best connection I can get in my area is a poor ADSL with speeds of < 3Mbit dl (let's not even talk about ul!), that didn't seem ideal.

What I did have was a *nix hosting account with 1and1, and with shell access as standard, this seemed ideal. Digging around the 1and1 console, I even found a prominent section promoting the use of git, and with instructions for its use. Better still, git was already installed on the server; no need for the wget/make combo.

Unfortunately, the only protocol configured for git was SSH; but, that didn't pose any immediate problem. In minutes, I had a git repo configured, had cloned it locally, and everything was good.

Then came the obvious next move: securing the configuration by using an appropriately configured SSH account.

This aligns with the first rule* of computer security: grant only those permissions which are absolutely necessary. Obviously, nobody would use a fully permissioned** SSH account for simple pushes to a repo, let alone disseminate it to others... would they?

So I tried SSH-ing in with a secondary FTP account. This is what I saw:
This account is restricted by rssh.
Allowed commands: sftp

If you believe this is in error, please contact your system administrator.

Connection to closed.
And therein lies the catch: 1and1 supports only a single SSH account, which has the same credentials as the master FTP account.

A lengthy call with technical support confirmed my bewildered suspicions.
The reason given? "Security".

Perhaps it's my own marbles rolling around on the floor, but this security policy seems to have been devised by the same people behind IEEE's webserver permissioning strategy.

At the very least, they should have a secondary ssh account restricted to a preset git repository directory and the git command. At most, they should allow proper ssh user account creation & permissioning. An alternative would be to configure the webserver for git over https; but they haven't done that either.

It could be marginally acceptable if you only intend to pull patches from submitters and push them to the remote yourself, but it's still barely suitable for that use case. The only place I'd feel comfortable storing the key would be on my own boxes (not mobile devices), which would make for some rather inconvenient git practices.

Another unrelated inconvenience with this route is 1and1's rather low configuration for the number of failed login attempts prior to temporary IP blocking. It's a good policy to have, but implemented in rather brutal fashion... no gradual backoffs, nothing. The block itself lasts quite a while, and their SLA upon calling and requesting an unblock is 4 hours.

I finished my call to 1and1 with a request to revise their 1 login "security policy"... but until then, if you want a high availability GIT repo, look elsewhere.

Update 8th May 2013:

A great alternative, if all you want is a private hosted service with a handful of users, is Atlassian's Bitbucket. It comes with a rather agreeable interface, up to 5 users, unlimited repos, and as good as instant account setup, all for free.

* There are lots of "first" rules, depending upon what got the author's gander up at the time of writing.

** Got r00t? Nope... but as fully permissioned as it gets on 1and1, meaning access to the entire webspace.

Monday, 27 August 2012

Xen Part 13: VGA Passthrough: Another failed attempt

Preamble: this post has been sitting around since 20th May, waiting for me to finally get things working. Yet, due to other commitments, I simply haven't found the time to invest in Xen. I'm posting this just in case it helps somebody out of a particular problem, given the level of interest I'm seeing in passthrough. Personally, I've now migrated back to a Fedora dom0 (which worked better for me OOTB), and am waiting for 4.2 to be released before trying again - hopefully with more success.

Warning: the following doesn't result in a working VGA passthrough setup.

Setting up VGA passthrough as per the xen wiki (detailed in my posts Part 9: PCI Passthrough and Part 11: ATi Graphics Drivers on the domU) got me to the stage where I thought it should be working - but I simply didn't get any graphical output on the monitors when the time came.

The only oddities I could see on the domU were in Xorg.0.log:

[    54.071] (EE) fglrx(0): V_BIOS address 0x0 out of range
[    54.071] (II) fglrx(0): Invalid ATI BIOS from int10, the adapter is not VGA-enabled
... a seemingly random period of time passes (seconds to minutes), then everything comes up roses...
[    57.325] (II) fglrx(0): ATI Video BIOS revision 9 or later detected

This occurred both on Ubuntu 11.10 running the latest stable 3.2.13 kernel, and on Windows XP, both using latest AMD proprietary graphics drivers.

I was therefore left with the inescapable conclusion that Xen 4.1.2 was to blame. Thankfully, I stumbled upon Jean David Techer's instructions for applying a collection of VGA passthrough patches to Xen unstable, which handle the provision of the VGA BIOS and setting the BARs. Many thanks to Jean for posting the walkthrough, and also saving everybody the trouble of porting the VGA passthrough patches to the latest Xen revisions.

Before We Begin

Let's just make sure that your graphics card is detected and initialised correctly in the dom0. There's little point proceeding if it isn't.

1) A quick check to make sure you don't need Debian's firmware-linux-nonfree package:

$ dmesg | grep ni_cp | grep "Failed to load firmware" && echo "You need to install firmware-linux-nonfree" || echo "Looks OK, proceed to point 2"

# apt-get install firmware-linux-nonfree 

2) You may need to set up some PCI quirks for your card. This is a check for a problem I encountered with my HD6970:

$ dmesg | grep "Driver tried to write to a read-only configuration space" && echo "You need to setup a PCI quirk" || echo "Looks OK, proceed to point 3"

$ dmesg | grep -A 2 "Driver tried to write to a read-only configuration space"

[927513.834633] pciback 0000:01:00.0: Driver tried to write to a read-only configuration space field at offset 0xa2, size 2. This may be harmless, but if you have problems with your device:
[927513.834635] 1) see permissive attribute in sysfs
[927513.834636] 2) report problems to the xen-devel mailing list along with details of your device obtained from lspci.

To add a PCI quirk, you need the vendor and device ID for your device (it's the last entry on the line):
$ lspci -nn | grep VGA

00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0102] (rev 09)
01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Cayman XT [Radeon HD 6970] [1002:6718]

# vim /etc/xen/xend-pci-quirks.sxp

Add an entry along these lines (the format mirrors the examples already present in that file; the quirk field is offset:size:mask):

(xend-pci-quirks
    (section
        (name 'HD6970')
        (pci_ids
            '1002:6718'
        )
        (pci_config_space_fields
            '000000a2:2:0'
        )
    )
)

Replace HD6970 with any name you like to identity your card, replace 1002:6718 with the vendor/device ID you retrieved from lspci, replace 000000a2 with the offset from dmesg, and replace 2 with the size from dmesg.

3) Search dmesg for the logs pertaining to your graphics card. You'll have to amend the greps below to correctly identify your graphics card's PCI ID (I'm using the 6970 grep to find my HD6970).

$ dmesg | grep `lspci | grep VGA | grep 6970 | awk '{ print $1 }'`

Look over these logs to identify any further problems, and correct any obvious faults before proceeding.

4) Verify that running lspci on the domU returns your card. If not, check the output of dmesg | grep -i pci for clues.

If you see:

XENBUS: Device with no driver: device/pci/0

verify that the domU's kernel has pcifront loaded.
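A quick way to check from within the domU is something like this (assuming pcifront was built as a module; if it's compiled into the kernel it won't show up here, so check the kernel config instead):

```shell
# Look for the pcifront module in the running kernel's module list
if grep -q pcifront /proc/modules; then
    echo "pcifront loaded"
else
    echo "pcifront not loaded - check your domU kernel"
fi
```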

Extract the BIOS from the Graphics Card

ATI cards are handled in this section, whilst NVIDIA card users should follow step 1 in Jean's instructions.

Find out how to extract your graphics card BIOS. If you determine that ATIFlash is the way you want to go, then first obtain it (ATIFlash 3.95) and find a USB drive without any important data on. Insert it, find out its /dev/XXX node and ensure it's unmounted before proceeding.

# apt-get install unetbootin
# mkdosfs -F32 /dev/XXX
# mount /dev/XXX /mnt

Run UNetbootin and install FreeDOS to the USB drive. Don't reboot when prompted.

$ unzip
# cp atiflash.exe /mnt
# umount /mnt

Reboot to the USB drive

> c:
> atiflash -i
adapter bn dn dID      asic           flash     romsize
======= == == ==== ============== ============= =======
   0    01 00 6718 Cayman         M25P10/c      20000     
> atiflash -s 0 bios0.rom

Reboot, copy bios0.rom onto a HDD and rename it to vgabios-pt.bin

Obtain a Patchable Xen Unstable

This is really just following steps 2-7 at Jean's site; I reproduce them below mostly for my own benefit for the specific case of a HD6970.

Here I'm using Xen unstable revision 25099. This is, at time of writing, the most recent version explicitly supported by the VGA passthrough patches that Jean David Techer maintains. If you want to use a later revision, you would have to recreate the patch diffs accordingly, or wait for Jean to diligently provide a newer collection of patches.

# apt-get install mercurial libglib2.0-dev libyajl-dev
$ mkdir -p Downloads/xen-unstable
$ cd Downloads/xen-unstable
$ rev=25099;hg clone -r $rev xen-unstable.hg-rev-${rev}
$ cd xen-unstable.hg-rev-25099
$ hg summary
parent: 25099:4bd752a4cdf3 tip
 x86_emulate: Do not push an error code onto a #UD exception stack
branch: default
commit: (clean)
update: (current)
$ ./configure

$ cd tools

Ensure you actually do run these commands as a normal user - as the $ prompt indicates.

$ make
$ make clean
$ cd ..
$ xenpatches=xen-4.2_rev24798_gfx-passthrough-patchs
$ wget -q${xenpatches}.tar.bz2
$ tar xjf ${xenpatches}.tar.bz2 

BAR Configuration

Now to set up the Base Address Registers (BARs) specific to your graphics card.

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
01:00.0 VGA compatible controller: ATI Technologies Inc Cayman XT [Radeon HD 6970]

Locate the correct PCI ID from the above output, as usual...

$ dmesg | grep XX:XX.X | grep "mem 0x"
[    4.120860] pci 0000:01:00.0: reg 10: [mem 0xc0000000-0xcfffffff 64bit pref]
[    4.120878] pci 0000:01:00.0: reg 18: [mem 0xfbe20000-0xfbe3ffff 64bit]
[    4.120912] pci 0000:01:00.0: reg 30: [mem 0xfbe00000-0xfbe1ffff pref]

In the above output, there are 3 memory ranges that Xen needs to know about. The start and end of each range is provided in hex (e.g. the first range starts at 0xc0000000 and ends at 0xcfffffff).

As Jean explains, we also need to know the size of each range. Jean uses hex->dec and dec->hex conversion for the calculations, but I think figuring it out purely in hex is easier. Just remember your basic rules of hexadecimal, and you should find this calculation pretty simple.  

If you got a bit lost here, use Jean's method instead.

To recap: in decimal we have a maximum digit of 9 before we wrap around to 0 again, and 0 to max (9) is a total of 10 values. In hex, the maximum digit is 0xf (== 15), and 0x0 to max (0xf) is a total of 0x10 values. Switching back to memory ranges, this means that a range starting at 0xc0 and ending at 0xcf would have a size of 0x10.

Applying this to the first example above, the total size of memory range 0xc0000000 to 0xcfffffff would be 0x10000000 (the number of values in 0x0000000 -> 0xfffffff).

Start       End         Size
0xC0000000  0xCFFFFFFF  0x10000000
0xFBE20000  0xFBE3FFFF  0x00020000
0xFBE00000  0xFBE1FFFF  0x00020000
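If you'd rather not do the hex arithmetic in your head, the shell can do it for you (bar_size is just a throwaway helper name of my own, not part of any tool):

```shell
# size = end - start + 1, printed as zero-padded hex
bar_size() {
    printf '0x%08X\n' "$(( $2 - $1 + 1 ))"
}

bar_size 0xC0000000 0xCFFFFFFF   # 0x10000000
bar_size 0xFBE20000 0xFBE3FFFF   # 0x00020000
bar_size 0xFBE00000 0xFBE1FFFF   # 0x00020000
```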

Now let's change the relevant patch file to match these BARs.

$ vim ${xenpatches}/patch_dsdt.asl

Modify the first three DWordMemory function calls, such that the second and third hex values are set to the start and end addresses, and the fifth (final) value is the size. For example,

             DWordMemory(
                 ResourceProducer, PosDecode, MinFixed, MaxFixed,
                 Cacheable, ReadWrite,
                 0x00000000,
-                0xF0000000,
-                0xF4FFFFFF,
+                0xF4000000,
+                0xF5FFFFFF,
                 0x00000000,
-                0x05000000,
-                ,, _Y01)
+                0x02000000)

would change to

             DWordMemory(
                 ResourceProducer, PosDecode, MinFixed, MaxFixed,
                 Cacheable, ReadWrite,
                 0x00000000,
-                0xF0000000,
-                0xF4FFFFFF,
+                0xC0000000,
+                0xCFFFFFFF,
                 0x00000000,
-                0x05000000,
-                ,, _Y01)
+                0x10000000)


+            DWordMemory(
+                ResourceProducer, PosDecode, MinFixed, MaxFixed,
+                Cacheable, ReadWrite,
+                0x00000000,
+                0xF4000000,
+                0xF5FFFFFF,
+                0x00000000,
+                0x02000000)

would change to

+            DWordMemory(
+                ResourceProducer, PosDecode, MinFixed, MaxFixed,
+                Cacheable, ReadWrite,
+                0x00000000,
+                0xFBE20000,
+                0xFBE3FFFF,
+                0x00000000,
+                0x00020000)

The third follows the same pattern. Leave the final function call as-is.

Reinstating PCI Passthrough Config via pciback

Think back to Xen Part 9: PCI Passthrough. Did you amend the /etc/init.d/xencommons script to enable passthrough for one or more PCI devices? If you did, heads up: reinstalling Xen is about to overwrite your code.

If you used some custom code, just copy it into tools/hotplug/Linux/init.d/xencommons.

If you used the bog standard code in the tutorial and just amended the BDF ID, then to make things simpler you may want to add this xencommons patch to your patch set (NB: this is built against revision 25099), and amend your BDF ID in it as before. That should make maintenance easier, and remind you to update that file if/when you build a newer version of Xen in the future.

Patch Xen Unstable

$ for file in ${xenpatches}/*; do patch -N -p1 < "$file"; done

Check that succeeded. Then copy the graphics card's BIOS, which you extracted earlier, to the vgabios folder:

$ cp /home/ace/vgabios-pt.bin tools/firmware/vgabios/

Compile & Install

$ make xen && make tools && make stubdom

Now time for installation.

# make install-xen && make install-tools PYTHON_PREFIX_ARG= \
&& make install-stubdom
# update-grub


# shutdown -r now

root@ace2x1:~# dmesg | grep "mem 0x"
[    0.669673] pci_bus 0000:00: root bus resource [mem 0x00000000-0xfffffffff]
[    0.673606] pci 0000:00:00.0: reg 10: [mem 0xc0000000-0xcfffffff 64bit pref]
[    0.673606] pci 0000:00:00.0: reg 18: [mem 0xfbe20000-0xfbe3ffff 64bit]
[    0.673606] pci 0000:00:00.0: reg 30: [mem 0xfbe00000-0xfbffffff pref]
[    0.732491] pci 0000:00:00.0: address space collision: [mem 0xfbe00000-0xfbffffff pref] conflicts with 0000:00:00.0 [mem 0xfbe20000-0xfbe3ffff 64bit]

This is where it should be working. Instead of that, I see an erroneous BAR contrary to the ranges I provided, and I get no further.

Saturday, 25 August 2012

Apple vs Samsung: The Farce

I've always been amazed this made it to trial. It seemed like an open and shut case; one that should have been thrown out long before a jury was convened and it made headline news. Apple claimed that Samsung's mobile devices violated 6 of its patents. Today, the jury sided with Apple on 5 out of 6 of these patents, and awarded it $1bn in damages.

Has there ever been a clearer demonstration of the urgent need for patent reform? It's a system which, for hardware and software, offers little if any protection for true innovation, and has simply descended into a messy lawyers' dream of suits, counter-suits, gross monopolization, the growth of patent trolls, and the ousting of the "little guy". Which is more or less the opposite of its claimed raison d'être.

Groklaw describes the verdict as "preposterous"; a "farce". I describe it as a disgrace.

Take, for example, Apple patent D677. It is a patent for the design of the iPhone. It describes the front as black; flat; rectangular; 4-cornered; round-edged; containing a screen; with thin side borders; larger top and bottom borders; a top speaker; button area beneath.

Did Samsung use a similar design for some of their mobile devices? I believe so, yes. But, then again, so would almost any smartphone manufacturer. What D677 describes is a blueprint for what a phone with a touchscreen almost has to be.

Think about it. To manufacture a smartphone (and remember that Apple was not the first), you need a touchscreen (which is black, flat, and rectangular). You need to house it in a container (which will obviously need 4 corners, and unless you want your end-users to stab their hands every time they unpocket it, those corners will need to be round-edged).

The thin side borders are a simple case of ergonomics. You need to hold the device, but need to be able to touch all areas of the screen. Wide borders would make it harder to reach all areas of the touchscreen. You can't sensibly add any other useful functionality (like buttons) on the sides, since you'd accidentally hit them when using the touchscreen. Finally, small bezels look better; monitor manufacturers have promoted this as a feature for years, as have TV manufacturers, laptop manufacturers, etc.

The larger top and bottom borders are also required, because there's a lot that needs to fit into a smartphone "under the hood" - and if you make one dimension shorter, you need to make the other dimension longer, just to fit everything in.

The speaker at the top; now, that's surely something that could have been placed elsewhere? That must have been copied.

Not exactly. Remembering these are phones, consider: where is your ear in relation to your mouth? It's another case of basic human requirements.

That leaves us with the button area at the bottom. Android (like iOS) requires some hard buttons, like the home button. They have to go somewhere. The top is too far to reach for most people's hands, and we've already ruled out the sides. Where else is left?

D677 simply specifies what any smartphone manufacturer would be likely to work out for themselves within the first few days, or hours, of the design process. You need a touchscreen, buttons, speaker, mic, camera(s), battery, processor, memory, lights, connectors, etc. There are requirements posed by the OS. There are human factors to consider. Putting them all together for both Android and iOS, and with presently available hardware, you end up with something similar to D677.

The same holds for many of the other patents Apple has used to secure this $1bn ruling.

The long and short of it? Apple, somehow, holds some patents which describe obvious design points for the classes of devices called smartphones and tablet PCs. It's tantamount to a PC manufacturer waving a patent describing the design for a computer case, keyboard and monitor, and asking all the other PC manufacturers in the world to cough up royalties. Or, for a non-technological example, it's tantamount to a clothing manufacturer taking out a patent for a small handbag; tapered; with a latch in the top-centre; a long adjustable strap; and a reinforced bottom.

This case considered the similarity of the external aesthetics of the hardware, in which manufacturers of smartphones have very few choices, as I've already described. The similarity is by necessity, much like the similarity in most QWERTY keyboard designs is by necessity. The case neglected to consider the extreme dissimilarity in every other aspect of the devices: from internal hardware, to the OSs, to the applications and services on top, to the UX, et cetera.

This particular ruling seeks to ensure that Apple alone is allowed to manufacture and sell smartphones and tablets in the US.

How? It forces other manufacturers to modify the external aesthetics of their devices to sub-standard designs, in order to differentiate them sufficiently from Apple's "patents" that juries no longer complain. Indeed, Samsung has already started to do so, with its release of the Galaxy Tab 2 - more or less identical to the Galaxy Tab, just with an uglier and less practical external design.

I don't blame the jury. They simply affirmed that the Galaxy S3 has a speaker at the top, a screen in the middle, some buttons at the bottom, and non-lethal corners.

It was the job of the patent examiners to ensure the validity of the patent claims at issuance; to properly inspect the claims for prior art and non-obviousness.

It would have helped if the judge had permitted Samsung the right to demonstrate invalidity by displaying the prior art.

The damage claims might have sounded less ridiculous if all the damages awarded related to the claimed violations (some figures, in the millions, were requested for Samsung devices deemed non-violating).

Finally, the case might have been more believable if the jury had deliberated for a length of time befitting its complexity.

Appeals will undoubtedly follow.

Ctrl+Space (Content Assist) doesn't work in Eclipse

Navigate to Window -> Preferences -> General -> Keys. Find Content Assist. Delete Ctrl+Space from the binding field, and hold down Ctrl and press space:

  • If only Ctrl+ is displayed, then something is intercepting the Ctrl+Space key binding before it reaches Eclipse. For me, it was IBus (in XFCE, go to Settings -> Input Method Selector, select Use IBus -> Preferences*, and check the "Enable or disable" keyboard shortcut. If it's set to Ctrl+Space, clear the field, click Apply, and restart Eclipse).
  • On the other hand, if Ctrl+Space is displayed, then Eclipse is able to receive the key combination. Go to Window -> Preferences -> Java -> Editor -> Content Assist -> Advanced and make sure all relevant proposals are enabled. Other than that, just ensure dodgy import statements aren't affecting content assist's ability to recommend completions.
* Update: under Xubuntu 12.10, this seems to be slightly different, and Preferences is no longer available in this dialog. No matter; fire up a terminal and start ibus-setup. Delete Ctrl+Space from the keyboard shortcuts. There was no need to restart Eclipse.

Saturday, 28 July 2012

Catalyst driver problems on Fedora

Both AMD and NVIDIA have a less-than-ideal graphics driver offering. Many of us are forced to use their proprietary binary drivers due to marked performance issues and feature limitations in alternative open-source offerings.

With proprietary drivers comes pain, as driver release schedules don't necessarily match up with kernel or xserver release schedules, and module incompatibilities post-upgrade often cause post-reboot soft crashes, kernel panics and headaches in general.

I've posted most of this before, but with further kernel updates causing problems for people, it seemed a good time to combine everything into one post.

I upgraded my kernel, and after rebooting my system hangs on a black screen

You need to install the latest Catalyst driver and/or recompile the fglrx module. Before you can do this, you first need to get to a shell; and, as usual, the following triage procedure for doing so applies. Keep going until you get a friendly login prompt.

1. Try switching to another tty with Ctrl + Alt + F[1..12].

2. Alt + PrintScrn + R, then retry the above.

3. You'll need to reboot.
i) Do so the soft way, waiting a couple of seconds in between each alpha key: Alt + PrintScrn + [R,E,I,S,U,B] 
ii) If that doesn't work, you'll have to do so the hard way - with the power button.

4. At the GRUB prompt, press 'e' to enter edit mode. Locate the kernel line beginning with 'linux'*. If 'rhgb' is present, delete it (this will show the textual output of the boot process, and is useful for debugging). In its place, add 'single' (this boots to single user mode, i.e. a shell). When done, press F10. This should dump you at a shell.

5. If even that has failed, your problems are likely more serious than an incompatible module. You're going to need an installation disc to boot into rescue mode, mount your drive, and continue your investigations there.
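Incidentally, steps 2 and 3 depend on the magic SysRq facility being enabled in the kernel. You can check (and, as root, set) its state via procfs; a small sketch, assuming a Linux /proc (the `sysrq_state` helper name is mine):

```shell
# Report whether the magic SysRq keys are enabled.
# 0 = disabled, 1 = fully enabled, other values = a bitmask of
# allowed functions (see the kernel's sysrq documentation).
sysrq_state() {
    if [ -r /proc/sys/kernel/sysrq ]; then
        cat /proc/sys/kernel/sysrq
    else
        echo "unavailable"
    fi
}
sysrq_state
# As root, 'echo 1 > /proc/sys/kernel/sysrq' enables all functions.
```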

Once you have a shell, you're in business. Check the /var/log/Xorg.0.log logfile to confirm there was a problem with your fglrx driver. Locate your fglrx installation file; ideally, you will use the newest version available. Run it:

# chmod +x
# ./

You may need to use the --force flag to overwrite the previous installation.

Install as usual. If there are no errors, reboot. If there are, check the logfile at the location provided.

The fglrx installation error log contains
"error: ‘cpu_possible_map’ undeclared (first use in this function)"

This is a known issue which still isn't fixed. A patch has been made available. If you're using the patch application, direct it towards /usr/lib/modules/fglrx/build_mod. If you don't have internet access on the box, the quickest solution IMHO is to literally apply the patch manually (given its small size).

To do so, edit the /usr/lib/modules/fglrx/build_mod/firegl_public.c file. Search for the first instance of 'i387' and add this line beneath (line 190):

#include <asm/fpu-internal.h>

Search for the first instance of 'FN_FIREGL_KAS', and replace the line beneath with (line 4160):


Edit the /usr/lib/modules/fglrx/build_mod/kcl_ioctl.c file. Search for the first instance of 'to allocated' and add this line beneath (line 220):

DEFINE_PER_CPU(unsigned long, old_rsp);

After saving those files, rebuild and install the fglrx module:

# cd /usr/lib/modules/fglrx/build_mod
# ./
# cd ..
# ./

Finally, test that you can load your fglrx module with

# modprobe fglrx
# lsmod | grep fglrx
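That check can be scripted, too; a trivial helper of my own (not part of the driver tooling) that consults /proc/modules directly:

```shell
# Report whether a given kernel module is currently loaded, by reading
# /proc/modules -- the same source lsmod uses.
module_loaded() {
    if grep -q "^$1 " /proc/modules 2>/dev/null; then
        echo "$1: loaded"
    else
        echo "$1: not loaded"
    fi
}
module_loaded fglrx
```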


I've rebooted back into Fedora, but my multi-monitor configuration is awry

Use the AMD Catalyst Control Center (sic) in administrative mode.

"AMD Catalyst Control Center (Administrative)" fails to open

Run this instead**:

$ sudo amdcccle

My HDTV shows large black borders around the edges

See my AMD Catalyst: Fixing Underscan post for some background info, but you should just need to run this and reboot:

$ sudo amdconfig --set-pcs-val=MCIL,DigitalHDTVDefaultUnderscan,0

There's an ugly "AMD Testing use only" watermark permanently placed on the bottom of each monitor

You can upgrade/downgrade to the latest release (i.e. non-beta) driver, or run Kano's script which I've reproduced below (and tweaked slightly for use on Fedora):


cp $DRIVER ${DRIVER}.original
for x in $(objdump -d $DRIVER|awk '/call/&&/EnableLogo/{print "\\x"$2"\\x"$3"\\x"$4"\\x"$5"\\x"$6}'); do
    sed -i "s/$x/\x90\x90\x90\x90\x90/g" $DRIVER
done

* As you already know, Linux is just the kernel. Much of the rest of the OS is based on the recursively-named GNU utils. 

** Unpolitically correct, I know, but it "just works" across all desktop environments and has never caused me a problem with amdcccle in particular. If anybody knows of a gksudo/gnomesu equivalent package in the Fedora repos, I'd be grateful to hear of it. The administrative menu entry for amdcccle calls amdxdg-su -c amdcccle, but this doesn't work for me (under xfce at least, it fails with "no graphical method available"). ATI bug report. Ubuntu bug report.

Friday, 8 June 2012

AMD Catalyst: Fixing Underscan

The proprietary ATI Catalyst fglrx driver always tends to underscan on my 1080p Panasonic TV.

Underscan is the condition of seeing a smaller image than should be the case: a "black border" around the edges. It is sometimes set by default to counteract overscan (where the image is enlarged and cropped; a throwback to the old days of CRT monitors).

The method for fixing it is non-obvious. One expects to be able to adjust the underscan/overscan settings via amdcccle - the AMD graphical configuration tool - but all too often amdcccle doesn't make this facility available for HDTVs.

Let me briefly step you through the bizarre history of AMD's support for fixing underscan, before presenting the solution that works for today's drivers. If you can see any logic behind AMD's strange decision-making, I'd be intrigued to hear about it!

The solution to this problem used to involve aticonfig's* --set-dispattrib command. You had to establish which arbitrary display name (from a possible list of 12 candidates) the driver was using to identify the monitor in question, then identify the size/position settings for that display, and play around with them until the output looked right.

This used to be the procedure:

1) One would find the "display type" (here I'm outputting the horizontal offset in pixels for each display, so of the display types that return values, I can determine which display is the one I want, based on the horizontal offset):

$ for disptype in crt1 lvds tv cv tmds1 crt2 tmds2 tmds2i dfp3 dfp4 dfp5 dfp6 ; do amdconfig --query-dispattrib=${disptype},positionX; done

2) Then one would find the relevant settings for the display identified as exhibiting underscan:

$ for dispattrib in positionX positionY sizeX sizeY overscan ; do amdconfig --query-dispattrib=<display type>,${dispattrib} ; done

3) Finally, one would check over these values and amend them as appropriate:

$ sudo amdconfig --set-dispattrib=<display type>,<display attrib to correct>:<correct value>
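Since a wrong value here can leave the display unusable, it helps to print the exact command before running it. A hypothetical dry-run wrapper (the helper name and the example display type/value are mine, purely for illustration):

```shell
# Build (but do not run) the amdconfig invocation for step 3, so you
# can eyeball the display type, attribute and value before committing.
set_dispattrib_cmd() {
    echo "amdconfig --set-dispattrib=$1,$2:$3"
}
# Example: preview setting the horizontal size of display tmds2.
set_dispattrib_cmd tmds2 sizeX 1920
```

Once the printed command looks right, run it yourself with sudo.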

However, here's the problem with this method: if you're using RandR >= 1.2 (which you will be if you're running recent software), this procedure will fail with the following.

$ amdconfig --query-monitor
Error: option --query-monitor is not supported when RandR 1.2 is enabled!

The next step was to work around this by disabling XRandR (by editing /etc/ati/amdpcsd and adding EnableRandR12=Sfalse under [AMDPCSROOT/SYSTEM/DDX], and by adding Option "EnableRandR12" "false" under the Driver section of your xorg.conf).

Again, this has been superseded.

The new solution, for cases where XRandR is >= 1.2, took me a while to figure out, and is surprisingly non-intuitive.

There is a practically undocumented variable in AMD's Persistent Configuration Store (PCS), used to store AMD specific driver settings. It's called DigitalHDTVDefaultUnderscan, and if set to false, it disables underscan completely.

Following this procedure should remove the black border and restore your 1:1 pixel mapping:

$ sudo amdconfig --set-pcs-val=MCIL,DigitalHDTVDefaultUnderscan,0
$ sudo reboot

One wonders what AMD was thinking, leaving such a common and important task out of amdcccle, and without documentation. It's a serious oversight.

As for other secret PCS variables: you can find some in /etc/ati/amdpcsdb, some in /etc/ati/amdpcsdb.default, and the rest are buried somewhere in the output of strings /usr/lib64/xorg/modules/drivers/

Not exactly what one could call intuitive.

*Note that aticonfig is now called amdconfig.

Monday, 4 June 2012

Raspberry Pi: Expanding the SD Card Partition

32GB card with an expanded root partition and
an expanded swap partition
The Debian image for the Raspberry Pi only totals 1.8GB; so, if your SD card is larger than 2GB, a lot of space will be going to waste.

You can resize the partition directly via the Pi (the hard way), or you can take out the SD card and do it on another PC (the easy way). Let's do that.

Insert the SD card into your other computer. Don't mount the SD card; or if it automounts, unmount it first. Install and open gparted (the friendly GUI for parted):

# apt-get install gparted
# gparted

Find your SD card device using the dropdown menu in the top-right. It might be obvious which it is from the partition sizes, but if in doubt, check. You can refer to the system logs:

# tail -n 1 /var/log/messages
Jun  4 17:46:02 ace2 kernel: [321194.297736] sd 9:0:0:0: [sde] Attached SCSI removable disk

Once you've selected the device, you should see a fat32, an ext4, and a swap partition.
  • the fat32 partition is the boot (/boot) partition
  • the ext4 partition is your root (/) partition
  • the swap partition is for swap space, but is by default unused by the Pi
You have various options here. To expand the root partition leaving other partitions untouched:
  • select the swap partition, right-click and select Resize/Move. In the window that appears, drag the partition to the end of the available space, until "Free space following (MiB)" shows 0.
  • select the ext4 partition, right-click and select Resize/Move. In the window that appears, drag the arrow on the right of the partition to the end, until "Free space following (MiB)" shows 0.
  • ensure you have backed up any important data
  • click the tick to "Apply all operations". This may take a few minutes. 
Now when you boot your Pi, your root partition should be considerably larger.

Swap space
4GB SD card with an expanded root partition and no swap

You are free to delete the swap partition instead of just moving it. By default, your Pi has swap space disabled, hence it is just wasted space anyway. 

Alternatively, you can enable use of the swap space via the swapon command or by editing /etc/fstab. I think the reason this isn't done by default is twofold:
  • it will make heavy use of your SD card in terms of IOPS, thereby reducing its lifespan
  • random read/write performance is pretty poor for SD cards, hence this might slow you down considerably
I think it's best to leave swap disabled, and delete the partition if you see fit.
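Should you decide to enable swap anyway, the fstab entry looks like the following. A sketch, with assumptions: the `fstab_swap_line` helper is mine, and /dev/mmcblk0p3 is a guess at the Pi's third SD-card partition (check yours with blkid or fdisk -l):

```shell
# Print the /etc/fstab line for a given swap partition; append it to
# /etc/fstab yourself, then 'swapon -a' (or 'swapon <device>' to
# enable it for the current boot only).
fstab_swap_line() {
    echo "$1 none swap sw 0 0"
}
fstab_swap_line /dev/mmcblk0p3
```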

Saturday, 2 June 2012

Oracle v. Google v. Common Sense

Oracle's drawn-out legal campaign against Google has finally reached its conclusion. First came the judgment that Google had not breached copyright (aside from one elementary 9-line method), and that it had not infringed upon Oracle's patents. On Thursday came the final determination, which asserted that APIs are non-copyrightable. Here it is:

This command structure is a system or method of operation under Section 102(b) of the Copyright Act and, therefore, cannot be copyrighted


Finally, some common sense in the world of patent litigation. It's been a long time in coming.

This is great news for software developers everywhere, and we all owe a debt of thanks to Google (for not caving in and paying Oracle's ransom demands), to the FSF (whose support over the years has been invaluable), the EFF (with great coverage), those at Groklaw for their tireless monitoring of the trial, and countless others; not forgetting the patient jury and most learned judge.

The problem is that this represents just the tip of the iceberg. Software patents are a menace to the very practice of software design, leaving programmers (especially lone programmers) at the mercy of corporations with large patent portfolios, even if their work is entirely of their own inspiration and manufacture. Software patents are simply nonsensical.

The argument for software patents goes like this: those who spend time and effort creating a product should be assured of their ability to reap financial rewards, free from risk of their competitors duplicating and reselling their work. Without this assurance, say the patent advocates, companies and individuals won't produce software. (You have to put open source to the back of your mind for this argument to hold water.)

Yet, every programmer knows that their work is amply protected by existing copyright legislation. Consider the Oracle v Google case: the only complaint that was even partially upheld was copyright violation in a single nine line method. Copyright assures that another competitor can't take your code and use it (even with changes) in their own product without your permission. They would have to rewrite it.

Software patents go further, and lay claim to the ideas behind the code. Yet there's an odd dilemma here: the ideas which make up code are expressed in algorithms. Algorithms are not copyrightable.

You can't just go and patent the concept of a loop. That would make programming nigh on impossible - unless you paid royalties to the owner of the "loop" patent. Yet, a loop is hardly that much of a mental leap when considering machine code. It's basic. Just because somebody wrote a loop before anybody else doesn't give them the right to collect money from everybody who subsequently writes something similar. There are only so many fundamental programming constructs one can use to write software.

It's tantamount to patenting the method of harvesting an apple from an apple tree. Somebody managed it first - they made a mental leap, and determined that to acquire the apple, one needs to:

harvest_apple : apple
  while !sense_apple
    move_hand(observe_rel_pos(apple, hand)) 
  return pull_toward

Since this individual did it first, they should be able to receive money for every subsequent apple harvest that takes place on the Earth. This is the principle behind a software patent.

If software patents sound crazy, that's because they are. In too many cases, they prevent programmers from coding the obvious. It shows, too: the amount of litigation over software patents is simply astonishing. It's why Google bought Motorola for its patent portfolio. It's why Google's filing an antitrust suit against Microsoft and Nokia.

Yet the situation will be hard to change, as patents are worth a lot of hard cash, and no business wants to throw away most of its net worth. RIM's patents are said to be worth billions of dollars. Google bought Motorola for over $12bn, more or less entirely for its patents.

Perhaps some common sense is coming to the situation. Kodak recently lost a case against Apple and RIM, where it was trying to enforce a patent around camera image previews. The judge threw out the case due to the "obviousness" of the patent. This case also highlighted the financial significance of these patents: Kodak's share price dropped 25% after the ruling.

This lunacy will have to reach a balance soon, between assurance of revenues on the one hand, and common sense on the other. At present, the only way companies can protect themselves is by investing billions in vast patent portfolios and expensive lawyers. A long term solution needs to involve more intelligence than this.

The supremely amusing factor in the Oracle v Google case was that it concerned copyright and patents for Java. Up until only a couple of years ago, Java was owned by Sun Microsystems. Sun released Java under the GPL license to open things up to the community. Oracle bought out Sun in early 2010, and within just half a year was taking Google to court for supposed infringement of Sun's patents. All this, even though Oracle's official position was to use patents only for defensive purposes.

It's really quite sad: an excellent reputation tarnished in just a couple of years, thanks to one company hell-bent on sapping R&D and replacing it with lawyers. How they could fail to realise that their existence depends on the developer community is mind-boggling. It's no wonder that the father of Java, James Gosling, has recently switched sides and joined Google. We wish him all the best.

There's been much dissatisfaction with Java 7, Java security, Java's falling behind other languages, and plenty of other Oracle products of late. For example, think about Oracle's atrocious treatment of OpenOffice (which sat almost dormant, before the community branched off LibreOffice, and Oracle finally relinquished OO to Apache).

Oracle must stop pursuing quick monetary wins through unwarranted litigation, get back to developing their own software portfolio, and re-engage a battered, disillusioned and diminished community. Lest they slip and fall on their own sword.

Thursday, 31 May 2012

Debian Testing/Unstable: FGLRX Just Broke!

I know. Annoying, isn't it?

The X server has just been upgraded to 1.12 in wheezy and sid. Unfortunately, this version uses a new interface which is incompatible with fglrx at present. In other words, you can't install fglrx in wheezy or sid without downgrading (and, I would suggest, pinning) the xserver version.

The next proprietary fglrx driver version, 12.5, is supposed to support the new X.

Meanwhile, over in the land of stable, Squeeze is fine; that's still on 1.7.

Update 4 Jun 2012: AMD announced they're changing their driver release schedule, so they have fewer releases: hence a 12.5 version is no longer on the cards. Thanks AMD. I just tried the 12.6 beta driver (curiously labelled as 8.98), and xserver 1.12 is still broken. However, there are people claiming that it works on 32-bit Debian and all architectures of ArchLinux.

Update 5 Jun 2012: I've just tested Fedora 17 with Xorg server 1.12, and here the Catalyst 12.6 beta driver is working fine.

$ yum info xorg-x11-server-Xorg | grep Version
Version     : 1.12.0

There's only one catch; the *@$%@rds wise overlords at AMD decided it would be a spiffing jibe to take the only working driver and plaster a bold "AMD Testing use only" watermark over the bottom right hand corner of every display.

Thankfully, there's a simple script to hack the fglrx module binary and remove the EnableLogo calls. It just warrants an elementary tweak to correct the module location for Fedora 17, and also to backup the original binary first. The resulting shell script, to be saved, exec perm'd, and run as root, is:

cp $DRIVER ${DRIVER}.original
for x in $(objdump -d $DRIVER|awk '/call/&&/EnableLogo/{print "\\x"$2"\\x"$3"\\x"$4"\\x"$5"\\x"$6}'); do
    sed -i "s/$x/\x90\x90\x90\x90\x90/g" $DRIVER
done

If you'd like to get a feel for what that does, run the following:

$ objdump -d /usr/lib64/xorg/modules/drivers/ | grep EnableLogo

The callq lines shown there invoke AMD's wonderful EnableLogo function. To get around this, the awk code in the script grabs those callq lines and returns the hex bytes of each call; sed then replaces them with a series of 0x90 instructions - i.e. NOPs.
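As a toy demonstration of the same trick (nothing here touches the real driver; the file path and byte values are made up for illustration):

```shell
# A fake 5-byte x86 relative call: opcode e8 followed by a 4-byte
# offset (\350 octal = 0xe8; the offset bytes are arbitrary).
printf '\350\001\002\003\004' > /tmp/toy_call.bin
# Overwrite it in place with five NOPs (\220 octal = 0x90) -- the same
# substitution the sed line performs inside the fglrx binary.
printf '\220\220\220\220\220' | dd of=/tmp/toy_call.bin conv=notrunc 2>/dev/null
# Confirm: the call is gone, replaced by 90 90 90 90 90.
od -An -tx1 /tmp/toy_call.bin
```

The patched function body keeps its original length, so no other offsets in the binary shift; that's why NOP-padding is safer than deleting the call.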

Quite a neat solution to a messy problem, eh? Thanks go to Kano (post 2).

Update 30 Jun 2012: A kernel update broke my 12.6 beta, forcing me to revert to the OS driver. I tried installing the 12.6 final release from AMD, which claims to support 3.4 kernels. However, building the kernel module failed with "'cpu_possible_map' undeclared" - an error which apparently stems from the 12.4 release. Applying this patch seemed to do the trick.

0. Install 12.6
1. Apply the changes to /usr/lib/modules/fglrx/build_mod/*
2. Run /lib/modules/fglrx/build_mod/ as root
3. Run /lib/modules/fglrx/ as root
4. Reboot