How to reset Viewsonic VP181b EEPROM / DVI

I’ve long been searching for the method to reset the EEPROM on my Viewsonic VP-181b 18″ monitor. I bought this monitor second-hand and the DVI input has never worked since I got it. This didn’t matter when I was using it with my PC, but now that I want to use it on a Raspberry Pi with an HDMI-to-DVI cable, I’d like to get it working.

Research suggested that doing a factory reset of the EEPROM can sometimes fix a recalcitrant DVI input. Unfortunately it doesn’t seem to have worked for my monitor, so it looks like the circuit is actually fried, but just in case it helps anyone else, here’s how you do it.
I got this info from the Service Manual which I was able to download from here (but only after disabling Adblock Plus).

NB: It seems you need the monitor connected to a VGA input which is displaying something, and have that input selected, because it won’t show the on-screen menus otherwise.

First, put the monitor into “burn-in” mode by holding down the “2” and “down” keys while you switch it on (you can use the front-panel power button, no need to resort to the switch).

Second, switch off again and then go into “factory” mode by holding down the “1” key while you switch on. Factory mode is only accessible once you are already in burn-in mode.

Now when you press the “1” key, you should see a different menu from the normal one.

Use the down button to select the “Initial EEPROM” option and press 2 to activate.

At this point the EEPROM should be reset, but you are still in burn-in mode – if you switch to the DVI input at this point, you will see a white screen and it will say that you’re in burn-in mode.

To exit burn-in mode, you need to power off, and then power on again while pressing the “down” and “up” keys.

How to revert Firefox 14 awesome bar auto-completion behaviour / switch off URL autofill

Firefox 14 introduced a change to how the “awesome bar” (aka location bar) works – it now auto-completes in place. A lot of other browsers do this, so I guess it’s consistent, but I don’t like it – I find it much faster to recall my most commonly visited sites by typing a few letters (usually 3 is enough) which often occur in the *middle* of the URL, while the auto-complete always works from the *beginning* of the URL, and I find it confusing to be offered a suggestion which isn’t what I’m looking for. I’d rather see only the letters I’ve typed in the field, and a list of suggestions below.

I eventually found out how to revert to the old behaviour, but it wasn’t easy and involved being sent round in circles a few times. So, for your benefit, here’s how:

1. Type about:config in the awesome bar / location bar.
2. Use the Search field to locate the preference browser.urlbar.autoFill.
3. It is true by default in Firefox 14. Double-click it to set it to false (or set it in a user.js file, as shown below).
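
Alternatively, if you prefer to keep such tweaks in a file (handy if you maintain several profiles or machines), the same preference can be set in a user.js file in your Firefox profile directory; a minimal example:

```js
// user.js in your Firefox profile directory (create the file if it doesn't exist);
// Firefox reads it at startup and applies the preferences it contains.
user_pref("browser.urlbar.autoFill", false);
```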

How to make sure you don’t miss posts by your favourite Facebook Pages

Make an Interest List and add it to your sidebar Favourites

You may have seen the following message circulating on Facebook:

Due to Facebook’s new policy, only about 10% of people that ‘like’ a fan page will see the status updates.

In order to see my posts and notifications just click/hover over the ‘Liked’ button (beneath the cover photo, to the right) and activate the ‘show in news feed’ option.

This will allow you to see all of the posts.

Unfortunately that information is WRONG.

You will probably find that your Show in News Feed option is already active for the pages you like, unless you specifically switched it off. It does not guarantee that you will see the page’s updates in your feed. If you click this option you may be inadvertently switching it off!

Facebook decides which posts to show in your News Feed based on how often you “interact” with a Page’s posts – basically how often you hit Like, Comment, or Share on them. But regardless of how much you do that, it now shows Page posts far less often than it used to, because FB wants Page owners to pay for the privilege of having their posts seen (by people who have already asked to see them!).

If you’re a big fan of a particular Page, there is something you can do to help ensure that you don’t miss its posts – put it in your sidebar Favourites (or Favorites for US spellers). It’s a bit complicated, but here goes:

When you hover over the Liked button for a Page, you’ll see an option in that menu to create a New List. This creates what’s called an “Interest List”. As well as the currently selected Page, you can then also add to it any other Pages that you want to follow (such as Quextal 😉 ) – but I suggest not putting too many pages into one list, or Facebook might again intervene to decide which posts it shows in the list. Or you could just have one page per list, and create a new list for each page that you want in your sidebar, but there may be a limit to how many lists you can have in there.

So, choose which pages you want on this list (you can always add or remove pages from the list later if you change your mind), and click Next. You then have to choose a name for this list, and decide whether to make it public – that’s up to you, and it largely depends on whether you think it might be useful for other people. Making a list public shouldn’t infringe your personal privacy in any way.

When you finish creating the list, FB will show you the list page, hopefully full of posts by the Pages you selected.

Now go back to the main FB homepage. Your new list should be in the sidebar under INTERESTS (you might have to click MORE at the bottom of the sidebar to see this). If you hover over the name of the list you just created, a pencil icon appears to the left of it[1]. Click that and select Add To Favourites (or Favorites if you’re using US English). This will (a) put it in your Favourites / Favorites section near the top of your sidebar, and (b) put a number next to it whenever there are new posts to read. To read them, just click on the name of the list.

  • [1] Users of Matt Kruse’s excellent Social Fixer plugin may have to disable it temporarily to do this step.

Note that this probably has no effect on how likely a Page’s stories are to appear in your main News Feed, which will continue to be driven by Facebook’s desire to extort money from the Page owners. But at least you’ll be able to see in your sidebar when there are new posts to read.

You can subscribe to other people’s public Interest Lists. For example, here is my list of selected psybreaks artists and labels (it’s not meant to be exhaustive, so apologies to anyone I’ve left off).

I hope this is useful. Please share this article, especially where you see people sharing the wrong information quoted above, and add a comment if you notice any errors or have additional information.

One size does not fit all

I wrote this in reply to an insightful article by Jeni Tennison giving an insider’s view of the UK government’s current project to unify all of its websites into a single one. I agree with the doubts she raises about this project, because I’ve been there, done that before…

Some years ago, I worked on a long-term project which was funded by the then Public Record Office. When that institution was rebranded as the National Archives, complete with shiny new website, they decided that our hitherto independently-styled and -managed website must be rebranded to mimic theirs in look and feel.

This was far from easy, partly because their design had a horrendously messy implementation, and partly because (of course) it had been designed without any reference to us or how our data delivery might fit into it. It was imposed on us as a fait accompli, and we had to – somehow – squeeze our square peg into their round hole.

We spent a full year smashing our clean, lightweight design into pieces and gluing it back together in order to fit their restrictive, bloated one. I didn’t much enjoy doing it (can you tell?), but I like to think we did a good job.

Possibly too good. What we found when it went live is that users got confused: our site, now a subdomain of theirs, looked and felt so similar to the main site that users expected it to work in exactly the same way, but this was ultimately impossible as ours had a fundamentally different set of functions than theirs. Those areas where we overlapped had been made to work identically, but this just led to confusion where the functionality diverged.

One size does not fit all. And the more distance there is between those responsible for the design and management of a site, and those producing the content for it, the more likely it is that some of that content will be presented poorly, or not at all.

I don’t think people really want all government websites to look the same, or to be in the same domain[1]. I think what they want is for information to be easy to find and easy to access. The best way to ensure that is to keep the designers and managers of the website as close as possible to the people producing the information. By all means have standards to ensure best practice, but keep them as minimal as possible, with a mechanism for those bound by them to suggest changes if they find them too restrictive.

And let different things look different, because that helps people to realise that they are different.


  1. URLs are irrelevant to many non-technical users, who nowadays routinely rely on search engines – even to find sites that they visit every day, as evidenced by the “Facebook Login” debacle.

Another day, another WTF

Can’t find the customer’s home country in the database? That’s ok; just pick any country with a vaguely similar-sounding name, that’s good enough.

A bizarre bit of code in the e-commerce software[1] I’m currently fixing up does exactly that; it uses the Soundex algorithm to look for an approximate match to where the customer lives, according to the similarity of how the countries’ names are pronounced, rather than the more conventional considerations like geography.

There are 27 countries in the database that share a Soundex value with at least one other (mostly just pairs, but the largest matching group is 4: Ghana, Guam, Guinea and Guyana). In each group, all the countries would be rewritten to whichever was first alphabetically. Addresses in Greece would appear to be in Georgia; Norway would become Nauru.
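
If you’re curious how easily these collisions happen, here’s a minimal Perl demonstration using the standard Text::Soundex module (just an illustration of the collisions, not the shop’s actual code):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Text::Soundex;   # provides soundex()

my @countries = qw(Ghana Guam Guinea Guyana Greece Georgia Norway Nauru);

# Group the names by their Soundex code
my %by_code;
push @{ $by_code{ soundex($_) } }, $_ for @countries;

for my $code (sort keys %by_code) {
    printf "%s => %s\n", $code, join(', ', @{ $by_code{$code} });
}

# Prints:
# G500 => Ghana, Guam, Guinea, Guyana
# G620 => Greece, Georgia
# N600 => Norway, Nauru
```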

This sort of thing is nice and easy to fix (finding it is the hard part), but leaves a strange aftertaste… the insoluble mystery of just what was going on in the mind of whoever decided to write that code that made them think it would be a good idea…


  1. The software in question is an extended version of OSCommerce with lots of add-ons and customisation. I’m not sure whether the code in question originates from one of the add-ons, or is a specific customisation of this site done by their previous developer. In a way, I hope it’s the latter, so that other sites aren’t being affected… 

Perl-Powered DJ

No, it’s not really my DJing that’s script-powered, but over the last couple of years that I’ve been doing regular net radio shows, I have written a number of Perl scripts to help with some of the more tedious aspects of the job, particularly related to the posting of the MP3 archives and tracklists of those shows (and my occasional promo mixes) on quextal.com, but also for the broadcasting process itself.

In fact, one of the first scripts I wrote dealt with the fact that I broadcast (using darkice on my Linux box) on different stations, which means maintaining several different darkice configurations. What began as a one-liner doing the equivalent of darkice -c /path/to/darkice/configs/$1.cfg then expanded to do things like shutting down certain daemons before broadcasting and starting them again afterwards, as my elderly PC would occasionally struggle to cope with running two MP3 encoders if it was also dealing with a large incoming email or a disk-heavy cronjob.
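
The current incarnation is roughly along these lines; this is a simplified sketch rather than the real script, and the daemon names and the “sudo service” calls are placeholders for whatever a given setup needs:

```perl
#!/usr/bin/perl
# Sketch of the darkice wrapper idea. The daemon names and the use of
# 'sudo service' to stop/start them are placeholders, not my real setup.
use strict;
use warnings;

my $station = shift or die "Usage: $0 <station>\n";
my $config  = "/path/to/darkice/configs/$station.cfg";
die "No such config: $config\n" unless -f $config;

# Daemons that compete for CPU/disk while encoding (placeholders)
my @daemons = qw(exim4 cron);

system('sudo', 'service', $_, 'stop') for @daemons;   # free up the old PC
my $status = system('darkice', '-c', $config);        # blocks until the broadcast ends
system('sudo', 'service', $_, 'start') for @daemons;  # bring everything back up

exit($status >> 8);
```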

I then tired of hitting reload on the server stats page to keep an eye on my listener count, so now I have a script which fetches that page every couple of minutes, parses the relevant number out of it, and logs it with a timestamp, giving me a full record of how many people were tuned in at each point of the show, what the peak was, and so on.
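
In outline it looks something like this; the stats URL and the regex that pulls the count out of the page are invented here, since every streaming server’s status page is different:

```perl
#!/usr/bin/perl
# Sketch of the listener-count logger. The URL and the pattern that pulls
# the count out of the page are invented examples.
use strict;
use warnings;
use LWP::UserAgent;
use POSIX qw(strftime);

my $url = 'http://stream.example.com:8000/status.xsl';   # placeholder
my $ua  = LWP::UserAgent->new(timeout => 10);

while (1) {
    my $resp = $ua->get($url);
    if ($resp->is_success) {
        # The pattern depends entirely on the server's status page
        my ($count) = $resp->decoded_content =~ /Current Listeners:\s*(\d+)/i;
        printf "%s  %s listeners\n",
            strftime('%H:%M:%S', localtime), defined $count ? $count : '?';
    }
    else {
        warn "Couldn't fetch stats page: ", $resp->status_line, "\n";
    }
    sleep 120;   # every couple of minutes
}
```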

Scripts followed to automate filling in the ID3 tag and renaming darkice’s output spool file to a standard format prior to uploading it to the site.
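
Conceptually that step boils down to something like the following sketch; the spool filename pattern, the target naming scheme and the tag values are invented, and the tagging here uses the CPAN module MP3::Tag purely as an example:

```perl
#!/usr/bin/perl
# Sketch of the rename-and-tag step. The spool filename pattern, target
# naming scheme and tag values are invented for illustration.
use strict;
use warnings;
use MP3::Tag;   # CPAN module for reading and writing ID3 tags

my $spool = shift or die "Usage: $0 <darkice-spool-file.mp3>\n";

# Suppose darkice spools to something like "rec-2012-08-04-21-00.mp3"
my ($y, $m, $d) = $spool =~ /(\d{4})-(\d{2})-(\d{2})/
    or die "Can't parse a date out of $spool\n";

my $show    = 'MyShow';                                   # placeholder
my $newname = sprintf '%s-%04d%02d%02d.mp3', $show, $y, $m, $d;

rename $spool, $newname or die "rename failed: $!\n";

# Fill in the ID3 tag so players show something sensible
my $mp3 = MP3::Tag->new($newname) or die "Can't open $newname\n";
$mp3->update_tags({
    artist => 'DJ Me',                # placeholder
    title  => "$show $y-$m-$d",
});
```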

quextal.com is a WordPress-based site with a heavily customised skin and a couple of extra plugins, nothing too fancy. After writing the first few posts by hand, I came up with a simple template-driven script which would simply wrap my plain-text tracklist of the show in some HTML to make it look a bit prettier for the site. This evolved so that it would read the metadata from the MP3 (eg filesize, bitrate, length in minutes and seconds) and put that info in there as well.
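
Reading the metadata is the easy part; with the MP3::Info module from CPAN it is roughly this (a sketch, not the actual templating script):

```perl
#!/usr/bin/perl
# Sketch of pulling out the MP3 details the template needs, via MP3::Info.
use strict;
use warnings;
use MP3::Info;   # CPAN module; get_mp3info() is exported by default

my $file = shift or die "Usage: $0 <mix.mp3>\n";
my $info = get_mp3info($file) or die "Couldn't read MP3 info from $file\n";

my $size_mb = sprintf '%.1f', (-s $file) / (1024 * 1024);

# BITRATE is in kbps; MM/SS are the length in minutes and seconds
printf "%s: %d kbps, %d:%02d, %s MB\n",
    $file, $info->{BITRATE}, $info->{MM}, $info->{SS}, $size_mb;
```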

After a while I decided to have my online tracklists in table format rather than just reproducing what I write in plain text. This meant adapting the script to split each tracklist entry into its separate columns. I had the prescience to choose a roughly standardised format for my plain-text tracklists anyway — at its simplest, it’s just “Artist – Title” or “Artist – Title – Label” — but over time it has evolved a number of variations to deal with, for example, marking out who played which track when I have a guest in. I sensed it was time to create a separate library (Perl module) to parse tracklists into their component fields, and a number of my scripts now use this.
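
At its core the parsing is just a split on the separators, conceptually something like this stripped-down sketch (the module name is invented, and the real thing copes with far more variations):

```perl
package Tracklist::Parse;   # hypothetical name, not the real module
use strict;
use warnings;
use utf8;   # this source file contains a literal en dash

# Parse one plain-text tracklist line of the form
#   "Artist - Title" or "Artist - Title - Label"
# (the real module also copes with guest-DJ markers and other variations).
# Expects a decoded (character) string; returns a hashref or undef.
sub parse_line {
    my ($line) = @_;
    my @fields = split /\s+[-–]\s+/, $line;
    return unless @fields >= 2;
    my %track = (artist => $fields[0], title => $fields[1]);
    $track{label} = $fields[2] if @fields >= 3;
    return \%track;
}

1;
```

A caller can then pick out $track->{artist}, $track->{title} and (if present) $track->{label} to fill in the table columns.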

Just this year I expanded the templating script into a more complex system which interfaces directly with the WordPress API. It determines which radio station the broadcast was on (which is in the filename), searches for some of my past mixes for that station on the site, and offers a selection of their post titles so I can choose one (eg with, or without, a guest DJ, as applicable) on which to base the default title for the new one, helping to keep the title format consistent. Both my current regular shows feature the number of the show in the title – the script will automatically increment this, be it in ordinary numerals or Roman numerals. Appropriate tags are chosen automatically, and any additional words for the article can be added before the script posts it directly to the site via the API.
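
The Roman-numeral part sounds cleverer than it is; a bare-bones sketch of the idea, using the Roman module from CPAN, looks something like this:

```perl
#!/usr/bin/perl
# Sketch of bumping the show number in a post title, whether it's written
# as "Show 42" or "Show XLII". Uses the Roman module from CPAN.
use strict;
use warnings;
use Roman qw(Roman arabic isroman);

sub next_title {
    my ($title) = @_;
    if (my ($n) = $title =~ /(\d+)\s*$/) {                # ordinary numerals
        my $next = $n + 1;
        $title =~ s/\d+\s*$/$next/;
    }
    elsif (my ($rn) = $title =~ /([IVXLCDM]+)\s*$/) {     # Roman numerals
        if (isroman($rn)) {
            my $next = Roman(arabic($rn) + 1);
            $title =~ s/[IVXLCDM]+\s*$/$next/;
        }
    }
    return $title;
}

print next_title('Example Radio Show 42'), "\n";   # Example Radio Show 43
print next_title('Example Session XLII'), "\n";    # Example Session XLIII
```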

Why stop there? Since my Tracklist library conveniently gives me information about the artists and labels played in each show, the script now also creates a Custom Field entry for each. I don’t really know why I’ve done that… just a vague sense that it might be useful at some point in the future. For now, a slight tweak at the WordPress end provides A-Z lists of artists and labels for each mix at the end of the article. At some point, if so desired, it should make it easier to search for all the mixes containing a specific artist or label…

Most recently, the thing I was finding particularly time-consuming was filling in the label for each tune, information I often don’t have to hand during the show when I’m writing down the track. So now I have a couple of scripts to help with that. The first just looks for the “artist – title” string in all my previous tracklists and copies the label info from there if it finds it (see the sketch below). The second, which is a work in progress, attempts to automate looking up the track details on the sites where I do most of my tune shopping and screen-scraping the label from there.
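
In spirit, the first of those is just a grep over the archive, along these lines (the archive path is invented, and the splitting mirrors the parser sketch above):

```perl
#!/usr/bin/perl
# Sketch: look up a track's label in previous plain-text tracklists.
# The archive path and file layout are invented for illustration.
use strict;
use warnings;
use utf8;   # for the literal en dash in the split pattern

my ($artist, $title) = @ARGV;
die "Usage: $0 <artist> <title>\n" unless defined $title;

my @archives = glob '/home/me/tracklists/*.txt';   # placeholder path

for my $file (@archives) {
    open my $fh, '<:encoding(UTF-8)', $file or next;
    while (my $line = <$fh>) {
        chomp $line;
        # Same splitting idea as the parser sketch above
        my @fields = split /\s+[-–]\s+/, $line;
        next unless @fields >= 3;                  # only lines that include a label
        if (lc $fields[0] eq lc $artist and lc $fields[1] eq lc $title) {
            print "$fields[2]\n";                  # found the label
            exit 0;
        }
    }
}
warn "No label found for '$artist - $title'\n";
exit 1;
```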

Curiously, the net effect of all this automation has not been to make it significantly quicker or easier to post a mix, compared to when I first started out and was doing it all by hand. What it has done is increase the amount and quality of information I’m putting up, and its consistency and reliability, while taking about the same amount of time and effort. Obviously that doesn’t include the effort required to write the scripts… but that’s not effort. That’s fun. It’s been a whole series of interesting little coding tasks… which of course is the main reason I did it.