Archive for the 'internet' Category

My facelets are no longer parsed – an adventure in Google and Java

The new feelitlive.com website is powered by Seam running over Hibernate and JSF. The stuff you see – the HTML source code – is generated by Facelets.

Facelets is a layer over JSF and the two work well together. I have mixed feelings about JSF – I don’t think I’d use it without the corrections made by Seam – but the combination of Facelets+JSF+Seam is quite compelling, once you get it set up. If you are an ex-ASP.NET guy like me then Facelets is equivalent to the union of “master pages” and “user controls”. JSF is equivalent to the whole supporting notion of “controls” in ASP.NET, but has a cleaner separation of front and back end logic.

In Facelets the “master pages” part is known as “templating” and gives you a template for the whole page. Your content is added into the middle.

The “user controls” feature is known as “composition components” and gives you re-usable blocks that you can drop into the page or template ad hoc simply by adding a namespace-qualified element. These are used all over feelitlive.com for everything from individual links and images to whole blocks of content.
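To make that concrete, here is a minimal sketch of a page that uses both features. The fil: namespace, tag name and template path are invented for illustration – they are not the real feelitlive ones:

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:fil="http://example.com/fil/taglib">
  <body>
    <!-- "templating": the page content is poured into a site-wide template -->
    <ui:composition template="/WEB-INF/templates/main.xhtml">
      <ui:define name="content">
        <!-- a "composition component": one namespace-qualified element
             drops a whole re-usable block into the page -->
        <fil:fragment/>
      </ui:define>
    </ui:composition>
  </body>
</html>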

my composition components were not being parsed

So, I was a little shocked to find that all of a sudden my composition components were not being parsed! Though the site had been live for months, half of it was now simply missing; little bits of each page eaten away like Swiss cheese!

I had moved to using Netbeans and was focusing on a backend web service, so hadn’t actually run my homepage from my development machine for a week or so. When I went to do so, the templates and every built-in tag library were working, but the tag library containing my composition components was not parsed or executed. In Facelets the tag libraries are assigned a URL and that URL becomes the XML namespace in the XHTML source file – today all the XML in my own namespace was simply transferred into the output verbatim.

So, I scoured my change-log for evidence of any relevant changes I might have forgotten. I found nothing that even the most desperate coder would choose to consider related. I tried building from the command line – still no joy.

Netbeans was using the very latest Apache Tomcat, 6.0.20, so I tried the locally built WAR in three different versions of Tomcat, including the version used in live – nothing. So “great”, I thought, “I can’t even release untested code if I wanted to”.

Thinking of the live server gave me an idea, so I downloaded the WAR file running on live and ran that in two of the Tomcat versions I had locally. Nothing. “Huh? Surely that should work!”

What about the JVM? Nope, my JVMs didn’t seem to have changed. This was mysterious – I’m sure I remembered an update arriving, but then I have three computers I use regularly.

So, no clues. Stuck. I’m a team of one on this project so there was nobody to ask. I could possibly change JVM anyway, though I’m not sure what I’d change it to, or where to get an older version. Possibly I could try downloading old versions of the code and doing a binary search for the change that caused it, but that assumes a code change caused it, and nothing in the change-log looked likely. I could waste a lot of time on this one!

understanding this issue was secondary to fixing it

I left those two ideas untried, because something else occurred to me. Some stuff was working – the built-in tag libraries. These are packaged in the Facelets JAR and I could see them being picked up by Facelets in the Tomcat log. My tag lib was not being picked up. I decided that understanding this issue was secondary to fixing it, and went about building a version of my tag library in a separate JAR, breaking it out of its home inside the main WAR file, and hoping that making it look more like the stuff that worked would cause it to work.

I followed the layout of the seam-mail JAR, but this doesn’t use composition components – only custom components. So I added my existing folder full of XHTML fragments into “META-INF/facelets” with the “fil.taglib.xml” in “META-INF”. My fragments were already identified as “facelets/fragment.xhtml”. I created a separate Maven project for all that, added it to the parent POM as a module and to the WAR as a dependency.
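For anyone trying to reproduce this, the end result is a JAR with the XHTML fragments under META-INF/facelets and a taglib descriptor in META-INF that points at them. The sketch below is cut down and the namespace, tag name and file names are stand-ins rather than the real feelitlive ones, but as far as I can tell Facelets will pick up any META-INF/*.taglib.xml it finds in a JAR on the classpath, which is what made this fix work:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE facelet-taglib PUBLIC
  "-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN"
  "http://java.sun.com/dtd/facelet-taglib_1_0.dtd">
<!-- META-INF/fil.taglib.xml inside the tag library JAR -->
<facelet-taglib>
  <!-- the namespace declared in the XHTML pages that use the components -->
  <namespace>http://example.com/fil/taglib</namespace>
  <!-- one entry per composition component; the source path is resolved
       relative to this file, i.e. META-INF/facelets/fragment.xhtml -->
  <tag>
    <tag-name>fragment</tag-name>
    <source>facelets/fragment.xhtml</source>
  </tag>
</facelet-taglib>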

I built the new JAR, then built the WAR.

I then checked the layout was showing up okay in the “Libraries” folder in Netbeans – the JAR layout matched the ones that were picked up. Good.

Build again – can’t hurt.

Right click — run.

Wait.

Drum roll.

Bang! It worked – phew.

So what. The hell. Was that all about…..

Googling was useless

I don’t know. I still need to check it all over to make sure it’s all back and working, but frantic Googling was useless on this one so I wanted to share the adventure and the solution out of pity for the next developer down the road. I’d also like to make an observation about the process of searching for a solution online.

This problem occurred near the top of a stack of technology starting at Windows XP, then Sun’s JVM, Apache’s Tomcat, JBoss’ Seam, then my code and finally Sun’s Facelets implementation. The stack could have been interfered with by the IDE – Netbeans – and the build tool – Apache’s Maven – or by any Maven plugin I’m using. Searching the web for the phrase “facelets composition components are no longer parsed” gives you a blog post saying “please use my new site at [blah], I will no longer be keeping this one up to date”. Even the migrated article was not relevant, though it did mention Facelets. Fail. Big stinking fail.

What I needed was to find all documents – blog and forum posts, bug reports, and knowledge base articles – related to any of the technology components I listed and to the feature of Facelets called “composition components”. Properly understood, that is a small set of documents. Bonus points would have gone to the search engine that allowed me to select my symptoms from a list (they are not uncommon), but using supplied keywords for ranking would be good enough. A find-engine like this could also have offered an alert service and gone to work trying to find more results over the course of days, even monitoring new bug reports and forum posts for results.

I got my answer after five or more hours of work spread over two days. I’d have been jubilantly happy to have had my answer after two hours. Others might be happy with 6 rather than 24, or 24 rather than 72, but I don’t think it would have taken anything like that long. I think you could do this kind of thing in interactive time, but the point is it doesn’t need to be that fast to be useful. I can always find other stuff to do as long as I can expect an answer. With Google I can’t get that answer without reading thousands of pages, and even then there is no guarantee the answer is there.

Of course, linked data is the only current technology likely to ever offer answers to this sort of query. It involves too much precise classification and description for a statistical-web approach. How do statistics tell you the release history of the Maven plugins which are used to build WAR files? They don’t. How do statistics provide an unambiguous list of the features in Facelets? They don’t. The runtime dependencies of JSF? Likewise. Can statistics help to rank pages according to the nature of their relationship with other concepts? Can stats weight pages directly related to given concepts over those with links to concepts two degrees apart? Maybe. Placing problem reports over brochures, or holding that product components are more relevant than competitive products? Unlikely.

People say we don’t need semantic search because we can always add keywords, or they put up ridiculous road blocks because they think it’s “too hard”. Don’t they ever encounter issues like this one? Really? This is long-tail stuff, but it’s definitely worth addressing. Those five hours were worth over £200. Once it exists I will use the web site that does this; I’d even pay a fee for the privilege.

How the health system should work

I decided to blog this here since it’s a bit involved for a comment on a politics blog. Suffice to say I’ve been argumentatively agreeing with LPUK. This is also the second time I’ve advocated the semantic web to solve marketplace failure. There is no particular reason for this, it’s just that modern life seems to gel with it, or something, and once you are familiar with it you can’t help but apply it when thinking about IT issues. It’s also a very open, liberal system of working.

I’m going to get really specific here, because that’s the best way to avoid being vague:

I’d choose to use Linked Data expressed in RDF+N3 to represent information about my health, and something not unlike FOAF+SSL for authentication, since I’d like to be able to use more than one service provider at the same time, so it needs to be a format with native features that enable integration of data. RDF also happens to resemble EAV/CR, which is a medical-records design pattern. The data would be stored wherever, whenever, by any number of arbitrarily chosen organisations and would be brought together ad hoc via tools in the Linked Data tradition. Integration tools would also be selected from an open market for doing exactly that job. If I chose to use one provider for every medical service I obviously wouldn’t need this extra bit, but having it allows a more diverse market and more privacy. Importantly, emphasising data integration as a feature leaves scope for organisations to add integration features to whatever system they already have, which includes retaining human procedures, speeding up the evolution of this ecosystem.
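To give a flavour of what “information about my health” as Linked Data might look like, here is a purely illustrative sketch in RDF’s XML serialisation (I’d actually write it in N3); every URI and property name below is made up:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rec="http://records.example.com/terms#">
  <!-- one resource per patient, identified by a URI that any chosen
       provider or integration tool can refer to and merge data against -->
  <rdf:Description rdf:about="http://records.example.com/patient/4f3a9">
    <rec:bloodGroup>O negative</rec:bloodGroup>
    <rec:allergy rdf:resource="http://records.example.com/allergens/penicillin"/>
    <rec:emergencyContact rdf:resource="http://people.example.org/cards/mum#me"/>
  </rdf:Description>
</rdf:RDF>

Because every statement hangs off a URI, a second provider can publish further statements about the same patient URI and an integration tool can merge the two graphs without the providers having agreed a shared schema up front – which is exactly the native integration feature I’m after.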

If I were someone with specific conditions likely to need it then I’d carry a card designed to permit rapid access at A&E departments. I’d buy these cards from a similar marketplace of providers, but probably all of them would eventually be forced to catch up with the state of the art circa 1994 and support content negotiation, such that once the URL is accessed whatever the doctor needs is delivered to them over HTTP. All the standard authentication options for HTTP, including FOAF+SSL, would be available and may or may not be used in deciding to serve up the data. Imposing a standard protocol and format here wouldn’t be too bad, but the state needn’t bother: compatibility with A&E is the core feature.

The method of formatting the card would be decided by an industry-organised standards body, but it need only be a URI. There is nothing scary now about URIs! It will contain a very long random number – too long to be worth guessing – and after first use, this URI can only be accessed for a few days. The server will know it’s serving emergency data and can take care of procedural matters, like waking up your mom, if that’s what you want.

The system is essentially a Summary Care Record resource, but hosted by whoever I chose to host it, and containing whatever I decided to put on it. If I remain in a coma, the care record will name someone to come and sort out access to the data, probably a relative or a staffer from one of the many organisations I might buy services or insurance from. Possibly the expired record will still provide that data, just in case.

XML, having at least the ability to be unambiguous and machine-verified, would be my second choice of format. Automated integration is not a feature, but there is a good selection of tools and an experienced workforce. Stuff like SOAP might make things harder – too many variables – but a proper REST implementation would evolve as a norm (Linked Data is RESTful). With content negotiation the syntax doesn’t actually matter that much, because providers of emergency care cards will be incentivised to run really good software to handle syntax issues, as would the ad-hoc integration providers used in every other circumstance. Obviously then, data integration features are the deciding factor for consumers, and whatever data-integration techniques work best will rise to the top in an open market.

That said, if people want to squirt pigments through feathers onto bits of reconstituted tree to depict vague and inconsistently applied words and move the resulting “information” around using horse and cart, then they should be free to do that. Want to use something properly stunted like JSON? Sure, but those organisations doing so should also be perfectly at liberty to lose customers to competitors doing it properly.

Insurance providers would insure against the cost of transferring data out of systems when providers go bust and would set premiums according to how well the chosen providers operate, taking into account things like security and off-site back-up procedures as well as the quality of implementation details. If I choose to rely on paper records, I pay a bit more. If I am foolish enough to use a record-keeping service that uses JSON then I will pay a lot more – obviously ;-)

A body of shared knowledge will be created about the quality of each company – known as its “reputation”, remember them? – that will include horror stories about data coming out wrong and user interfaces being good, bad or ugly, just as search engines or price comparison sites have reputations today for the same features. Obviously consumers won’t ask “is that SOAP or REST?”, or “do you have a comprehensive OWL ontology for health records?”, but they’ll get to know the consequences of those technology options. Just like with search engines there will be default choices that people make, and times when you want something different or more complex to suit your needs, so no-one will be greatly inconvenienced.

Anyone too stupid to want to own their own records could just be handed the existing dead-tree words or digital records on CD and told to keep them safe or suffer their fate. A kinder alternative would be to apply the kind of opt-out system advocated for education (not the one for health, but I don’t feel strongly on that) in The Plan to the NHS, until such point that all the pathetic losers who can’t be bothered to think about staying alive end up dead and the NHS is mercy-slain in 2060.

It goes without saying that side issues like access to anonymous data by academics will also be subject to market forces and people will vote with their wallets.

On the web, no one knows you’re a man

[Photo: girl from a show with beautiful eyes] SNOSoft and DanBri write about social engineering hacks involving Facebook. Someone who isn’t a hot chick working in your company joins your firm’s Facebook group, and of course, on the internet nobody knows you’re a dog. If you saw a photo like this one, would you check or just let her in your group?

Of course, if your company sets up an OpenID provider coupled to the corporate directory, and some other nice person (maybe Facebook themselves) designs a widget to force group members to authenticate against particular OpenID providers, then the fake employee would stick out by not being able to get into the group.

Open Spectrum as an alternative to a broadband universal service obligation

While the recognition of the internet as an important facilitator of economic growth is accurate and in some senses laudable, I take issue with the Government’s recent announcement of a universal service obligation for internet infrastructure companies.

A universal service obligation can only increase costs on the companies involved and must involve a large Government subsidy, a new tax, or the involvement of the BBC – a dominant player in the TV and online industry which also benefits exclusively from a special tax. Such options involve a direct use of taxpayer money, an explicit redistribution of wealth that will be harmful to economic growth in the short term, a slackening of competition in the telecommunications market, and potential bias introduced by a determinedly left-wing state media company. These side effects, I feel, might make the measure an overall negative for the economy, broadband service quality and media independence.

An alternative free-market solution exists, but at a time when the Prime Minister is repeatedly criticising other parties for doing nothing, this option requires the Government to sell the idea of doing less than it does already. It is a laissez-faire option.

Wireless internet services allow for the widening of broadband coverage without necessitating the laying of cables along every street, or if there are tall buildings or other high spots in an area, without even requiring the raising of antenna masts. For example, I work at a building in central London that is signed up to a service run from the top of the Centre Point building.

Unfortunately these services traditionally operate using a small band of electromagnetic spectrum which has been left unregulated. An expensive licence is required to expand into other areas of spectrum, and licensees are unlikely to share spectrum as readily as operators do in the unregulated section. Simply put, more available spectrum means a better wireless broadband service, but the Government is selling monopoly access to this resource to rich corporations at the expense of normal people.

The solution is very simple indeed: reserve additional blocks of spectrum for unregulated use – that is, stop regulating parts of the electromagnetic spectrum. The spectrum previously allocated to analogue television provides a spectrum gap and an immediate opportunity for decisive action.

This idea and the deregulated spectrum are called “Open Spectrum”, because access to the spectrum is open to the entire market of providers, from individuals, to small grass-roots charitable or hobbyist operators (for an example see SPC’s Open Wireless Network), to commercial operators of all sizes – not just large corporations. This free-market access has the potential to fuel an immediate growth in coverage combined with a gradual increase in service quality as device manufacturers improve the technology. Interestingly, it may even drive a shift in infrastructure ownership away from Government and corporations and literally into the ownership of the people, with individuals voluntarily co-operating to mesh their own devices together to further improve services.

A Frequently Asked Questions document is available which covers this from a historical and technical perspective, and is quite accessible to laypersons.

Derived from a letter sent to my MP.

Seeing Links

[Image: Dendrons, Pisces and the Cosmos] I’m currently engaged by a small systems integration and – oddly enough, you might think – web development company. That is to say, I’m working with a company that does web development, creative work and systems integration. I’m working on the systems integration side of things, doing architecture and proofs of concept for an event-driven integration platform focused around XML processing. This has involved a bit of rules-based logic, arguing about defining schema upfront or letting the customer do it using RDF (it’s easy but it’s complicated) vs using relational databases (it’s complicated but it’s easy), a bit of coding with the DOM API, reviewing some graph-oriented process definition languages (if only to prove we didn’t want one) and some thought around long-running business processes involving customers in an e-commerce context (which proved we actually did), and straying into architectural issues like whether to incorporate an ESB and what the hell an ESB is anyway.

This collection of abstract issues allows me and my colleagues to spend some time thinking in the abstract, researching topics and increasingly seeing previously obscure links between things. For example, the fact that a web design company has ended up doing systems integration using a web language like XML looks like a link, though actually it’s a complete coincidence which I only just saw. Other weird stuff comes up too, like the fact that the web page of a tool we’re reviewing was two clicks away from a definition of something very like what we’re building, though we only came across the definition (and even a related book) three months after we started to write proofs of concept. My guess is that it’ll be a useful tool.

Then came a less conceptually loaded link; in fact it was just a plain HTML link of the “if you liked that then you’ll like this” variety that led me to an excellent InfoQ presentation on what REST is. If you’ve troubled yourself to read any of the links embedded in this article, or even if you’re familiar with some of the terms already, then you’ll realise that this presentation actually sits right in the middle of the jumble, touching on SOA, good web site design, and the importance of URIs as business identifiers. Of course good business identifiers are important in any system, especially relational databases, almost certainly SOAs, and definitely in Linked Data and RDF, and were a big topic at Linked Data Planet, which I went to last year, so I’m seeing links stretching that way too.

Do you ever get a feeling somewhere in the back of your head of neurons rewiring themselves? You might just dismiss it as a headache, but there is a particularly satisfying ache I get sometimes which is a bit like the ache the day after some strenuous exercise (another weird link), and it’s a feeling I get when concepts are shifting about and getting connected together in my mind. Well I have that feeling now, and the shapes being formed back there in the etched lines of synapses are pretty interesting, but are too big for one post…

I’ve been watching BBC Parliament

I’ve been watching BBC Parliament coverage of a debate about Parliament’s relationship with the people and new technology such as blogs and Twitter, and PR methods such as issuing pamphlets, inviting school children into visitors’ centres and education centres, etc. The PR-type stuff still has a feel of dreary pointlessness about it, though I suppose it may work on the brighter or more enthusiastic kids, but the tech stuff was in some cases just as dreary.

They are a pre-web generation embracing this new gadget because it’s a new gadget, rather than because it will work or because it’s the best way forward. Eventually there were a few bits of good news, but in general I was discouraged by how much emphasis there was on well-publicised gizmos of debatable value and how little on substance.

Then, a grey-haired old Tory stood up and delivered this corker:

My noble friend has introduced a subject of extraordinary importance, much greater than we are giving it credit for today. My noble friend Lord Marlesford reminded us that Parliament was invented to control the Government. Before that, we had chaos and blood-letting. It actually cost a great deal of blood to build this institution that we now occupy so placidly. It is what stands between the British people and a reversion to some unsatisfactory, undemocratic and, quite possibly, violent existence. It is foolish to think that mere stasis will preserve it.

The line between government and Parliament has been so blurred since the reign of George I that many of the public do not understand the function of Parliament, because they see government functioning inside it. There are, I think, 140 Members of the Government and PPSs occupying Benches in the House of Commons. They are inside the machine invented to control them, into which none could have put a foot before the reign of George I, who did not speak English and had to have somebody here to do his work for him. We are looking at a precious thing. As the noble Lord, Lord Grocott, who has not yet returned to his place, pointed out, the product is very good: it is liberty.

Now, if the British people do not understand that, and if Parliament becomes devalued, they will not stand to protect Parliament because they will not see it as protecting themselves. Therefore, we have a real duty to show the people how the power of Parliament has been eroded, is being eroded and will, if future Governments of all political colours have their way, continue to be eroded, because Parliaments are a thorn in the flesh of Governments. If the public are to understand that, they must understand what we are doing.

Absolutely nothing to do with the web or new technology at all – just an old-fashioned desire to focus on what really matters and do it properly.

He continued talking, and demonstrated what I mean, with this proposal:

When I was a parliamentary candidate and started looking at these things, I well remember the furore of excitement if a Minister ill advisedly let a government policy out of the bag, deliberately or accidentally, outside the premises of his appropriate Chamber in Parliament. … [If] a Minister in the … House of Commons, were to make a policy statement outside it, as soon as that was known he was hauled back by the Speaker to face an emergency debate. He got a headline, but not the one that he wanted ….

What happens now, almost without comment and as a matter of routine, is that almost all government policies—or all but those of the hugest importance—are made outside the House, by the Government, to an audience invited by them… As a result, the only comments that the media hear come from Ministers… That means that not only are the voices of the enraged Opposition, of whatever party, not heard but the voices of the disenchanted Back-Benchers of the government party are also silenced. So what the public get is a picture that bears no relation to Parliament at all and nothing gets reported from these two Chambers.

…. Would it not be a simple matter for the House of Commons to take this matter back into its hands and to require the Government to release all news about their business that affects the electorate inside the Chamber? That is where the news would then be, as would the reporters, who would hear what Members of Parliament thought about it. That would be the news, and it would be broadcast on the traditional media, at least. That way, at no extra expense to anyone, Parliament would begin to come back to being the focal point of public interest, which is where it must be if this sovereign and free state of ours is to maintain its freedom in the years to come.

Simplicity is priceless, which is why another old Tory gets the prize for being the most forward-thinking peer with this piece of good news:

We can do far more to utilise the internet. Bills are now published in XML format, so anyone can use the material to tag particular clauses and subsections. That takes us some way towards meeting the aims of bodies like mySociety. We should be able to build on this capacity so that Bills posted on the website are indexed in order to enable users to search text and sign up for more specific alerts.

Of course, if he’d said RDFa I’d have had kittens, but he scattered a few more precious stones around:

The Constitution Committee of your Lordships’ House, …  advocated the greater use of informal Keeling schedules, where a Bill amends an Act, enabling people to see how the original sections are amended by the Bill. The Modernisation Committee of the other place has also recommended exploring the possibility of publishing on the web the text of Bills as amended in Committee, with text that is added or deleted shown through the use of different colours.

I understand thought has also been given to interleaving Bills and Explanatory Notes, so that relevant material from the notes appears on the page facing the clauses referred to. That not only makes it easier to grasp the purpose of a clause, but may also encourage those who write the Explanatory Notes to ensure that a note on a clause does not simply repeat the provisions of the clause. I suspect it will be as helpful to parliamentarians as to members of the public…. These are examples of the sort of thing we should be pursuing.

Now there, emphasised, is an example of somebody understanding that presenting information differently can influence the people writing the information – really getting it in a detailed way and turning it into a simple practical proposal. Probably the Constitution and Modernisation Committees took days to thrash those out, consulting all manner of experts, but that Lord Norton is citing them and giving them appropriate emphasis is very encouraging.

Less encouraging are the words of Lord Brabazon of Tara, the Chairman of Committees, who says:

In addition, Bills are already available in XML format—whatever that is—which allows individual clauses and subsections to be tagged, as mySociety wants.

The noble lord clearly does not recognise the power that you get as a computer programmer from using a standard syntax. Perhaps he doesn’t drive a car, since it’s pretty obvious that standardising fuels, fuel caps, pumps and other technical details in cars has enabled us to have a proliferation of petrol stations, to our great benefit.

The reasons this is the case are much the same in the two fields and are not exactly complicated, so it’s discouraging that Lord Brabazon regards it as an appropriate place to make self-deprecating jokes. Using XML to describe the activities of Parliament is a way to expand the community of people able to get involved with presenting the data in new and interesting ways. It will allow parties, think tanks, charities and search engine companies, as well as an army of enthusiastic voters, to help the public stay informed about Parliament.

In short, publishing XML is the cheapest possible way he can achieve the goals they agreed on during the debate. Not only bills but all Parliamentary data should be published in XML, and it should be reliable, consistent, good-quality XML to enable the widest range of contributors to get involved in the widest range of Parliamentary activities.
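For what it’s worth, “taggable” XML need not be anything exotic. The sketch below is purely illustrative – it is not the schema Parliament actually publishes – but it shows the essential property: every clause and subsection carries a stable identifier that outside tools can link to, quote, diff and build alert services on:

<bill id="example-bill-2009">
  <title>An Example Bill</title>
  <clause id="clause-1">
    <heading>Duty to publish proceedings</heading>
    <subsection id="clause-1-1">
      <!-- a stable id means mySociety, parties, think tanks and search
           engines can all point at exactly this subsection -->
      <text>Parliamentary business must be published in a consistent,
      machine-readable form.</text>
    </subsection>
  </clause>
</bill>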

Resilient DNS cache on Ubuntu

One of the most irritating things about being a geek (though far from the most irritating thing) is becoming annoyed with apparently foolish or below-par performances from technical widgets. What gets you is that you know exactly what’s wrong, and it all seems obvious that either a) it’s simple and common and should have been prevented or fixed already, or b) the hazard was so clearly obvious that it should have received a higher priority. Today, I’m talking about the internet’s obvious single point of failure: DNS.

Have you ever noticed that Firefox is sitting there apparently inactive (translation: nothing is flashing) with a status bar message like “Looking up feelitlive.com…”, despite the fact that you looked it up just fine a few minutes earlier? You want to find out what’s on and go out, not debug your network, so you never investigate it and never call your ISP, because it does work eventually and ISPs use call queuing technology rather than investing in extra human beings.

Anyway, this malady affected some of my favourite political blogs on the night of the US election, and it didn’t take much F5 bashing to work out that popular sites like sky.com worked fine, less popular sites worked slowly, and really niche-market sites like, er… ubuntu.wordpress.com, for example, didn’t work at all. Since it was election night I wasn’t going anywhere, so I called O2 to have them confirm the obvious – a DNS server somewhere on BT’s network was broken and local caches were only populated with the more frequently hit domains, so that was all you got. Hmmnn… big event happening, everyone looking for news? Might it get busy on the web? Do you think?

I figured, “this is stupid, I visited the site earlier, why doesn’t my computer keep the IP address and re-use it?” I wanted a DNS cache! That way, my ISP’s DNS service only needed to work once and I would be protected from such foolishness.

The techy bit

Luckily, the article I wanted was in Google’s cache (accessed using an IP number not a DNS name, so working just fine…) but its proper URL is http://ubuntu.wordpress.com/2006/08/02/local-dns-cache-for-faster-browsing/

The article is a little over-complicated for a laptop user, since most laptop users know the button to reset their wireless connection and aren’t DSL users as such either. I got away with simply installing dnsmasq using Synaptic Package Manager and editing two files using “sudo vim <filename>”.

First I opened /etc/dnsmasq.conf and uncommented the line:

#listen-address=

and entered my loopback IP so it looked like:

listen-address=127.0.0.1

You can also make dnsmasq listen on the loopback interface “lo” by uncommenting and editing the “interface=” line instead, if you prefer.

Then in /etc/dhcp3/dhclient.conf I found the line:

#prepend domain-name-servers 127.0.0.1;

and removed the “#” to make it active:

prepend domain-name-servers 127.0.0.1;

I gave dnsmasq a precautionary restart with:

sudo /etc/init.d/dnsmasq restart

and then pressed the button to reset my wireless connection – which on Ubuntu is the little blue bar-chart thing on the bar at the top right, followed by the little blue round widget for the network you’re on – to pick up the new settings.

Anyway, that alone clearly wasn’t enough, because the ISP’s DNS server didn’t work at all for the little web sites, so reducing the requirement to having it work just once was still too high a burden on the overloaded machinery. I didn’t find a solution until just now, after another server blip. OpenDNS allow you to use their DNS servers for free, no questions asked, but with a DNS cache installed it seems silly to use the OpenDNS server as the main server.

Luckily, there is a directive to append the OpenDNS servers to the end of your nameserver list; it goes in the file /etc/dhcp3/dhclient.conf:

append domain-name-servers 208.67.222.222,208.67.220.220;

When I checked resolv.conf I saw the .222 address listed at the end, and the .220 server had vanished, but I still have a local cache and two independent nameservers, and my blip is gone, so I am quite content. (Jaunty doesn’t have this issue, but doesn’t guarantee it’ll try every DNS server listed.)

Extra dnsmasq.conf tweaks:

Uncomment (make active) the line below – around line 406 in the default file – to stop failed lookups being cached as permanent failures:

no-negcache

If one of your upstream DNS providers has executed an immoral land grab on unregistered domains (à la Verisign) then list their IPs likewise (see line 420):

bogus-nxdomain=64.94.110.11

Note that I don’t put OpenDNS in that category; they are giving you something free on certain conditions, and it’s up to you to obey those conditions. It is useful and proper to list OpenDNS like this if there is a temporary problem with their redirections; otherwise you are basically stealing. I use this on a network where simple hostnames like “fredspc” don’t resolve on the first attempt.

Peer to Peer Web Search technology

A mailing list message on the topic of Microsoft Live’s search privacy prompted me to take another look at peer-to-peer web search applications, and I discovered two – YaCy and Faroo. Both promise to protect your anonymity while searching, but paradoxically both will index the web using your click stream.

There are some interesting concepts at work there; in particular, YaCy’s reverse word index coupled with downloadable Linked Open Data such as DBpedia and WordNet could form a powerful combination, as long as the privacy protection was sound.

Don’t give up your mobile for free

I was pleasantly surprised to find my old phone, like many others, has a value despite being broken and unusable. Mobiles are a great example of the market delivering on reduce, reuse, recycle.

Fasthosts bottle under pressure of lawyer’s letter

Fasthosts, that bastion of the web hosting industry and pillar of the Gloucestershire community, has pulled the plug on a web server hosting several important (and some less important) political blogs, including that of Boris Johnson. They say it’s because accusations made by Craig Murray and Tim Ireland at Bloggerheads defamed a certain controversial Russian, and they mention a lawyer’s letter.

“In this case, we examined a website for potentially defamatory material and communicated to the customer that they had indeed breached the terms and conditions for Fasthosts Internet hosting.”

Would someone please pull the plug on this pathetic institution?

More on Fasthosts at the Register.

Links and quotes added 22/09/2007.
