Funny to see an Opera guy suggesting a way to browse the web without using Opera :) He even says it's easier to read all the comments that way (as text, using curl/wget/grep) than with their browser (not that I disagree with him).
Because if he needs to do that, then presumably other people have the same problem, and I'd be reasonably sure at least some of them can't, or aren't prepared to, use wget, sed & grep to find things they did in another program.
Browsers already have history, so it seems to me it would be logical to extend that functionality to show how a page changed in response to an input somewhere in your history.
Great approach, but some work could have been saved (and robustness added) by using the W3C's HTML-XML-utils. For example, there's hxselect, which filters HTML/XML against a CSS selector, and hxpipe, which breaks XML input into a more grep/awk-friendly format. I've used these tools myself on multiple occasions; they've saved me a huge amount of time.
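As a rough sketch of what those two tools buy you (the URL and the `div.comment` selector here are made-up placeholders; adjust them to whatever markup the blog actually uses):

```shell
# Fetch a page, normalize the tag soup to well-formed XML, then pull out
# just the comment bodies with a CSS selector instead of sed/grep regexes.
# URL and "div.comment" are placeholders for the real page's markup.
curl -s http://example.com/blog/post.html \
  | hxnormalize -x \
  | hxselect -s '\n\n' 'div.comment'

# Or flatten the markup into a line-oriented form that grep/awk handle well:
curl -s http://example.com/blog/post.html \
  | hxnormalize -x \
  | hxpipe
```

hxselect does the structural matching for you, so a stray newline or attribute in the HTML doesn't break the extraction the way it would with a hand-rolled regex.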
He's not alone. :-) Richard Stallman goes a step further as his laptop isn't directly connected to the internet.
Quote: For personal reasons, I do not browse the web from my computer. (I also have not net connection much of the time.) To look at page I send mail to a demon which runs wget and mails the page back to me. It is very efficient use of my time, but it is slow in real time.
Talk about the wrong tools for the job. A far simpler and more robust solution could be had many other ways: simple_html in PHP, for example, or JavaScript. Any DOM parser will be more robust and easier to follow than all those sed/grep calls.
I don't think there is anything simple about learning PHP to find a few blog comments, or about learning a third-party DOM parser, when you are already familiar with tools that apparently work for this one-off application.
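For what it's worth, you don't need PHP or JavaScript to get a DOM parser on the command line: libxml2 ships xmllint, which can parse tag-soup HTML and evaluate XPath against it. A minimal sketch (the URL and the "comment" class are placeholders I made up, not the actual blog's markup):

```shell
# xmllint's HTML parser tolerates broken markup that would trip up sed/grep
# patterns; --xpath then selects nodes structurally instead of textually.
# The URL and the "comment" class are placeholders for the real page.
curl -s http://example.com/blog/post.html \
  | xmllint --html --xpath '//div[@class="comment"]' - 2>/dev/null
```

That keeps the workflow in the shell, so it's arguably a middle ground between the sed/grep pipeline and writing a script in another language.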
http://aur.archlinux.org/packages.php?ID=10333
http://aur.archlinux.org/packages.php?ID=33276