How To View The Web From 9front

The World Wide Web is a wretched hive of scum and villainy, a fact all too familiar to refugees from the old Internet. September merged with IRL, and now everyone who was busy telling you to go outside and play for the past twenty-five years can’t be convinced to put down their phones. Sorry, teacher’s pets, but government and big business are both in on the scam. Dangers abound, and navigation is fraught with opportunities for disaster. Fortunately, some browsers don’t support Javascript or CSS.

mothra(1)

Enter mothra(1). Written in 1995 as a toy for the 2nd edition of Plan 9, mothra doesn’t support <div> tags, color attributes, or even fonts beyond bold, italics, underline, and fixed width. By the time 9front developers discovered it tucked away on the old Bell Labs server (mirror), it was no longer even shipped with Plan 9. Since then, some work has been done on it.

First was stopping it from printing the HTML version at the top of each document. Haha. The first fundamental change was to rip out all the network code (sorry, ftp:// lovers) and make it rely instead upon webfs(4), which brought it into line with all the other HTTP-eating programs in 9front. Next, moth mode was implemented, empowered by the addition of a generic text entry box to event(2), which allowed for nibbling chunks out of web pages and saving them under arbitrary file names in arbitrary locations on the file system. Finally, text selection and copy/paste were added. Planned future improvements include a Plumb menu option and easier customization of the browser’s default appearance.

webfs(4)

9front’s webfs(4) was re-written from scratch. The program is a file server that takes requests from local programs (yes, even shell scripts), contacts remote HTTP servers, and returns the results. Its user-agent string is configurable. You’re probably already thinking of uses for it.
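By way of illustration, here is a rough sketch of driving webfs by hand from rc, assuming it is running and mounted at its default location, /mnt/web (example.com stands in for a real site; consult webfs(4) for the full set of ctl messages):

{
	c=`{cat /fd/0}	# reading clone yields a connection number
	echo url https://example.com >/mnt/web/$c/ctl
	cat /mnt/web/$c/body	# the response body
} </mnt/web/clone

The block redirection keeps the clone file open so the connection isn’t reclaimed before the body is read. This is the same conversation mothra and hget have with webfs on your behalf.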

mothra sends requests to webfs and renders the results as a graphical web page.

Honestly

mothra does a pretty good job of rendering basic HTML, but many web pages have such wonky formatting, or otherwise rely so heavily upon CSS for layout, that sometimes quite a lot of vertical scrolling is required to get past the initial burst of banners, affiliate links, and menus. While some sites are irretrievably shitted up with mandatory Javascript, many still fall back to a relatively sane presentation via an RSS feed, which it turns out is the most efficient conduit for spraying web content onto your 9front screen.

hget(1)

Sidebar: Maybe you don’t really need to see images or click on interactive links. Ever heard of curl(1) or wget(1)? 9front’s re-written hget(1) offers a similar bag of tricks. Using hget and its companion tools, it’s even possible to fill out web forms from the command line. That’s right, say it again: everything is scriptable.
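A sketch of the idea (URLs are stand-ins, and the flags shown here are the traditional ones; check hget(1) in your release for the exact set):

hget https://example.com/index.html >index.html	# fetch a page to a file
hget -o feed.xml https://example.com/feed.xml	# same, via -o
hget -p 'user=glenda&query=rc' https://example.com/search	# POST a form body

Wrap invocations like these in an rc script and you have a crawler, a form-filler, or a feed-poller in a handful of lines.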

RSS/Atom

Anyway. Through some extended acrobatics it’s feasible to pull RSS feeds and deposit them into a format mothra can handle. A conveniently extant Go program called rrss runs independently of webfs (the RSS parsing library it uses handles HTTP requests all by itself), and translates feeds into either human readable plain text, or a blog format compatible with werc (via its third-party barf plugin).
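For instance, something like the following (the flags are the ones exercised by the script later in this article; the plain-text default and the URL are assumptions on my part):

rrss -u https://example.com/feed.xml	# print feed items as plain text
rrss -f barf -r $home/www/werc/sites/blog -t news -u https://example.com/feed.xml	# file them as barf entries

The second form is the one the cron script below leans on.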

9front ships with most of what you need to build this self-contained RSS feed reading setup; the rest (Go, rrss, and, if you want the blog format, werc with its barf plugin) is easily installed. You can run everything on one machine, or you can pull the feeds to a remote 9front server and then surf to them from anywhere you want in whatever browser you choose. That’s everything you need.

Configure cron(8) to periodically run rrss against a list of RSS feeds. This, too, is best customized with a shell script. It looks like I haven’t made significant changes to my script in over four years, so it must be doing something right.

Put something like rrss.barf in your $home/bin/rc/:

#!/bin/rc
# Dump RSS feeds into werc/apps/barf entries.
# Run from cron.
rfork en
name=$1
names=(read tumblr)
log=$home/lib/$name.err
root=$home/www/werc/sites/$name.stanleylieber.com
urls=$home/lib/$name
if(! ~ $name $names){
    echo guess again. >[1=2]
    exit usage
}
rm $log
>$log
chmod +at $log  # +t so this file isn't copied to the dump.
# go used to hang a lot, so we erect a safety net.
{
    while(){
        sleep 30
        if(ps | grep -s -e 'Broken[ ]+rrss'){
            {date; echo Kill Broken rrss} >>$log
            Kill rrss | rc
        }
    }
} &
ofs=$ifs
ifs='
' {
    for(i in `{cat $urls}){
        ifs=$ofs {
            j=`{echo $i}
            url=$j(1)
            tag=$j(2-)
        }
        alarm 60 rrss -f barf -r $root -t $"tag -u $"url >>[2]$log
        echo $"url $"tag >>$log
    }
}
for(i in `{f $root/src}){ chgrp www $i }
{date; echo Kill rrss.barf} >>$log
Kill rrss.barf | rc

And in /cron/$user/cron:

1 1,3,5,7,9,11,13,15,17,19,21,23 * * * local rrss.barf read
1 2,4,6,8,10,12,14,16,18,20,22 * * * local rrss.barf tumblr

The files $home/lib/read and $home/lib/tumblr are lists of RSS feeds, one per line, each a URL optionally followed by a tag; $home/lib/read.err, etc., collect any errors encountered when the script is run by cron.
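Given how the script splits each line (first field the URL, the rest the tag), a $home/lib/read might look like this (the URLs and tags here are purely illustrative):

https://example.com/blog/index.rss example
https://example.org/atom.xml another site

Lines with no tag are fine too; the tag field simply comes out empty.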

When you surf to your barf page (in mothra or otherwise), a convenient heading will indicate the total number of feed items pulled. After logging in to barf, each item will likewise be accompanied by a button you can click to delete the item from your disk.

Conclusions

Voila. I’ve been getting by this way for several years. Surviving, if not living (as my mother used to say). Obviously, mothra is no one’s conception of an ideal browser, but then again the web is no one’s conception of an ideal mechanism for delivering content. At the end of the day at least I’ve imposed these usability constraints upon myself.

Pros:

Cons:

I used to be able to track Twitter and Instagram via RSS. Both have resorted to repeated and extensive measures to prevent humans from interacting with their services without first logging in via an approved client running on an approved operating system. Which obviously is the exact opposite of what I’m trying to do here. Only one of the several reasons I gave up on these services. Tumblr, and some other failed extraction operations, still offer RSS feeds.

Maybe some of this is a blessing in disguise.

You Surrender

I know, this sounds like a lot of trouble to go through just to end up with diminished functionality on an operating system you’re probably only running in a virtual machine anyway.

If by some unlikely confluence of irregular events you are actually running 9front on bare metal, and you just can’t take it anymore, and life is starting to look bleak without access to 8chan’s AI-training captcha, you can always install OpenBSD or Linux in vmx(1).

I understand. At least you’re trying.

I won’t tell if you don’t.