StckMrktStatus - Providing Logical Explanations for the Stock Market

I've always thought the stock market reports you hear on the news are fairly silly. "The Dow Jones was up x% because this or that happened." The people saying those things always sound smart and informed, but no one really has any idea why a stock goes up or down in value. So, I made a bot to do the same thing. @StckMrktStatus will pick a stock from the NASDAQ or Dow Jones, see how it is doing for the day, and then add a reason for the change. The reasons are pulled from tweets that have the word 'because' in them. It's pretty simple but seems to work nicely:

The code is pretty simple, and I'll post it sometime soon (I'm working on a post about the code of my last few bots in general).
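Until then, the gist of it is something like this. This isn't the actual bot's code; the two fetch helpers below are hypothetical placeholders standing in for a stock quote lookup and a Twitter search for the word 'because':

    # A sketch of the idea: pick a ticker, look up how it's doing today,
    # then glue on a 'because ...' clause pulled from a random tweet.
    TICKERS = %w[AAPL MSFT INTC CSCO KO DIS]

    # hypothetical helper: today's percent change for a ticker
    # (in the real bot, this would hit a stock quote API)
    def fetch_daily_change(ticker)
      rand(-3.0..3.0).round(2)
    end

    # hypothetical helper: the tail end of a random tweet containing 'because'
    # (in the real bot, this would come from a Twitter search)
    def fetch_because_reason
      "because everyone is nervous about earnings"
    end

    ticker = TICKERS.sample
    change = fetch_daily_change(ticker)
    direction = change >= 0 ? "up" : "down"

    puts "$#{ticker} is #{direction} #{change.abs}% today #{fetch_because_reason}"
    # => "$KO is down 1.37% today because everyone is nervous about earnings"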

A Random Collaborative Drawing Thingy

As a bit of a throwaway project, I made a super-simple multi-user drawing app. It uses Server-Sent Events, which is something I've wanted to play with for a while.
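The server side of Server-Sent Events turns out to be tiny. This isn't the app's actual code, just a sketch of the relay pattern using Sinatra's streaming support (it needs an evented server like Thin to keep connections open):

    require 'sinatra'

    set :server, :thin
    connections = []

    # each browser opens an EventSource pointed at this route and just listens
    get '/stream', provides: 'text/event-stream' do
      stream(:keep_open) do |out|
        connections << out
        out.callback { connections.delete(out) }
      end
    end

    # when someone draws a stroke, POST it here and broadcast it to every listener
    post '/draw' do
      data = request.body.read
      connections.each { |out| out << "data: #{data}\n\n" }
      204
    end

Each client POSTs every stroke to /draw and renders whatever comes back over the EventSource.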

The code is on GitHub. The server is running on Sinatra, and the frontend is written using p5.js. There's not much else to say about it, other than that it's sort of amazing what you can do on the web these days – this whole thing is around 200 lines of code, and most of that is my lazy JavaScript.

      -------------------------------------------------------------------------------
      Language                     files          blank        comment           code
      -------------------------------------------------------------------------------
      Javascript                       1             33              1            156
      HTML                             1              1              0             36
      Ruby                             1              8              0             34
      -------------------------------------------------------------------------------
      SUM:                             3             42              1            226
      -------------------------------------------------------------------------------

Try it out! Draw with your friends!

SpaceJamCheck: Space Jam website monitoring on Twitter

People who have been online for a while probably know that the website for Space Jam, a movie from 1996, is still online, and essentially unchanged:

(If you don't know what I'm talking about, you can read about it here.)

At the end of 2010, someone noticed that the website was still online. Before I did a little research, I was convinced that people must have realized this before then, but Google suggests otherwise.

Anyway, here's an article that summarizes how it all happened: basically, a Reddit user noticed, word spread, and then it went viral on Twitter.

I haven't seen this mentioned anywhere, but according to the headers for the website, there were actually some modifications of some sort in 2005:

HEAD http://www2.warnerbros.com/spacejam/movie/jam.htm
200 OK
Connection: close
Date: Fri, 10 Jan 2014 02:12:09 GMT
Accept-Ranges: bytes
ETag: "89dfb-13c5-4027752a8ca80"
Server: Apache
Content-Length: 5061
Content-Type: text/html
Last-Modified: Thu, 06 Oct 2005 15:10:18 GMT

It's possible this was just a server move or something like that, but it's interesting to think that someone actually did some maintenance of some sort on the site.

I enjoy visiting the site, especially when I get nostalgic for the early days of my work on the internet. There are so many projects which I've worked on over the years, and a lot of them are gone forever. It's nice to see one that has managed to survive.

Because I'm lazy, and like easy reassurance, I wrote @SpaceJamStatus, a Twitter bot that checks on the website every few hours and tweets out its status:

Furthermore, because I am apocalyptic, I wrote @spacejamisdown, a bot which checks the status of the website every few hours, and will only report if it's not online:

With a little luck, this bot won't tweet any time soon.

Finally, because I have a love of writing random libraries, I wrote the ruby gem spacejam, a simple Ruby library you can use to check on the status of any website. It can run tests against expected response codes, the body of a page, etc. It's nothing fancy, but it's good enough to check on the status of the Space Jam website.
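The gem wraps up the obvious approach; here's a bare-bones sketch of the same idea using only Ruby's standard library (this is not the gem's actual API, and the expected body string is just an example):

    require 'net/http'
    require 'uri'

    # fetch a URL and compare the response against what we expect
    def site_ok?(url, expected_status: 200, expected_body: nil)
      response = Net::HTTP.get_response(URI(url))
      return false unless response.code.to_i == expected_status
      return false if expected_body && !response.body.include?(expected_body)
      true
    end

    url = "http://www2.warnerbros.com/spacejam/movie/jam.htm"
    puts site_ok?(url, expected_body: "Space Jam") ? "Space Jam is still online" : "Space Jam is DOWN"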

Requiem for a Twitter Bot

On the evening of December 24th, shortly after tweeting for the 500,000th time, @for_a_dollar retired from Twitter. After launching on 24 September 2009, the bot responded to all sorts of people who mentioned 'Robocop'.

Although it is easy to treat @for_a_dollar as a tongue-in-cheek project, it was always intended as a specific statement about the nature of online conversation – mainly that quality discussion is lacking. As I wrote in response to some criticism of @for_a_dollar, "99% of the content on Twitter is total garbage." I was being pretty harsh, but there's also some truth to that statement, especially now that Twitter is marketing themselves as the premier online destination to participate in your favorite television and media events. There's still value in the remaining 1%, but a huge majority of tweets are not very compelling at this point.

It's easy to forget that for the first couple years of Twitter, there was a public timeline of everyone's tweets. You could find interesting people, jump into conversations, and see what was going on. Twitter has removed that feature, probably out of necessity since I imagine it wouldn't scale at all, but when you see what it has been replaced with – a list of celebrities to follow, and curated trending topics – it's easy to imagine a decline in the quality of content on Twitter.

Anyway, at a certain point, sometime when the tweet count was over 400k, I decided that once the bot sent the 500,000th tweet, I would shut it down. I had two main reasons. First, I think that any statement being made by running the bot is pretty complete at this point. Since launching, @for_a_dollar has mercilessly responded to anyone mentioning Robocop with the utterly senseless reply "I'll buy that for a dollar." Keeping the bot alive still has a certain value, but I also feel like the law of diminishing returns is more applicable with every tweet.

Second, the remake of Robocop is coming out in a month or two. It sounds like the satire and social commentary that made the original movie (while admittedly quite imperfect) compelling has been removed, leaving behind a fairly generic action flick. I don't have a lot of interest in helping to generate any buzz around that when the movie is released, nor do I want to deal with the logistics of maintaining the bot and my other bots while all sorts of people are tweeting from the movie theater.

I've requested a copy of @for_a_dollar's Twitter archive, and if/when I receive it (it's been a couple days), I might make something out of it that I will post online. But for now, the project is done.

Anyway, so long Bixby, it was nice knowing you.

Your Very Own ebooks Twitter Account

BIG NOTE: Are you actually interested in making an ebooks account for yourself? You should check out mispy's twitter_ebooks library, which is a lot better than the hacky code here.

PREDICTION: In the future, everyone will be Internet-famous for 15 minutes. One of the consequences of that fame will be an ebooks-style Twitter account, just like @horse_ebooks, but actually generated by computer.

For kicks I decided to write a generic script that can take any Twitter account and make an ebooks version of it. Here's what @mitchc2_ebooks looks like:

And here's the script:

It's pretty straightforward. It's written in Ruby, and it runs on top of chatterbot, which does most of the heavy lifting; a neat library called marky_markov handles the Markov chain generation. It will tweet every few hours, or any time that my account tweets. It will also reply to mentions.
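If you want the flavor of it without the chatterbot plumbing, the Markov part on its own is tiny. This is just a sketch, not the script above, and it assumes you've already dumped the source account's tweets into a file called tweets.txt:

    require 'marky_markov'

    # build a throwaway Markov dictionary from the source account's tweets
    markov = MarkyMarkov::TemporaryDictionary.new
    markov.parse_file "tweets.txt"

    # generate a tweet-ish sentence and trim it down to size
    sentence = markov.generate_n_sentences(1)
    puts sentence[0, 140]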

As much as anything, this seems like an exercise in banality. I enjoy the tweets being generated, but they're certainly nothing amazing. There's a pile of accounts like this out there. Most of the fame has gone to horse_ebooks, but it never used Markov chains. I think the first account like this that I remember reading about is @dedbullets. There's a blog post about it, and an even longer one as well.

Each Town - Listing All Towns in America on Twitter

A week or two ago I launched @eachtown on Twitter. It will spend the next couple years tweeting the name and location of every populated place in America, in alphabetical order.

A couple of years ago, I spent a lot of time fiddling with the USGS database of Geographic Names. It's a cool set of data and I've often thought of doing more with it. Inspired by @everyword, I decided to create a bot which iterates through every populated place in America and tweets the name along with a link to a Google Map of the location. I enjoy the context you get from being able to look at a place. Not every location in the database is a city or even a town. There are mobile home parks, condominiums, etc. Seeing them on the map gives you a sense that these places are real, and gives them a little context.

Agnew Mobile Home Park, WA

It's a pretty simple bot, and I'll post the source code at some point once I clean it up a little.
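In the meantime, the core loop is roughly this. It's a sketch rather than the bot's actual code, and the pipe-delimited column names are assumptions about the GNIS 'populated places' file layout:

    # walk a GNIS-style pipe-delimited extract and print one line per populated place
    header = nil
    File.foreach("POP_PLACES.txt") do |line|
      fields = line.chomp.split("|")
      if header.nil?
        header = fields
        next
      end

      row = Hash[header.zip(fields)]
      next unless row["FEATURE_CLASS"] == "Populated Place"

      name, state = row["FEATURE_NAME"], row["STATE_ALPHA"]
      lat, lon    = row["PRIM_LAT_DEC"], row["PRIM_LONG_DEC"]

      # in the bot this gets queued up and tweeted in alphabetical order
      puts "#{name}, #{state} https://maps.google.com/?q=#{lat},#{lon}"
    end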

Gopherpedia - The Free Encyclopedia via gopher

My last release for Project Dump week is Gopherpedia – a mirror of Wikipedia in gopherspace. If you happen to have a gopher client, you can see it at gopherpedia.com on port 70. Otherwise, you can browse to gopherpedia.com and view it via a web proxy.

A couple of years ago, I landed on the idea of a gopher interface to Wikipedia. Originally it was probably a joke, but it stuck with me. So one day I registered a domain name and got to work. The first thing I needed to do was build a gopher server, because none of the currently available options were up to the task. So I built Gopher2000. Then, I quickly realized that the current gopher proxies weren't any good either, so I built GoPHPer. Once both of those were written (well over a year ago), it didn't seem like there was much left to be done – gopherpedia should've been ready to launch.

But I hadn't reckoned on the challenges of churning through a database dump of Wikipedia.

Wikipedia is very open. They have an API which you can use to search and query documents, and they provide downloadable archives of their entire collection of databases. They encourage you to download these, mirror them, etc.

My first implementation of gopherpedia used the API. This worked well, but had two problems. First, it was a little slow, since it needed to query a remote server for every request. Second, Wikipedia prohibits using the API this way - if you want to make a mirror of their website, they want you to download an archive and use that, so their servers aren't overloaded.

So I downloaded a dump of their database, which is a single 9GB compressed XML file. Nine. Gigabytes. Compressed. A single file.

Then I took the opportunity to learn about streaming XML parsers. Basically, I wrote a script that parses the file as it reads it, rather than loading the whole thing into memory at once, which was clearly impossible. The script splits up the Wikipedia entries and stores them as flat text files. Running that script took a couple days on my extremely cheap Dreamhost server – that's right, I have a gopher server hosted on Dreamhost.
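Boiled down, the script looks something like this. It's a sketch rather than the exact code: it uses Nokogiri's SAX interface so the dump streams through instead of being loaded whole, and the filenames are just illustrative:

    require 'nokogiri'
    require 'fileutils'

    # SAX handler: collect each <title> and <text>, write the article out as a flat file
    class PageSplitter < Nokogiri::XML::SAX::Document
      def initialize
        @buffer = ""
        @element = nil
        @title = nil
      end

      def start_element(name, attrs = [])
        @element = name
        @buffer = "" if %w[title text].include?(name)
      end

      def characters(string)
        @buffer << string if %w[title text].include?(@element)
      end

      def end_element(name)
        case name
        when "title"
          @title = @buffer.strip
        when "text"
          File.write(File.join("pages", @title.tr("/", "_")), @buffer)
        end
        @element = nil
      end
    end

    FileUtils.mkdir_p("pages")

    # stream straight out of the compressed dump without unpacking it to disk first
    IO.popen("bzcat enwiki-pages-articles.xml.bz2") do |io|
      Nokogiri::XML::SAX::Parser.new(PageSplitter.new).parse(io)
    end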

So, when someone requests a page, the gopher server reads that file, does some parsing, and returns the result as a gopher response. Sounds simple, right? Not quite, because parsing the contents of a Wikipedia entry is also a mess. It's part wikitext, part HTML, and there are plenty of places where both are broken. If I were just outputting HTML, I could probably get away with it. But since this is Gopher, I really needed to format the results as plain text. I spent a while writing an incredibly messy parser, and the imperfect results are what you see on gopherpedia now. Sorry for all the flaws.
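To give a sense of the mess, the cleanup is basically a pile of regex passes. Here's a vastly simplified sketch of the kind of substitutions involved (not the actual parser):

    # a few of the passes that turn wikitext-ish markup into something plain-text
    def wikitext_to_text(src)
      out = src.dup
      out.gsub!(/\{\{[^{}]*\}\}/m, "")                   # drop simple {{templates}}
      out.gsub!(/\[\[(?:[^|\]]*\|)?([^\]]*)\]\]/, '\1')  # [[target|label]] -> label
      out.gsub!(/'{2,}/, "")                             # strip bold/italic quote marks
      out.gsub!(/<[^>]+>/, "")                           # strip stray HTML tags
      out.gsub!(/^=+\s*(.*?)\s*=+\s*$/, '\1')            # == Heading == -> Heading
      out
    end

    puts wikitext_to_text("'''Gopher''' is a [[communications protocol|protocol]] designed for...")
    # => "Gopher is a protocol designed for..."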

Anyway, this was a fun project, and it occupied a pleasant chunk of my spare time over the last year or two, but it's time to release it to the wild. Unless I'm mistaken, this is now the largest gopher site in existence. There are about 4.2 million pages on gopherpedia, totaling somewhere over 10GB of data.

Here's my favorite page on the site – the gopherified wikipedia entry for Gopher.

Please note, this is in extreme beta and is likely to break; just let me know if you have any problems. Enjoy!

Gophper - A Modern Gopher Proxy for the Modern Age

As I mentioned yesterday, building Gopher applications is fun, but using gopherspace is actually pretty challenging unless you're a die-hard throwback geek. I have a super-secret gopher project (to be revealed tomorrow), but it's pretty useless if no one can actually see it. Sure, I could write up a blog post about how to download a gopher client, etc, etc, but that's just dumb.

There are a few gopher proxies out there – primarily floodgap and meulie – websites you can use to browse gopher servers. But there are a few problems with these proxies. First, they're a little clunky. They're handy tools, but they're not really attractive, and the HTML they output is pretty old-fashioned. And most importantly, neither one is open-source.

I wanted a simple gopher proxy, using modern web standards, that was open-source and easy to install. So, I wrote gophper. You can see it in action at gopher.muffinlabs.com.

Here are the details:

  • It runs on PHP using Slim, which is a nifty lightweight application framework.
  • It caches requests for faster response times.
  • All of the rendering happens in the browser, which means someone could easily write a different backend.
  • It has a wacky theme switcher, so you can choose between a nice modern look, or an old-school monochrome CRT look.
  • If the user accesses a binary file, they can download it. If they click on an image, they can see it in the browser.
  • It can be integrated with Google Analytics.
  • You can restrict it to a single gopher server, so you can integrate it into your project without any fears of someone using your proxy for naughty tricks.

It's still a little rough around the edges, but it definitely works. I would love to see it used all over the place. But tomorrow I'll reveal where I'm using it.

Gopher2000 - A Modern Gopher Server

I'm old enough that the Internet basically didn't exist for anyone other than a college student or scientist when I was a teenager, but by the time I graduated from college it was everywhere. My first access to the Internet was via a friend-of-a-friend-of-a-friend's borrowed account on a Clark University server while I was in high school. I still remember the password.

I was nerdy enough to be dialing into BBSes at this point, and I even managed to communicate over some distance in discussion groups via FIDOnet, but that was a pretty pale comparison to the undiscovered wilderness of the Internet. Most of my knowledge of the Internet came from reading The Cuckoo's Egg. When I finally had real access, naturally I spent most of my time playing on Multi-User Dungeons like DartMud and EOTL – and somehow they both still exist. At the time, everything was text-based, so welcome screens like these were pretty amazing.

My friends and I would learn about interesting FTP servers and try to download documents and applications from them, but we barely knew any commands, the files were always in weird archive formats we didn't understand at the time, and of course you couldn't Google it.

So while it was amazing to be online, in a lot of ways it was very limiting. Until I learned about Gopher.

If you aren't familiar with it, Gopher is a very simple protocol for browsing text documents on the Internet. It doesn't sound like much, but before HTTP and the World-Wide Web, it was a revelation. There was data out there, and you could get it, if only you knew the hostname. Luckily, in its first few years Wired would post interesting gopher addresses you could visit. Here's their description of gopher from the 'Net Surf' column of one of their early issues:

Is There a Rodent In Your Future?

If you surf the Internet and haven't heard of gopher, you're probably reefed in the backwaters somewhere. Gopher is one of cyberspace's hidden gems - the application even employs that buzz-term of computing, "client-server architecture."

Specifically, gopher is an information gathering tool that offers a smooth, menu-driven way to traverse international "gopherspace" - which these days literally means several hundred servers worldwide, offering text (from the CIA Fact Book to the Bible), computer programs, audio, still images, and even movie clips. Gopher provides a seamless, "hidden programming" interface with which you can transfer files, browse databases, and telnet to sites around the globe, simply and easily. For example, gopher the University of Wisconsin-Parkside (gopher.uwp.edu) and you'll find the music server: a collection of song lyrics, discographies and sound files from a variety of selected tunes.

Another destination, the ArchiGopher at the University of Michigan, contains photographed examples of French architecture and Ann Arbor campus buildings, as well as scanned copies of paintings by Kandinsky. Via gopher, academics can search for employment while students can seek information on various campuses. But there is a catch: To access these goodies, you must have direct access to the Internet (with client software), or be able to remotely login to Net servers that offer that capability. (The software is publicly available via ftp at boombox.micro.umn.edu, in directory pub/gopher.) Then it's as simple as typing "gopher" and the server address (with proper command accompaniment, such as "%" for Unix clients).

Gopher, the helpful rodent, was initially born of programmers at the University of Minnesota (the Gopher State) in an effort to link and search disparate, specialized computer systems on campus. Later offered up to the Net, most public gopher servers have sprung up only within the last year, while new rodents appear to be tunneling fresh soil almost daily. This little tool is a definite nugget in the ore of the Internet, rich with information. - Tom Zillner

We're Listing, Captain

Every two weeks, surfers anxiously await the "Yanoff List." Compiled by Scott Yanoff, a computer science student at the University of Wisconsin (yanoff@csd4.csd.uwm.edu), the list offers concise descriptions of helpful sites around the Net. Started in 1991 as a personal log with only six entries, public distribution of The List brought a flood of suggestions: Topics now range from philosophy to amateur radio, astronomy to games. Yanoff also documents locations for such research essentials as Archie, WAIS, Netfind and World Wide Web (WWW or W3). (Internet Hunt participants remember this one.) Cut over to USENET group alt.internet.services, or ftp or gopher csd4.csd.uwm.edu (available in /pub/inet.services.txt). Don't leave cyberspace without it.

I'll Gopher That

Known also as the Whole Earth 'Lectronic Link, this particular gopher shreds an info-tube. It offers access to a host of electronic magazines, an SF arena featuring input from well-known cybernauts such as Bruce Sterling, as well as all the stuff you'd expect from old (and young) hippies. You'll find text from some of the major, progressive magazines, help files for traversing the big-bad-networks, the online Factsheet Five, art world calls for action, and lots more edgy stuff to gnaw on. All in an easily navigable, menu-driven environment that won't flatten out on you. This service is provided by the Well, and can be accessed at gopher.well.sf.ca.us. E-mail (gopher@well.sf.ca.us) with any questions.

Wow, has the tone of that magazine changed over 20 years.

Beyond sharing hostnames with your friends, if you knew about the Gopher search engine Veronica, then you could find all sorts of stuff. I first learned about Veronica from the teacher who ran our high school Model United Nations club. He showed us how to use it to download copies of UN Resolutions and other documents that would've been very hard to get otherwise. The UN still lists a couple of gopher servers on its website, but unfortunately I can't find one that's active anymore.

Of course, Gopher lost out to HTTP. There were some regrettable licensing decisions that scared away a lot of interest, and HTTP was always open. And even though in some ways it has never fulfilled this promise, HTTP was all about collaborative sharing and even editing of documents, something that was lacking from Gopher.

Today there are still a couple hundred gopher servers out there, with maybe a million or two pages on them, which is obviously nothing compared to the incredible mass of data you can access via your browser. But I still have a fondness for Gopher, for a few reasons. First, because it was part of my formative years on the internet. Second, because it has an important place in the history of the internet, and given how ephemeral digital history is, it's easy to lose track of this. And finally, because it is super-hackable.

And since it's so hackable, I went ahead and wrote a modern, fully-functional Gopher server in Ruby: Gopher2000

     _____             _                 _____  _____  _____  _____
    |  __ \           | |               / __  \|  _  ||  _  ||  _  |
    | |  \/ ___  _ __ | |__   ___ _ __  `' / /'| |/' || |/' || |/' |
    | | __ / _ \| '_ \| '_ \ / _ \ '__|   / /  |  /| ||  /| ||  /| |
    | |_\ \ (_) | |_) | | | |  __/ |    ./ /___\ |_/ /\ |_/ /\ |_/ /
     \____/\___/| .__/|_| |_|\___|_|    \_____/ \___/  \___/  \___/
                | |
                |_|

There are a few gopher server frameworks out there, but most of them are lacking in one way or another. They're focused on delivering static pages, or they force you to use weird methods of putting together your menus. There are even a few rough Ruby scripts out there for serving gopher requests, but they are all either so old that they don't work with a modern Ruby, or the code just isn't very good.

I wanted to build the best Gopher server imaginable, using everything I've learned in my career writing software. I wanted something simple, with an easy, flexible syntax that stays out of your way. For example, this is the code for a working gopher application:
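(A sketch in the gem's Sinatra-style DSL: the route, menu, text, link, and br helpers come from the Gopher2000 README, but treat the exact details as approximate.)

    require 'gopher2000'

    # the root menu
    route '/' do
      render :index
    end

    menu :index do
      text 'Hello from Gopher2000!'
      br
      link 'What time is it?', '/time'
    end

    # a dynamic route
    route '/time' do
      render :time
    end

    # a plain-text template
    text :time do
      "The current time is #{Time.now}"
    end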

Gopher2000 is inspired by Sinatra, an awesome web framework also built in Ruby. Reviewing the code for Sinatra (and reading the book Sinatra: Up and Running) has inspired and educated me about code more than almost anything else in recent memory.

Here's a few nice things about Gopher2000:

  • Simple, Sinatra-inspired routing and templating DSL. It's easy to define routes, build menus, accept input, etc.
  • Dynamic routing of requests via named parameters on request paths, and dynamic responses – so you can have a dynamic, interactive gopher site.
  • It's easy to mount directories and serve up files if you're into that.
  • Integrated logging and stats.
  • Lots of helper methods for formatting output as prettily as possible.

I wrote most of Gopher2000 well over a year ago, and it's been functional for a long while, but I never publicized it until now.

Anyway, the real reason I wrote Gopher2000 is to help with the top-secret gopher project which I will announce in a couple days. Frankly, it's going to blow the fucking roof off of gopherspace. The only problem is that no one will be able to see it – Gopher support has been stripped from all major web browsers over the years, and I'm guessing that you don't have a gopher client handy. My next post will talk about how I handled that problem.
