December 16th, 2004

Planet Lisp brokenness

Here is a tale of Planet Lisp woe.

Planet Lisp works like this:

  • a Python program periodically fetches feeds, separates them into individual items, and writes each item out as a file of sexps; the file's name reflects some permanent part of the item, like its link or permalink (roughly sketched after this list)
  • a Lisp program reads those item files, orders them by date, and creates an HTML page from the most recent ones
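
Here is a rough sketch of what that fetch-and-store half might look like, assuming the feedparser library and a simple plist-style sexp layout; the directory name, file-naming scheme, and field names are illustrative, not the actual Planet Lisp code.

    import hashlib
    import os

    import feedparser

    ITEM_DIR = "items"   # hypothetical directory for stored item files

    def sexp_string(s):
        """Escape a string so it can be written as a Lisp string literal."""
        return '"%s"' % s.replace("\\", "\\\\").replace('"', '\\"')

    def item_filename(entry):
        """Name the file after a permanent part of the item: its link (or id)."""
        key = entry.get("link") or entry.get("id", "")
        digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
        return os.path.join(ITEM_DIR, digest + ".sexp")

    def store_feed(url):
        """Fetch one feed and write each entry out as a small plist-style sexp."""
        os.makedirs(ITEM_DIR, exist_ok=True)
        feed = feedparser.parse(url)
        for entry in feed.entries:
            with open(item_filename(entry), "w", encoding="utf-8") as out:
                out.write("(:link %s\n :title %s\n :date %s\n :content %s)\n" % (
                    sexp_string(entry.get("link", "")),
                    sexp_string(entry.get("title", "")),
                    sexp_string(entry.get("published", "") or entry.get("updated", "")),
                    sexp_string(entry.get("summary", ""))))

    if __name__ == "__main__":
        store_feed("http://example.com/feed.rss")   # hypothetical feed URL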

There are two complications. First, some feeds don't provide dates on individual items, so how do you know whether an item is new? Planet Lisp decides by checking whether the item's file already exists; if it doesn't, the item must be new, and it is assigned the current date. Second, some feeds don't carry many items. If someone posts five times in a day but the feed only carries the two most recent posts, the items that have dropped out of the feed should still contribute to the Planet Lisp page. Those saved items should eventually be expired, though, so the software doesn't spend time processing items too old to contribute to any page being built.
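
To make those two heuristics concrete, here is a minimal sketch of both, with illustrative names and an arbitrary month-long cutoff rather than whatever the real program uses.

    import os
    import time

    EXPIRY_SECONDS = 30 * 24 * 60 * 60   # hypothetical cutoff: about a month

    def date_for_item(path, feed_date):
        """Pick a date for an item: trust the feed's own date when it has one;
        otherwise an item whose file doesn't exist yet is new and gets "now"."""
        if feed_date is not None:
            return feed_date
        if not os.path.exists(path):
            return time.time()          # brand-new undated item: stamp it now
        return os.path.getmtime(path)   # already-seen undated item: keep its stamp

    def expire_old_items(item_dir):
        """Remove item files too old to contribute to any page being built."""
        cutoff = time.time() - EXPIRY_SECONDS
        for name in os.listdir(item_dir):
            path = os.path.join(item_dir, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)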

I tried to solve the "save, but eventually expire, items" part while forgetting about the "undated new items get today's date" rule. The program dutifully expired all of Mikel Evins's old items, then re-fetched every item in his feed and, since their files no longer existed, assigned them all today's date.
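
One plausible guard against that kind of interaction, though not necessarily the fix actually applied here, is to remember the keys of items that have been stored, even after their files expire, so a re-fetched undated item isn't mistaken for a new one. The file name and helpers below are hypothetical.

    import os

    SEEN_FILE = "seen-items"   # hypothetical record of every key ever stored

    def load_seen():
        """Read the set of item keys that have been stored at some point."""
        if not os.path.exists(SEEN_FILE):
            return set()
        with open(SEEN_FILE, encoding="utf-8") as f:
            return set(line.strip() for line in f)

    def remember(key):
        """Record a key permanently, even if its item file later expires."""
        with open(SEEN_FILE, "a", encoding="utf-8") as f:
            f.write(key + "\n")

    def is_new_item(key, item_path, seen):
        """Only a never-before-seen item with no file on disk counts as new."""
        return not os.path.exists(item_path) and key not in seen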

I think this is corrected now. The Clikis page should also start to get a little more history, as the program uses a mix of current feed items and older, saved items to produce the front page.
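
The page-building half is a Lisp program, but the selection it performs is simple enough to sketch in a few lines of Python: gather every stored item, old or current, and keep the most recent handful. Here the file's modification time stands in for the item date, and the count is arbitrary.

    import os

    def newest_item_files(item_dir, count=30):
        """Return the `count` most recently dated item files, using each file's
        modification time as a stand-in for the item's date."""
        paths = [os.path.join(item_dir, name) for name in os.listdir(item_dir)]
        paths.sort(key=os.path.getmtime, reverse=True)
        return paths[:count]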

If you're thinking about starting a blog that you'd like to be included in Planet Lisp, please consider producing a feed that contains about a dozen items, carries the full text of each entry, and puts a date on each item. That makes my life so much easier!
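
For illustration, here is the kind of item such a feed would carry: full text plus an explicit date. This sketch generates one RSS 2.0 item with the Python standard library; the title, link, and timestamp are placeholders.

    from email.utils import formatdate
    from xml.sax.saxutils import escape

    def rss_item(title, link, html_body, timestamp):
        """Return one RSS 2.0 <item> with full text and an explicit pubDate."""
        return ("<item>\n"
                "  <title>%s</title>\n"
                "  <link>%s</link>\n"
                "  <guid>%s</guid>\n"
                "  <pubDate>%s</pubDate>\n"
                "  <description>%s</description>\n"
                "</item>\n" % (escape(title), escape(link), escape(link),
                               formatdate(timestamp), escape(html_body)))

    print(rss_item("Hello, Planet Lisp",
                   "http://example.com/posts/hello",
                   "<p>The full text of the entry goes here.</p>",
                   1103155200))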

On a completely unrelated note, Franz's nameservers, which handle the lisp.org domain, were apparently down yesterday morning, so http://planet.lisp.org/ was unreachable. I set up http://planet-lisp.xach.com/ as an emergency backup.