Monthly Archives: March 2012

New Media researcher Anne Helmond asked if her 1997 “Unofficial Eric’s Trip Homepage” at SunsetStrip/3500/ was archived.

It is there. The lo-fi (HTML) part is almost complete and functional; only the guest book and “Sounds” are missing. The Hi-Fi link leads to a non-existent ethi.html.

As I mentioned in Ruins and Templates of Geocities, we rarely know whether missing files were lost due to glitches during the archiving process, or to the site owner’s lack of skill or failure to maintain the files and links between them. But this time I could ask the user personally what was there.

Anne said that she probably never made a Hi-Fi version and can’t remember what she had planned for it.

It is so 1997! I remember believing that broadband was just weeks away and preparing for the nearest future with links like that. This can be it: broken links were links to files that would be made very, very soon.

Original URL:

A Research Database for Everybody

Geocities was a people’s effort, and it should continue belonging to the people, even after its demise. This is important to keep in mind when doing research on the millions of home pages that have been rescued. The results of this research should be easy to build new research upon for any interested party.

The scripts and system setup guides I am preparing to be published on GitHub will make it relatively comfortable to work with parts of Geocities that already have been recovered or will be recovered in the future.

Using a database to help make sense of this huge body of data is a very straightforward idea, but how to organize it so that it is as portable as the rest of the project? Additionally, the database must be able to work with multiple, contradicting versions of the same file that were harvested by different people at different points in time, using different tools, making different mistakes. And it should be possible to dump the complete database, or parts of it, hand it over to another researcher, and later merge the changes.

Naïve database designs1 usually work with surrogate keys to uniquely identify records. The simplest form of such a key is a counter that increases with every new database record.2 So the first entry gets an id of 1, the second an id of 2, and so forth. Another approach is to generate surrogate keys from random numbers or strings. Because such keys have nothing to do with the records they help identify, they are called “surrogates.”

This is fine for a centralized system collecting data in one place, but unfit for the distributed case outlined above. Without knowing the state of such a counter on one computer, it is impossible to add entries on another computer without risking a key collision when the databases are merged again. Even within a single computer, the keys’ values are, in the case of a serial counter, determined by the often arbitrary temporal order in which records are put into the database.3 This is critical because it will likely be infeasible to distribute the oversized database as a whole; instead, scripts that generate it will be distributed.
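The merge collision can be illustrated with a tiny sketch (the file names are hypothetical, and plain dictionaries stand in for database tables):

```python
# Two researchers each start an isolated database using a serial counter,
# as a naive design would.
db_a = {}  # id -> record
db_b = {}

def ingest(db, record):
    """Assign the next value of the serial counter as a surrogate key."""
    db[len(db) + 1] = record

# Each researcher ingests a different file on their own machine ...
ingest(db_a, "YosemiteSam/1234/index.html")
ingest(db_b, "SunsetStrip/3500/erics.html")

# ... yet both records received id 1, so merging the databases collides.
colliding_ids = set(db_a) & set(db_b)
print(colliding_ids)  # {1}
```

The records are different, but their keys are identical, so one of them would silently overwrite the other on merge.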

The solution is to use natural keys to identify database records. These keys are generated from unique properties of the records they identify and therefore are predictable. This approach leads to certain constraints that will be explained below.
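One possible construction of such a natural key (a sketch, not necessarily the project’s actual key scheme) is a hash over the record’s own properties, e.g. the file path and its contents:

```python
import hashlib

def natural_key(path: str, content: bytes) -> str:
    """Derive a predictable key from properties of the record itself.

    Any computer running this on the same file arrives at the same key,
    so independently built databases can be merged without collisions.
    """
    h = hashlib.sha1()
    h.update(path.encode("utf-8"))
    h.update(b"\0")  # separator, so path/content boundaries are unambiguous
    h.update(content)
    return h.hexdigest()

# The key is deterministic: the same record always yields the same key,
# regardless of when or where it is computed.
k1 = natural_key("SunsetStrip/3500/erics.html", b"<html>...</html>")
k2 = natural_key("SunsetStrip/3500/erics.html", b"<html>...</html>")
assert k1 == k2
```

The price of predictability is the constraint mentioned above: a record’s identifying properties must not change after the key has been derived from them.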

Truth and opinion

Trying to normalize a dataset like Geocities seems like heresy at first. Most of the insights waiting to be discovered inside it are not enumerable. So the database has to reflect this by separating truth and opinion.

There is little absolute truth in each file extracted from the torrent, apart from the actual data and the sparse filesystem metadata it carries: name, size, last-modified date and of course its contents. It is not even possible to say for sure from which URL each file was harvested: the classic wget4 does not save this information, and the original Geocities server used case-insensitive URLs5, so the same file could be retrieved from many different URLs (and indeed many duplicates can be found in the collections). Then there is the classic case of a file named index.html that was in fact downloaded from a URL without the “index.html” part, and so on.

So, this is The Truth:

For the simple case of reviving Geocities to a state that can be experienced again in a browser, a mapping of URLs to files is needed. While the URL can be figured out in most cases with a straightforward lower-casing of the file name, it is actually guesswork for some percentage of the files. Especially in the case of duplicates, it is not easy to decide which file to serve for a certain URL.
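The straightforward guess described above can be sketched as follows (the base URL and the handling of directory indexes are assumptions for illustration, not the project’s actual mapping code):

```python
def guess_url(file_path: str) -> str:
    """Guess the original Geocities URL for a harvested file.

    Geocities served URLs case-insensitively, so lower-casing the path is
    the straightforward first guess; for some percentage of files this
    remains guesswork, especially where duplicates exist.
    """
    path = file_path.lower()
    # Directory downloads were often saved as .../index.html although the
    # visitor's URL ended at the directory.
    if path.endswith("/index.html"):
        path = path[: -len("index.html")]
    return "http://www.geocities.com/" + path

print(guess_url("SunsetStrip/3500/Index.html"))
# http://www.geocities.com/sunsetstrip/3500/
```

Such a mapping is exactly the kind of guesswork that should be recorded as opinion, not truth, which is what the agent system below is for.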

To solve such cases, a system of agents is introduced.

Every opinionated entry in the database is signed by an agent. An agent might be a script that does some analysis of a file to extract information. Later, a human agent might correct mistakes the script has made, or a later version of the script might add new information.

A table that can be customized for each user/researcher contains information about how much weight each agent has. In the case of contradicting information, the entry created by the agent with the highest weight wins. Usually, humans should win over robots.

Different agents can hold the same opinion, since the agent name is part of the natural key for the entry. This is vital for the weighting system to work. But each agent can have only one opinion on each topic. A sloppy script that made a mistake can undo it by adding corrected information, but under a different agent name; this is why the agent name for scripts should always contain the version number of the script.
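A minimal sketch of this weighting scheme, using Python’s built-in SQLite in place of PostgreSQL (all table, column and agent names here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- One opinion per (topic, agent): the agent name is part of the key,
    -- so an agent can correct itself only by overwriting its own entry.
    CREATE TABLE opinions (
        topic TEXT NOT NULL,
        agent TEXT NOT NULL,
        value TEXT NOT NULL,
        PRIMARY KEY (topic, agent)
    );
    -- Per-researcher weights; humans outrank robots.
    CREATE TABLE agent_weights (
        agent TEXT PRIMARY KEY,
        weight INTEGER NOT NULL
    );
""")
con.executemany("INSERT INTO opinions VALUES (?, ?, ?)", [
    ("url-for-file-42", "lowercase-script-0.1", "sunsetstrip/3500/ethi.html"),
    ("url-for-file-42", "despens",              "sunsetstrip/3500/erics.html"),
])
con.executemany("INSERT INTO agent_weights VALUES (?, ?)", [
    ("lowercase-script-0.1", 10),   # robot
    ("despens",              100),  # human
])

# For each topic, the opinion of the highest-weighted agent wins.
row = con.execute("""
    SELECT o.value FROM opinions o
    JOIN agent_weights w ON w.agent = o.agent
    WHERE o.topic = 'url-for-file-42'
    ORDER BY w.weight DESC LIMIT 1
""").fetchone()
print(row[0])  # sunsetstrip/3500/erics.html
```

Because the weights live in their own table, each researcher can rank the same set of agents differently without touching the opinions themselves.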

Humans might need different identifiers, too, if they intend to erase a mistake they have made before. I suggest using Twitter handles as identifiers, and “namespacing” them with dashes if required.6 For example, while human agent “despens” made some bad decisions, “despens-wiser” might overwrite them. Of course this is only required for publicly released database records or scripts.

Stay tuned as the database gets fleshed out more with additional tables. These were just examples.

About SQL

A “new wave” of databases has emerged, e.g. CouchDB and MongoDB, summarized under the term NoSQL. While they are rightfully praised for their performance and flexibility, they do not really address the problems that come up when handling complex semantic relations, which I believe is required for working with something like Geocities. Instead, they make this power and complexity optional by pushing it out of the database system and into the software that wants to make use of the data.

My main motivation for creating this database is to amass knowledge and interpretations about Digital Folklore, giving researchers (including Olia and myself) a tool to verify assumptions and find surprising correlations. SQL may appear to be a bit of a weird language, but it is tried and tested, can solve complex tasks, and knowledge of it is widespread among archivists and researchers.

I decided to build everything on PostgreSQL, a free software project, refined since 1989, with mature documentation and some nice graphical frontends7; plus, PostgreSQL is actually reasonably fast when it comes to writing records.

  1. … meaning “What I used to do until now” :) []
  2. In for example the PostgreSQL database, these counters are called serial. []
  3. For instance, when reading a list of files to add to the database, the order in which these files are read is typically determined by how they are stored on disk. This order is quite random; creating a predictable order would mean sorting the file names before ingest, which is just another step where a lot could go wrong. []
  4. wget is GNU software for automatically downloading network data. ArchiveTeam recently fixed some of its problems and released wget-warc. []
  5. See Cleaning Up the Torrent []
  6. Twitter, or any other service that allows only unique names, could be used as an authority; however this will only become a problem when hundreds of people start to research Geocities, which seems quite unlikely at the moment. []
  7. I prefer pgadmin. []

FortuneCity, a free home page hosting service just like Geocities, will be closing April 30th 2012. ArchiveTeam is busy copying it, and if you want to help, look at the instructions. The distributed copying system with a centralized tracker seems quite sophisticated. While the Geocities download was organized on a wiki page, this time everything is fully automated and smooth. I am happy to contribute three computers in three different networks!

I am still working on the Geocities database, and it will be fit to hold data from FortuneCity as well. Time to buy a new hard disk.

I was happy to present our imaginary social networks and services of 1997 — Once Upon — at Unlike Us, hosted by the Institute of Network Cultures in Amsterdam.

It was a great event with a truly interested and competent audience. The only problem was that in the end it was mostly about Facebook, even though in the Netherlands Hyves still has more users than Facebook.

That’s why it was very important to see the presentation by Philipp Budka from the Department of Social and Cultural Anthropology at the University of Vienna. He talked about Myknet, an obscure Canadian social network and web hosting service.

[…] a system of personal homepages intended for remote First Nations users in Northern Ontario. This free of charge, free of advertisements, locally-supported online social environment grew from a constituency base of remote First Nations in a region where numerous communities lived without adequate residential telecom service well into the millennium (Ramirez, Aitkin, Jamieson, & Richardson, 2003; Fiser, Clement, & Walmark, 2006). MyKnet.org now hosts over 30,000 registered user accounts, of which approximately 25,000 represent active homepages. This is particularly notable considering that the system primarily serves members of Northern Ontario’s First Nations whose combined population is approximately 45,000 (occupying a geographic area comparable to the size of France). Equally significant is that over half of this population is under the age of 25, making MyKnet.org primarily a youth-driven online social environment.

A simple alphabetic list of all the users makes it possible to go from account to account. You can also look at the last updated ones and see that there were already almost 400 pages updated today. It is only around 14:00 in Ontario now.

The first impression: Myknet users make their pages in a special way that is obviously influenced by some simple and not very strict content management system or a custom-made site builder. It would be interesting to get access to it.

Clearly, the pages are made for low resolution displays and slow connections.

The most exciting fact is that they look exactly like classic home pages, but many are used like tweets or status updates. Visually and structurally it is an unexpected mix. See the examples at the top of the article.
Many users have moved to Facebook, leaving their Facebook badge on Myknet, functioning like a classic “this page has moved” notice.

Many accounts are deleted or were only updated in the last decade, which the database interprets as if it never happened.

Working Title:
From Heartland Neighborhood to
The Rise and Fall of “organizing and sharing the things you love” Culture.

Homepage of heartlandhelpinghands, Heartland’s satellite.

Fig.1.2 with menu “Everything” unfolded.

A very interesting thread on Reddit.
Geocities is mentioned quite often, sometimes seriously, sometimes as a joke.
I liked the comment “You think you’re being funny, but Geocities probably should be on that list.”
Some mention Altavista, Napster and other dead services; others mention what is hot right now and really believe that “internet is only 8 years old.”
I could read this discussion forever. Still, this is my favorite reaction: “I somehow read this as the “Stevie Wonders” of the internet.”