Recovered from the Wayback Machine.
Through the various link services, last week I found that my RSS entries were being republished on a GreatestJournal site. I’d never heard of GreatestJournal, and when I went to contact the site to ask that the feed be removed, I found no contact information. I did find a trouble-ticket area, though, and submitted a ticket asking the site to remove the account. The following is the email I received in return:
Below is an answer to your support question regarding “/RSS feed on GJ”
(http://www.greatestjournal.com/support/see_request.bml?id=22973&auth=xnjd).

FAQ REFERENCE: Can I edit or delete a syndicated feed?
http://www.greatestjournal.com/support/faqbrowse.bml?faqid=135

Dear MS. Powers,
Your RSS feed is open for use on your website, which means that the user in question does NOT violate your copyright.
You might wish to take your feed down if you don’t want people to use it.
GJ Abuse.
_________________
NOTICE: This correspondence is the intellectual property of GreatestJournal & may not be reproduced in any form, electronic or otherwise, without the express written permission of GreatestJournal. Any reproduction of this correspondence on GreatestJournal itself, with or without this notice of copyright, may be grounds for the immediate suspension of the account or accounts used to do so, as well as the account or accounts to which this correspondence was originally sent.
Normally I wouldn’t publish a private email without permission, but I found the company’s intellectual-property disclaimer and the email response itself to be particularly disingenuous, considering the nature of my request. Perhaps the company will delete the user account now that I am publishing the response. Regardless, this is yet another reason not to provide full content within RSS.
First, a clarification: syndication feeds are NOT meant to be republished other than for personal use. I do not provide a syndication feed so that you can duplicate it in your weblog or site in its entirety. This would be no different from using a screen scraper to scrape my web pages and duplicate them elsewhere. If a person specifically likes an item, or wants to comment on something I write, and quotes a post, then there is a human element involved and the use is selective; it is not blanket replication of material just because technology has made such replication easy.
I’ve had my syndication feeds republished elsewhere before, but never as anything other than a feed: no separate comments were attached, and no name was given that implied I created the site.
danah boyd ran into something similar recently when she found a weblog created from a mashup of several different weblog entries. I gather that weblogs like the one she points to are built by searching on specific terms and then pulling together the bits the search engines return. A good example is a second post, where one can see that “star” was the term used in the searches.
There is little I can do about my writing being scraped and merged into a hodgepodge of posts at some fake weblog, but there are things I can do about my syndication feed. I’ve blocked the GreatestJournal web bot so it can’t access the feed, and I’ve returned to feed excerpts. An unfortunate consequence of the excerpts, though, is that when I link to other sites, those links no longer show up in aggregators such as IceRocket or Bloglines.
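For the curious, blocking the bot comes down to a user-agent check before the feed is served. What follows is a minimal sketch, not my exact setup; the “GreatestJournal” identifier is an assumption, and the real string would come from the server’s access logs:

    <?php
    // Minimal sketch: refuse feed requests from a specific fetcher by
    // inspecting its User-Agent header. The "GreatestJournal" substring
    // is an assumption -- check the access logs for what the bot
    // actually sends.
    $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

    if (stripos($agent, 'GreatestJournal') !== false) {
        header('HTTP/1.1 403 Forbidden');
        exit; // the bot gets nothing; everyone else falls through to the feed
    }
    ?>

The same check can also be done with a mod_rewrite rule in .htaccess, which keeps the blocked requests from ever reaching PHP.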
To get those links back into circulation, I’ve gone to a rather unusual feed format: encoded to allow HTML, and with a list of the links referenced from the article. I use RDF to maintain this list of links, which also means that semantic web bots can easily find and consume the data (recognizing the entries as external links through the use of the SeeAlso relationship). I’m also most likely going to change the printout of this data to use microformats, so that bots that prefer microformatted data can consume the list just as easily.
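To give a feel for the format, here is roughly what one of these per-post link lists looks like in RDF/XML; the post URL and link targets are placeholders, not real entries:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
      <!-- the post being described; the URL is a placeholder -->
      <rdf:Description rdf:about="http://burningbird.net/posts/example-post">
        <!-- one seeAlso per external reference in the article -->
        <rdfs:seeAlso rdf:resource="http://example.com/linked-article"/>
        <rdfs:seeAlso rdf:resource="http://example.org/another-reference"/>
      </rdf:Description>
    </rdf:RDF>

The rdfs:seeAlso property is what lets a generic RDF agent treat the entries as related external resources without knowing anything else about my vocabulary.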
Currently at Burningbird, I’m using my SeeAlso metadata extension to maintain these lists manually, but I’m working on a plugin, compatible with both WordPress and Wordform, that automatically scarfs up hypertext links marked as external and stores the data in an RDF file for each page (along with other metadata for the specific post). A second plugin then outputs the list, along with the links manually entered through the SeeAlso extension (which has already been converted into a metadata extension for WordPress); a third does the same for syndication feeds. You can see the progress of this at my Plugins workspace. As you can see, it’s still a work in progress. Right now, I’m working on a way to access the current linked-list count so that new links from an edited post start at the right number.
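In the meantime, here is a rough sketch of just the collection step. It is an illustration, not the plugin itself: the rdf/ directory, the rel="external" convention, and the function name are all placeholders made up for the example.

    <?php
    /*
    Plugin Name: SeeAlso Link Collector (sketch)
    Description: Illustration only. Collects rel="external" links when a
    post is saved and writes them to a per-post RDF file as rdfs:seeAlso
    entries.
    */

    function seealso_collect_links($post_id) {
        $post = get_post($post_id);
        if (!$post) {
            return;
        }

        // Find every anchor tag, then keep the ones marked rel="external".
        preg_match_all('/<a\s[^>]*>/i', $post->post_content, $anchors);
        $links = array();
        foreach ($anchors[0] as $tag) {
            if (stripos($tag, 'rel="external"') !== false &&
                preg_match('/href="([^"]+)"/i', $tag, $m)) {
                $links[] = $m[1];
            }
        }

        // Write the list out as rdfs:seeAlso resources for this post.
        $rdf  = '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"' . "\n";
        $rdf .= '         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">' . "\n";
        $rdf .= '  <rdf:Description rdf:about="' . get_permalink($post_id) . '">' . "\n";
        foreach ($links as $url) {
            $rdf .= '    <rdfs:seeAlso rdf:resource="' .
                    htmlspecialchars($url, ENT_QUOTES) . '"/>' . "\n";
        }
        $rdf .= "  </rdf:Description>\n</rdf:RDF>\n";

        // One RDF file per post; the rdf/ directory is assumed to exist.
        file_put_contents(ABSPATH . 'rdf/post-' . $post_id . '.rdf', $rdf);
    }
    add_action('save_post', 'seealso_collect_links');
    ?>

Hanging the work off save_post keeps the RDF file in sync whenever a post is created or edited, which is the behavior an edited post with new links would need.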