On Thu, 30 Jan 2003 06:11:12 -0800, Paul Prescod <paul@p...> wrote:

> If we can't agree that pervasive use of URIs is a defining characteristic
> of the Web as we know it, then I really can't imagine how we have the
> building blocks for any meaningful conversation at all!

Sure, no dispute there. The open question for me is whether "the Web as we
know it" proves the concept of URLs that *locate* something or other to be
determined by all sorts of context, MIME types, ad hoc conventions, and
out-of-band agreements ... or whether it proves the concept of URIs that
*identify* abstract resources with representations. The former is much less
general and abstract than the latter, and I'm skeptical of the argument
that the success of the less general form proves the validity of the more
general form.

> In any particular case there would obviously be some conflating factor
> that could be argued was the "real" reason that the particular site took
> off. But let's do a thought experiment. Let's say that there are two
> Googles on the Web: Google and Giggle. They have the same algorithms and
> basic techniques. But one of them uses a resource-centric view where
> everything has a link. This means that news sites and blogs can link to
> Google caches and Google searches.

"Everything has a link" doesn't equate to "has an abstract identity
specified by a URI". Or, more to the point, Google as we know and love it
would work just as well if links were just plain ordinary URLs rather than
nice abstract URIs. This gets back to Miles Sabin's "what would break if we
stopped thinking about abstract resources?" question.

> The other does not expose these resources as URIs. This means that news
> sites and blogs must instead describe the steps required to force the
> POST-based interface to get to the right information. Which service will
> win?

Nothing I've said in this thread says anything about the GET vs POST
debate. I tend to agree with you (and the TAG) on this point.
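To make the contrast concrete, here is a minimal sketch of why "everything
has a link" matters for linkability. The GET form is a real Google search
URI; the POST endpoint and its field names are hypothetical, purely for
illustration:

```python
from urllib.parse import urlencode

# GET style: the search IS a URI. Anyone can link to it, bookmark it,
# cache it, or paste it into a blog post.
query = {"q": "REST vs SOAP"}
get_uri = "https://www.google.com/search?" + urlencode(query)
print(get_uri)  # https://www.google.com/search?q=REST+vs+SOAP

# POST style: there is no URI that names the result. A third party can
# only describe the *procedure* -- endpoint, method, and body -- that a
# client must replay to reach the same information.
post_recipe = {
    "method": "POST",
    "url": "https://example.com/search",  # hypothetical endpoint
    "body": urlencode(query),
}
```

The GET form is a single self-describing string; the POST form is a recipe
that every consumer has to execute out of band, which is exactly the
linkability gap the thought experiment is about.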
The sad fact is that Google used POST in their SOAP API, and there are
essentially no implementations of the SOAP 1.2 GET binding out there to see
whether this is actually going to "win" in the real world. I assume that it
will, but I can't on one hand argue "trust in what actually works" and then
argue "except for the GET binding, it WILL work, trust me." :-)

> Does RSS count as human readable content? Even if it is routed and
> filtered through a variety of automated processes before a human sees it?

Yes, AFAIK RSS has mostly human-readable semantic content (as generally
deployed in the real world). Sure it's routed and filtered, but I'll guess
that's because of its very limited syntax rather than any
machine-processable semantics. Again, I expect that RSS (like Google) will
eventually be a "win" for the REST approach, but it's hard to demonstrate
that from current practice.

More to the point, I suspect, I'm unwilling to extrapolate from Google and
RSS to applications where machine processing is much more important, e.g.
automated order submission, bill payment, etc. In such cases I suspect that
the "contract" between the producing and consuming programs about the
syntax and semantics of the data being exchanged is much more important
than the architectural style of the communication (POST vs GET/PUT/DELETE)
between them.