Whole Data
11 November 09 from Shane Curcuru 5
The world is a cool place where a friend from long ago (via $dayjob and gaming) works in the realm of a friend from newer days (via $dayjob, but a different one). Stefano is quite the character, and it's definitely interesting work they're doing at MIT - neat that you're now in the same space.  

Still fondly remember your yearly music lists - waaaay back when, once I figured out how to read your commentary, I realized they were an excellent introduction to new music. Thanks.
23 October 08 from glenn mcdonald 1
Arguably that's what Thread is, as long as by "Super" you mean "written in a totally different idiom", and by "Arguably" I mean "my marketing department would have me impounded if I described it that way"...
23 October 08 from twas-fan 4
Why has no-one made some kind of 'SuperProlog' yet? Bah.
15 September 08 from glenn mcdonald 3
Well, eliminating the data-modeling process by eliminating the data-model is not what I mean. Modeling is hard, but models are valuable.  

Also, Brainwave's query language is effectively Python, which is neither simpler nor easier than SPARQL. Adding classes and methods to a programming language is not quite what I mean by a "query language", either.  
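To make that distinction concrete, here is a rough sketch with entirely invented names (it shows nothing of Brainwave's actual API): traversing objects through classes and methods means the caller writes the program, while a query language hands a declarative expression to an engine and lets the engine do the work.

```python
# Invented names throughout; a sketch of the distinction, not any real API.

# "Effectively Python": you query by writing a program over classes and methods.
class Person:
    def __init__(self, name, friends=None):
        self.name = name
        self.friends = friends or []

alice = Person("Alice", [Person("Bob"), Person("Carol")])
print([f.name for f in alice.friends])  # the caller does the traversal

# A query language proper: a declarative expression handed to an engine,
# whether SPARQL-shaped or path-shaped -- the engine does the traversal.
sparql = """
PREFIX ex: <http://example.org/>
SELECT ?name WHERE { ex:Alice ex:friend ?f . ?f ex:name ?name . }
"""
path_expression = "Person:Alice.friend.name"  # hypothetical path syntax
```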

But the meme/link part of it sounds right, at least!
15 September 08 from Syed Abbas 2
Hi,  

"...a data-model that's more usable than RDF, and a path-based query-language... that's better than SPARQL. I want this thing to exist..." yes they do :-)  

Check out the Brainwave Platform, a complete development and deployment suite built around a schemaless database, "Poseidon", implemented in Python and C. It eliminates the data-modeling process, giving developers the fastest time to start and making it easy to accept changes. The query language is much, much simpler than SPARQL.  

-Syed Abbas
18 August 08 from glenn mcdonald 1
There's a response, sort of, on Kingsley Idehen's blog.  

Kingsley was the other presenter at the same Cambridge Semantic Web Gathering meeting where I gave my talk. He demoed a Firefox extension for seeing a page's Linked Data sources*. Or tried to, anyway. It produced a screen that was a mixture of actual visual gibberish (text on top of other text) and legible but unintelligible LinkedData-speak. So possibly he was busy trying to get it working while I was talking, and thus didn't hear anything I said.  

At least, that's the most charitable explanation I can think of for his "responses", which consist almost entirely of term-definitions (interspersed with "it's" where he means "its").  

Lest there be any doubt, I will clarify: I know what RDF, SPARQL, Linked Data and the Giant Global Graph are. I am suggesting that the first two are bad solutions to real problems, and that the last two are noble but distracting.  
 

*If this extension worked properly, it would still be inane. The advance we're trying to make in web tech, whether or not you agree with me that we should be fixing database tech first, is giving machines access to the data behind the human-readable web. The humans already have access to the human-readable version. Later in his post Kingsley says "The Web is experienced via Web Browsers primarily, so any enhancement to the Web must be exposed via traditional Web Browsers". You could hardly miss the point of the semantic web any more widely than that.
14 August 08 from glenn mcdonald 1
Following up on this blog post, and the oral adaptation of it I gave as a talk at the Cambridge Semantic Web Gathering on 12 August 2008 at MIT, here are some bits from my speaking notes about the shortcomings of various current Semantic Web pieces as solutions to the problem of doing for graphs of data what VisiCalc did for columns of numbers. This is a discussion among practitioners, so don't feel bad if you don't have any idea what I'm talking about here.  
 

RDF  

Ingenious data decomposition idea, but:
- too low-level; the assembly language of data, where we need Java or Ruby
- "resource" is not the issue; there's no such thing as "metadata", it's all data; "meta" is a perspective
- lists need to be effortless, not painful and obscure (see the sketch below)
- nodes need to be represented, not just implied; they need types and literals in a more pervasive, integrated way  
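A minimal sketch of the list pain, assuming the rdflib library (the album, tracks, and ex: namespace are invented): in Python an ordered list is one expression, while in raw RDF the same list is a hand-built chain of rdf:first/rdf:rest cells hanging off anonymous nodes.

```python
# Assumes rdflib; the album/track names and the ex: namespace are invented.
from rdflib import Graph, Literal, Namespace, BNode, RDF

EX = Namespace("http://example.org/")
g = Graph()

# In Python, an ordered list of track titles is one expression:
tracks = ["Intro", "Verse", "Coda"]

# In raw RDF, the same list is a hand-built chain of rdf:first/rdf:rest
# cells hanging off anonymous nodes -- the assembly-language feel:
head = BNode()
g.add((EX.album, EX.tracks, head))
cell = head
for i, title in enumerate(tracks):
    g.add((cell, RDF.first, Literal(title)))
    nxt = RDF.nil if i == len(tracks) - 1 else BNode()
    g.add((cell, RDF.rest, nxt))
    cell = nxt

print(g.serialize(format="turtle"))
```

rdflib does offer a Collection helper that hides this, but the triples underneath are still that chain, and that chain is what every other consumer of the data has to cope with.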
 

SPARQL (and Freebase's MQL)  

These are just appeasement:
- old query paradigm: fishing in dark water with superstitiously tied lures; only works well in carefully stocked lakes
- we don't ask questions by defining answer shapes and then hoping they're dredged up whole (sketch below)
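A small sketch of the contrast, assuming rdflib and an invented two-triple graph: the SPARQL query commits to an answer shape up front, while the second loop just starts at a node and follows arcs to see what is actually there.

```python
# Assumes rdflib; the ex: namespace and the two triples are invented.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, EX.knows, EX.bob))
g.add((EX.bob, EX.name, Literal("Bob")))

# Answer-shape style: declare the whole pattern, then hope it's dredged up whole.
for row in g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?name WHERE { ex:alice ex:knows ?p . ?p ex:name ?name . }
"""):
    print(row.name)

# Exploratory style: start somewhere and follow arcs one step at a time,
# looking at what's actually there before deciding where to go next.
for person in g.objects(EX.alice, EX.knows):
    for name in g.objects(person, EX.name):
        print(name)
```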
 

Linked Data  

Noble attempt to ground the abstract, but:
- URI dereferencing/namespace/open-world issues focus too much technical attention on cross-source cases where the human issues dwarf the technical ones anyway
- FOAF query over the people in this room? forget it.
- link asymmetry doesn't scale (sketch below)
- identity doesn't scale
- generating RDF from non-graph sources: more appeasement, right where the win from actually converting could be biggest!  
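To illustrate the asymmetry point: a small sketch assuming rdflib, with an invented FOAF snippet standing in for whatever dereferencing a person's URI might hand back.

```python
# Assumes rdflib; the document below is invented stand-in data.
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

# Pretend this is what dereferencing http://example.org/alice returns:
doc = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:knows <http://example.org/bob> .
"""
g = Graph()
g.parse(data=doc, format="turtle")

# Outbound links are right there in the document we fetched:
for friend in g.objects(None, FOAF.knows):
    print("alice knows", friend)

# But "who knows alice?" lives in other people's documents, somewhere else on
# the web; nothing in this one says where to look. Crawling everything is the
# only general answer, and that's the asymmetry.
```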
 

Giant Global Graph  

Hugely motivating and powerful idea, worthy of a superhero (Graphius!), but:
- giant and global parts are too hard, and starting global makes every problem harder
- local projects become unmanageable in global context (Cyc, Freebase data-modeling lists...)  
 

And thus my plea, again. Forget "semantic" and "web", let's fix the database tech first:
- node/arc data-model, path-based exploratory query-model (toy sketch below)
- data-graph applications built easily on top of this common model; building them has to be easy, because if it's hard, they'll be bad
- given good database tech, good web data-publishing tech will be trivial!
- given good tools for graphs, the problems of uniting them will be only as hard as they have to be
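As a toy sketch only (every name here is invented, and it stands in for no real system), here is roughly what a node/arc data-model with path-based exploratory queries could feel like: a path like album → track → title is something you walk, not an answer shape you specify in advance.

```python
# A toy, with invented names; it stands in for no real system.
from collections import defaultdict

class Node:
    def __init__(self, **literals):
        self.literals = literals       # literal values live on the node itself
        self.arcs = defaultdict(list)  # arc name -> list of target nodes

    def link(self, arc, target):
        self.arcs[arc].append(target)
        return target

def path(start_nodes, *arcs):
    """Follow a chain of arc names outward from a set of starting nodes."""
    current = list(start_nodes)
    for arc in arcs:
        current = [t for n in current for t in n.arcs[arc]]
    return current

# A tiny graph: one album node linked to three track nodes.
album = Node(title="Example Album")
for title in ["Intro", "Verse", "Coda"]:
    album.link("track", Node(title=title))

# Exploratory path query, roughly "album.track.title":
print([n.literals["title"] for n in path([album], "track")])
```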