public marks

PUBLIC MARKS from parmentierf with tags corpus & web

January 2007 - Using Wikipedia as a Web Database

by 7 others
DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia and to link other datasets on the Web to Wikipedia data.
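As a rough illustration of what "sophisticated queries against Wikipedia" means in practice, here is a minimal Python sketch that builds a SPARQL request for the public DBpedia endpoint. The resource URI and query are illustrative assumptions, not taken from the bookmarked page.

```python
# Sketch: constructing a SPARQL query URL for the public DBpedia endpoint.
# The resource and property chosen here are illustrative examples.
from urllib.parse import urlencode

SPARQL = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Corpus_linguistics> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

def build_query_url(endpoint="https://dbpedia.org/sparql"):
    # Returns a GET URL; fetching it (e.g. with urllib.request)
    # would return the results as SPARQL JSON.
    params = urlencode({"query": SPARQL,
                        "format": "application/sparql-results+json"})
    return endpoint + "?" + params
```

Fetching the returned URL would ask DBpedia for the English abstract of the Wikipedia article, one small example of linking other data to Wikipedia content.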

September 2006

Official Google Research Blog: All Our N-gram are Belong to You

by 1 other (via)
Here at Google Research we have been using word n-gram models for a variety of R&D projects, such as statistical machine translation, speech recognition, spelling correction, entity detection, information extraction, and others. While such models have usually been estimated from training corpora containing at most a few billion words, we have been harnessing the vast power of Google's datacenters and distributed processing infrastructure to process larger and larger training corpora. We found that there's no data like more data, and scaled up the size of our data by one order of magnitude, and then another, and then one more - resulting in a training corpus of one trillion words from public Web pages.
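The basic statistic behind such a corpus is simply the frequency of each word n-gram. A tiny Python sketch of that counting step (at trivially small scale, not Google's distributed version):

```python
# Sketch: counting word n-grams, the core statistic of an n-gram corpus.
from collections import Counter

def ngram_counts(tokens, n):
    # Slide a window of length n over the token list and tally each n-gram.
    return Counter(tuple(tokens[i:i + n])
                   for i in range(len(tokens) - n + 1))

bigrams = ngram_counts("there is no data like more data".split(), 2)
```

At web scale the same tally is computed over distributed infrastructure, but the per-document counting is exactly this.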

March 2005

start [WaCky]

The WaCky Project is a nascent effort (I always liked the expression nascent effort) by a group of linguists to build or gather tools to use the web as a linguistic corpus.

parmentierf's TAGS related to tag corpus

accès libre +   bibliographie +   chat +   cnrs +   culture +   dictionnaire +   documents +   francophone +   français +   gnu/fdl +   gnu/gpl +   google +   gratuit +   ia +   image +   inist +   jeu +   langue +   libre +   ontologie +   open source +   org +   python +   science +   search +   SERV'IST +   sémantique +   statistiques +   taln +   text/processing +   texte +   veille +   web +   wikipedia +