LINK

Official Google Research Blog: All Our N-gram are Belong to You

by parmentierf & 1 other
Here at Google Research we have been using word n-gram models for a variety of R&D projects, such as statistical machine translation, speech recognition, spelling correction, entity detection, information extraction, and others. While such models have usually been estimated from training corpora containing at most a few billion words, we have been harnessing the vast power of Google's datacenters and distributed processing infrastructure to process larger and larger training corpora. We found that there's no data like more data, and scaled up the size of our data by one order of magnitude, and then another, and then one more - resulting in a training corpus of one trillion words from public Web pages.
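The estimation step the post alludes to is conceptually simple even if the scale is not: count every n-word sequence in the corpus, then divide by the count of its (n-1)-word history to get a maximum-likelihood probability. A minimal Python sketch on a toy in-memory corpus, nothing like Google's distributed pipeline (the function names here are illustrative, not from any released tool):

    from collections import Counter

    def ngram_counts(tokens, n):
        # Count every contiguous n-word sequence in the token list.
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def conditional_prob(counts_n, counts_hist, ngram):
        # Maximum-likelihood estimate:
        # P(w_n | w_1..w_{n-1}) = count(w_1..w_n) / count(w_1..w_{n-1})
        history = ngram[:-1]
        return counts_n[ngram] / counts_hist[history] if counts_hist[history] else 0.0

    tokens = "the cat sat on the mat and the cat slept".split()
    bigrams = ngram_counts(tokens, 2)
    unigrams = ngram_counts(tokens, 1)

    print(bigrams[("the", "cat")])                              # 2
    print(conditional_prob(bigrams, unigrams, ("the", "cat")))  # 0.666...

At trillion-word scale these count tables no longer fit on a single machine, which is why the post emphasizes distributed processing infrastructure.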

Comments

No comments on this link yet.


PUBLIC TAGS
on this link

corpus   google   taln   texte   web  

BY

parmentierf
on 06/09/2006 at 07:44

jhhd
on 03/08/2006 at 21:29