public marks

PUBLIC MARKS with tag ai

2010

FREE PSD File Viewer - PSD Viewer 3.2

by vrossign
http://aiviewer.com/ http://epsviewer.org/

2009

life : Built with Processing

by ycc2106
Reproducing virtual bugs: http://www.betaruce.com/java/life/applet

2008

Conditional Random Fields

by ogrisel (via)
Conditional random fields (CRFs) are a probabilistic framework for labeling and segmenting structured data, such as sequences, trees and lattices. The underlying idea is that of defining a conditional probability distribution over label sequences given a particular observation sequence, rather than a joint distribution over both label and observation sequences. The primary advantage of CRFs over hidden Markov models is their conditional nature, resulting in the relaxation of the independence assumptions required by HMMs in order to ensure tractable inference. Additionally, CRFs avoid the label bias problem, a weakness exhibited by maximum entropy Markov models (MEMMs) and other conditional Markov models based on directed graphical models. CRFs outperform both MEMMs and HMMs on a number of real-world tasks in many fields, including bioinformatics, computational linguistics and speech recognition.
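The conditional distribution over label sequences that this paragraph describes can be made concrete with a toy linear-chain model. The sketch below is from scratch with made-up scores, not any particular CRF library: the score of a labeling is a sum of per-position emission scores and label-transition scores, and normalizing over all labelings gives P(y | x).

```python
import itertools
import numpy as np

# Toy linear-chain CRF: K labels, length-T sequence, invented scores.
rng = np.random.default_rng(0)
T, K = 3, 2
emission = rng.normal(size=(T, K))      # observation-dependent scores per position
transition = rng.normal(size=(K, K))    # scores for adjacent label pairs

def score(y):
    """Unnormalized log-score of one label sequence y."""
    y = np.asarray(y)
    s = emission[np.arange(T), y].sum()
    s += sum(transition[y[t - 1], y[t]] for t in range(1, T))
    return s

# P(y | x) = exp(score(y)) / Z, where Z sums over all K**T label sequences.
all_seqs = list(itertools.product(range(K), repeat=T))
logZ = np.logaddexp.reduce([score(y) for y in all_seqs])

def prob(y):
    return np.exp(score(y) - logZ)

total = sum(prob(y) for y in all_seqs)  # a valid distribution sums to 1
```

Real CRF implementations compute the normalizer Z with the forward algorithm rather than enumerating all K**T sequences; the enumeration here just keeps the definition visible.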

An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation [PDF]

by ogrisel (via)
Recently, several learning algorithms relying on models with deep architectures have been proposed. Though they have demonstrated impressive performance, to date, they have only been evaluated on relatively simple problems such as digit recognition in a controlled environment, for which many machine learning algorithms already report reasonable results. Here, we present a series of experiments which indicate that these models show promise in solving harder learning problems that exhibit many factors of variation. These models are compared with well-established algorithms such as Support Vector Machines and single hidden-layer feed-forward neural networks.

YouTube - Visual Perception with Deep Learning

by ogrisel (via)
A long-term goal of Machine Learning research is to solve highly complex "intelligent" tasks, such as visual perception, auditory perception, and language understanding. To reach that goal, the ML community must solve two problems: the Deep Learning Problem, and the Partition Function Problem. There is considerable theoretical and empirical evidence that complex tasks, such as invariant object recognition in vision, require "deep" architectures, composed of multiple layers of trainable non-linear modules. The Deep Learning Problem is related to the difficulty of training such deep architectures. Several methods have recently been proposed to train (or pre-train) deep architectures in an unsupervised fashion. Each layer of the deep architecture is composed of an encoder which computes a feature vector from the input, and a decoder which reconstructs the input from the features. A large number of such layers can be stacked and trained sequentially, thereby learning a deep hierarchy of features with increasing levels of abstraction. The training of each layer can be seen as shaping an energy landscape with low valleys around the training samples and high plateaus everywhere else. Forming these high plateaus constitutes the so-called Partition Function problem. A particular class of methods for deep energy-based unsupervised learning will be described that solves the Partition Function problem by imposing sparsity constraints on the features. The method can learn multiple levels of sparse and overcomplete representations of data. When applied to natural image patches, the method produces hierarchies of filters similar to those found in the mammalian visual cortex. An application to category-level object recognition with invariance to pose and illumination will be described (with a live demo). Another application to vision-based navigation for off-road mobile robots will be described (with videos). The system autonomously learns to discriminate obstacles from traversable areas at long range.
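The encoder/decoder stacking described in the abstract can be sketched in miniature. The code below is a toy greedy layer-wise autoencoder stack with tied weights on random data; the architecture sizes, learning rate, and squared-error objective are all assumptions for illustration, not the talk's actual energy-based method.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder_layer(X, n_hidden, lr=0.05, epochs=200):
    """Train one encoder/decoder pair to reconstruct X under squared error."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    for _ in range(epochs):
        H = np.tanh(X @ W)            # encoder: feature vector from the input
        Xhat = H @ W.T                # decoder (tied weights): reconstruct input
        err = Xhat - X
        # gradient of the reconstruction error w.r.t. W (tied-weight chain rule)
        dH = (err @ W) * (1 - H**2)
        gW = X.T @ dH + err.T @ H
        W -= lr * gW / len(X)
    return W

# Greedy stacking: each layer is trained on the previous layer's features,
# yielding a hierarchy of increasingly abstract representations.
X = rng.normal(size=(64, 16))
weights, feats = [], X
for n_h in (8, 4):
    W = train_autoencoder_layer(feats, n_h)
    weights.append(W)
    feats = np.tanh(feats @ W)        # frozen features feed the next layer
```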

YouTube - The Next Generation of Neural Networks

by ogrisel (via)
In the 1980s, new learning algorithms for neural networks promised to solve difficult classification tasks, like speech or object recognition, by learning many layers of non-linear features. The results were disappointing for two reasons: there was never enough labeled data to learn millions of complicated features, and learning was much too slow in deep neural networks with many layers of features. These problems can now be overcome by learning one layer of features at a time and by changing the goal of learning. Instead of trying to predict the labels, the learning algorithm tries to create a generative model that produces data which looks just like the unlabeled training data. These new neural networks outperform other machine learning methods when labeled data is scarce but unlabeled data is plentiful. An application to very fast document retrieval will be described.
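The "one layer at a time, generative" training described here can be sketched in miniature. The update below is a contrastive-divergence-style rule for a restricted Boltzmann machine, which is one common choice for such layers; the layer sizes, learning rate, and data are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(V, n_hidden, lr=0.05, epochs=100):
    """One generative layer: nudge W so reconstructions resemble the data V."""
    n_vis = V.shape[1]
    W = rng.normal(scale=0.1, size=(n_vis, n_hidden))
    for _ in range(epochs):
        # positive phase: hidden features driven by the (unlabeled) data
        ph = sigmoid(V @ W)
        h = (rng.random(ph.shape) < ph).astype(float)
        # negative phase: one reconstruction step (CD-1 style)
        pv = sigmoid(h @ W.T)
        ph2 = sigmoid(pv @ W)
        W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
    return W

# Greedy stack: train a layer, then use its features as the next layer's "data".
V = (rng.random((128, 20)) < 0.5).astype(float)
layers, data = [], V
for n_h in (12, 6):
    W = train_rbm(data, n_h)
    layers.append(W)
    data = sigmoid(data @ W)
```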

ZSFA -- Vellum

by greut

Vellum is a simple build tool like make but written in Python, using a simple yet flexible YAML-based format. Rather than attempt a full AI engine just to get some software built, I went with the simpler algorithm of a “graph”.
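The "graph" approach amounts to topologically sorting tasks by their dependencies, so each task runs only after the tasks it needs. A minimal sketch with Python's standard library follows; the task names and the dict shape (as it might come out of a YAML file) are hypothetical, not Vellum's actual format.

```python
from graphlib import TopologicalSorter

# Hypothetical build tasks, each mapped to the tasks it depends on.
tasks = {
    "clean": [],
    "compile": ["clean"],
    "test": ["compile"],
    "package": ["compile", "test"],
}

# static_order() yields a valid execution order (and raises on cycles).
order = list(TopologicalSorter(tasks).static_order())
assert order.index("compile") < order.index("package")
```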

Don Quixote Time Series Software

by ogrisel (via)
Don Quixote is new business software that uses artificial intelligence and powerful statistical methodology to achieve high forecasting accuracy. Whether you forecast market shares, sales, profits, or demand for services or material, Don Quixote will make your work faster, easier, and more accurate, and will improve your understanding of the nature of time series.
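The page gives no details of Don Quixote's methods. As a purely illustrative baseline for what time-series forecasting software computes, simple exponential smoothing is a classical starting point (this is not the product's algorithm):

```python
def exponential_smoothing(series, alpha=0.3):
    """One-step-ahead forecast: each forecast blends the latest observation
    with the previous forecast, weighted by the smoothing factor alpha."""
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

# On a flat series the forecast should simply track the level.
level = exponential_smoothing([2.0, 2.0, 2.0, 2.0])
```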

2007

JEliza - The Open-Source AI

by rike_ (via)
The computer program JEliza is the most capable German-speaking artificial intelligence that follows the principles of free software. It is a conversation simulator, i.e. an artificial intelligence you can hold conversations with. JEliza uses a semantic network to store all conversation histories, and learns from them as it goes.
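The semantic-network idea can be sketched as a store of subject–relation–object triples that grows as the program "learns" from conversation. This is a toy illustration with invented names, not JEliza's implementation.

```python
# Toy semantic network: facts as (subject, relation, object) triples,
# as a chatbot might accumulate them from conversation.
facts = set()

def learn(subject, relation, obj):
    """Add one fact to the network."""
    facts.add((subject, relation, obj))

def ask(subject, relation):
    """Query every object linked to the subject by the given relation."""
    return {o for s, r, o in facts if s == subject and r == relation}

learn("cat", "is_a", "animal")
learn("cat", "likes", "milk")
learn("dog", "is_a", "animal")
```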

Robot Powered by Moth’s Brain

by flakki
A new paper studies the effects of robots exhibiting roach-like behaviour on real cockroaches.

Logo library, particularly in vector format

by 4004 & 18 others
Logo library, particularly in vector (Illustrator) format.

The Perceptron - An Introduction to Neural Networks and AI

by helmeloh
The perceptron is a simplified artificial neural network (Frank Rosenblatt, 1958). Rosenblatt designed it so simply that it could be captured mathematically as a three-stage processing of matrices. But also because the parameters...
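Rosenblatt's learning rule itself is short enough to state in full. A minimal sketch on the linearly separable AND problem follows; the toy data and pass count are illustrative, not from the article.

```python
import numpy as np

# Training data: logical AND, which is linearly separable.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
for _ in range(20):                       # a few passes suffice on this data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # perceptron rule: update only on mistakes, nudging the
        # decision boundary toward the misclassified point
        w += (yi - pred) * xi
        b += (yi - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop reaches zero training errors.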

Machine to Transcendent Mind

by YukuanBlog
The part of this book that appeals to me most is Chapter 2, "Caution! Robot Vehicle Ahead", which covers the author's hands-on experience with autonomous robot vehicles. It describes how the author, Hans Moravec, working at the Mobile Robot Laboratory on a commission from Denning Mobile Robotics, researched how to accomplish autonomous vehicle navigation using range data measured by an obstacle-detection rig composed of twenty-four sonar sensors.

Active users

vrossign
last mark : 01/07/2010 08:43

franktno1
last mark : 19/11/2009 06:27

greut
last mark : 03/10/2009 13:22

ycc2106
last mark : 25/04/2009 07:00

jey
last mark : 22/01/2009 18:19

stan
last mark : 07/12/2008 21:04

ogrisel
last mark : 23/10/2008 15:06

j_c
last mark : 03/10/2008 20:04

kemar
last mark : 01/09/2008 10:05

signalsurf
last mark : 19/05/2008 12:43

rike_
last mark : 29/12/2007 10:16

flakki
last mark : 27/12/2007 00:53

4004
last mark : 24/12/2007 17:52

delavigne
last mark : 13/11/2007 08:35

helmeloh
last mark : 11/11/2007 08:36

YukuanBlog
last mark : 17/09/2007 14:54

psylle
last mark : 13/09/2007 13:24