2022
TencentARC/GFPGAN: GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
by srcmax: GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
Flowframes - Fast Video Interpolation for any GPU by N00MKRAD
by srcmax: Flowframes is a simple but powerful app that uses advanced AI frameworks to interpolate videos, increasing their frame rate in the most natural-looking way possible.
2021
Pixop - AI video enhancement and upscaling in the cloud
by srcmax & 1 other: Pixop's cloud service, intuitive interface, and AI/ML-powered algorithms make video enhancement a breeze. No plugins, no subscription fees, no hassle. Built to empower creators and rightsholders to update their digital archives for easy monetization.
2020
Conversiobot
by equis: A chatbot platform that promises to grow your social following and drive engagement without spending thousands of dollars, investing time you simply don't have, or needing any special skills or knowledge. All you need to do is activate this latest artificial-intelligence technology and join big Fortune 500 companies like Facebook, Spotify, Starbucks, Staples, The Wall Street Journal, Pizza Hut, Amtrak, Disney, H&M, and Mastercard, all of which use similar AI technology.
AgencyReel 2.0 Review
by equis: An AI-based app for anyone looking to start, grow, and run a serious marketing-services agency from scratch.
2009
life : Built with Processing
by ycc2106: reproducing virtual bugs.
2008
Conditional Random Fields
by ogrisel (via): Conditional random fields (CRFs) are a probabilistic framework for labeling and segmenting structured data, such as sequences, trees and lattices. The underlying idea is that of defining a conditional probability distribution over label sequences given a particular observation sequence, rather than a joint distribution over both label and observation sequences. The primary advantage of CRFs over hidden Markov models is their conditional nature, resulting in the relaxation of the independence assumptions required by HMMs in order to ensure tractable inference. Additionally, CRFs avoid the label bias problem, a weakness exhibited by maximum entropy Markov models (MEMMs) and other conditional Markov models based on directed graphical models. CRFs outperform both MEMMs and HMMs on a number of real-world tasks in many fields, including bioinformatics, computational linguistics and speech recognition.
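The conditional modeling idea in this blurb can be illustrated with a toy linear-chain CRF: emission and transition scores (here random, purely illustrative — not part of any linked CRF software) define p(labels | observation) once normalized by the partition function, which the forward algorithm computes without enumerating sequences.

```python
import numpy as np
from itertools import product

# Toy linear-chain CRF with hypothetical, randomly drawn scores.
# emission[t, y]    : score of label y at position t given the observation
# transition[y, y2] : score of moving from label y to label y2
rng = np.random.default_rng(0)
T, K = 4, 3                              # sequence length, number of labels
emission = rng.normal(size=(T, K))
transition = rng.normal(size=(K, K))

def sequence_score(labels):
    """Unnormalized log-score of one label sequence."""
    s = emission[0, labels[0]]
    for t in range(1, T):
        s += transition[labels[t - 1], labels[t]] + emission[t, labels[t]]
    return s

def log_partition():
    """Forward algorithm: log-sum-exp over all K**T label sequences."""
    alpha = emission[0].copy()
    for t in range(1, T):
        alpha = emission[t] + np.logaddexp.reduce(
            alpha[:, None] + transition, axis=0)
    return np.logaddexp.reduce(alpha)

# Conditional probability p(y | x) of one labeling — the CRF's core object.
labels = (0, 2, 1, 1)
log_p = sequence_score(labels) - log_partition()

# Sanity check: brute-force probabilities over all labelings sum to 1.
logZ = log_partition()
total = sum(np.exp(sequence_score(y) - logZ)
            for y in product(range(K), repeat=T))
```

Because only the conditional distribution is modeled, nothing here assumes the observations are independent given the labels — the relaxation the blurb credits CRFs with over HMMs.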
An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation [PDF]
by ogrisel (via): Recently, several learning algorithms relying on models with deep architectures have been proposed. Though they have demonstrated impressive performance, to date they have only been evaluated on relatively simple problems such as digit recognition in a controlled environment, for which many machine learning algorithms already report reasonable results. Here, we present a series of experiments which indicate that these models show promise in solving harder learning problems that exhibit many factors of variation. These models are compared with well-established algorithms such as Support Vector Machines and single hidden-layer feed-forward neural networks.
YouTube - Visual Perception with Deep Learning
by ogrisel (via): A long-term goal of Machine Learning research is to solve highly complex "intelligent" tasks, such as visual perception, auditory perception, and language understanding. To reach that goal, the ML community must solve two problems: the Deep Learning Problem and the Partition Function Problem.
There is considerable theoretical and empirical evidence that complex tasks, such as invariant object recognition in vision, require "deep" architectures, composed of multiple layers of trainable non-linear modules. The Deep Learning Problem is related to the difficulty of training such deep architectures.
Several methods have recently been proposed to train (or pre-train) deep architectures in an unsupervised fashion. Each layer of the deep architecture is composed of an encoder, which computes a feature vector from the input, and a decoder, which reconstructs the input from the features. A large number of such layers can be stacked and trained sequentially, thereby learning a deep hierarchy of features with increasing levels of abstraction. The training of each layer can be seen as shaping an energy landscape with low valleys around the training samples and high plateaus everywhere else. Forming these high plateaus constitutes the so-called Partition Function Problem.
A particular class of methods for deep energy-based unsupervised learning will be described that solves the Partition Function Problem by imposing sparsity constraints on the features. The method can learn multiple levels of sparse and overcomplete representations of data. When applied to natural image patches, the method produces hierarchies of filters similar to those found in the mammalian visual cortex.
An application to category-level object recognition with invariance to pose and illumination will be described (with a live demo). Another application to vision-based navigation for off-road mobile robots will be described (with videos). The system autonomously learns to discriminate obstacles from traversable areas at long range.
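The layer-wise encoder/decoder stacking described in the abstract can be sketched with plain reconstruction-error autoencoders. This is a simplification: the talk's energy-based, sparsity-constrained method is replaced here by a hypothetical squared-error objective, and all sizes, rates, and epoch counts are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_layer(X, n_hidden, lr=0.01, epochs=200):
    """Train one encoder/decoder pair by reconstruction; return the encoder."""
    n_in = X.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
    for _ in range(epochs):
        H = np.tanh(X @ W_enc)                  # encoder: feature vector
        X_hat = H @ W_dec                       # decoder: reconstruction
        err = X_hat - X                         # squared-error residual
        dH = (err @ W_dec.T) * (1.0 - H ** 2)   # backprop through tanh
        W_dec -= lr * H.T @ err / len(X)
        W_enc -= lr * X.T @ dH / len(X)
    return W_enc

# Greedy stacking: each new layer is trained on the previous layer's features,
# giving increasingly abstract (here, increasingly compact) representations.
X = rng.normal(size=(100, 16))
codes, encoders = X, []
for n_hidden in (12, 8, 4):
    W = train_layer(codes, n_hidden)
    encoders.append(W)
    codes = np.tanh(codes @ W)
```

Each `train_layer` call only ever sees the output of the layer below it, which is what makes the sequential, unsupervised training the abstract describes possible.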
YouTube - The Next Generation of Neural Networks
by ogrisel (via): In the 1980s, new learning algorithms for neural networks promised to solve difficult classification tasks, like speech or object recognition, by learning many layers of non-linear features. The results were disappointing for two reasons: there was never enough labeled data to learn millions of complicated features, and learning was much too slow in deep neural networks with many layers of features. These problems can now be overcome by learning one layer of features at a time and by changing the goal of learning. Instead of trying to predict the labels, the learning algorithm tries to create a generative model that produces data which looks just like the unlabeled training data. These new neural networks outperform other machine learning methods when labeled data is scarce but unlabeled data is plentiful. An application to very fast document retrieval will be described.
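The "generative model of the unlabeled data, one layer at a time" recipe can be sketched as a greedy stack of restricted Boltzmann machines trained with one-step contrastive divergence (CD-1). Bias terms are omitted for brevity, and every size and learning rate below is an illustrative assumption, not a value from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, lr=0.1, epochs=50):
    """Fit one RBM weight matrix to binary data V with CD-1 updates."""
    n_visible = V.shape[1]
    W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
    for _ in range(epochs):
        # Positive phase: hidden unit probabilities given the data.
        h_pos = sigmoid(V @ W)
        # Negative phase: one reconstruction step of Gibbs sampling.
        v_neg = sigmoid(h_pos @ W.T)
        h_neg = sigmoid(v_neg @ W)
        # Move the model's statistics toward the data's statistics.
        W += lr * (V.T @ h_pos - v_neg.T @ h_neg) / len(V)
    return W

# Greedy stacking: the first layer's features become the second layer's "data".
V = (rng.random((200, 20)) > 0.5).astype(float)  # toy unlabeled binary data
W1 = train_rbm(V, 10)
H1 = sigmoid(V @ W1)
W2 = train_rbm(H1, 5)
```

No labels appear anywhere in training — the objective is purely to model the unlabeled data, which is the change of goal the talk emphasizes.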
ZSFA -- Vellum
by greut: Vellum is a simple build tool, like make, but written in Python using a simple yet flexible YAML-based format. "Rather than attempt a full AI engine just to get some software built, I went with the simpler algorithm of a 'graph'."