Thoughts - research notes go here. GOOD!

Artificial Neural Networks

A form of supervised learning. This means the input(s) and optimal output are already known, so the algorithm's job is to find the best/optimal "way" to "get to" the optimal output from the input(s).

This perfectly fits our scenario. The inputs, I'm assuming, would be things like the global variables that change the mechanics as well as the string that generates the pillars. Our optimal output would just be a perfect score. One of the main criticisms of ANNs - that they get too complicated - doesn't really apply to us since the game is 2D sidescroller-like rather than some open-world MMO or something with tons of rapidly changing inputs.

Such a model consists of sets of adaptable "weights" between "neurons", adjusted according to some learning algorithm, and has the ability to approximate non-linear functions from the input(s) to the output.

ANNs can allow for "learning" via the use of a cost function, C(f(x)), which takes in functions and assigns them a value for their optimality. The criteria for optimality obviously differs from task to task. Such a function can help us find the optimal function ("way" of "getting to" the desired output) we'll call f* which has the property: C(f*) <= C(f) for all f in F where F represents "possible functions" for the task. The actual "learning" occurs by adjusting "weights" in the ANN.
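The f* idea above can be made concrete with a toy sketch. The candidate functions and target below are made up purely for illustration; in practice F is a huge (usually infinite) space searched by adjusting weights, not a short list.

```python
# Toy illustration of f* with C(f*) <= C(f) for all f in F.
# The candidates and target here are hypothetical, not from the notes.
candidates = [
    lambda x: x,          # identity
    lambda x: 2 * x,      # doubling
    lambda x: x * x,      # squaring
]

# Suppose the desired mapping is f(x) = 2x on inputs 0..4.
inputs = range(5)
target = lambda x: 2 * x

def cost(f):
    """Sum of absolute errors against the desired outputs -- lower is better."""
    return sum(abs(f(x) - target(x)) for x in inputs)

# f* is the candidate with minimal cost.
f_star = min(candidates, key=cost)
print(cost(f_star))  # the doubling candidate matches exactly, so cost 0
```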

It seems to me that our cost function would simply be the absolute value of (perfect score - actual score). I believe the main thing to be wary of with cost functions is making sure we don't settle into a local minimum rather than the global minimum. However, it seems like this might not actually be a problem due to the simplicity of our cost function. That may be a naive thought, though, due to ignorance. Only time will tell!!
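As a quick sketch of that cost function (the perfect-score value below is a made-up placeholder; the real one depends on the game):

```python
PERFECT_SCORE = 100  # hypothetical -- the actual perfect score depends on the game

def cost(actual_score):
    """Cost as described in the notes: |perfect score - actual score|.
    Zero iff the run was perfect; larger means worse."""
    return abs(PERFECT_SCORE - actual_score)

print(cost(100))  # 0 -- a perfect run
print(cost(73))   # 27
```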

As said, the "learning" depends on the learning algorithm chosen. There are several different, popular methods for training ANNs.


Neuroevolution

A method of training ANNs via evolutionary algorithms. This means processes such as reproduction, mutation, and selection, as described by biological evolution according to natural selection, are used to optimize the ANN at hand.

Basic implementation of emulating evolution:
1) Generate initial population randomly
2) Evaluate individuals' fitness according to some appropriate function
3) Repeat the following until done (e.g. time limit reached or sufficient fitness attained)
-- a) Select best-fit individuals for breeding
-- b) Produce offspring via crossover and random mutation
-- c) Evaluate offsprings' fitness
-- d) Replace least-fit population with new offspring
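The steps above can be sketched as a small genetic algorithm. The genome, fitness function (OneMax, i.e. counting 1-bits), and all the parameter values here are illustrative stand-ins, not anything specific to our game:

```python
import random

random.seed(0)

GENOME_LEN = 20     # illustrative sizes, not tuned for any real task
POP_SIZE = 30
MUTATION_RATE = 0.05

def fitness(genome):
    """OneMax: count of 1-bits -- a stand-in for a game score."""
    return sum(genome)

def crossover(a, b):
    """Single-point crossover producing one offspring."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# 1) Generate initial population randomly
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

# 3) Repeat until done (here: perfect fitness or a generation cap)
for generation in range(200):
    # 2) / c) Evaluate individuals' fitness and sort best-first
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    # a) Select best-fit individuals for breeding (top half)
    parents = population[:POP_SIZE // 2]
    # b) Produce offspring via crossover and random mutation
    offspring = [mutate(crossover(*random.sample(parents, 2)))
                 for _ in range(POP_SIZE - len(parents))]
    # d) Replace the least-fit individuals with the new offspring
    population = parents + offspring

print(fitness(max(population, key=fitness)))
```

Keeping the parents unchanged each generation (elitism) guarantees the best fitness never decreases, which is one simple way to make step d) safe.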

It seems like the fitness function used in neuroevolution might be an implicit implementation of the cost function mentioned earlier. A maximum in the fitness function means the corresponding individual did "well" at the task at hand; therefore, this individual (which is just an anthropomorphized function getting from the inputs to the outputs) was efficient in some sense and would correspond to a minimum in the cost function. I could easily be wrong, though; maybe a maximum in the fitness function correlates with a minimum in the cost function, but the two still need to be treated as separate functions (perhaps due to differing domains).

This basic implementation is applied to the make-up (the connections and weights) of the artificial neural network being trained. This application can be either direct and specific or indirect and general.

Drawbacks include fitness function evaluation, search space size, convergence toward local optima, and handling of dynamic data.

None of these, except maybe local optima, seem like actual threats, since our game is quite simple compared to some of the limit-pushing situations faced by ANNs trained via neuroevolution.

There exist different methods of neuroevolution with different features. One big distinction is whether the ANN is trained via changes to both its weights and the connections between neurons (its topology), or via changes to just its weights while keeping a static topology (i.e. NEAT vs. conventional NE, respectively). Another distinction is how the agent (a pseudo-anthropomorphization of the ANN) receives information about its virtual environment; in other words, how the state of the agent's "world" is represented.

Good paper on applied neuroevolution:

Other papers:


Preliminaries: Two main types of neurons in ANNs
1. Perceptron - a series of binary inputs coupled with weights that produces a single output of 1 iff the sum of weights*inputs is greater than some threshold
2. Sigmoid neuron - a series of inputs in the interval [0, 1] coupled with weights that produces the output of the sigmoid function s(z), where z = sum of weights*inputs + bias
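Both neuron types are simple enough to sketch directly. The weights and threshold below are made-up values chosen so the perceptron computes AND:

```python
import math

def perceptron(inputs, weights, threshold):
    """Outputs 1 iff the weighted sum of inputs exceeds the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def sigmoid(z):
    """The sigmoid (logistic) activation function."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_neuron(inputs, weights, bias):
    """Outputs s(z) where z = sum of weights*inputs + bias."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# A perceptron computing AND: both inputs must be on to clear the threshold.
print(perceptron([1, 1], [1.0, 1.0], 1.5))  # 1
print(perceptron([1, 0], [1.0, 1.0], 1.5))  # 0

# The sigmoid version of the same unit gives a graded output instead of a hard 0/1.
print(sigmoid_neuron([1, 1], [1.0, 1.0], -1.5))
```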

Since the sigmoid function is continuous and smooth, small changes in its input result in small changes in its output, and we can linearly approximate those changes in the output. The sigmoid function is a specific instance of the general notion of an activation function; it is used mainly because of the ease of taking the derivative of the natural exponential function.
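That "ease of taking the derivative" shows up as the closed form s'(z) = s(z) * (1 - s(z)), which follows from differentiating 1/(1 + e^-z). A quick numerical check:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    """Closed-form derivative: s'(z) = s(z) * (1 - s(z))."""
    s = sigmoid(z)
    return s * (1 - s)

# Verify against a central finite difference at a few sample points.
h = 1e-6
for z in (-2.0, 0.0, 1.5):
    numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
    assert abs(numeric - sigmoid_prime(z)) < 1e-6
print("derivative check passed")
```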

A frequently used cost function amongst ANNs is the mean squared error function: C(w, b) = 1/(2n) * sum over x of ||y(x) - a||^2, where w is the set of all weights, b is the set of all biases, n is the number of training inputs, y(x) is the desired vector-valued output for input x, and a is the vector-valued output we actually get from the network. We want to minimize this cost function by picking appropriate w and b.
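A direct transcription of that formula (the training data below is made up for illustration):

```python
def mse_cost(desired, actual):
    """Mean squared error: C = 1/(2n) * sum over x of ||y(x) - a||^2.
    desired and actual are lists of output vectors, one per training input."""
    n = len(desired)
    total = 0.0
    for y, a in zip(desired, actual):
        total += sum((yi - ai) ** 2 for yi, ai in zip(y, a))
    return total / (2 * n)

# Two hypothetical training inputs with 2-dimensional outputs.
desired = [[1.0, 0.0], [0.0, 1.0]]
actual  = [[0.8, 0.1], [0.3, 0.7]]
print(mse_cost(desired, actual))  # (0.05 + 0.18) / 4 = 0.0575
```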

Page last modified on July 15, 2016, at 10:09 AM