As developers, we can create neural networks; there are many types of networks, with incredible uses.
A neural network is basically a set of neurons connected to each other, and the form or complexity of those connections determines its type.
In AMXX there seems to be no API for creating neural networks, so that is what we are going to build.
Let's start with an example and then move on to more advanced explanations.
PHP Code:
#include <amxmodx>
#include <neural>

public plugin_init() {
	register_plugin("Neural network example", "1.0.0", "LuKks");

	//layers
	neural_layer(8, 3); //(neurons, inputs) -> hidden/input layer
	//neural_layer(8); //(neurons) -> hidden layer
	//neural_layer(8); //(neurons) -> you can add more hidden layers
	neural_layer(1); //(outputs) -> the output layer is the last one
	//at the beginning of the include there are maximums set,
	//for example 6 layers at most; you can modify them

	//ranges are required if you use neural_learn or neural_think instead of the _raw variants
	neural_input_range(0, 0.0, 255.0); //0 -> first input
	neural_input_range(1, 0.0, 255.0); //1 -> second input
	neural_input_range(2, 0.0, 255.0); //2 -> third input
	neural_output_range(0, 0.0, 1.0); //0 -> first output
	//0.0-1.0 is the default, but set it anyway to be clear

	new Float:rate = 0.25; //learning rate; depends on the layers, neurons, iterations, etc.
	new Float:outputs[NEURAL_MAX_SIZE];

	//benchmark();

	//we ask the network whether 255, 255, 255 is light or dark
	outputs = neural_think(Float:{ 255.0, 255.0, 255.0 });
	server_print("255, 255, 255 [1.0] -> %f", outputs[0]); //random value, the network is untrained

	//we ask the network whether 0, 0, 0 is light or dark
	outputs = neural_think(Float:{ 0.0, 0.0, 0.0 });
	server_print("0, 0, 0 [0.0] -> %f", outputs[0]); //random value, the network is untrained

	for(new i, Float:mse; i < 5001; i++) { //iterations
		mse = 0.0;

		//two ways to pass data
		//raw (inputs already scaled to 0.0-1.0)
		mse += neural_learn_raw(Float:{ 1.0, 0.0, 0.0 }, Float:{ 1.0 }, rate); //255, 0, 0
		//automatic (using the predefined ranges), between 8% and 27% less efficient
		mse += neural_learn(Float:{ 0.0, 255.0, 0.0 }, Float:{ 1.0 }, rate);

		//a simple (single-layer) network can't learn just any pattern, only linear ones;
		//a multi-layer network can solve non-linear patterns!
		//OR is linear and XOR isn't -> https://vgy.me/dSbEu0.png

		//if you have a lot of outputs, you can do ->
		/*for(new j; j < layers[layers_id - 1][N_max_neurons]; j++) {
			server_print("output %d -> %f", j, outputs[j]);
		}*/
	}

	//you can also use
	//neural_save("zombie-npc");
	//neural_load("zombie-npc");
	//after a load you can't create more layers (neural_layer) or change anything config-related
	//if you already have a network and you load another one, it will be overwritten, so that's fine
}

public benchmark() {
	new time_start, time_end;

	//raw
	time(_, _, time_start);
	for(new i; i < 200000; i++) { //all of this takes ~7s
		//so each call takes ~0.000035s (about a thirtieth of a millisecond)
		//and my CPU is quite weak: an AMD A10-7860K
		neural_think_raw(Float:{ 1.0, 1.0, 1.0 });
	}
	time(_, _, time_end);

	//with these results I can do real-time thinking without problems
}
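The difference between neural_learn and neural_learn_raw is that the _raw variant expects values already scaled to 0.0-1.0, while neural_input_range/neural_output_range do that scaling for you. A minimal sketch of what that scaling presumably looks like (written in Python for clarity; the helper names are mine, not part of the include):

```python
# hypothetical sketch of what neural_input_range does under the hood:
# raw values are mapped into 0.0-1.0 before they reach the network

def normalize(value, lo, hi):
    # map a raw value from [lo, hi] into [0.0, 1.0]
    return (value - lo) / (hi - lo)

def denormalize(value, lo, hi):
    # inverse mapping: from [0.0, 1.0] back to [lo, hi]
    return lo + value * (hi - lo)

# with neural_input_range(0, 0.0, 255.0), an input of 255 becomes 1.0
print(normalize(255.0, 0.0, 255.0))  # 1.0
print(normalize(0.0, 0.0, 255.0))    # 0.0
print(denormalize(0.5, 0.0, 255.0))  # 127.5
```

The second helper is presumably what the pending neural_denormalize mentioned in the changelog would do for outputs.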
At first, the network did not know that 255, 255, 255 was a light color (actually pure white).
Then we showed it some color patterns, and that way it became able to handle colors that we never taught it.
By the way, the outputs at the end of learning never become exact. There is an explanation for that.
But first, what is a layer?
Spoiler
Well, a simple network could be seen in the following way:
You don't see the word "layer" because the input layer is the only one; that's why it's called simple.
A neural network with multiple layers is composed of an input layer, 1 or more hidden layers, and an output layer.
All layers can have different amounts of neurons.
For example, a network could be: an input layer (which also acts as a hidden layer) with 3 inputs and 3 neurons, a hidden layer with 4 more neurons, and an output layer with 2 neurons (equivalent to 2 outputs).
Visually, it would look like this ->
Just to show that there may be more hidden layers, here we have 6 inputs, a hidden layer with 4 neurons, another hidden layer with 3 neurons, and an output layer with a single neuron.
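To make the picture concrete, here is a rough sketch (in Python, not the AMXX include; all names here are mine) of how a forward pass through such a 6-4-3-1 network reduces to repeated weighted sums followed by an activation function:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    # squashes any value into the 0.0-1.0 range
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_neurons, n_inputs):
    # each neuron holds one weight per input, plus a trailing bias
    return [[random.uniform(-1.0, 1.0) for _ in range(n_inputs + 1)]
            for _ in range(n_neurons)]

def forward(layers, inputs):
    # the outputs of each layer become the inputs of the next one
    for layer in layers:
        outputs = []
        for neuron in layer:
            weights, bias = neuron[:-1], neuron[-1]
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            outputs.append(sigmoid(total))
        inputs = outputs
    return inputs

# 6 inputs -> hidden layer of 4 -> hidden layer of 3 -> 1 output, as in the picture
net = [make_layer(4, 6), make_layer(3, 4), make_layer(1, 3)]
out = forward(net, [0.5] * 6)
print(out)  # one value between 0.0 and 1.0 (the weights are random, so the value is too)
```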
Remember what an NPC is? A Non-Player Character.
Spoiler
You could create a (non-simple) neural network to manage all the NPCs of your game or mod in a realistic way.
And it would not be very difficult to do: to teach the network, you would just play as if you were the NPC and make it learn from your movements.
Neural networks require a lot of processing (it always depends on the case), but once trained, using them is not a large computation, and they are exportable to any other environment.
So, once you have a network that is an expert in your situation, you can save its memory (number of layers, neurons, its weights and biases, etc.) and use it even in another programming language or anywhere else.
In addition, you can always collect the information first and then perform the learning on another platform (or the same one), for example in a desktop program (maximum performance).
This is just a beginning; as I said before, there are many types of neural networks. https://i.imgur.com/fJ0Rs4R.png
This type (deep feed-forward) is the one I have used the most; for other types I have always relied on libraries or systems that already exist, because for me it is not worth learning and re-creating so much functionality.
The existing systems have always served me as learning material and helped me reason better.
A neural network aims to be equal to or better than a human.
A neural network doesn't get distracted, doesn't rest, doesn't look the other way, etc.
A network has a margin of error (it's not probability; that is something different). How does the margin of error work? Well, you saw it right at the beginning with the simple network (and again with the multi-layer one).
Spoiler
0.999930 instead of 1.0
0.000372 instead of 0.0
A multi-layer network could reach exact values in that case, but in most cases you will not want that, because for real uses getting the error to zero is very hard (or impossible); usually a very low error like 0.000372 for 0.0 is fine, but it depends on the case.
A network that intends to perform surgical operations can't have such a large margin of error. Such a large one? In other words, can you accept a small margin in that case? Well, humans don't move with 100% accuracy either; as long as the network is as good as or better than a human, there should not be a problem, right?
Why isn't there a neural network that simulates a human brain?
Spoiler
An adult human brain (or even a younger one, as long as it's not a newborn's) has so many neurons and so many synapses (that is, neural connections) that, for technical reasons, it is impossible to match the necessary computing capacity.
Surely there are attempts with less capacity that are not even close, but much has still been achieved, and is being achieved, with current technology. We should keep a close eye on the advances in quantum computing (although I don't know if that will work out; I have not read about it).
See this comparison of number of neurons.
Bee = 960,000
Cat = 300,000,000
Chimpanzee = 6,200,000,000
Human = 100,000,000,000
Source of the figures (sorry, it's in Spanish): psicologiaymente.com
How do neural networks learn?
Spoiler
As you know, a simple network has only one layer with a certain number of neurons (ignore the bias for now; it's a technical matter of mathematics/algebra).
A neuron has multiple weights, but in a simple network let's say that "each weight simulates a neuron"; it's just a way of putting it.
So, let's say this simple network has 2 inputs and two neurons; then the weights of one neuron would be ->
Weights = [0.21749, -0.18305]
Why these values? They are simply random; all neurons start with random values.
Therefore, at the beginning the network gives you wrong results.
How is the current output calculated for this neuron?
Let's say our two inputs are 0.5 and 0.5 ->
Output = 0.5 * Weights[0] + 0.5 * Weights[1]
Output -> 0.01722
We wanted it to compute 0.5 + 0.5, but the network currently tells us that it is 0.01722.
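You can check that weighted sum yourself; a quick verification in Python:

```python
weights = [0.21749, -0.18305]
inputs = [0.5, 0.5]

# output = inputs[0] * weights[0] + inputs[1] * weights[1]
output = sum(w * x for w, x in zip(weights, inputs))
print(round(output, 5))  # 0.01722
```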
We have to teach it that the value should be a bit higher. How is that corrected?
By modifying the weights with a learning rate, for example 0.01.
In addition to the learning rate, the margin of error with respect to the current inputs is also involved.
Error = 1.0 - 0.01722
That is -> desired value - current value
[size=x-small]I had written out the whole procedure, but it did not convince me; I feel it would be complicated to understand in pure text.[/size]
To simplify the process: with some mathematical operations, all the weights of the current neuron are adjusted toward values that correspond to a lower error (also taking the learning rate into account).
Therefore, by having multiple neurons, a network can learn multiple pieces of knowledge, since each neuron learns more or less depending on its current error.
That way, there will come a time when the weights are adjusted to values such that similar inputs will produce the correct outputs.
In case you did not notice, it's basically brute force.
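A minimal sketch of that correction loop, in Python rather than Pawn (this is the classic delta rule for a single linear neuron; the sample data, learning rate, and names are my own choices). The neuron learns to add its two inputs:

```python
import random

random.seed(42)

# one linear neuron with two weights; it should learn output = a + b
weights = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
rate = 0.1  # learning rate (larger than the 0.01 in the text, just to converge faster)

# training pairs: (inputs, desired output)
samples = [([0.5, 0.5], 1.0), ([0.2, 0.3], 0.5), ([0.1, 0.1], 0.2), ([0.4, 0.2], 0.6)]

for _ in range(5000):
    for inputs, desired in samples:
        output = sum(w * x for w, x in zip(weights, inputs))
        error = desired - output            # desired value - current value
        for i, x in enumerate(inputs):      # nudge each weight toward a lower error
            weights[i] += rate * error * x

print([round(w, 2) for w in weights])  # both weights converge toward 1.0
```

Since the target pattern (a + b) is linear, this single neuron is enough; the XOR-style patterns mentioned above would need the multi-layer version.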
Same learning, different weights?
Spoiler
It's interesting to see the following: I created the neural network from the multi-layer example twice and saved both; the two networks produce practically the same result, but this is how they look internally (open both images in two tabs to better see the difference) -> https://vgy.me/Xp47Zz.jpg https://vgy.me/MXfnoF.jpg
It's interesting that very different weights produce similar results.
Why is it difficult to handle so many neurons?
Spoiler
As you can imagine, with a thousand neurons it will not be easy to find the right weights, even if you are feeding the network the simplest pattern you can think of (so to speak).
Like I said, it's brute force, so it will take as long as your processing power allows.
As humans, after interacting with something multiple times, you understand it more and more.
"Practice makes perfect" is a typical phrase (at least in Latin America) but technically correct.
As I said at the beginning, there are many types of networks. For example, a multi-layer perceptron with back-propagation can handle patterns, image recognition, and surely many other uses; probably language translation too, if I remember correctly.
Surely it could drive a car as well; the thing is that there are so many types of networks that sometimes there is a better fit for a task, but you get the idea.
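To back up the earlier claim that a multi-layer network can solve non-linear patterns like XOR while a single layer cannot, here is a rough Python sketch of a tiny 2-4-1 network trained with back-propagation (the architecture, parameters, and names are all my own choices, not the include's):

```python
import math
import random

random.seed(7)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 4 hidden neurons -> 1 output; each neuron keeps its weights plus a bias
hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # [w0, w1, bias]
out_w = [random.uniform(-1, 1) for _ in range(5)]                       # [w0..w3, bias]

samples = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
rate = 1.0  # learning rate

def forward(inp):
    h = [sigmoid(n[0] * inp[0] + n[1] * inp[1] + n[2]) for n in hidden]
    o = sigmoid(sum(w * v for w, v in zip(out_w, h)) + out_w[4])
    return h, o

def train_step(inp, target):
    h, o = forward(inp)
    delta_o = (target - o) * o * (1.0 - o)  # sigmoid derivative is o * (1 - o)
    for i, n in enumerate(hidden):
        # error propagated back through this hidden neuron's output weight
        delta_h = delta_o * out_w[i] * h[i] * (1.0 - h[i])
        n[0] += rate * delta_h * inp[0]
        n[1] += rate * delta_h * inp[1]
        n[2] += rate * delta_h  # the bias acts like an input fixed at 1
    for i in range(4):
        out_w[i] += rate * delta_o * h[i]
    out_w[4] += rate * delta_o
    return (target - o) ** 2

mse_start = sum(train_step(x, t) for x, t in samples)
for _ in range(20000):
    for x, t in samples:
        train_step(x, t)
mse_end = sum((forward(x)[1] - t) ** 2 for x, t in samples)
print(mse_start, mse_end)  # the error should shrink substantially after training
```

A single-layer neuron run on the same XOR samples would stay near an error of 1.0 no matter how long you train it, because no straight line separates the two classes.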
Learn about neural networks (Google, YouTube, etc):
Spoiler
You can always Google to dig into every detail, although I recommend YouTube because it's more visual; it would be difficult to understand everything from pure text and mathematics (well, it will depend on your weights, haha).
To understand how a neural network can recognize images -> https://www.youtube.com/watch?v=aircAruvnKk
It occurred to me to look up the videos I watched about two years ago; it's like a small course -> https://www.youtube.com/watch?v=jaEIv_E29sk
Now that I remember, I like an example it gives; in case you don't watch it, it was something like this ->
In psychology, a professional has many patient profiles, so a neural network could learn everything about the patients, and that way the network can predict information about current and future patients.
I have modified it quite a lot from what he said, but you get the idea.
Regarding the examples, you can always leave a comment and ask me.
If it's a problem you are having with a plugin, you should probably create a separate post (send me a private message with the link to the post, because I don't check the new ones, thank you), but if it's a question specific to the code or the topic, I think you can comment directly here so that everyone else sees your question as well.
Last changes
- Optimizations: mainly a lot of reduced loops.
- Added: a lot of explanations in the include (.inc).
- Added: neural_learn_raw and _think_raw.
- Renamed: previous functions like neural_learn_values are now neural_learn, and the same with _think.
- Added: neural_input_range and _output_range to define ranges individually for every value (easier to use).
- Removed: neural_simple.inc.
- Changed: neural_multi.inc is now just neural.inc.
Changes
- Example changed to a more practical and understandable one.
- Added some functions for easier usage: neural_values, neural_learn_values and neural_think_values.
- Optimization correction but nothing remarkable.
- Removed comments about neural_simple since it's not very useful except for learning, so I keep the include attached.
Initial
- I removed the version with dynamic arrays (multiple reasons, especially performance and code readability).
- To the neural_multi version I added two useful functions: neural_normalize and neural_range.
- I also want to add neural_denormalize; the function is declared but has no body. If someone wants to implement the inverse of normalize, I would appreciate it.
- Added neural_save and neural_load, requires fvault.
- I changed the enums to use the N_* prefix so that they do not conflict with your plugins (N_id, N_error, etc.).
- Note: I do not plan to add the same functionality to neural_simple because it's too limited; I consider it a library for learning, in a basic way, how a simple neural network works internally. You can always ask.
You should create a plugin to demonstrate this. While it is interesting, it will get buried quickly since many will not know what to do with it. You got lucky and SpawnerF saved it.
Interesting topic. I agree with Bugsy that you should create a demonstration plugin or something useful to show its potential and motivate devs to use it.
Ouch! This is a serious topic. Great job on creating the library. I'm currently exploring it, alongside doing a course on neural networks; soon I will do something with it!