Introduction to Neuroph

1. Introduction

This article looks at Neuroph - an open-source library for creating neural networks and making use of machine learning.

In this article, we'll go over the core concepts and several examples of how to put it all together.

2. Neuroph

We can interact with Neuroph using:

  • a GUI-based tool
  • a Java library

Both approaches rely on the underlying class hierarchy that builds artificial neural networks out of layers of Neurons.

We'll focus on the programmatic side, but we'll refer to several shared classes from Neuroph's GUI-based approach to help clarify what we're doing.

For more on the GUI-based approach, take a look at the Neuroph documentation.

2.1. Dependencies

To use Neuroph, we need to add the following Maven entry:

<dependency>
    <groupId>org.beykery</groupId>
    <artifactId>neuroph</artifactId>
    <version>2.92</version>
</dependency>

The latest version can be found on Maven Central.

3. Key Classes and Concepts

All of the basic conceptual building blocks used have a corresponding Java class.

Neurons are connected into Layers, which are then grouped into NeuralNetworks. NeuralNetworks are in turn trained using LearningRules and DataSets.

3.1. Neuron

The Neuron class has four primary attributes:

  1. inputConnections: weighted connections between Neurons
  2. inputFunction: specifies the weights and vector sums applied to the incoming connection data
  3. transferFunction: the activation function applied to that summed input to produce the outgoing data
  4. output: the output value resulting from applying the transferFunction and inputFunction to the inputConnections

Together, these four primary attributes establish the behavior:

output = transferFunction(inputFunction(inputConnections));
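
As a minimal sketch, we can make this behavior explicit when constructing a Neuron. This assumes the WeightedSum input function and Sigmoid transfer function that ship with Neuroph:

import org.neuroph.core.Neuron;
import org.neuroph.core.input.WeightedSum;
import org.neuroph.core.transfer.Sigmoid;

// inputFunction = weighted sum of the incoming connections,
// transferFunction = sigmoid activation applied to that sum
Neuron neuron = new Neuron(new WeightedSum(), new Sigmoid());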

3.2. Layer

A Layer is essentially a grouping of Neurons such that each Neuron in the Layer is (usually) connected only with Neurons in the preceding and subsequent Layers.

Layers, therefore, pass information between themselves through the weighted functions that exist in their Neurons.

Neurons can be added to a Layer:

Layer layer = new Layer();
layer.addNeuron(n);
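
To make those weighted functions concrete, here's a small sketch that wires two Neurons together directly using Neuroph's ConnectionFactory; the 0.5 weight is just an illustrative starting value:

Neuron from = new Neuron();
Neuron to = new Neuron();

// create a weighted connection from one Neuron to another;
// the weight (0.5 here) is what training will later adjust
ConnectionFactory.createConnection(from, to, 0.5);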

3.3. NeuralNetwork

The top-level superclass NeuralNetwork is subclassed into several familiar kinds of artificial neural networks, including convolutional neural networks (subclass ConvolutionalNetwork), Hopfield neural networks (subclass Hopfield), and multilayer perceptron neural networks (subclass MultiLayerPerceptron).

All NeuralNetworks are composed of Layers, which are usually organized into a trichotomy:

  1. input layers
  2. hidden layers
  3. output layers

If we're using the constructor of a subclass of NeuralNetwork (such as Perceptron), we can pass the number of Neurons for each Layer, in order, using this simple method:

NeuralNetwork ann = new Perceptron(2, 4, 1);

Sometimes we'll want to do this manually (and it's good to see what's going on under the hood). The basic operation to add a Layer to a NeuralNetwork is accomplished like this:

NeuralNetwork ann = new NeuralNetwork();
Layer layer = new Layer();
ann.addLayer(0, layer);
ann.setInputNeurons(layer.getNeurons());

The first argument specifies the index of the Layer in the NeuralNetwork; the second argument specifies the Layer itself. Layers added manually should be connected using the ConnectionFactory class:

ann.addLayer(0, inputLayer);
ann.addLayer(1, hiddenLayerOne);
ConnectionFactory.fullConnect(ann.getLayerAt(0), ann.getLayerAt(1));

The first and last Layer should also be connected:

ConnectionFactory.fullConnect(ann.getLayerAt(0), ann.getLayerAt(ann.getLayersCount() - 1), false);
ann.setOutputNeurons(ann.getLayerAt(ann.getLayersCount() - 1).getNeurons());

Remember that the strength and power of a NeuralNetwork are largely dependent on:

  1. the number of Layers in the NeuralNetwork
  2. the number of Neurons in each Layer (and the weighted functions between them), and
  3. the effectiveness of the training algorithms/accuracy of the DataSet

3.4. Training Our NeuralNetwork

NeuralNetworks are trained using the DataSet and LearningRule classes.

DataSet is used for representing and supplying the information to be learned or used to train the NeuralNetwork. DataSets are characterized by their input size, output size, and rows (DataSetRow).

int inputSize = 2;
int outputSize = 1;
DataSet ds = new DataSet(inputSize, outputSize);

DataSetRow rOne = new DataSetRow(new double[] {0, 0}, new double[] {0});
ds.addRow(rOne);
DataSetRow rTwo = new DataSetRow(new double[] {1, 1}, new double[] {0});
ds.addRow(rTwo);

LearningRule specifies the way the NeuralNetwork is trained with the DataSet. Subclasses of LearningRule include BackPropagation and SupervisedLearning.

NeuralNetwork ann = new NeuralNetwork();
//...
BackPropagation backPropagation = new BackPropagation();
backPropagation.setMaxIterations(1000);
ann.learn(ds, backPropagation);
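
Besides capping iterations, a supervised rule can also stop on error. As a hedged sketch, the 0.01 error threshold and 0.1 learning rate below are illustrative values, not tuned recommendations:

BackPropagation backPropagation = new BackPropagation();
backPropagation.setMaxIterations(1000); // hard cap on training iterations
backPropagation.setMaxError(0.01);      // stop early once total error falls below this
backPropagation.setLearningRate(0.1);   // step size for weight updates
ann.learn(ds, backPropagation);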

4. Putting It All Together

Now let's put those building blocks together into a real example. We're going to start by combining several layers together into the familiar input layer, hidden layer, and output layer pattern exemplified by most neural network architectures.

4.1. Layers

We'll assemble our NeuralNetwork by combining four layers. Our goal is to build a (2, 4, 4, 1) NeuralNetwork.
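
As an aside, the same (2, 4, 4, 1) topology could likely be created in a single line with the MultiLayerPerceptron subclass mentioned in section 3.3; here we build it by hand to see each step:

// convenience subclass: one constructor call instead of manual wiring
NeuralNetwork ann = new MultiLayerPerceptron(2, 4, 4, 1);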

Let's first define our input layer:

Layer inputLayer = new Layer();
inputLayer.addNeuron(new Neuron());
inputLayer.addNeuron(new Neuron());

Next, we implement hidden layer one:

Layer hiddenLayerOne = new Layer();
hiddenLayerOne.addNeuron(new Neuron());
hiddenLayerOne.addNeuron(new Neuron());
hiddenLayerOne.addNeuron(new Neuron());
hiddenLayerOne.addNeuron(new Neuron());

And hidden layer two:

Layer hiddenLayerTwo = new Layer();
hiddenLayerTwo.addNeuron(new Neuron());
hiddenLayerTwo.addNeuron(new Neuron());
hiddenLayerTwo.addNeuron(new Neuron());
hiddenLayerTwo.addNeuron(new Neuron());

Finally, we define our output layer:

Layer outputLayer = new Layer();
outputLayer.addNeuron(new Neuron());

4.2. NeuralNetwork

Next, we can put them together into a NeuralNetwork:

NeuralNetwork ann = new NeuralNetwork();
ann.addLayer(0, inputLayer);
ann.addLayer(1, hiddenLayerOne);
ConnectionFactory.fullConnect(ann.getLayerAt(0), ann.getLayerAt(1));
ann.addLayer(2, hiddenLayerTwo);
ConnectionFactory.fullConnect(ann.getLayerAt(1), ann.getLayerAt(2));
ann.addLayer(3, outputLayer);
ConnectionFactory.fullConnect(ann.getLayerAt(2), ann.getLayerAt(3));
ConnectionFactory.fullConnect(ann.getLayerAt(0), ann.getLayerAt(ann.getLayersCount() - 1), false);
ann.setInputNeurons(inputLayer.getNeurons());
ann.setOutputNeurons(outputLayer.getNeurons());

4.3. Training

For training purposes, let's put together a DataSet by specifying the size of both the input and resulting output vector:

int inputSize = 2;
int outputSize = 1;
DataSet ds = new DataSet(inputSize, outputSize);

We add elementary rows to our DataSet adhering to the input and output constraints defined above – our goal in this example is to teach our network to do basic XOR (exclusive or) operations:

DataSetRow rOne = new DataSetRow(new double[] {0, 1}, new double[] {1});
ds.addRow(rOne);
DataSetRow rTwo = new DataSetRow(new double[] {1, 1}, new double[] {0});
ds.addRow(rTwo);
DataSetRow rThree = new DataSetRow(new double[] {0, 0}, new double[] {0});
ds.addRow(rThree);
DataSetRow rFour = new DataSetRow(new double[] {1, 0}, new double[] {1});
ds.addRow(rFour);

Next, let's train our NeuralNetwork with the built-in BackPropagation LearningRule:

BackPropagation backPropagation = new BackPropagation();
backPropagation.setMaxIterations(1000);
ann.learn(ds, backPropagation);
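
For a quick sanity check that training converged, we can inspect the total network error that Neuroph's supervised learning rules track; a value near 0 suggests the XOR mapping was learned:

// total network error after the last completed iteration
System.out.println("Total error: " + backPropagation.getTotalNetworkError());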

4.4. Testing

Now that our NeuralNetwork is trained, let's test it out. For each pair of logical values we passed into our DataSet as a DataSetRow, we run the following kind of test:

ann.setInput(0, 1);
ann.calculate();
double[] networkOutputOne = ann.getOutput();
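
Rather than hard-coding each pair, the whole test can be sketched as a loop over the DataSet we built above (java.util.Arrays is used for printing; the format mirrors the output shown at the end of this section):

for (DataSetRow row : ds.getRows()) {
    ann.setInput(row.getInput()); // feed the row's input vector
    ann.calculate();              // forward pass
    System.out.println("Testing: " + Arrays.toString(row.getInput())
      + " Expected: " + Arrays.toString(row.getDesiredOutput())
      + " Result: " + Arrays.toString(ann.getOutput()));
}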

An important thing to remember is that NeuralNetworks only output values on the inclusive interval [0, 1]. To output some other value, we must normalize and denormalize our data.
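
Normalization itself is plain arithmetic rather than anything Neuroph-specific; a minimal min-max sketch (the helper names are ours, not part of the Neuroph API):

// map a raw value from [min, max] into [0, 1] before feeding the network
static double normalize(double value, double min, double max) {
    return (value - min) / (max - min);
}

// map a network output from [0, 1] back to the original [min, max] range
static double denormalize(double value, double min, double max) {
    return value * (max - min) + min;
}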

In this case, for logical operations, 0 and 1 are perfect for the job. The output will be:

Testing: 1, 0 Expected: 1.0 Result: 1.0
Testing: 0, 1 Expected: 1.0 Result: 1.0
Testing: 1, 1 Expected: 0.0 Result: 0.0
Testing: 0, 0 Expected: 0.0 Result: 0.0

We see that our NeuralNetwork successfully predicts the right answer!

5. Conclusion

We've just reviewed the basic concepts and classes used by Neuroph.

Further information on this library is available here, and the code examples used in this article can be found over on GitHub.