
FDCL_NN

Neural network library based on the following formulation:

Installation

Dependency

This library depends on the Eigen library for linear algebra.
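
Eigen is a header-only library, so no separate compilation of the dependency is needed; make sure the Eigen headers (for example, the libeigen3-dev package on Debian/Ubuntu, or a copy downloaded from eigen.tuxfamily.org) are on the compiler's include path.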

Compile

To build and run the example provided in the package, run

make
./test_fdcl_nn

Usage

Supported Layers

  • Multi-layer perceptron layer (LAYER_PC)
  • Softmax layer (LAYER_SF)

Example for Classifier

The following example shows how to create a classifier composed of one perceptron layer and one softmax layer, which associates 5x5 binary images of the numbers 1, 2, ..., 5 with the corresponding number.

#include "fdcl_nn.h"

using namespace std;
using namespace Eigen;

int main()
{
	VectorXd x,y;
	fdcl_nn nn;
	std::vector<int> N, type_layer;
	
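	// architecture: 25 input pixels, 15 hidden units, 5 outputs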
	N={25, 15, 5};
	type_layer={LAYER_PC, LAYER_SF};
	nn.init(N,type_layer);

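	// verify the analytic gradient against a numerical approximation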
	nn.dJ_check();

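	// prepare storage for 5 training (input, output) pairs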
	nn.init_data(5);

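	// 5x5 binary images of the digits 1, 2, ..., 5, one per training sample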
	nn.X_data[0] <<	
		0, 1, 1, 0, 0,
		0, 0, 1, 0, 0,
		0, 0, 1, 0, 0,
		0, 0, 1, 0, 0,
		0, 1, 1, 1, 0;

	nn.X_data[1] <<	
		1, 1, 1, 1, 0,
		0, 0, 0, 0, 1,
		0, 1, 1, 1, 0,
		1, 0, 0, 0, 0,
		1, 1, 1, 1, 1;

	nn.X_data[2] <<	
		1, 1, 1, 1, 0,
		0, 0, 0, 0, 1,
		0, 1, 1, 1, 0,
		0, 0, 0, 0, 1,
		1, 1, 1, 1, 0;

	nn.X_data[3] <<	
		0, 0, 0, 1, 0,
		0, 0, 1, 1, 0,
		0, 1, 0, 1, 0,
		1, 1, 1, 1, 1,
		0, 0, 0, 1, 0;		
		
	nn.X_data[4] <<	
		1, 1, 1, 1, 1,
		1, 0, 0, 0, 0,
		1, 1, 1, 1, 0,
		0, 0, 0, 0, 1,
		1, 1, 1, 1, 0;		
	
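	// one-hot labels corresponding to each image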
	nn.Y_data[0] << 1, 0, 0, 0, 0;
	nn.Y_data[1] << 0, 1, 0, 0, 0;
	nn.Y_data[2] << 0, 0, 1, 0, 0;
	nn.Y_data[3] << 0, 0, 0, 1, 0;
	nn.Y_data[4] << 0, 0, 0, 0, 1;

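	// train with gradient descent for 15000 epochs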
	nn.grad_descent(15000);
	
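	// print the network output for each training image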
	cout << endl;
	for (int i=0; i<5; i++)
		cout << nn.f(nn.X_data[i]) << endl << endl;


	return 0;
}

More specifically, a neural network is constructed by specifying the number of inputs and outputs of each layer in series, together with the type of each layer, as given by

	N={25, 15, 5};
	type_layer={LAYER_PC, LAYER_SF};
	nn.init(N,type_layer);
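
Each layer thus maps N[i] inputs to N[i+1] outputs, so N has one more entry than type_layer. For instance, a deeper classifier with two perceptron layers followed by a softmax layer would presumably be configured as follows (the hidden-layer sizes 20 and 10 are arbitrary choices for illustration):

	N={25, 20, 10, 5};
	type_layer={LAYER_PC, LAYER_PC, LAYER_SF};
	nn.init(N,type_layer);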

One can verify the computed gradient against a numerical approximation by

	nn.dJ_check();
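
A gradient check of this kind typically compares the analytic gradient of the cost J with a finite-difference approximation such as

	dJ/dw ≈ ( J(w + eps) - J(w - eps) ) / (2*eps)

for a small perturbation eps of each weight; the two values should agree to several significant digits if the backpropagation code is correct. (The exact quantities printed by dJ_check may differ.)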

After the training data are assigned, perform a gradient descent optimization by

	nn.grad_descent(15000);

where 15000 specifies the number of epochs.
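
Once training is complete, the network output for an input vector can be evaluated with nn.f, as in the final loop of the example:

	for (int i=0; i<5; i++)
		cout << nn.f(nn.X_data[i]) << endl << endl;

Since the last layer is a softmax layer, each output is a 5-element vector of class scores; after successful training, the largest entry for X_data[i] should correspond to the label Y_data[i].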
