Merge pull request #6 from fszewczyk/neural-net
Examples
fszewczyk authored Nov 8, 2023
2 parents 08743a3 + 4aec860 commit cd55556
Showing 22 changed files with 264 additions and 94 deletions.
18 changes: 0 additions & 18 deletions .github/workflows/docs.yml

This file was deleted.

13 changes: 3 additions & 10 deletions .github/workflows/linux.yml
@@ -24,18 +24,11 @@ jobs:
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ include/ShkyeraTensor.hpp --std=c++20
g++ include/ShkyeraGrad.hpp --std=c++17
- name: Build examples
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ examples/algebra.cpp --std=c++20
g++ examples/dataset.cpp --std=c++20
- name: Test
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ tests/mainTest.cpp --std=c++20
./a.out
g++ examples/scalars.cpp --std=c++17
g++ examples/xor_nn.cpp --std=c++17
13 changes: 3 additions & 10 deletions .github/workflows/macos.yml
@@ -24,18 +24,11 @@ jobs:
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ include/ShkyeraTensor.hpp --std=c++20
g++ include/ShkyeraGrad.hpp --std=c++17
- name: Build examples
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ examples/algebra.cpp --std=c++20
g++ examples/dataset.cpp --std=c++20
- name: Test
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ tests/mainTest.cpp --std=c++20
./a.out
g++ examples/scalars.cpp --std=c++17
g++ examples/xor_nn.cpp --std=c++17
11 changes: 3 additions & 8 deletions .github/workflows/windows.yml
@@ -24,15 +24,10 @@ jobs:
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ include/ShkyeraTensor.hpp --std=c++20
g++ include/ShkyeraGrad.hpp --std=c++17
- name: Build examples
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ examples/algebra.cpp --std=c++20
g++ examples/dataset.cpp --std=c++20
- name: Test
env:
CXX: ${{matrix.conf.compiler}}
run: |
g++ tests/mainTest.cpp --std=c++20 -o out.exe
g++ examples/scalars.cpp --std=c++17
g++ examples/xor_nn.cpp --std=c++17
17 changes: 16 additions & 1 deletion README.md
@@ -1,3 +1,18 @@
<div align="center">

<h1>Shkyera Tensor</h1>
<h1>Shkyera Grad</h1>
<i>micrograd, but in C++ and with more functionality.</i>

</div>

This is a small header-only library implementing scalar-valued autograd, based on [Andrej Karpathy's micrograd](https://github.com/karpathy/micrograd). It provides a high-level, PyTorch-like API for creating simple neural networks.

## Usage

Make sure your compiler supports C++17. Shkyera Grad is a header-only library, so all you need to do is include it in your project:

```cpp
#include "include/ShkyeraGrad.hpp"
```
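
Once the header is included, computing gradients takes only a few lines. Here is a minimal sketch (mirroring `examples/scalars.cpp` below, and assuming you compile from the project root):

```cpp
#include <iostream>

#include "include/ShkyeraGrad.hpp"

int main() {
    using namespace shkyera;

    // Build a tiny computation graph out of two scalars
    auto a = Value<Type::float32>::create(2);
    auto b = Value<Type::float32>::create(3);
    auto c = (a * b)->tanh();

    // Backpropagate from c, then read the accumulated gradients
    c->backward();
    std::cout << "dc/da = " << a->getGradient() << std::endl;
    std::cout << "dc/db = " << b->getGradient() << std::endl;
}
```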

Check out the [examples](examples/README.md) for a quick start on Shkyera Grad.
31 changes: 31 additions & 0 deletions examples/README.md
@@ -0,0 +1,31 @@
## Shkyera Grad Examples

To compile an example, simply run the following command:

```
g++ --std=c++17 xor_nn.cpp
```

Remember to replace the file name with the example you want to compile :)
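
By default, `g++` writes the binary to `a.out` in the current directory, which you can then run:

```
./a.out
```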

## Scalars

Provides a brief overview of operating on scalars.

## XOR Neural Network

A small neural network that learns the XOR function: given a vector of two values, it predicts a single value according to XOR. Training runs with a learning rate of 0.1 for 100 epochs using the MSE loss.

After running this example, the output should look something like this:

```
Epoch: 1 Loss: 1.57581
Epoch: 2 Loss: 1.46817
(...)
Epoch: 99 Loss: 0.0386917
Epoch: 100 Loss: 0.0371898
Vector(size=2, data={Value(data=0) Value(data=0) }) -> Value(data=0.115728)| True: Value(data=0)
Vector(size=2, data={Value(data=1) Value(data=0) }) -> Value(data=0.93215) | True: Value(data=1)
Vector(size=2, data={Value(data=0) Value(data=1) }) -> Value(data=0.937625)| True: Value(data=1)
Vector(size=2, data={Value(data=1) Value(data=1) }) -> Value(data=0.115728)| True: Value(data=0)
```
20 changes: 20 additions & 0 deletions examples/scalars.cpp
@@ -0,0 +1,20 @@
#include "../include/ShkyeraGrad.hpp"

int main() {
using namespace shkyera;

auto a = Value<Type::float32>::create(1);
auto b = Value<Type::float32>::create(2);
auto c = (a + b)->tanh();
auto d = (a - b + c)->sigmoid();
a = (b * c)->exp();
b = (c / d)->relu();
auto e = b->pow(d);

// Calculating the gradients
e->backward();

for (auto val : {a, b, c, d, e}) {
std::cout << "Value: " << val << " Gradient de/dval: " << val->getGradient() << std::endl;
}
}
47 changes: 47 additions & 0 deletions examples/xor_nn.cpp
@@ -0,0 +1,47 @@
#include "../include/ShkyeraGrad.hpp"

int main() {
using namespace shkyera;

// clang-format off
std::vector<Vec32> xs;
std::vector<Vec32> ys;

// ---------- INPUT ----------- | -------- OUTPUT --------- //
xs.push_back(Vec32::of({0, 0})); ys.push_back(Vec32::of({0}));
xs.push_back(Vec32::of({1, 0})); ys.push_back(Vec32::of({1}));
xs.push_back(Vec32::of({0, 1})); ys.push_back(Vec32::of({1}));
xs.push_back(Vec32::of({1, 1})); ys.push_back(Vec32::of({0}));

auto mlp = SequentialBuilder<Type::float32>::begin()
.add(Layer32::create(2, 15, Activation::relu<Type::float32>))
.add(Layer32::create(15, 5, Activation::relu<Type::float32>))
.add(Layer32::create(5, 1, Activation::sigmoid<Type::float32>))
.build();
// clang-format on

Optimizer32 optimizer = Optimizer<Type::float32>(mlp->parameters(), 0.1);
Loss::Function32 lossFunction = Loss::MSE<Type::float32>;

// ------ TRAINING THE NETWORK ------- //
for (size_t epoch = 0; epoch < 100; epoch++) {
auto epochLoss = Val32::create(0);

optimizer.reset();
for (size_t sample = 0; sample < xs.size(); ++sample) {
Vec32 pred = mlp->forward(xs[sample]);
auto loss = lossFunction(pred, ys[sample]);

epochLoss = epochLoss + loss;
}
optimizer.step();

std::cout << "Epoch: " << epoch + 1 << " Loss: " << epochLoss->getValue() << std::endl;
}

// ------ VERIFYING THAT IT WORKS ------//
for (size_t sample = 0; sample < xs.size(); ++sample) {
Vec32 pred = mlp->forward(xs[sample]);
std::cout << xs[sample] << " -> " << pred[0] << "\t| True: " << ys[sample][0] << std::endl;
}
}
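
The loop above accumulates gradients over all four samples between `optimizer.reset()` and `optimizer.step()`, i.e. full-batch gradient descent (each loss call runs `backward()` internally, as `Loss.hpp` below shows). The `Optimizer` itself is not part of this diff; here is a hypothetical plain-SGD sketch of what `reset()` and `step()` might do — the `setGradient()`/`setValue()` setters are assumptions, not the library's confirmed API:

```cpp
#include <utility>
#include <vector>

#include "../include/ShkyeraGrad.hpp"

// Hypothetical plain-SGD optimizer sketch; Optimizer.hpp is not shown
// in this diff, so the setters below are assumed names.
template <typename T> class SGDSketch {
  public:
    SGDSketch(std::vector<shkyera::ValuePtr<T>> params, T learningRate)
        : _params(std::move(params)), _learningRate(learningRate) {}

    // Zero every accumulated gradient before the next batch
    void reset() {
        for (auto &p : _params)
            p->setGradient(0); // assumed setter
    }

    // Take one gradient-descent step on every parameter
    void step() {
        for (auto &p : _params)
            p->setValue(p->getValue() - _learningRate * p->getGradient()); // assumed setter
    }

  private:
    std::vector<shkyera::ValuePtr<T>> _params;
    T _learningRate;
};
```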
19 changes: 19 additions & 0 deletions include/ShkyeraGrad.hpp
@@ -0,0 +1,19 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

#include "src/core/Type.hpp"
#include "src/core/Value.hpp"
#include "src/core/Vector.hpp"
#include "src/nn/Activation.hpp"
#include "src/nn/Layer.hpp"
#include "src/nn/Loss.hpp"
#include "src/nn/Module.hpp"
#include "src/nn/Neuron.hpp"
#include "src/nn/Optimizer.hpp"
#include "src/nn/Sequential.hpp"
12 changes: 0 additions & 12 deletions include/ShkyeraTensor.hpp

This file was deleted.

7 changes: 7 additions & 0 deletions include/src/core/Type.hpp
@@ -1,3 +1,10 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

namespace shkyera::Type {
7 changes: 7 additions & 0 deletions include/src/core/Utils.hpp
@@ -1,3 +1,10 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

#include <random>
24 changes: 24 additions & 0 deletions include/src/core/Value.hpp
@@ -1,3 +1,10 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

#include <cmath>
@@ -35,6 +42,7 @@ template <typename T> class Value : public std::enable_shared_from_this<Value<T>
static ValuePtr<T> create(T data);

void backward();
T getValue();
T getGradient();

ValuePtr<T> tanh();
@@ -49,13 +57,22 @@ template <typename T> class Value : public std::enable_shared_from_this<Value<T>
template <typename U> friend ValuePtr<U> operator/(ValuePtr<U> a, ValuePtr<U> b);
template <typename U> friend ValuePtr<U> operator-(ValuePtr<U> a);

template <typename U> friend bool operator>(ValuePtr<U> a, ValuePtr<U> b);
template <typename U> friend bool operator>=(ValuePtr<U> a, ValuePtr<U> b);
template <typename U> friend bool operator<(ValuePtr<U> a, ValuePtr<U> b);
template <typename U> friend bool operator<=(ValuePtr<U> a, ValuePtr<U> b);
template <typename U> friend bool operator==(ValuePtr<U> a, ValuePtr<U> b);
template <typename U> friend bool operator!=(ValuePtr<U> a, ValuePtr<U> b);

template <typename U> friend std::ostream &operator<<(std::ostream &os, const ValuePtr<U> &value);
};

template <typename T> Value<T>::Value(T data) : _data(data) {}

template <typename T> ValuePtr<T> Value<T>::create(T data) { return std::shared_ptr<Value<T>>(new Value<T>(data)); }

template <typename T> T Value<T>::getValue() { return _data; }

template <typename T> T Value<T>::getGradient() { return _gradient; }

template <typename T> ValuePtr<T> operator+(ValuePtr<T> a, ValuePtr<T> b) {
@@ -86,6 +103,13 @@ template <typename T> ValuePtr<T> operator/(ValuePtr<T> a, ValuePtr<T> b) { retu

template <typename T> ValuePtr<T> operator-(ValuePtr<T> a) { return Value<T>::create(-1) * a; }

template <typename T> bool operator<(ValuePtr<T> a, ValuePtr<T> b) { return a->getValue() < b->getValue(); }
template <typename T> bool operator<=(ValuePtr<T> a, ValuePtr<T> b) { return a->getValue() <= b->getValue(); }
template <typename T> bool operator>(ValuePtr<T> a, ValuePtr<T> b) { return a->getValue() > b->getValue(); }
template <typename T> bool operator>=(ValuePtr<T> a, ValuePtr<T> b) { return a->getValue() >= b->getValue(); }
template <typename T> bool operator==(ValuePtr<T> a, ValuePtr<T> b) { return a->getValue() == b->getValue(); }
template <typename T> bool operator!=(ValuePtr<T> a, ValuePtr<T> b) { return a->getValue() != b->getValue(); }

template <typename T> ValuePtr<T> Value<T>::tanh() {
auto thisValue = this->shared_from_this();

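A quick sketch of the new accessors and comparison operators in action — note that they compare the wrapped data, not the shared pointers (the input values here are chosen for illustration):

```cpp
#include <iostream>

#include "include/ShkyeraGrad.hpp"

int main() {
    using namespace shkyera;

    auto x = Value<Type::float32>::create(2);
    auto y = Value<Type::float32>::create(3);

    // operator< dereferences the pointers and compares the stored data
    if (x < y)
        std::cout << "x is smaller than y" << std::endl;

    // getValue() and getGradient() expose the raw scalar and its gradient
    auto z = x * y;
    z->backward();
    std::cout << "z = " << z->getValue()          // 6
              << ", dz/dx = " << x->getGradient() // 3
              << std::endl;
}
```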
7 changes: 7 additions & 0 deletions include/src/core/Vector.hpp
@@ -1,3 +1,10 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

#include <exception>
7 changes: 7 additions & 0 deletions include/src/nn/Activation.hpp
@@ -1,3 +1,10 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

#include "../core/Type.hpp"
7 changes: 7 additions & 0 deletions include/src/nn/Layer.hpp
@@ -1,3 +1,10 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

#include "../core/Type.hpp"
31 changes: 31 additions & 0 deletions include/src/nn/Loss.hpp
@@ -1,3 +1,10 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

#include "../core/Value.hpp"
@@ -21,6 +28,30 @@ Function<T> MSE = [](Vector<T> a, Vector<T> b) {
loss = loss + ((a[i] - b[i])->pow(Value<T>::create(2)));
}

if (a.size() > 0)
loss = loss / Value<T>::create(a.size());

loss->backward();

return loss;
};

template <typename T>
Function<T> MAE = [](Vector<T> a, Vector<T> b) {
if (a.size() != b.size()) {
throw std::invalid_argument("Vectors need to be of the same size to compute the MAE loss. Sizes are " +
std::to_string(a.size()) + " and " + std::to_string(b.size()) + ".");
}

ValuePtr<T> loss = Value<T>::create(0);
for (size_t i = 0; i < a.size(); ++i) {
ValuePtr<T> difference = a[i] > b[i] ? a[i] - b[i] : b[i] - a[i];
loss = loss + difference;
}

if (a.size() > 0)
loss = loss / Value<T>::create(a.size());

loss->backward();

return loss;
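Both losses divide by the vector size and call `backward()` themselves, so gradients are ready as soon as the loss is computed — no separate `loss->backward()` call is needed in the training loop. A usage sketch (the input values are illustrative):

```cpp
#include <iostream>

#include "include/ShkyeraGrad.hpp"

int main() {
    using namespace shkyera;

    Vec32 prediction = Vec32::of({0.2f, 0.9f});
    Vec32 target = Vec32::of({0.0f, 1.0f});

    // Each call also backpropagates through the loss internally
    auto mse = Loss::MSE<Type::float32>(prediction, target); // (0.2^2 + 0.1^2) / 2 = 0.025
    auto mae = Loss::MAE<Type::float32>(prediction, target); // (0.2 + 0.1) / 2 = 0.15

    std::cout << "MSE: " << mse->getValue() << " MAE: " << mae->getValue() << std::endl;
}
```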
7 changes: 7 additions & 0 deletions include/src/nn/Module.hpp
@@ -1,3 +1,10 @@
/**
* Copyright © 2023 Franciszek Szewczyk. None of the rights reserved.
* This code is released under the Beerware License. If you find this code useful or you appreciate the work, you are
* encouraged to buy the author a beer in return.
* Contact the author at [email protected] for inquiries and support.
*/

#pragma once

#include "../core/Vector.hpp"