Commit f315fea: README update

fszewczyk committed Nov 8, 2023 (1 parent: 3e87157)
Showing 1 changed file with 30 additions and 23 deletions: README.md
It supports multiple optimizers, such as Adam or SGD, and all the most common activation functions.

## Usage

Make sure your compiler supports C++17. Shkyera Grad is a header-only library, so the only thing you need to do is include it in your project.
Check out our [Get Started Guide](https://fszewczyk.github.io/shkyera-grad/md_docs_tutorials_GetStarted.html) to learn the basics of _Shkyera Engine_.

```cpp
#include "include/ShkyeraGrad.hpp"
```
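
As a quick sanity check that everything is wired up, here is a minimal sketch. It assumes only the pieces visible in the showcase below: `Val32::create`, arithmetic on the value handles, and `getValue()`.

```cpp
#include "include/ShkyeraGrad.hpp"

#include <iostream>

int main() {
    using namespace shkyera;

    // Scalar values are created through handles, as in the XOR example below.
    auto a = Val32::create(2);
    auto b = Val32::create(3);

    auto sum = a + b;                   // handles support arithmetic, cf. `epochLoss + loss`
    auto mean = sum / Val32::create(2); // cf. `epochLoss / Val32::create(xs.size())`

    std::cout << sum->getValue() << " " << mean->getValue() << std::endl; // 5 2.5
}
```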
## Showcase

Here's a small example showcasing a feed-forward network learning the XOR function. Check out the [examples](examples/README.md) folder for more.

```cpp
#include "include/ShkyeraGrad.hpp"
#include "shkyera-grad/include/ShkyeraGrad.hpp"

int main() {
    using namespace shkyera;
    using T = Type::float32;

    std::vector<Vec32> xs;
    std::vector<Vec32> ys;

    xs.push_back(Vec32::of({0, 0})); ys.push_back(Vec32::of({0}));
    xs.push_back(Vec32::of({1, 0})); ys.push_back(Vec32::of({1}));
    xs.push_back(Vec32::of({0, 1})); ys.push_back(Vec32::of({1}));
    xs.push_back(Vec32::of({1, 1})); ys.push_back(Vec32::of({0}));

    auto network = SequentialBuilder<Type::float32>::begin()
                       .add(Linear32::create(2, 15))
                       .add(ReLU32::create())
                       .add(Linear32::create(15, 5))
                       .add(ReLU32::create())
                       .add(Linear32::create(5, 1))
                       .add(Sigmoid32::create())
                       .build();

    auto optimizer = Adam32(network->parameters(), 0.05);
    auto lossFunction = Loss::MSE<T>;

    // ------ TRAINING THE NETWORK ------- //
    for (size_t epoch = 0; epoch < 100; epoch++) { // We train for 100 epochs
        auto epochLoss = Val32::create(0);

        optimizer.reset();                                      // Reset the gradients
        for (size_t sample = 0; sample < xs.size(); ++sample) { // We go through each sample
            Vec32 pred = network->forward(xs[sample]);          // We get some prediction
            auto loss = lossFunction(pred, ys[sample]);         // And calculate its error

            epochLoss = epochLoss + loss;                       // Store the loss for feedback
        }
        optimizer.step();                                       // Update the parameters

        auto averageLoss = epochLoss / Val32::create(xs.size());
        std::cout << "Epoch: " << epoch + 1 << " Loss: " << averageLoss->getValue() << std::endl;
    }

    for (size_t sample = 0; sample < xs.size(); ++sample) { // Go through each example
        Vec32 pred = network->forward(xs[sample]);          // Predict result
        std::cout << xs[sample] << " -> " << pred[0] << "\t| True: " << ys[sample][0] << std::endl;
    }
}
```
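
Since the library supports SGD alongside Adam, switching optimizers should be a one-line change. A sketch under an assumption: the `SGD32` name below is guessed by analogy with `Adam32`, so check the optimizer headers for the actual type.

```cpp
// Assumption: an SGD optimizer named SGD32 mirroring Adam32's interface
// (parameters, learning rate); verify the real name in the library headers.
auto optimizer = SGD32(network->parameters(), 0.1);
```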
