# Cheatsheet

This page contains all the info you need to develop your models using Shkyera Grad.

## Types

Almost all of the classes in _Shkyera Grad_ are implemented using templates. To simplify the creation of these objects, we introduced a standard naming scheme for instantiations with floating-point template parameters, i.e.

```cpp
Linear32 = Linear<float>
Optimizer32 = Optimizer<Type::float32>
Loss::MSE64 = Loss::MSE<double>
Adam64 = Adam<Type::float64>

{Class}32 = {Class}<Type::float32> = {Class}<float>
{Class}64 = {Class}<Type::float64> = {Class}<double>
```
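
For instance, the following two declarations are interchangeable (a minimal sketch; the layer sizes are arbitrary):

```cpp
auto explicitLayer = Linear<Type::float32>::create(2, 8); // spelled-out template parameter
auto aliasedLayer = Linear32::create(2, 8);               // same layer via the {Class}32 alias
```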
## Layers

Here's a full list of available layers:

```cpp
auto linear = Linear32::create(inputSize, outputSize);
auto dropout = Dropout32::create(inputSize, outputSize, dropoutRate);
```
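
Layers can be chained into a full network. Below is a minimal sketch, assuming a `SequentialBuilder` helper and activation layers such as `ReLU32` and `Sigmoid32`; these names are not part of the list above, so treat them as assumptions:

```cpp
// Hypothetical network: the builder API and activation layer names are assumptions.
auto network = SequentialBuilder<Type::float32>::begin()
                   .add(Linear32::create(2, 8))
                   .add(ReLU32::create())
                   .add(Linear32::create(8, 1))
                   .add(Sigmoid32::create())
                   .build();
```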

## Optimizers

Here are all the implemented optimizers:

```cpp
auto simple = Optimizer32(network->parameters(), learningRate);
auto sgdWithMomentum = SGD32(network->parameters(), learningRate, /* momentum = */ 0.9);
auto adam = Adam32(network->parameters(), learningRate, /* beta1 = */ 0.9, /* beta2 = */ 0.999, /* epsilon = */ 1e-8);
```

## Loss Functions

Optimization can be performed using these predefined loss functions:

```cpp
auto L1 = Loss::MAE32;
auto L2 = Loss::MSE32;
auto crossEntropy = Loss::CrossEntropy32;
```

## Generic Training Loop

Simply copy-paste this code to quickly train your network:

```cpp
using T = Type::float32; // feel free to change it to float64

auto optimizer = Adam<T>(network->parameters(), 0.05);
auto lossFunction = Loss::MSE<T>;

for (size_t epoch = 0; epoch < 100; epoch++) {
    auto epochLoss = Value<T>::create(0);

    optimizer.reset();
    for (size_t sample = 0; sample < xs.size(); ++sample) {
        Vector<T> pred = network->forward(xs[sample]);
        auto loss = lossFunction(pred, ys[sample]);

        epochLoss = epochLoss + loss;
    }
    optimizer.step();

    auto averageLoss = epochLoss / Value<T>::create(xs.size());
    std::cout << "Epoch: " << epoch + 1 << " Loss: " << averageLoss->getValue() << std::endl;
}
```
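
The loop assumes `xs` and `ys` already contain your training data. A hypothetical setup with XOR-style toy data is sketched below; the `Vector<T>::of` factory is an assumption:

```cpp
// Hypothetical data setup: Vector<T>::of is an assumed factory function.
std::vector<Vector<T>> xs = {Vector<T>::of(0, 0), Vector<T>::of(0, 1),
                             Vector<T>::of(1, 0), Vector<T>::of(1, 1)};
std::vector<Vector<T>> ys = {Vector<T>::of(0), Vector<T>::of(1),
                             Vector<T>::of(1), Vector<T>::of(0)};
```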
