Merge branch 'master' into Capsnet
namish800 committed Dec 5, 2018
2 parents cf109e7 + bd7ecad commit 25ad873
Showing 59 changed files with 4,319 additions and 133 deletions.
21 changes: 11 additions & 10 deletions README.md
@@ -3,6 +3,7 @@
[![Join the chat at https://gitter.im/Cloud-CV/Fabrik](https://badges.gitter.im/Cloud-CV/Fabrik.svg)](https://gitter.im/Cloud-CV/Fabrik?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Build Status](https://travis-ci.org/Cloud-CV/Fabrik.svg?branch=master)](https://travis-ci.org/Cloud-CV/Fabrik)
[![Coverage Status](https://coveralls.io/repos/github/Cloud-CV/Fabrik/badge.svg?branch=master)](https://coveralls.io/github/Cloud-CV/Fabrik?branch=master)
[![Documentation Status](https://readthedocs.org/projects/markdown-guide/badge/?version=latest)](http://fabrik.readthedocs.io/en/latest/)

Fabrik is an online collaborative platform to build, visualize and train deep learning models via a simple drag-and-drop interface. It allows researchers to collectively develop and debug models using a web GUI that supports importing, editing and exporting networks to popular frameworks like Caffe, Keras, and TensorFlow.

@@ -43,7 +44,7 @@ To install Docker for Mac [click here](https://docs.docker.com/docker-for-mac/in
docker-compose up --build
```

-### Setup Authenticaton for Docker Environment
+### Setup Authentication for Docker Environment
1. Go to Github Developer Applications and create a new application. [here](https://github.com/settings/developers)

2. For local deployments, the following should be used in the options:
@@ -126,13 +127,13 @@ To install Docker for Mac [click here](https://docs.docker.com/docker-for-mac/in
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
```

-* Change celery broker url and result backend hostname to ``` localhost ``` in ide/celery_app.py, line 8.
+* Change celery broker URL and result backend hostname to ``` localhost ``` in ide/celery_app.py, line 8.

```
app = Celery('app', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0', include=['ide.tasks'])
```

-5. If you already have Caffe, Keras and Tensorflow installed on your computer, skip this step.
+5. If you already have Caffe, Keras and TensorFlow installed on your computer, skip this step.
* For Linux users
* Install Caffe, Keras and Tensorflow

@@ -166,7 +167,7 @@ To install Docker for Mac [click here](https://docs.docker.com/docker-for-mac/in
pip install -r requirements/dev.txt
```

-* Others:
+* For others:

```
pip install -r requirements/common.txt
@@ -225,7 +226,7 @@ To install Docker for Mac [click here](https://docs.docker.com/docker-for-mac/in

You should now be able to access Fabrik at <http://localhost:8000>.

-### Setup Authenticaton for Virtual Environment
+### Setup Authentication for Virtual Environment
1. Go to Github Developer Applications and create a new application. [here](https://github.com/settings/developers)

2. For local deployments, the following should be used in the options:
@@ -260,12 +261,12 @@ To install Docker for Mac [click here](https://docs.docker.com/docker-for-mac/in

* Add the sites available to the right side, so github is allowed for the current site. This should be `localhost:8000` for local deployment.

-* Copy and paste your ``` Client ID ``` and ``` Secret Key ``` into the apppropriate fields and Save.
+* Copy and paste your ``` Client ID ``` and ``` Secret Key ``` into the appropriate fields and Save.

9. From the django admin home page, go to `Sites` under the `Sites` category and update ``` Domain name ``` to ``` localhost:8000 ```.

Note: For testing, you will only need one authentication backend. However, if you want to try out Google's authentication,
-then, you will need to follow the same steps as above, but switch out the Github for google.
+then, you will need to follow the same steps as above, but switch out the Github for Google.


### Usage
@@ -275,9 +276,9 @@ python manage.py runserver
```

### Example
-* Use `example/tensorflow/GoogleNet.pbtxt` for tensorflow import
-* Use `example/caffe/GoogleNet.prototxt` for caffe import
-* Use `example/keras/vgg16.json` for keras import
+* Use `example/tensorflow/GoogleNet.pbtxt` for TensorFlow import
+* Use `example/caffe/GoogleNet.prototxt` for Caffe import
+* Use `example/keras/vgg16.json` for Keras import

### Tested models

Binary file added docs/graphvis_research/CaffeVis.png
Binary file added docs/graphvis_research/KerasVis.png
Binary file added docs/graphvis_research/PureVis.png
51 changes: 51 additions & 0 deletions docs/graphvis_research/README.md
@@ -0,0 +1,51 @@
# Research about adding support for exporting model graphs from Fabrik
The attached code requires the [common dependencies](../requirements/common.txt), plus the `networkx` and `pydot` Python packages.
## Problem
Currently there is no tool for drawing a Fabrik neural network diagram directly; it has to be done by hand. This research surveys some ways to implement such a feature.
## Observations
During this research I found several approaches, which can be divided into two groups.
### Based on deep learning frameworks
These methods share a common weakness: they cannot draw layers that the framework itself does not support (for example, Keras cannot draw an LRN layer). They can also only be implemented on the backend.

Note that all of these tools can be fed by converting a Fabrik net into a framework model directly, without creating intermediate model files.
#### Keras
Keras has its own utilities, described in its [documentation](https://keras.io/visualization/). They are all based on the [Pydot](https://github.com/pydot/pydot) library, a Python interface to [Graphviz](http://graphviz.org/). One of these utilities is used in `print_keras_model.py`. Below is the VQI model representation drawn by Keras.

![](KerasVis.png)
To get a similar image for this or any other model, run:
```
python print_keras_model.py ../example/keras/<desired_json_model> <desired_image_name>
```
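For finer control, the Keras documentation also describes `model_to_dot`, which returns a pydot graph that can be styled before it is written out. Below is a minimal, untested sketch of that route; it assumes the same Keras/pydot setup as `print_keras_model.py` and uses `example/keras/vgg16.json` purely as an illustration.
```
from keras.models import model_from_json
from keras.utils.vis_utils import model_to_dot

# Load one of Fabrik's example Keras models (illustrative path, adjust as needed).
with open('../example/keras/vgg16.json', 'r') as f:
    model = model_from_json(f.read())

# model_to_dot returns a pydot.Dot, so layout and output format stay configurable.
dot = model_to_dot(model, show_shapes=True, show_layer_names=False, rankdir='LR')
dot.write('KerasVis.png', format='png')
```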
#### Caffe
Caffe has its own script for visualisation, which also uses pydot. Run `python ~/caffe/caffe/python/draw_net.py --help` to see its usage. Below is a visualised AlexNet.

![](CaffeVis.png)
```
python ~/caffe/caffe/python/draw_net.py ../example/caffe/<desired_prototxt_model> <desired_image_name>
```
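The drawing code that `draw_net.py` wraps is also exposed as the `caffe.draw` module, so (assuming pycaffe is importable) it could be called from a backend task instead of shelling out to the script. A rough sketch, using one of Fabrik's example prototxt files for illustration:
```
import caffe.draw
from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Parse a text-format prototxt model definition (example path from this repository).
net = caffe_pb2.NetParameter()
with open('../example/caffe/GoogleNet.prototxt', 'r') as f:
    text_format.Merge(f.read(), net)

# Render the network with pydot/Graphviz, laid out left-to-right.
caffe.draw.draw_net_to_file(net, 'CaffeVis.png', rankdir='LR')
```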
#### TensorFlow
TensorFlow ships TensorBoard for graph visualisation, but I have not yet found a way to use it to produce a static image rather than an interactive page.

The TensorFlow route also cannot be used for recurrent layers because of their awkward representation in `.pbtxt`.
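For completeness, here is a minimal sketch of the interactive route, assuming a TensorFlow 1.x install and one of Fabrik's example `.pbtxt` graphs: parse the text-format `GraphDef` and write an event file that TensorBoard can display (the result is still an interactive page, not an image).
```
import tensorflow as tf
from google.protobuf import text_format

# Parse the text-format GraphDef (example path from this repository).
graph_def = tf.GraphDef()
with open('../example/tensorflow/GoogleNet.pbtxt', 'r') as f:
    text_format.Parse(f.read(), graph_def)

# Import it into the default graph and dump an event file for TensorBoard.
tf.import_graph_def(graph_def, name='')
writer = tf.summary.FileWriter('./tb_logs', tf.get_default_graph())
writer.close()
# Then view it with: tensorboard --logdir ./tb_logs
```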
### Based on Fabrik's frontend
These approaches are aimed mostly at the frontend, and they depend only on Fabrik's own representation of the network.
#### Creating an extension
If we decided to create an extension for Fabrik, we could take the DOM of the graph that is already rendered and convert it to an image. There is a [JS library](https://github.com/tsayen/dom-to-image) for doing exactly this. The resulting image would look like a large screenshot of the Fabrik net.
#### Implementing using JSON representation
If we dig a little deeper inside Fabrik, we find that it stores the neural network in its state as a JS object. A sample net representation obtained this way is in `state_net.json`; it is LeNet MNIST with some layers deleted.

The only remaining step is to draw a graph from this data. There are many ways to do so, including [NN-SVG](https://github.com/zfrenchee/NN-SVG), plus plenty of other [JS libraries](https://stackoverflow.com/questions/7034/graph-visualization-library-in-javascript) and [other tools](https://www.quora.com/What-tools-are-good-for-drawing-neural-network-architecture-diagrams). To keep it simple, I created `draw_graph.py`, which outputs a picture of the neural network annotated with layer types and shapes. It uses [networkx](https://networkx.github.io/) to store the graph and pydot for visualisation, so the result looks like Caffe's and Keras' network diagrams.

![](PureVis.png)
## Conclusion
The framework-based approaches are easy to implement but have many disadvantages, and they cannot be customised (although Caffe's output looks prettier thanks to its colours). The DOM-based approach is slow, non-customisable, and a workaround rather than a real solution. The JSON-representation approach, however, can be fast and can produce whatever output form we want, depending on the library we choose.

## References
- [Keras](https://keras.io/)
- [Caffe](http://caffe.berkeleyvision.org/)
- [TensorFlow](https://www.tensorflow.org/) and [TensorBoard](https://www.tensorflow.org/guide/graph_viz)
- [Pydot](https://pypi.org/project/pydot/) and [Graphviz](https://www.graphviz.org/)
- [DOM-to-image](https://github.com/tsayen/dom-to-image)
- [NN-SVG](https://github.com/zfrenchee/NN-SVG)
- [Graph library list 1](https://stackoverflow.com/questions/7034/graph-visualization-library-in-javascript), [Graph library list 2](https://www.quora.com/What-tools-are-good-for-drawing-neural-network-architecture-diagrams)
- [Networkx](https://networkx.github.io/)
22 changes: 22 additions & 0 deletions docs/graphvis_research/draw_graph.py
@@ -0,0 +1,22 @@
import networkx as nx
import json

# Load the Fabrik network dumped from the frontend state.
with open('state_net.json', 'r') as f:
    network = json.loads(f.read())

# Build readable node labels: "<layer name> <layer type>" plus the output shape.
network_map = {}
for node, params in network.items():
    new_name = (node + ' ' + params['info']['type'] + "\n" +
                str(tuple(params["shape"]["output"])))
    network_map[node] = new_name

# Add a directed edge for every connection between layers.
graph = nx.DiGraph()
for node, params in network.items():
    output_nodes = params['connection']['output']
    for o_node in output_nodes:
        graph.add_edge(network_map[node], network_map[o_node])

# Render the graph left-to-right via pydot/Graphviz.
dotgraph = nx.nx_pydot.to_pydot(graph)
dotgraph.set('rankdir', 'LR')
dotgraph.set('dpi', 300)
dotgraph.write('PureVis.png', format='png')
18 changes: 18 additions & 0 deletions docs/graphvis_research/print_keras_model.py
@@ -0,0 +1,18 @@
from keras.models import model_from_json
from keras.utils import plot_model
import sys

# Get the command line arguments.
try:
    json_file = sys.argv[1]
    output_file = sys.argv[2]
except IndexError:
    print("Usage: python print_keras_model.py <json file name> <image name>")
    exit()

# Rebuild the Keras model from its JSON description.
with open(json_file, 'r') as f:
    loaded_model = model_from_json(f.read())

# Draw the model left-to-right, showing layer shapes but not layer names.
plot_model(loaded_model,
           to_file=output_file,
           rankdir='LR',
           show_shapes=True,
           show_layer_names=False)
1 change: 1 addition & 0 deletions docs/graphvis_research/state_net.json
@@ -0,0 +1 @@
{"l6":{"info":{"phase":null,"type":"InnerProduct","parameters":10500,"class":""},"state":{"top":"566px","class":"","left":"358px"},"shape":{"input":[20,0,0],"output":[500]},"connection":{"input":["l3"],"output":["l7"]},"params":{"bias_filler":["constant",false],"bias_regularizer":["None",false],"kernel_constraint":["None",false],"bias_constraint":["None",false],"activity_regularizer":["None",false],"num_output":[500,false],"weight_filler":["xavier",false],"kernel_regularizer":["None",false],"caffe":[true,false],"use_bias":[true,false]},"props":{"name":"l6"}},"l7":{"info":{"phase":null,"type":"ReLU","parameters":0},"state":{"top":"607px","class":"","left":"358px"},"shape":{"input":[500],"output":[500]},"connection":{"input":["l6"],"output":[]},"params":{"negative_slope":[0,false],"caffe":[true,false],"inplace":[true,false]},"props":{"name":"l7"}},"l2":{"info":{"phase":null,"type":"Convolution","parameters":null},"state":{"top":"242px","class":"","left":"358px"},"shape":{"input":[],"output":[20,0,0]},"connection":{"input":["l0","l1"],"output":["l3"]},"params":{"layer_type":["2D",false],"stride_d":[1,true],"pad_h":[0,false],"kernel_constraint":["None",false],"activity_regularizer":["None",false],"stride_h":[1,false],"pad_d":[0,true],"weight_filler":["xavier",false],"stride_w":[1,false],"dilation_d":[1,true],"use_bias":[true,false],"pad_w":[0,false],"kernel_w":[5,false],"bias_filler":["constant",false],"bias_regularizer":["None",false],"bias_constraint":["None",false],"dilation_w":[1,false],"num_output":[20,false],"kernel_d":["",true],"caffe":[true,false],"dilation_h":[1,false],"kernel_regularizer":["None",false],"kernel_h":[5,false]},"props":{"name":"l2"}},"l3":{"info":{"phase":null,"type":"Pooling","parameters":0},"state":{"top":"323px","class":"","left":"358px"},"shape":{"input":[20,0,0],"output":[20,0,0]},"connection":{"input":["l2"],"output":["l6"]},"params":{"layer_type":["2D",false],"kernel_w":[2,false],"stride_d":[1,true],"pad_h":[0,false],"stride_h":[2,false],"pad_d":[0,true],"padding":["SAME",false],"stride_w":[2,false],"kernel_d":["",true],"caffe":[true,false],"kernel_h":[2,false],"pad_w":[0,false],"pool":["MAX",false]},"props":{"name":"l3"}},"l0":{"info":{"phase":0,"type":"Data","parameters":0,"class":""},"state":{"top":"161px","class":"","left":"358px"},"shape":{"input":[],"output":[]},"connection":{"input":[],"output":["l2"]},"params":{"scale":[0.00390625,false],"mean_value":["",false],"mean_file":["",false],"batch_size":[64,false],"source":["examples/mnist/mnist_train_lmdb",false],"force_color":[false,false],"force_gray":[false,false],"rand_skip":[0,false],"prefetch":[4,false],"mirror":[false,false],"caffe":[true,false],"backend":["LMDB",false],"crop_size":[0,false]},"props":{"name":"l0"}},"l1":{"info":{"phase":1,"type":"Data","parameters":0},"state":{"top":"81px","class":"","left":"358px"},"shape":{"input":[],"output":[]},"connection":{"input":[],"output":["l2"]},"params":{"scale":[0.00390625,false],"mean_value":["",false],"mean_file":["",false],"batch_size":[100,false],"source":["examples/mnist/mnist_test_lmdb",false],"force_color":[false,false],"force_gray":[false,false],"rand_skip":[0,false],"prefetch":[4,false],"mirror":[false,false],"caffe":[true,false],"backend":["LMDB",false],"crop_size":[0,false]},"props":{"name":"l1"}},"l9":{"info":{"phase":1,"type":"Accuracy","parameters":0},"state":{"top":"769px","class":"","left":"458px"},"shape":{"input":[10],"output":[10]},"connection":{"input":[],"output":[]},"params":{"top_k":[1,false],"caffe":[true,false],"axis":[1,false]},"prop
s":{"name":"l9"}}}
1 change: 1 addition & 0 deletions docs/source/tested_models.md
@@ -17,6 +17,7 @@
* DeepYeast [\[Source\]](http://kodu.ut.ee/~leopoldp/2016_DeepYeast/code/caffe_model/)[\[Visualise\]](http://fabrik.cloudcv.org/caffe/load?id=20180102135425bzkzy)
* SpeechNet [\[Source\]](https://github.com/pannous/caffe-speech-recognition)[\[Visualise\]](http://fabrik.cloudcv.org/caffe/load?id=20180102135032ctsho)
* SENet [\[Source\]](https://github.com/hujie-frank/SENet) [\[Visualise\]](http://fabrik.cloudcv.org/caffe/load?id=20180106091323ectck)
* ZFNet [\[Source\]](https://github.com/dandxy89/ImageModels/blob/master/ZFNet.ipynb)

### Detection

36 changes: 36 additions & 0 deletions example/caffe/code_samples/caffe_sample.py
@@ -0,0 +1,36 @@
import subprocess
import sys


# Get the command line arguments
model_file = ''
try:
    model_file = sys.argv[1]
except IndexError:
    print('Usage: python caffe_sample.py PATH_TO_MODEL')
    exit()

solver = [
    'net: "{}"'.format(model_file),
    'test_iter: 200',
    'test_interval: 500',
    'base_lr: 1e-5',
    'lr_policy: "step"',
    'gamma: 0.1',
    'stepsize: 5000',
    'display: 20',
    'max_iter: 450000',
    'momentum: 0.9',
    'weight_decay: 0.0005',
    'snapshot: 2000',
    'snapshot_prefix: "model/caffe_sample"',
    'solver_mode: GPU',
]

# Create solver.prototxt
with open('solver.prototxt', 'w') as file:
    for line in solver:
        file.write(line + '\n')

# Train the model
subprocess.call(['caffe', 'train', '-gpu', '0', '-solver', 'solver.prototxt'])