
Question about slide window #3

Open
zhaoyu611 opened this issue Jan 4, 2017 · 1 comment

Comments

@zhaoyu611

Hi, I appreciate your work on HAR. I first read your paper and then googled your DeepConvLSTM. I have forked your repo and want to reproduce your results, but there is something I don't understand. In DeepConvLSTM.ipynb, why do you apply the sliding window only to the test data and not to the training data?
It is located in:

# Sensor data is segmented using a sliding window mechanism
X_test, y_test = opp_sliding_window(X_test, y_test, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP)
print(" ..after sliding window (testing): inputs {0}, targets {1}".format(X_test.shape, y_test.shape))
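For context, I expected to also see a mirrored call for the training split, something like the following (X_train / y_train are just my assumed variable names, not code from the notebook):

X_train, y_train = opp_sliding_window(X_train, y_train, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP)
print(" ..after sliding window (training): inputs {0}, targets {1}".format(X_train.shape, y_train.shape))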

I look forward to your reply, and I will acknowledge your help in my paper.

@sussexwearlab
Collaborator

Hi @zhaoyu611,

The notebook contains only the classification results, so there is no reference to the training data. Instead, the model presented uses the parameters stored in weights/DeepConvLSTM_oppChallenge_gestures.pkl (the network weights). To take a look at the training code you can go to #1. The training data is segmented in exactly the same way as the test data.
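For illustration, here is a minimal self-contained sketch of that segmentation applied identically to both splits. The constant and function names match the notebook, but this NumPy-only body and the dummy arrays are only an approximation for the sake of a runnable example, not the repository's exact implementation:

import numpy as np

SLIDING_WINDOW_LENGTH = 24  # window length in samples (constant name as in the notebook)
SLIDING_WINDOW_STEP = 12    # step between consecutive windows (constant name as in the notebook)

def opp_sliding_window(data_x, data_y, ws, ss):
    # Cut data_x into windows of length ws taken every ss samples and
    # label each window with the class of its last sample.
    starts = range(0, len(data_x) - ws + 1, ss)
    windows = np.stack([data_x[s:s + ws] for s in starts]).astype(np.float32)
    labels = np.asarray([data_y[s + ws - 1] for s in starts]).astype(np.uint8)
    return windows, labels

# Dummy arrays standing in for the preprocessed OPPORTUNITY splits
# (113 sensor channels, 18 gesture classes including the null class).
X_train, y_train = np.random.rand(2000, 113), np.random.randint(0, 18, 2000)
X_test, y_test = np.random.rand(800, 113), np.random.randint(0, 18, 800)

# The training data is segmented exactly as the test data:
X_train, y_train = opp_sliding_window(X_train, y_train, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP)
X_test, y_test = opp_sliding_window(X_test, y_test, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP)
print("train windows:", X_train.shape, "test windows:", X_test.shape)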
