<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>CMPT 733: Big Data Science (Spring 2018) </title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<style>
body {
padding-top: 20px;
}
.container {
margin-top: 20px;
}
.top-buffer { margin-top:40px; }
a {
color: #00BFFF;
}
a:visited {
color: #00BFFF;
}
mark {
background: #FF9;
}
b {
font-weight: 700;
}
</style>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-112163654-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-112163654-1');
</script>
<!-- HTML5 shim, for IE6-8 support of HTML5 elements -->
<!--[if lt IE 9]>
<script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></sc\
ript>
<![endif]-->
</head>
<body>
<div class="container">
<h2 id="cmpt843"><a href = "https://sfu-db.github.io/bigdata-cmpt733" target="_blank">CMPT 733: Big Data Science (Spring 2018)</a></h2>
<h3 id="project-showcase"><b>Project Showcase</b></h3>
<hr>
<div class="container">
<div class="row">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/EcED4MCR-s4" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Prioritizing Aid from Above</strong> [<a href = "https://csil-git1.cs.surrey.sfu.ca/jilliana_bgerspac_brieh/cmpt733finalproject" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/AndersonGH-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/AndersonGH-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Jillian Anderson, Brian Gerspacher, Brie Hoffman</i></small>
<p class="text-muted"> <small>Our project aims to use computer vision and machine learning to automatically assess the damage caused by cyclones in the South Pacific. By training a convolutional neural network to detect and count different kinds of trees present in aerial images, we seek to improve the ability of aid organizations to respond efficiently in the immediate aftermath of a natural disaster. Training data was provided as part of the challenge and we used it to train an object detection system using the Darknet framework and the YOLOv2 CNN architecture. We trained and tuned our models using the GPUs on the computers in ASB10928 throughout March and early April 2018, evaluating our results using the metric of mean average precision (mAP). In the end, our best results achieved a mAP of 0.52 using a trained from scratch model. This model was integrated into a user-facing web application. These results were submitted to Patrick Meier on April 16 as part of WeRobotic’s Open AI Challenge.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/slVJ6vCAlEI" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Vancouver Housing Market Decoder</strong> [<a href = "https://csil-git1.cs.surrey.sfu.ca/junbob/VHMC" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/ZhouBG-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/ZhouBG-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Yuyi Zhou, Junbo Bao, and Yabin Guo</i></small>
<p class="text-muted"> <small>Vancouver Housing Market Decoder (VHMD), a tool that embedded with high-quality machine learning models can predict estimated listing prices for sellers, predict estimated purchasing price for buyers and show everyone future market trends. By using VHMD, users can easily understand the Vancouver housing market and make informed decisions.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/f6sDBNr1lRY" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Socio-Political Analysis in Regions of World (SPAROW)</strong> [<a href = "https://csil-git1.cs.surrey.sfu.ca/namitas/bigdata2-sparow.git" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/SahaBS-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/SahaBS-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Anindita Saha, Arul Bharathi, Namita Shah</i></small>
<p class="text-muted"> <small>The project SPAROW (Socio-Political Analysis in Regions of World) gives a quantitative analysis on how socio-political conditions impacts the HDI (Human Development Index) of a country especially when it faces conflicts. The project also includes the news media impact on social political events and can also help NGOs and resource planners to predict the HDI of a country. The datasets used for this project were collected from various sources- V-DEM, ACLED and UNDP and were integrated to create a master dataset. News articles were collected from New York Times API. The integrated data was used to perform Exploratory Data Analysis (EDA) to gather interesting and relevant insights. The predictive modelling went hand in hand with the EDA and it could be seen that the features that were significant in predicting the HDI actually gave some really interesting insights in EDA. Finally, a web application was developed which consolidates all the analysis and can be useful in getting country specific insights as well as insights about the whole world in relation to the socio-political events that occurred between 2005 and 2015.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/WHZEmDe2IAE" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Visualizing and Forecasting the Cryptocurrency Ecosystem</strong> [<a href = "https://github.com/LinuxIsCool/733Project" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/AndersonLS-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/AndersonLS-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Shawn Anderson, Ka Hang Jacky Lok, Vijayavimohitha Sridha</i></small>
<p class="text-muted"> <small>In this work we collect cryptocurrency data from coinmarketcap.com, GitHub, Twitter, and wikipedia. We use the data to produce visualizations of the cryptocurrency ecosystem and to perform deep learning price forecasting.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/GhtRSfRp3BM" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Detecting Parkinson's Disease from Typing Behaviour</strong> [<a href = "https://gitlab.com/jmmoloney/detecting_parkinsons" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/ImrieM-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/ImrieM-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Kyle Imrie, Jessica Moloney</i></small>
<p class="text-muted"> <small>The project prepared covers the topic of predicting Parkinson’s Disease using data collected from everyday typing activities. Multiple individual models were trained using bagging, in order to reduce variability, and tuned with cross validation using the Scikit-learn library. These results were fed into an ensemble model that aggregated predictions. An F-score of 0.83 and a recall of 1.00 were achieved on our best model.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/zHBH1JyWfR8" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Book Recommendation and Intelligence Engine (B.R.I.E.)</strong> [<a href = "https://github.com/sethu1504/BookWorm" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/AnnnamalaiDK-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/AnnnamalaiDK-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Sethuraman Annnamalai, Lakshayy Dua, Supreet Kaur Takkar</i></small>
<p class="text-muted"> <small>A large portion of the reading community depends on either of word of mouth, bestseller lists or e-commerce websites to find the next book that they wish to read. However, these can be biased and unsatisfactory as they do not take a reader’s personal genre types into consideration or they are just based on finding similar books. There isn’t any dedicated data science product available today that caters to the needs of everyone involved in the publishing industry. Book Recommendation and Intelligence Engine (B.R.I.E.) has been created as a full-fledged interactive application to address all these needs.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/Kuwzw7RqQCU" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Topic Modeling and Sentiment Analysis on Canadian News Articles and Comments</strong> [<a href = "https://csil-git1.cs.surrey.sfu.ca/733_TopicModeling/TopicModeling_and_SentimentAnalysis_on_NewsArticles_and_Comments" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/BhatLS-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/BhatLS-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Chithra Bhat, Ruoting Liang, Tianpei Shen</i></small>
<p class="text-muted"> <small>In summary, our project focused on topic modeling and sentiment analysis on Canadian news articles and comments. We used Latent Dirichlet Allocation and Non-negative Matrix Factorization to build topic models on new articles to observe the ‘Topic Trends’ in Canada over last 5 years and discover the surrounding topics of comments under a given article. Additionally, we cleaned up the comment environment by removing nonconstructive and toxic comments. We also did positive/negative sentiment analysis of comments to get an overview of the public opinions. Finally, we built the web application with interactive graphs to visualization all the learning results.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/sBWaoGf9V_0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Topic, Entity and Sentiment Discerning System</strong> [<a href = "https://github.com/pushsinha/TmNerSa" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/ChenSB-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/ChenSB-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Andy Chen, Pushkar Sinha, Maria Babaeva</i></small>
<p class="text-muted"> <small>Text analysis with NLP concepts (NER, Sentiment analysis) and topic modeling with visualizations of the above approaches and creating a base with intermediate data and code for further much complex visualizations.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/iJgTVziAamg" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Detecting Misstatements in Financial Statements</strong> [<a href = "https://github.com/chiu/accounting-ml-project" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/ChiuSS-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/ChiuSS-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Vincent Chiu, Vishal Shukla, Kanika Sanduja</i></small>
<p class="text-muted"> <small>We created a program using the random forest model that is able to detect misstatements with 82.7% accuracy. Other metrics include 83.6% misstatement precision and 81.7% misstatement recall. This model can be beneficial to financial institutions for three main reasons: First, it is easy to use for personnel without programming backgrounds, such as auditors or investors, making the model highly accessible. Second, it utilizes a wide range of data sources (three diverse datasets) which provides a balanced view of the financial status of an organization, making the model robust. Third, we developed the model on Spark, which can scale up to even larger datasets having many features and records, making the model scalable.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/snaC6vOh3t4" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Hawk: Object Detection in Aerial Imagery</strong> [<a href = "https://csil-git1.cs.surrey.sfu.ca/amkrtchy/Hawk" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/VachherM-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/VachherM-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Mayank Vachher, Anna Mkrtchyan</i></small>
<p class="text-muted"> <small>Disasters in the south pacific are an unfortunate reality, and their consequences can be devastating for the local population. WeRobotics, together with the OpenAerialMap and the World Bank, attempts to signicantly accelerate the analysis of aerial imagery before and after major humanitarian disasters. Their "Open AI Challenge: Aerial Imagery of South Panic Islands" has a goal to develop machine learning classifiers for this task. We propose a classifier based on pre-trained Faster-RCNN deep neural network, the state-of-the-art neural network for the object detection. Our data science pipeline converts dataset provided in the challenge, a single high resolution aerial image that covers roughly 50 square-kilometre area along with the geometric locations and classes of the objects of interests, to suitable training dataset for the proposed classifier. After training, our classifier is able to detect coconut trees with > 91% precision and > 97% recall.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/2yhQRYMP2Gw" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Fall Detection using Wearable Sensor Data</strong> [<a href = "https://github.com/jemd93/BigData2" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/FelhbergMM-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/FelhbergMM-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Gustavo Felhberg, Jorge Marcano, Muhammad R Myhaimin</i></small>
<p class="text-muted"> <small>Project with the objective of detecting falls based on data obtained from sensors on the waist,thighs,ankles, sternum and head of subjects with the main objectives being: (1) Doing data analysis and visualization in order to find interesting insight from the data. (2) Creating Machine Learning Models to detect falls. (3) Using these models to detect falls in real time.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/3GTAyYJ9-OE" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>BOOMERANG: Greater Vancouver House Price Analysis</strong> [<a href = "https://github.com/joanneyoon/boomerang" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/MoonY-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/MoonY-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Hyelim Moon, Joanne Yoon</i></small>
<p class="text-muted"> <small>Boomerang is an all-in-one Greater Vancouver property value assessment program that swifts through past, present and future to deliver the answers to your fingertips like a boomerang. By web scraping, we have collected realistic, up-to-data data. By referring to municipal open data, we obtained historical house values. We then merged data from multiple sources, and used machine learning, statistics, and analytics skills to assess the value of each house and area. Since Surrey and Vancouver has many schools, we compared their houses' relationship with nearby schools and statistically analyzed correlation of these features and property prices. We displayed our findings on the web using Google Cloud Services. It includes a prediction tool to estimate a property's future price and analysis at postal area and feature level.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/mSm0ifBBYD0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>RightFluencer</strong> [<a href = "https://csil-git1.cs.surrey.sfu.ca/mkselvak/rightfluencer/tree/rightfluencer-final" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/GhoshTK-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/GhoshTK-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Arin Ghosh, Karanjit Singh Tiwana, Manoj Karthick Selva Kumar</i></small>
<p class="text-muted"> <small>RightFluencer is a web application and dashboard that allows you to find the right social media influencer for your product and category by analyzing their posts, images and videos. The dashboard analyzes the social media profiles of many influencers and finds the best influencer for your product based on not just their metrics but based on their niche/expertise. RightFluencer provides a search engine where a brand marketer can enter a product and category to find right influencer for that product by analyzing their Instagram, Facebook, Twitter and YouTube profiles. The marketers can then get more detailed information about the influencer and visualize the metrics and insights related to the influencer. RightFluencer also allows influencers to gain deeper insights about their online presence and understand their strongholds and weaknesses.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/7U_KoJzwpSw" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Micro-Ventures -- Predicting the Success of Potential Startups for Micro-Investments</strong> [<a href = "https://github.com/immad-imtiaz/cmpt733" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/IimtiazMR-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/IimtiazMR-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Immad Imtiaz, Ravi Bisla, Shariful Islam</i></small>
<p class="text-muted"> <small>In this project, we use publicly available data to predict the success of a startup in its early ages to help micro-investors to make a more informed decision about their investment. We use logistic regression for classifying the companies that goes beyond series C. We also perform topic modeling on the articles found from techcrunch. We observe that, for most of the categories our model achieve true positive rate from 60% to 80% while the false positive rates remains as low as 1% for most of the cases. While using the topics found from the pool of techcrunch articles as additional features, the performance of the models in terms of the true positive rate was enhanced for the companies that fall into technology category.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/IGpUwIlBNh4" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Topic Modelling based Recommender System (for Zomato)</strong> [<a href = "https://csil-git1.cs.surrey.sfu.ca/keerthana-sneha-siddharth/zomato" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/KanojiyaJB-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/KanojiyaJB-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Siddharth Kanojiya, Keerthana Jayaprakash, Sneha Bezawada</i></small>
<p class="text-muted"> <small>We relied on the intuition that if we can populate the sparse user-item rating matrix by using latent features from reviews rather than just relying on explicit ratings then we can improve the recommendations. So we performed EDA on user reviews to analyze user's preferences in terms of food, service, take-away, delivery and other factors and also studied the restaurants data to find interesting insights about the cuisines, cost, location etc. After studying the relationships between the facts derived from above step, we performed LDA Topic modelling to assist the Collaborative filtering between users, simply put, if two people talk about same topics they could be similar. As the final step, we wanted to deploy this model such that it can be easily integrated into an existing food restaurant portal system without hampering its current user experience, specifically the response time. As a result, we used Spark and Celery to run jobs in background and in parallel. Furthermore, visualizing the datasets from restaurants, reviews and food inspections gave us interesting insights about the food and restaurants people like in Greater Vancouver area, which can be viewed here - http://35.227.63.2:5005/.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/J2JUbnYK2V4" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Detecting Misstated Financial Statements with Deep Learning and Interactive Dashboard</strong> [<a href = "https://github.com/nilichen/cmpt733-project" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/NiT-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/NiT-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Katrina Ni, Leiling Tao</i></small>
<p class="text-muted"> <small>In this project, our goal is to automate the process of pre-screening potential misstated financial statements. We constructed a complete data pipeline to process and clean financial statement data, engineered relevant features for two neural network models (autoencoder and LSTM), and visualized model output interactively. Based on half a million financial statements from 1980-2018, our model was able to reach a recall score of 0.7 for misstated statements, a 40% of improvement compared to a random forest classifier while retaining the same precision score. Written in Plotly Dash, our final product is a web UI of a interactive dashboard incorporating yearly trends of each accounting term and the model output, which is designed for domain experts to understand the results of our model and visually explore potential features of interests.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/LMBcbTyPREo" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Identification of Toxic Comments in On-line Platforms</strong> [<a href = "https://github.com/ehsan1m/CMPT-733-Big-Data-2-Project" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/SaleemSM-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/SaleemSM-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Mehvish Saleem, Ramanpreet Singh, and Ehsan Montazeri</i></small>
<p class="text-muted"> <small>In this project, we used NLP and supervised machine learning techniques to come up with a model for detecting toxic comments. We trained our models on datasets from Wikipedia and SOCC and explored TF-IDF, Doc2Vec, and word embeddings to featurize them. We tried several machine learning models, and found GRU RNNs to perform the best on the validation set. We used the model on the data we scraped from multiple sports, news, and entertainment Facebook pages. Among the comments classified as toxic, we identified those that contain racism, sexism, and homophobia. News pages were found to be most toxic, whereas news and entertainment were similar. The most prevalent type of toxicity in news and sports was racism and in entertainment sexism. With the help of statistical hypothesis tests, this analysis can safely be extended to the whole Facebook data.</small></p> <br/>
</div>
</div>
<div class="row top-buffer">
<div class="col-md-4"> <iframe width="336" height="189" src="https://www.youtube.com/embed/ChHFgBlw09c" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> </div>
<div class="col-md-8"> <strong>Fall Detection Using Wearable Sensor Data</strong> [<a href = "https://github.com/theIps/Fall-Detection-Using-Wearable-Sensor-Data" target="_blank"/>Code</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/reports/SinghKS-report.pdf" target="_blank"/>Report</a>, <a href = "https://github.com/sfu-db/bigdata-cmpt733/blob/cmpt733-2018sp/posters/SinghKS-poster.pdf" target="_blank"/>Poster</a>]<br>
<small><i>Inderpreet Singh, Amandeep Singh Kap</i></small>
<p class="text-muted"> <small>This project aims at detecting fall in real time so that in the event of fall, caretakers can be informed and the impact of fall on older adults can be minimized. Data collected from trials conducted on 10 different specimen in the lab was used to train various Machine Learning classifiers. The project covers the Exploratory Data Analysis of the collected data to find out the optimal time window which has to be fed into the Machine Learning classifiers to get the best classification results. We have also explored the usage of these classifiers on the streaming data which has more practical significance. For future scope, we have considered applying the concept of Active Learning for improving the model in real-time.</small></p> <br/>
</div>
</div>
</div>
<div class="row"><h4> </h4><hr><p class="text-center"> © Jiannan Wang 2019</p></div>
</div>
</body>
</html>