== '''AudioGuardian v1.0 (Beta)''' ==
----
== '''Team Members''' ==
Eric Hertz<br>
Brian Garfinkel<br>
Jeff Wang<br>
Torrey Umland<br>
Yin Lok Ho<br>
Mentor: Jeff Wilhelm
----
== '''Description''' ==
AudioGuardian is a mobile Java application (J2ME) for Java-enabled mobile phones. The user's phone
records surrounding noise and triggers alerts when it identifies noises of interest such as car horns, fire
alarms, gunshots, and sirens. The application was created for people who are hearing impaired and might have
difficulty knowing when they are in a dangerous situation: it picks up the emergency signals that hearing
people take for granted and makes them available to those who cannot hear them. It was created as part of
Project:Possibility and SS12:UCLA 2009.
----
== '''Challenges''' ==
The application uses a discrete Fourier transform (DFT) algorithm to convert a signal in the time domain into a
signal in the frequency domain. The resulting frequency distribution allows the application to identify specific
sounds by matching the measured frequencies against those of a known sample. This was complicated to implement
correctly because every microphone has a different DC offset, so the application provides a calibration function
that lets the user measure the DC offset of their specific microphone. This alleviates problems caused by
differences between hardware platforms. (A minimal sketch of the transform appears at the end of this section.)

Porting the application from standard Java to J2ME was also difficult. J2ME is not well documented on the
Internet, so there was a learning curve in programming a mobile phone. Processing the audio stream into a byte
array caused further trouble: the stream carries a file header ahead of the sound data, and the application must
strip that header before storing the samples in a data structure, otherwise the graph of the Fourier transform
displays incorrectly.

Multithreading posed its own challenges. The application runs in a loop that listens for commands, and the
command thread cannot be interrupted to perform recording tasks. Therefore, the application runs any recording
operation in a separate thread, and getting the separate threads to work in sync with one another took some
effort.
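For reference, here is a minimal sketch of the kind of transform described above, assuming signed 8-bit PCM samples with the file header already stripped and the calibrated DC offset passed in. The class and method names are illustrative, not the actual AudioGuardian code:
<pre>
// Illustrative naive DFT over PCM samples (O(n^2), CLDC 1.1 math only).
// "samples" is raw audio with the file header already removed;
// "dcOffset" is the calibrated base level for this microphone.
public class DftSketch {
    public static double[] magnitudes(byte[] samples, double dcOffset) {
        int n = samples.length;
        double[] mags = new double[n / 2];           // bins up to Nyquist
        for (int k = 0; k < n / 2; k++) {
            double re = 0.0;
            double im = 0.0;
            for (int t = 0; t < n; t++) {
                double x = samples[t] - dcOffset;     // remove the DC bias
                double angle = 2.0 * Math.PI * k * t / n;
                re += x * Math.cos(angle);
                im -= x * Math.sin(angle);
            }
            mags[k] = Math.sqrt(re * re + im * im);   // amplitude of bin k
        }
        return mags;
    }
}
</pre>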
----
== '''Design''' ==
This application is implemented in J2ME, which is a subset of Java specifically designed for mobile phones.
AAnalyzer class<br>
This class analyzes the amplitude of the sound data and returns the maximum amplitude at any point in the
sample. It can also return the number of samples whose level exceeds a given threshold, and the average
amplitude of the entire sample. <br><br>
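A rough sketch of those three statistics, assuming signed, offset-corrected 8-bit samples (the names are illustrative):
<pre>
// Illustrative amplitude statistics over offset-corrected samples.
public class AmplitudeSketch {
    // Largest absolute amplitude anywhere in the sample.
    public static int maxAmplitude(byte[] samples) {
        int max = 0;
        for (int i = 0; i < samples.length; i++) {
            int a = Math.abs(samples[i]);
            if (a > max) max = a;
        }
        return max;
    }

    // Number of samples whose level exceeds the threshold.
    public static int countAbove(byte[] samples, int threshold) {
        int count = 0;
        for (int i = 0; i < samples.length; i++) {
            if (Math.abs(samples[i]) > threshold) count++;
        }
        return count;
    }

    // Average absolute amplitude of the entire sample.
    public static double averageAmplitude(byte[] samples) {
        if (samples.length == 0) return 0.0;
        long sum = 0;
        for (int i = 0; i < samples.length; i++) {
            sum += Math.abs(samples[i]);
        }
        return (double) sum / samples.length;
    }
}
</pre>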
AudioGuardian class<br>
This class is the driver class for the entire application. It implements a CommandListener that runs
continually in a thread and executes commands as the user inputs them. It also creates new threads as needed
for recording operations. It generates the GUI for the entire application and runs through the menus. <br><br>
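The handoff follows the usual MIDP pattern of returning from commandAction() immediately and doing slow work on a worker thread. A minimal sketch under that assumption; the Record command and startRecording() routine are hypothetical stand-ins:
<pre>
import javax.microedition.lcdui.*;

// Illustrative command dispatch: the listener returns at once and
// the long-running recording work happens on its own thread.
public class CommandSketch implements CommandListener {
    private final Command recordCommand =
            new Command("Record", Command.SCREEN, 1);

    public void commandAction(Command c, Displayable d) {
        if (c == recordCommand) {
            new Thread(new Runnable() {
                public void run() {
                    startRecording();   // hypothetical recording routine
                }
            }).start();
        }
    }

    void startRecording() {
        // audio capture and analysis would go here
    }
}
</pre>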
CAboutScreen class<br>
This class displays the credits screen, built on the Form superclass. <br><br>
Calibrator class<br>
This class reads in an arbitrary amount of sound data and averages the amplitudes to compute the base offset
for the microphone. This is useful because microphones vary widely; without it, the graph would have no way to
normalize itself. <br><br>
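A rough sketch of that averaging step, assuming signed 8-bit samples captured while the surroundings are quiet (the names are illustrative):
<pre>
// Illustrative offset calibration: average a stretch of "silent"
// samples to estimate the microphone's DC bias.
public class CalibratorSketch {
    public static double estimateOffset(byte[] silentSamples) {
        if (silentSamples.length == 0) return 0.0;
        long sum = 0;
        for (int i = 0; i < silentSamples.length; i++) {
            sum += silentSamples[i];   // signed sample values
        }
        return (double) sum / silentSamples.length;
    }
}
</pre>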
CCalibrationScreen class<br>
This class displays the screen that the user sees while calibration is in progress. <br><br>
CMatchableSound class<br>
This class holds the attributes of the sounds that the application is trying to match. It has a method that
will perform matching between a given sound file and a recorded stream. <br><br>
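The matching algorithm itself is not spelled out here; one simple scheme consistent with the description would compare the characteristic frequencies of the known sound against the peaks found in the recording, within a tolerance. A purely hypothetical sketch:
<pre>
// Hypothetical matching: the recording matches a known sound when
// each of the sound's characteristic frequencies lies near some
// peak in the recording, within a tolerance in Hz.
public class MatchSketch {
    public static boolean matches(double[] knownPeaksHz,
                                  double[] recordedPeaksHz,
                                  double toleranceHz) {
        for (int i = 0; i < knownPeaksHz.length; i++) {
            boolean found = false;
            for (int j = 0; j < recordedPeaksHz.length; j++) {
                if (Math.abs(knownPeaksHz[i] - recordedPeaksHz[j])
                        <= toleranceHz) {
                    found = true;
                    break;
                }
            }
            if (!found) return false;
        }
        return true;
    }
}
</pre>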
COptionScreen class<br>
This class displays the "Config" screen. It holds all the different settings that the user can select.<br><br>
COptionDb class<br>
This class encapsulates the RecordStore and provides a much nicer interface for working with RecordStores.<br><br>
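For flavor, a minimal sketch of a wrapper like this over the MIDP RecordStore API, keeping all settings in a single record of raw bytes. The class name and single-record layout are assumptions, not the actual COptionDb design:
<pre>
import javax.microedition.rms.*;

// Illustrative RecordStore wrapper: save and load one settings record.
public class OptionDbSketch {
    public static void save(String storeName, byte[] data)
            throws RecordStoreException {
        RecordStore rs = RecordStore.openRecordStore(storeName, true);
        try {
            if (rs.getNumRecords() == 0) {
                rs.addRecord(data, 0, data.length);
            } else {
                // assumes the settings live in the first record (id 1)
                rs.setRecord(1, data, 0, data.length);
            }
        } finally {
            rs.closeRecordStore();
        }
    }

    public static byte[] load(String storeName)
            throws RecordStoreException {
        RecordStore rs = RecordStore.openRecordStore(storeName, true);
        try {
            return rs.getNumRecords() == 0 ? null : rs.getRecord(1);
        } finally {
            rs.closeRecordStore();
        }
    }
}
</pre>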
CStartupScreen class<br>
This class displays the logo on startup. <br><br>
DFT class<br>
This class performs the discrete Fourier transform. It returns an array of DFT pairs (see the DFTpair class
below), each containing a frequency and its corresponding amplitude.<br><br>
DFTpair class (implements Comparable)<br>
This class defines what a DFT pair is: a frequency together with its amplitude.<br><br>
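Note that CLDC does not ship java.lang.Comparable, so the Comparable mentioned here is presumably a project-defined interface. A minimal sketch under that assumption, ordering pairs by amplitude (field names are illustrative):
<pre>
// CLDC has no java.lang.Comparable, so a small project-defined
// interface stands in for it.
interface Comparable {
    int compareTo(Object other);
}

// Illustrative DFT pair: one frequency bin and its amplitude,
// ordered by amplitude so the strongest peaks sort together.
class DftPairSketch implements Comparable {
    public final double frequency;   // Hz
    public final double amplitude;

    public DftPairSketch(double frequency, double amplitude) {
        this.frequency = frequency;
        this.amplitude = amplitude;
    }

    public int compareTo(Object other) {
        double diff = amplitude - ((DftPairSketch) other).amplitude;
        return diff < 0 ? -1 : (diff > 0 ? 1 : 0);
    }
}
</pre>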
Float11 class<br>
This is a math class that provides functions not present in the standard J2ME API. <br><br>
FloorLevel class<br>
This class holds a Double value used to store the base microphone offset.<br><br>
Graph class<br>
This class displays the waveform of the sound signal, along with a bar graph of the discrete Fourier
transform. It also provides functions that let the user find the highest-level frequency and a sorted array of
frequencies from the sound input.<br><br>
Monitor class<br>
This class listens for sound input and stores it in a DFT array, sorting the resulting pairs by
amplitude.<br><br>
QuickSort class<br>
This class defines a QuickSort algorithm for use by the Monitor class.<br><br>
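A compact sketch of such a sort, using the Comparable interface sketched above and Lomuto partitioning (the real QuickSort class may differ):
<pre>
// Illustrative in-place quicksort over comparable items, in the
// spirit of the QuickSort class used by Monitor.
public class QuickSortSketch {
    public static void sort(Comparable[] a, int lo, int hi) {
        if (lo >= hi) return;
        Comparable pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j].compareTo(pivot) < 0) {
                Comparable tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        Comparable tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
        sort(a, lo, i - 1);    // left of the pivot
        sort(a, i + 1, hi);    // right of the pivot
    }
}
</pre>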
----
== '''Schedule of Progress''' ==
Day One: Saturday, January 31, 2009 <br>
8:30 AM: Arrive at Boelter and prepare for the project <br>
9:00 AM - 1:00 PM: Work on design and basic implementation<br>
1:00 PM - 1:30 PM: Eat lunch<br>
1:30 PM - 6:00 PM: Finish basic implementations and work on defining the discrete Fourier transform<br>
6:00 PM - 6:30 PM: Eat dinner<br>
6:30 PM - 11:59 PM: Work on advanced implementations and GUI<br>
<br>
Day Two: Sunday, February 1, 2009<br>
12:00 AM - 3:45 AM: Preliminary integration, work on GUI, extension of the project<br>
3:45 AM: Walk back to dorms to sleep<br>
9:00 AM: Arrive at Boelter to continue work<br>
9:00 AM - 1:00 PM: Work on integrating the code and extending the scope of the project<br>
1:00 PM - 1:30 PM: Eat lunch<br>
1:30 PM - 6:00 PM: Finish integrating code, write documentation, prepare presentation and demo<br>
6:00 PM - 6:30 PM: Eat dinner<br>
6:30 PM - Close: Presentations and demonstrations<br>