SkimLit Project: Replacing pretrained token embeddings with custom token embeddings actually improved results! #510
sgkouzias started this conversation in Show and tell
By restructuring the token model as follows:

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv1D, GlobalAveragePooling1D

# text_vectorizer (a TextVectorization layer) and token_embed (a custom
# Embedding layer) are defined earlier in the notebook.
token_inputs = Input(shape=(1,), dtype=tf.string, name='token_inputs')
token_vectors = text_vectorizer(token_inputs)  # raw strings -> integer token ids
token_embeddings = token_embed(token_vectors)  # token ids -> dense embedding vectors
x = Conv1D(64, kernel_size=5, padding='same', activation='relu')(token_embeddings)
token_outputs = GlobalAveragePooling1D()(x)  # average feature maps across the sequence
token_model = tf.keras.Model(inputs=token_inputs, outputs=token_outputs)
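
For reference, text_vectorizer and token_embed are built earlier in the notebook; a minimal sketch of how they might be set up (the vocabulary size, sequence length, embedding dimension and train_sentences list below are illustrative assumptions):

from tensorflow.keras.layers import TextVectorization, Embedding

max_tokens = 68000       # assumed vocabulary size
output_seq_len = 55      # assumed output sequence length
text_vectorizer = TextVectorization(max_tokens=max_tokens,
                                    output_sequence_length=output_seq_len)
text_vectorizer.adapt(train_sentences)  # train_sentences: assumed list of training sentences

token_embed = Embedding(input_dim=max_tokens,
                        output_dim=128,  # assumed embedding dimension
                        name="token_embedding")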
I get the following results:
{'accuracy': 87.16735072156759,
'precision': 0.8721944632776405,
'recall': 0.8716735072156759,
'f1': 0.8692547073478577}
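
Note that accuracy is reported as a percentage while precision, recall and F1 are on a 0-1 scale. The dictionary looks like the output of a small evaluation helper; a sketch of such a helper, assuming scikit-learn and weighted averaging:

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def calculate_results(y_true, y_pred):
    """Return accuracy (as a percentage) plus weighted precision, recall and F1."""
    model_accuracy = accuracy_score(y_true, y_pred) * 100
    model_precision, model_recall, model_f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted")
    return {"accuracy": model_accuracy,
            "precision": model_precision,
            "recall": model_recall,
            "f1": model_f1}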