
Outline

  • If, why, where?
  • Classification task
  • Rise of LLMs
  • RNNs
  • Attention
  • BERT, GPT
  • Fw: RoBERTa
  • Demand for explainability
  • Explanation methods (LIME, gradient-based)
  • BoRN in text classification
  • fw
  • Depression, dataset
  • BoRN

Today I’m presenting my thesis work.

LLM

A classification task can be tackled in different ways: machine learning, automated tools, neural networks. Nowadays the state of the art is Large Language Models (GPT, BERT). "This is a cat": we are asking ourselves *why* this is a cat. Since the introduction of more and more complex models, we are worried about explainability. Can we extract patterns? Is training bias dangerous? Dissect our enemy, fight against our superhero *(the role perspective is purely random and intended only for narrative purposes; no bias in the roleplay (maybe))*. For a task such as depression detection, explainability is a must. Depression is not easy to detect, and it manifests mainly in oral or written language. ML requires a lot of data.

why large? too big to train from scratch > transfer learning is the key point > LSTMs process tokens sequentially, embeddings/attention in parallel
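The sequential-vs-parallel contrast above can be sketched with toy numpy code: a recurrent step must loop over positions because each hidden state depends on the previous one, while an embedding projection handles all positions in one matrix multiply. All dimensions and weights here are illustrative placeholders, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                              # sequence length, hidden size
X = rng.normal(size=(T, d))              # token embeddings
W_h = rng.normal(size=(d, d))
W_x = rng.normal(size=(d, d))

# RNN/LSTM-style: each step needs the previous hidden state (no parallelism).
h = np.zeros(d)
for t in range(T):
    h = np.tanh(h @ W_h + X[t] @ W_x)

# Embedding/attention-style projection: all positions computed at once.
H_parallel = np.tanh(X @ W_x)

print(h.shape, H_parallel.shape)
```

The loop cannot be vectorised away, which is exactly why Transformer-style models scale better on modern hardware.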

simplification-based (LIME); gradient-based (saliency maps)
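A minimal LIME-style sketch of the simplification idea: perturb the input by masking tokens, query a black-box scorer on each perturbation, then fit a weighted linear surrogate whose coefficients act as per-token importances. The `black_box` scorer here is a hypothetical stand-in, not a real model from the thesis.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
tokens = ["i", "feel", "hopeless", "today"]

def black_box(mask):
    # Toy scorer: pretend "hopeless" (index 2) drives the depression score.
    return 0.9 * mask[2] + 0.1 * mask.mean()

# Random binary masks: 1 = token kept, 0 = token removed.
masks = rng.integers(0, 2, size=(500, len(tokens)))
scores = np.array([black_box(m) for m in masks])

# Weight perturbations by proximity to the original (all-ones) sentence.
weights = np.exp(-(1 - masks.mean(axis=1)))

surrogate = Ridge(alpha=1.0).fit(masks, scores, sample_weight=weights)
for tok, c in zip(tokens, surrogate.coef_):
    print(f"{tok:10s} {c:+.3f}")
```

The surrogate's largest coefficient lands on "hopeless", mirroring how LIME attributes the black-box decision to individual tokens.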

born

data

dataset, LIWC, BoRN, CLS token
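The CLS-token setup can be sketched as follows: an encoder produces one hidden vector per token, and the vector at position 0 (the [CLS] token) is taken as the whole-document representation fed to the classifier. The hidden states and classifier weights below are random placeholders standing in for a real BERT-style encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden = 6, 16
hidden_states = rng.normal(size=(seq_len, hidden))  # encoder output, one row per token

cls_vector = hidden_states[0]      # position 0 = [CLS] representation
w = rng.normal(size=hidden)        # toy classification head
logit = cls_vector @ w             # document-level score

print(cls_vector.shape, float(logit))
```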

global

Map each text to a lexical distribution (lexical_weights). Then fit a ridge regression (a one-layer perceptron) from the embeddings to lexical_weights, yielding the coefficient matrix P_coeff: lexical_weights ≈ embeddings · P_coeff.
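The global step above can be sketched with scikit-learn's `Ridge`: regress document embeddings onto lexical (LIWC-style) category weights and read off the coefficient matrix. Shapes and data here are synthetic placeholders, not the thesis dataset.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_docs, emb_dim, n_lex = 200, 32, 10
embeddings = rng.normal(size=(n_docs, emb_dim))

# Synthetic ground truth: lexical weights are a noisy linear map of embeddings.
P_true = rng.normal(size=(emb_dim, n_lex))
lexical_weights = embeddings @ P_true + 0.01 * rng.normal(size=(n_docs, n_lex))

ridge = Ridge(alpha=1.0).fit(embeddings, lexical_weights)
P_coeff = ridge.coef_.T            # (emb_dim, n_lex): embeddings @ P_coeff ≈ lexical_weights

print(P_coeff.shape)
```

Each column of `P_coeff` then says which embedding directions drive one lexical category, which is what makes the explanation global.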

local

saliency; attention from the CLS token
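The local step can be sketched as reading off how much attention the [CLS] position pays to each token and using that as a per-token relevance score. Below is a toy single-head attention computation with random query/key vectors; in the real pipeline these would come from the trained encoder.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
tokens = ["[CLS]", "i", "feel", "hopeless"]
d = 16
Q = rng.normal(size=(len(tokens), d))   # query vectors, one per position
K = rng.normal(size=(len(tokens), d))   # key vectors, one per position

# Attention of the [CLS] query (row 0) over all positions.
cls_attn = softmax(Q[0] @ K.T / np.sqrt(d))

for tok, a in zip(tokens, cls_attn):
    print(f"{tok:10s} {a:.3f}")
```

Because the weights sum to 1, they can be read directly as a distribution of relevance over the tokens of the input.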