---
license: mit
language:
- it
datasets:
- squad_it
widget:
- text: Quale libro fu scritto da Alessandro Manzoni?
  context: Alessandro Manzoni pubblicò la prima versione de I Promessi Sposi nel 1827
- text: In quali competizioni gareggia la Ferrari?
  context: La Scuderia Ferrari è una squadra corse italiana di Formula 1 con sede a Maranello
- text: Quale sport è riferito alla Serie A?
  context: Il campionato di Serie A è la massima divisione professionistica del campionato italiano di calcio maschile
model-index:
- name: osiria/minilm-italian-l6-h384-question-answering
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_it
      type: squad_it
    metrics:
    - type: exact-match
      value: 0.6028
      name: Exact Match
    - type: f1
      value: 0.7204
      name: F1
pipeline_tag: question-answering
---

--------------------------------------------------------------------------------------------------

<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">    Task: Question Answering</span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: MiniLM</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>

--------------------------------------------------------------------------------------------------

<h3>Model description</h3>

This is a <b>MiniLMv2</b> <b>[1]</b> model for the <b>Italian</b> language, fine-tuned for <b>Extractive Question Answering</b> on the [SQuAD-IT](https://huggingface.co/datasets/squad_it) dataset <b>[2]</b>.

<h3>Training and Performances</h3>

The model is trained to perform question answering given a context and a question, under the assumption that the context contains the answer to the question. It has been fine-tuned for Extractive Question Answering on the SQuAD-IT dataset for 2 epochs, with a linearly decaying learning rate starting from 3e-5, a maximum sequence length of 384, and a document stride of 128.
<br>The dataset includes 54,159 training instances and 7,609 test instances.
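The maximum sequence length and document stride work together: a context longer than the window is split into overlapping chunks, with consecutive chunks sharing `stride` tokens. A minimal sketch of that windowing logic (plain Python, ignoring the extra room a real tokenizer reserves for the question and special tokens):

```python
def window_spans(n_tokens, max_len=384, stride=128):
    """Split a sequence of n_tokens into overlapping windows.

    Each window holds up to max_len tokens, and consecutive windows
    overlap by `stride` tokens, mirroring the max_length/stride
    behaviour of tokenizers that return overflowing tokens.
    """
    spans, start = [], 0
    step = max_len - stride  # how far each new window advances
    while True:
        end = min(start + max_len, n_tokens)
        spans.append((start, end))
        if end == n_tokens:
            break
        start += step
    return spans

# A 600-token context with the model's settings needs two windows,
# overlapping by 128 tokens:
print(window_spans(600))  # [(0, 384), (256, 600)]
```

The overlap ensures that an answer span falling near a chunk boundary is fully contained in at least one window.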

<b>update: version 2.0</b>

The 2.0 version further improves the performances by exploiting a two-phase fine-tuning strategy: the model is first fine-tuned on the English SQuAD v2 (1 epoch, 20% warmup ratio, and a maximum learning rate of 3e-5), then further fine-tuned on the Italian SQuAD-IT (2 epochs, no warmup, initial learning rate of 3e-5).

In order to maximize the benefits of the multilingual procedure, [L6xH384 mMiniLMv2](https://github.com/microsoft/unilm/tree/master/minilm) is used as the pre-trained model. Once the double fine-tuning is completed, the embedding layer is compressed as in [minilm-l6-h384-italian-cased](https://huggingface.co/osiria/minilm-l6-h384-italian-cased) to obtain a monolingual model size.
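The embedding compression mentioned above amounts to keeping only the vocabulary rows actually used by the target language; the exact procedure is the one in the linked model card. Purely as an illustration (hypothetical token ids, not the real procedure), vocabulary pruning on an embedding matrix can be sketched like this:

```python
import numpy as np

def prune_embeddings(embedding_matrix, kept_token_ids):
    """Keep only the embedding rows for a reduced vocabulary.

    Returns the pruned matrix plus a mapping from old token ids to
    new (contiguous) ids, so the tokenizer can be remapped to match.
    """
    kept_token_ids = sorted(kept_token_ids)
    pruned = embedding_matrix[kept_token_ids]
    old_to_new = {old: new for new, old in enumerate(kept_token_ids)}
    return pruned, old_to_new

# Toy example: a 10-token multilingual vocab reduced to 4 kept tokens
full = np.random.rand(10, 384).astype(np.float32)
pruned, mapping = prune_embeddings(full, [0, 2, 5, 9])
print(pruned.shape)  # (4, 384)
print(mapping[5])    # 2
```

Since the embedding matrix dominates the parameter count of a small multilingual model, this step shrinks the checkpoint without touching the transformer layers.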

The performances on the test set are reported in the following table:

(<b>version 2.0</b> performances)

| EM | F1 |
| ------ | ------ |
| 60.28 | 72.04 |

Testing notebook: https://huggingface.co/osiria/minilm-italian-l6-h384-question-answering/blob/main/osiria_minilm_l6_h384_italian_qa_evaluation.ipynb

<h3>Quick usage</h3>

In order to get the best possible outputs from the model, it is recommended to use the following pipeline:

```python
import re

from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from transformers.pipelines import QuestionAnsweringPipeline

tokenizer = AutoTokenizer.from_pretrained("osiria/minilm-italian-l6-h384-question-answering")
model = AutoModelForQuestionAnswering.from_pretrained("osiria/minilm-italian-l6-h384-question-answering")

class OsiriaQA(QuestionAnsweringPipeline):

    def __init__(self, punctuation=",;.:!?()[\\]{}", **kwargs):
        QuestionAnsweringPipeline.__init__(self, **kwargs)
        # regexes to strip leading/trailing whitespace and punctuation
        # from the extracted answer
        self.post_regex_left = r"^[\s" + punctuation + "]+"
        self.post_regex_right = r"[\s" + punctuation + "]+$"

    def postprocess(self, output, **kwargs):
        output = QuestionAnsweringPipeline.postprocess(self, model_outputs=output, **kwargs)
        # trim the left side and shift the start offset accordingly
        output_length = len(output["answer"])
        output["answer"] = re.sub(self.post_regex_left, "", output["answer"])
        output["start"] = output["start"] + (output_length - len(output["answer"]))
        # trim the right side and shift the end offset accordingly
        output_length = len(output["answer"])
        output["answer"] = re.sub(self.post_regex_right, "", output["answer"])
        output["end"] = output["end"] - (output_length - len(output["answer"]))
        return output

pipeline_qa = OsiriaQA(model=model, tokenizer=tokenizer)
pipeline_qa(context="Alessandro Manzoni è nato a Milano nel 1785",
            question="Dove è nato Manzoni?")

# {'score': 0.9492858052253723, 'start': 28, 'end': 34, 'answer': 'Milano'}
```

You can also try the model online using this web app: https://huggingface.co/spaces/osiria/minilm-l6-h384-italian-question-answering

<h3>References</h3>

[1] https://arxiv.org/abs/2012.15828

[2] https://link.springer.com/chapter/10.1007/978-3-030-03840-3_29

<h3>Limitations</h3>

This model was trained on the English SQuAD v2 and on SQuAD-IT, which is mainly a machine-translated version of the original SQuAD v1.1. This means that the quality of the training set is limited by the machine translation.
Moreover, the model is meant to answer questions under the assumption that the required information is actually contained in the given context (which is the underlying assumption of SQuAD v1.1).
If that assumption is violated, the model will still try to return an answer, which is going to be incorrect.
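One practical mitigation is to treat the pipeline's confidence score as an abstention signal and discard low-confidence answers. The helper below operates on the output dict format shown in the usage example; the threshold is an arbitrary illustration, not a tuned value:

```python
def answer_or_abstain(result, threshold=0.5):
    """Return the answer only when the pipeline is confident enough.

    `result` is a pipeline output dict such as
    {'score': 0.94, 'start': 28, 'end': 34, 'answer': 'Milano'}.
    The threshold is illustrative and should be tuned on held-out
    data that includes unanswerable questions.
    """
    if result["score"] >= threshold:
        return result["answer"]
    return None  # abstain: the context likely lacks the answer

confident = {"score": 0.94, "start": 28, "end": 34, "answer": "Milano"}
doubtful = {"score": 0.12, "start": 0, "end": 4, "answer": "1785"}
print(answer_or_abstain(confident))  # Milano
print(answer_or_abstain(doubtful))   # None
```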

<h3>License</h3>

The model is released under the <b>MIT</b> license.