id: string (length 7-12)
sentence1: string (length 6-1.27k)
sentence2: string (length 6-926)
label: string (4 classes)
train_101400
(2016) and Marvin and Linzen (2018) is to evaluate whether language models assign higher probability to the acceptable sentence in a minimal pair.
In the classifier, the sentence embedding is passed through a sigmoid output layer (optionally preceded by a single hidden layer), giving a scalar that represents the probability of a positive classification (i.e., that the sentence is real or acceptable, depending on the task).
neutral
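A minimal PyTorch sketch of a classifier head like the one described in the second sentence above; the class name, dimensions, and Tanh nonlinearity are illustrative assumptions, not the original model:

import torch
import torch.nn as nn

class AcceptabilityHead(nn.Module):
    """Sentence embedding -> (optional single hidden layer) -> sigmoid scalar."""
    def __init__(self, embed_dim, hidden_dim=None):
        super().__init__()
        layers = []
        if hidden_dim is not None:  # the optional single hidden layer
            layers += [nn.Linear(embed_dim, hidden_dim), nn.Tanh()]
            embed_dim = hidden_dim
        layers.append(nn.Linear(embed_dim, 1))  # scalar logit
        self.net = nn.Sequential(*layers)

    def forward(self, sentence_embedding):
        # The sigmoid maps the logit to P(sentence is real / acceptable)
        return torch.sigmoid(self.net(sentence_embedding))

head = AcceptabilityHead(embed_dim=768, hidden_dim=256)
probs = head(torch.randn(4, 768))  # 4 sentence embeddings -> shape (4, 1)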
train_101401
We hence focused our experiments on cross-lingual and lightly supervised parsing setups.
In this setup, in keeping with the low-resource-language spirit, we do not learn the noise parameter (σ) but rather use fixed noise parameters for the perturbed models (see below).
neutral
train_101402
In such cases it is likely that a diverse list of high-quality solutions will be valuable.
Ideally, such a list should be high-quality and diverse, so as to explore candidate structures that add information beyond the structure returned by the argmax inference problem.
neutral
train_101403
We linearly project the last representation $h_t$ using $F \in \mathbb{R}^{d_{\text{model}} \times d_{\text{model}}}$ for querying $W$.
We extend the Transformer (Vaswani et al., 2017) to support insertion operations, where the generation order is directly captured as relative positions through self-attention, inspired by Shaw et al.
neutral
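A minimal PyTorch sketch of the projection described in the first sentence above; the value of d_model, the shape of W, and the use of nn.Linear to play the role of F are assumptions for illustration only:

import torch
import torch.nn as nn

d_model = 512                                     # assumed model dimension
F_proj = nn.Linear(d_model, d_model, bias=False)  # stands in for F in R^{d_model x d_model}
h_t = torch.randn(1, d_model)                     # last hidden representation h_t
W = torch.randn(10, d_model)                      # assumed queried matrix W (10 candidate slots)
query = F_proj(h_t)                               # the linear projection F h_t
scores = query @ W.T                              # attention-style scores of the query against W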
train_101404
Self-Attention: One of the major challenges preventing the vanilla Transformer from generating sequences in arbitrary orders is that absolute-position-based positional encodings are inefficient (as mentioned in Section 3.2), in that absolute positions change during decoding, invalidating the previous hidden states.
The SYN orders were generated according to the dependency parse obtained with the spaCy dependency parser (Honnibal and Montani, 2017), following a parent-to-children, left-to-right order.
neutral
train_101405
We then choose an existing word $y_k$ $(0 \le k \le t)$ from $y_{0:t}$ and insert $y_{t+1}$ to its left or right.
Model Variants: Table 5 shows the results of the ablation studies on the machine translation task.
neutral
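A minimal sketch of the insertion step described in the first sentence of the pair above (train_101405); the function name and example tokens are hypothetical:

def insert_token(sequence, k, new_token, side):
    """Insert new_token to the 'left' or 'right' of the anchor sequence[k]."""
    assert 0 <= k < len(sequence)
    pos = k if side == "left" else k + 1
    return sequence[:pos] + [new_token] + sequence[pos:]

seq = ["<s>", "dog", "barks"]
print(insert_token(seq, 1, "the", "left"))   # ['<s>', 'the', 'dog', 'barks']
print(insert_token(seq, 1, "big", "right"))  # ['<s>', 'dog', 'big', 'barks']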
train_101406
(2005) argued that the definition needed to be made more precise, so as to circumscribe the extent to which "world knowledge" should be allowed to factor into inferences, and to explicitly differentiate between distinct forms of textual inference (e.g., entailment vs. conventional implicature vs. conversational implicature).
Figure 4 shows several examples of sentences for which the annotations exhibit clear bimodal distributions.
neutral
train_101407
The task of RTE/NLI is fundamentally concerned with drawing conclusions about the world on the basis of limited information, but specifically in the setting where both the information and the conclusions are expressed in natural language.
We thus explore whether providing additional context will yield less-divergent human judgments.
neutral
train_101408
Raters label pairs in batches of 20, meaning we have a minimum of 20 ratings per rater.
Whether or not humans' judgments can be summarized by a single aggregate label or value may be a moot question, since state-of-the-art models do not, in practice, predict a single value but rather a distribution over values.
neutral
train_101409
Since the task's introduction, there has been no formal consensus on which of the two approaches offers the better cost-benefit tradeoff: precise (at the risk of being impractical) or organic (at the risk of being ill-defined).
The prevailing assumption across annotation methods is that there is a single "true" inference about h given p that we should train models to predict, and that this label can be approximated by aggregating multiple (possibly noisy) human ratings, as is typical in many other labelling tasks (Snow et al., 2008; Callison-Burch and Dredze, 2010).
neutral
train_101410
Our goal is to build $S_{xy}$ such that its sentences containing $x$ are "highly characteristic" of $x$'s shared meaning with $y$, and vice versa.
Our primary question is whether automatically extracted PSTS sentences for a paraphrase pair truly reflect the paraphrase meaning.
neutral
train_101411
Our method for extracting sentences for PSTS is inspired by bilingual pivoting (Bannard and Callison-Burch, 2005), which discovers same-language paraphrases by "pivoting" over bilingual parallel corpora.
We use BERT in its configuration for sentence-pair classification tasks, where the input consists of two tokenized sentences ($c_t$ and $c_w$), preceded by a [CLS] token and separated by a [SEP] token.
neutral
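A minimal sketch of the BERT sentence-pair input format described in the second sentence of the pair above, using the Hugging Face tokenizer; the checkpoint name and the two context sentences are placeholder assumptions:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
c_t = "He pivoted the conversation to sports."  # placeholder for the first context sentence c_t
c_w = "She pivoted on her left foot."           # placeholder for the second context sentence c_w
enc = tokenizer(c_t, c_w)                       # builds: [CLS] c_t tokens [SEP] c_w tokens [SEP]
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))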