id: string (length 32 to 33)
x: string (length 41 to 1.75k)
y: string (length 4 to 39)
1baddfeea7d11fc02cc26ff698a601_2
As defined in <cite>[4]</cite> , a trans-dimensional random field model represents the joint probability of the pair (l, x^l) as p(l, x^l; λ) = (n_l/n) · (1/Z_l(λ)) · exp{λ^T f(x^l)}, where n_l/n is the empirical probability of length l, f(x^l) is the feature vector, which is usually defined to be position-independent and length-independent, and Z_l(λ) is the normalization constant of length l. By making explicit the role of length in the model definition, it is clear that the model in (1) is a mixture of random fields on sentences of different lengths (namely on subspaces of different dimensions), and hence will be called a trans-dimensional random field (TRF).
background
1baddfeea7d11fc02cc26ff698a601_3
In the joint SA training algorithm <cite>[4]</cite> , we define another form of mixture distribution as follows: p(l, x^l; λ, ζ) = (n_l/n) · exp{λ^T f(x^l)} / (Z_1(λ) e^{ζ_l}), where ζ = {ζ_1, . . . , ζ_m} with ζ_1 = 0 and ζ_l is the hypothesized value of the log ratio of Z_l(λ) with respect to Z_1(λ), namely log(Z_l(λ)/Z_1(λ)). Z_1(λ) is chosen as the reference value and can be calculated exactly.
extends
1baddfeea7d11fc02cc26ff698a601_4
In order to make use of Hessian information in parameter optimization, we use the online estimated Hessian diagonal elements to rescale the gradients in <cite>[4]</cite> .
uses background
1baddfeea7d11fc02cc26ff698a601_5
Step I: MCMC sampling: Generate a sample set B^(t) with p(l, x^l; λ^(t−1), ζ^(t−1)) as the stationary distribution, using the trans-dimensional mixture sampling method (see Section 3.3 in <cite>[4]</cite> ).
uses background
1baddfeea7d11fc02cc26ff698a601_6
Fig. 1 shows an example of the convergence curves of the SA training algorithm in <cite>[4]</cite> and of the new improved SA.
background
1baddfeea7d11fc02cc26ff698a601_7
The improved SA algorithm (in Section 2.2) is used to train the TRF LMs, in conjunction with the trans-dimensional mixture sampling proposed in Section 3.3 of <cite>[4]</cite> .
uses background
1baddfeea7d11fc02cc26ff698a601_8
The learning rates of λ and ζ are set as suggested in <cite>[4]</cite> , where t_c and t_0 are constants and 0.5 < β_λ, β_ζ < 1.
uses background
1baddfeea7d11fc02cc26ff698a601_9
The class information is also used to accelerate the sampling, and multiple CPU cores are used to parallelize the algorithm, as described in <cite>[4]</cite> .
uses background
1baddfeea7d11fc02cc26ff698a601_10
In this section, speech recognition and 1000-best list rescoring experiments are conducted as configured in <cite>[4]</cite> . The maximum length of TRFs is m = 82, which is equal to the maximum length of the training sentences. The other configurations are: K = 300, β_λ = 0.8, β_ζ = 0.6, t_c = 3000, t_0 = 2000, t_max = 20,000. L2 regularization with constant 4 × 10^−5 is used to avoid over-fitting. 6 CPU cores are used to parallelize the algorithm. The word error rates (WERs) and perplexities (PPLs) on WSJ'92 test set are shown in Tab. 4.
uses
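For reference, the hyperparameters quoted in the entry above, gathered into a plain Python configuration dict; this is only an illustrative sketch, and the key names are our own shorthand, not notation from <cite>[4]</cite>.

```python
# Hyperparameters as listed in the entry above; key names are illustrative.
trf_config = {
    "max_length_m": 82,    # maximum sentence length of the TRFs
    "K": 300,
    "beta_lambda": 0.8,
    "beta_zeta": 0.6,
    "t_c": 3000,
    "t_0": 2000,
    "t_max": 20000,
    "l2_constant": 4e-5,   # L2 regularization constant
    "num_cpu_cores": 6,    # cores used to parallelize the algorithm
}
```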
1baddfeea7d11fc02cc26ff698a601_13
Equally importantly, evaluations in this paper and also in <cite>[4]</cite> have shown that TRF LMs are able to perform as well as NN LMs (either RNN or FNN) on a variety of tasks.
background
1c0d971cf771f351b51661950f4b14_0
Contributions In this work, we alleviate the requirements: (1) We present the first model that is able to induce bilingual word embeddings from non-parallel data without any other readily available translation resources such as pre-given bilingual lexicons; (2) We demonstrate the utility of BWEs induced by this simple yet effective model in the BLI task from comparable Wikipedia data on benchmarking datasets for three language pairs<cite> (Vulić and Moens, 2013b</cite> ).
similarities
1c0d971cf771f351b51661950f4b14_1
Training Data We use comparable Wikipedia data introduced in (Vulić and Moens, 2013a;<cite> Vulić and Moens, 2013b</cite> ) available in three language pairs to induce bilingual word embeddings: (i) a collection of 13,696 Spanish-English Wikipedia article pairs (ES-EN), (ii) a collection of 18,898 Italian-English Wikipedia article pairs (IT-EN), and (iii) a collection of 7,612 Dutch-English Wikipedia article pairs (NL-EN).
similarities
1c0d971cf771f351b51661950f4b14_2
Following prior work (Haghighi et al., 2008; Prochasson and Fung, 2011;<cite> Vulić and Moens, 2013b)</cite> , we retain only nouns that occur at least 5 times in the corpus.
extends differences
1c0d971cf771f351b51661950f4b14_3
The seed lexicon is bootstrapped using the method from (Peirsman and Padó, 2011;<cite> Vulić and Moens, 2013b)</cite> .
similarities uses
1c0d971cf771f351b51661950f4b14_4
All parameters of the baseline BLI models (i.e., topic models and their settings, the number of dimensions K, feature pruning values, window size) are set to their optimal values according to suggestions in prior work (Steyvers and Griffiths, 2007; Vulić and Moens, 2013a;<cite> Vulić and Moens, 2013b</cite>; Kiela and Clark, 2014) .
similarities uses
1c0d971cf771f351b51661950f4b14_5
Due to space constraints, for (much) more details about the baselines we point to the relevant literature (Peirsman and Padó, 2011; Tamura et al., 2012; Vulić and Moens, 2013a;<cite> Vulić and Moens, 2013b)</cite> .
background
1c0d971cf771f351b51661950f4b14_6
Test Data For each language pair, we evaluate on standard 1,000 ground truth one-to-one translation pairs built for the three language pairs (ES/IT/NL-EN) (Vulić and Moens, 2013a;<cite> Vulić and Moens, 2013b)</cite> .
similarities
1c0d971cf771f351b51661950f4b14_7
lation in the other language (EN) according to the ground truth over the total number of ground truth translation pairs (=1000) (Gaussier et al., 2004; Tamura et al., 2012;<cite> Vulić and Moens, 2013b)</cite> .
background
1c0d971cf771f351b51661950f4b14_8
Finally, we may use the knowledge of BWEs obtained by BWESG from document-aligned data to learn bilingual correspondences (e.g., word translation pairs or lists of semantically similar words across languages) which may in turn be used for representation learning from large unaligned multilingual datasets as proposed in (Haghighi et al., 2008; Mikolov et al., 2013b;<cite> Vulić and Moens, 2013b)</cite> .
motivation
1c1b524d2bfe00c62a5a2e1a05ffc7_0
<cite>Vaswani et al. (2017)</cite> propose a new architecture that avoids recurrence and convolution completely.
background
1c1b524d2bfe00c62a5a2e1a05ffc7_1
In <cite>Vaswani et al. (2017)</cite> , the authors introduce the Transformer network, a novel architecture that avoids the recurrence equation and maps the input sequences into hidden states solely using attention.
background
1c1b524d2bfe00c62a5a2e1a05ffc7_2
In <cite>Vaswani et al. (2017)</cite> , the authors introduce the Transformer network, a novel architecture that avoids the recurrence equation and maps the input sequences into hidden states solely using attention. We propose a variant of the Transformer network which we call Weighted Transformer that uses self-attention branches in lieu of the multi-head attention.
extends
1c1b524d2bfe00c62a5a2e1a05ffc7_3
The Transformer network<cite> (Vaswani et al., 2017)</cite> avoids the recurrence completely and uses only self-attention.
background
1c1b524d2bfe00c62a5a2e1a05ffc7_4
The Transformer network<cite> (Vaswani et al., 2017)</cite> avoids the recurrence completely and uses only self-attention. We propose a modified Transformer network wherein the multi-head attention layer is replaced by a branched self-attention layer.
extends
1c1b524d2bfe00c62a5a2e1a05ffc7_5
<cite>Vaswani et al. (2017)</cite> proportionally reduce d_k = d_v = d_model/h (with h the number of attention heads) so that the computational load of the multi-head attention is the same as simple self-attention.
background
1c1b524d2bfe00c62a5a2e1a05ffc7_6
For the sake of brevity, we refer the reader to <cite>Vaswani et al. (2017)</cite> for additional details regarding the architecture.
background
1c1b524d2bfe00c62a5a2e1a05ffc7_7
<cite>Vaswani et al. (2017)</cite> state three reasons for the preference: (a) computational complexity of each layer, (b) concurrency, and (c) path length between long-range dependencies.
background
1c1b524d2bfe00c62a5a2e1a05ffc7_8
In Equations (3) and (4), we described the attention layer proposed in <cite>Vaswani et al. (2017)</cite> comprising the multi-head attention sub-layer and an FFN sub-layer.
uses
1c1b524d2bfe00c62a5a2e1a05ffc7_9
As in <cite>Vaswani et al. (2017)</cite> , we used the Adam optimizer (Kingma & Ba, 2014) with (β_1, β_2) = (0.9, 0.98) and ε = 10^−9.
similarities
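A minimal sketch of the optimizer setting quoted in the entry above, assuming a PyTorch-style API; the placeholder module and the learning rate are our own illustrative assumptions and are not taken from the cited work.

```python
import torch

# Placeholder module standing in for the actual model (assumption).
model = torch.nn.Linear(512, 512)

# Adam with the (beta1, beta2) and epsilon values quoted above;
# lr here is an arbitrary placeholder, not from the cited paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.98), eps=1e-9)
```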
1c1b524d2bfe00c62a5a2e1a05ffc7_10
Further, we do not use any averaging strategies employed in <cite>Vaswani et al. (2017)</cite> and simply return the final model for testing purposes.
differences
1c1b524d2bfe00c62a5a2e1a05ffc7_13
Our proposed model outperforms the state-of-the-art models including the Transformer<cite> (Vaswani et al., 2017)</cite> .
differences
1c86f563ababf5ec3c67cbf259252b_0
In recent years, a number of successful approaches have been proposed for both extractive (Nallapati et al., 2017; Narayan et al., 2018) and abstractive (See et al., 2017;<cite> Chen and Bansal, 2018)</cite> summarization paradigms.
background
1c86f563ababf5ec3c67cbf259252b_1
State-of-the-art approaches are typically trained to generate summaries either in a fully end-to-end fashion (See et al., 2017) , processing the entire article at once; or hierarchically, first extracting content and then paraphrasing it sentence-by-sentence <cite>(Chen and Bansal, 2018)</cite> .
background
1c86f563ababf5ec3c67cbf259252b_2
Our approach is similar to <cite>(Chen and Bansal, 2018)</cite> , except that they use parallel data to train their extractors and abstractors.
similarities differences
1c86f563ababf5ec3c67cbf259252b_3
We follow the preprocessing pipeline of <cite>(Chen and Bansal, 2018)</cite> , splitting the dataset into 287k/11k/11k pairs for training/validation/testing.
uses
1c86f563ababf5ec3c67cbf259252b_4
We pick this model size to be comparable to recent work (See et al., 2017;<cite> Chen and Bansal, 2018)</cite> .
similarities uses
1c86f563ababf5ec3c67cbf259252b_5
EXT-ABS is the hierarchical model from <cite>(Chen and Bansal, 2018)</cite> , consisting of a supervised LSTM extractor and separate abstractor, both of which are individually trained on the CNN/DM dataset by aligning summary to article sentences. Our work best resembles EXT-ABS except that we do not rely on any parallel data.
uses similarities differences
1c86f563ababf5ec3c67cbf259252b_6
EXT-ABS is the hierarchical model from <cite>(Chen and Bansal, 2018)</cite> , consisting of a supervised LSTM extractor and separate abstractor, both of which are individually trained on the CNN/DM dataset by aligning summary to article sentences.
background
1dd3adcb79c8bc4b5187b85d836ceb_0
This approach has been shown to be accurate, relatively efficient, and robust using both generative and discriminative models (Roark, 2001; Roark, 2004; <cite>Collins and Roark, 2004</cite>) .
uses
1dd3adcb79c8bc4b5187b85d836ceb_1
Beam-search parsing using an unnormalized discriminative model, as in <cite>Collins and Roark (2004)</cite> , requires a slightly different search strategy than the original generative model described in Roark (2001; 2004) .
uses
1dd3adcb79c8bc4b5187b85d836ceb_2
A generative parsing model can be used on its own, and it was shown in <cite>Collins and Roark (2004)</cite> that a discriminative parsing model can be used on its own.
background
1dd3adcb79c8bc4b5187b85d836ceb_3
Beam-search parsing using an unnormalized discriminative model, as in <cite>Collins and Roark (2004)</cite> , requires a slightly different search strategy than the original generative model described in Roark (2001; 2004) .
uses
1dd3adcb79c8bc4b5187b85d836ceb_4
A generative parsing model can be used on its own, and it was shown in <cite>Collins and Roark (2004)</cite> that a discriminative parsing model can be used on its own.
background
1deb67be8226867fe6b9514cdecdec_0
Recent successes in statistical syntactic parsing based on supervised learning techniques trained on a large corpus of syntactic trees (Collins, 1999; Charniak, 2000;<cite> Henderson, 2003)</cite> have brought forth the hope that the same approaches could be applied to the more ambitious goal of recovering the propositional content and the frame semantics of a sentence.
background
1deb67be8226867fe6b9514cdecdec_1
We present work to test the hypothesis that a current statistical parser <cite>(Henderson, 2003)</cite> can output richer information robustly, that is without any significant degradation of the parser's accuracy on the original parsing task, by explicitly modelling semantic role labels as the interface between syntax and semantics.
extends uses
1deb67be8226867fe6b9514cdecdec_2
To achieve the complex task of assigning semantic role labels while parsing, we use a family of statistical parsers, the Simple Synchrony Network (SSN) parsers <cite>(Henderson, 2003)</cite> , which do not make any explicit independence assumptions, and are therefore likely to adapt without much modification to the current problem.
motivation uses
1deb67be8226867fe6b9514cdecdec_3
<cite>(Henderson, 2003)</cite> exploits this bias by directly inputting information which is considered relevant at a given step to the history representation of the constituent on the top of the stack before that step.
background
1deb67be8226867fe6b9514cdecdec_4
However, the recency preference exhibited by recursively defined neural networks biases learning towards information which flows through fewer history representations. <cite>(Henderson, 2003)</cite> exploits this bias by directly inputting information which is considered relevant at a given step to the history representation of the constituent on the top of the stack before that step.
motivation
1deb67be8226867fe6b9514cdecdec_5
According to the original SSN model in <cite>(Henderson, 2003)</cite> , only the information carried over by the leftmost child and the most recent child of a constituent directly flows to that constituent.
background
1deb67be8226867fe6b9514cdecdec_7
The third line of Table 1 gives the performance on the simpler PTB parsing task of the original SSN parser <cite>(Henderson, 2003)</cite> , which was trained on the PTB data sets, contrary to our SSN model trained on the PropBank data sets. These results clearly indicate that our model can perform the PTB parsing task at comparable levels of performance. (Such pairs consist of a tag and a word token.)
similarities
1ebbddc6c6740aea71ade2ed915de4_0
To avoid the complexities of asynchronous parallel training with a shared parameter server (Dean et al., 2012) , the architectures in Fig. 2 and Fig. 3 can instead be trained using the alternating training approach proposed in <cite>(Luong et al., 2016)</cite> , where each task is optimized for a fixed number of parameter updates (or mini-batches) before switching to the next task (which is a different language pair).
background
1ebbddc6c6740aea71ade2ed915de4_1
The neural translation attention mechanism (Bahdanau, Cho & Bengio, 2014) has been shown to be highly beneficial for bilingual neural translation of long sentences, but it is not compatible with the multi-task multilingual translation models (Dong et al., 2015;<cite> Luong et al., 2016)</cite> described in the previous Section, nor with the character-level translation models (Barzdins & Gosko, 2016) described in this Section.
background
1f48420f55771e243c73babf54632f_0
This tutorial introduces the advances in deep Bayesian learning with abundant applications for natural language understanding, ranging from speech recognition (Saon and Chien, 2012; Chan et al., 2016) to document summarization (Chang and Chien, 2009), text classification (Blei et al., 2003; Zhang et al., 2015), text segmentation (Chien and Chueh, 2012), information extraction (Narasimhan et al., 2016), image caption generation (Vinyals et al., 2015; Xu et al., 2015), sentence generation (<cite>Li et al., 2016b</cite>), dialogue control (Zhao and Eskenazi, 2016; Li et al., 2016a), sentiment classification, recommendation systems, question answering (Sukhbaatar et al., 2015) and machine translation, to name a few.
uses
201aa2a740b5d45f273ee298595f5a_0
Recently, multiple studies have focussed on providing a fine-grained analysis of the nature of concrete vs. abstract words from a corpus-based perspective (Bhaskar et al., 2017; Frassinelli et al., 2017; <cite>Naumann et al., 2018)</cite> .
background
201aa2a740b5d45f273ee298595f5a_1
Specifically,<cite> Naumann et al. (2018)</cite> performed their analyses across parts-of-speech by comparing the behaviour of nouns, verbs and adjectives in large-scale corpora.
background
201aa2a740b5d45f273ee298595f5a_3
Moreover, as discussed in previous studies by<cite> Naumann et al. (2018)</cite> and Pollock (2018) , mid-range concreteness scores indicate words that are difficult to categorise unambiguously regarding their concreteness.
background
201aa2a740b5d45f273ee298595f5a_4
This result is perfectly in line with the more general analysis by<cite> Naumann et al. (2018)</cite> .
similarities
201aa2a740b5d45f273ee298595f5a_5
The general pattern already described in<cite> Naumann et al. (2018)</cite> is confirmed by our quantitative analysis: overall, concrete verbs predominantly subcategorise concrete nouns as subjects and direct objects, while abstract verbs predominantly subcategorise abstract nouns as subjects and direct objects.
similarities
211b889125682f2596f708be1e83b9_0
Previous attempts to annotate QA-SRL initially involved trained annotators (He et al., 2015) but later resorted to crowdsourcing (<cite>Fitzgerald et al., 2018</cite>) to achieve scalability.
background
211b889125682f2596f708be1e83b9_1
Previous attempts to annotate QA-SRL initially involved trained annotators (He et al., 2015) but later resorted to crowdsourcing (<cite>Fitzgerald et al., 2018</cite>) to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As <cite>Fitzgerald et al. (2018)</cite> acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers.
motivation
211b889125682f2596f708be1e83b9_2
As <cite>Fitzgerald et al. (2018)</cite> acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers.
motivation
211b889125682f2596f708be1e83b9_3
Previous attempts to annotate QA-SRL initially involved trained annotators (He et al., 2015) but later resorted to crowdsourcing (<cite>Fitzgerald et al., 2018</cite>) to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL.
motivation
211b889125682f2596f708be1e83b9_4
Previous attempts to annotate QA-SRL initially involved trained annotators (He et al., 2015) but later resorted to crowdsourcing (<cite>Fitzgerald et al., 2018</cite>) to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers.
motivation
211b889125682f2596f708be1e83b9_5
To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing <cite>parser</cite> (<cite>Fitzgerald et al., 2018</cite>) as a baseline.
uses
211b889125682f2596f708be1e83b9_6
In subsequent work, <cite>Fitzgerald et al. (2018)</cite> constructed a large-scale corpus and used it to train a <cite>parser</cite>.
background
211b889125682f2596f708be1e83b9_7
In subsequent work, <cite>Fitzgerald et al. (2018)</cite> constructed a large-scale corpus and used it to train a <cite>parser</cite>. <cite>They</cite> crowdsourced 133K verbs with 2.0 QA pairs per verb on average.
background
211b889125682f2596f708be1e83b9_8
Corpora The original 2015 QA-SRL dataset (He et al., 2015) was annotated by non-expert workers after completing a brief training procedure. In subsequent work, <cite>Fitzgerald et al. (2018)</cite> constructed a large-scale corpus and used it to train a <cite>parser</cite>. As both 2015 and <cite>2018 datasets</cite> use a single question generator, both struggle with maintaining coverage.
motivation
211b889125682f2596f708be1e83b9_9
Corpora The original 2015 QA-SRL dataset (He et al., 2015) was annotated by non-expert workers after completing a brief training procedure. In subsequent work, <cite>Fitzgerald et al. (2018)</cite> constructed a large-scale corpus and used it to train a <cite>parser</cite>. As both 2015 and <cite>2018 datasets</cite> use a single question generator, both struggle with maintaining coverage. Also noteworthy is that while traditional SRL annotations contain a single authoritative and nonredundant annotation, the <cite>2018 dataset</cite> provides the raw annotations of all annotators. We found that these characteristics of the dataset impede its utility for future development of parsers.
motivation
211b889125682f2596f708be1e83b9_10
In subsequent work, <cite>Fitzgerald et al. (2018)</cite> constructed a large-scale corpus and used it to train a <cite>parser</cite>. Also noteworthy is that while traditional SRL annotations contain a single authoritative and nonredundant annotation, the <cite>2018 dataset</cite> provides the raw annotations of all annotators.
motivation
211b889125682f2596f708be1e83b9_11
Annotation We adopt the annotation machinery of (<cite>Fitzgerald et al., 2018</cite>) implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments.
uses
211b889125682f2596f708be1e83b9_12
QA generating annotators are paid the same as in <cite>Fitzgerald et al. (2018)</cite> , while the consolidator is rewarded 5¢ per verb and 3¢ per question.
uses
211b889125682f2596f708be1e83b9_13
Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics. Unlabeled Argument Detection (UA) Inspired by the method presented in (<cite>Fitzgerald et al., 2018</cite>) , arguments are matched using a span matching criterion of intersection over union ≥ 0.5 .
extends
211b889125682f2596f708be1e83b9_14
Unlabeled Argument Detection (UA) Inspired by the method presented in (<cite>Fitzgerald et al., 2018</cite>) , arguments are matched using a span matching criterion of intersection over union ≥ 0.5 .
extends background
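A small illustrative sketch, in Python, of a span matching criterion of the kind mentioned in the entries above (intersection over union ≥ 0.5); the span representation and function names are our own assumptions, not the cited implementation.

```python
def span_iou(a, b):
    """IoU of two token spans given as (start, end) pairs with exclusive end."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def spans_match(a, b, threshold=0.5):
    """Two argument spans count as matched when their IoU reaches the threshold."""
    return span_iou(a, b) >= threshold

# Example: spans (0, 3) and (1, 4) share 2 of the 4 covered tokens, so IoU = 0.5.
assert spans_match((0, 3), (1, 4))
```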
211b889125682f2596f708be1e83b9_15
All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in (<cite>Fitzgerald et al., 2018</cite>) .
uses
211b889125682f2596f708be1e83b9_16
As we will see, our evaluation heuristics, adapted from those in <cite>Fitzgerald et al. (2018)</cite> , significantly underestimate agreement between annotations, hence reflecting performance lower bounds.
uses
211b889125682f2596f708be1e83b9_17
We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or <cite>the parser</cite> in (<cite>Fitzgerald et al., 2018</cite>) , which predicts argument spans independently of each other.
uses similarities
211b889125682f2596f708be1e83b9_18
To illustrate the effectiveness of our new gold standard, we use its Wikinews development set to evaluate the currently available <cite>parser</cite> from (<cite>Fitzgerald et al., 2018</cite>) . While <cite>the parser</cite> correctly predicts 82% of non-implied roles, it skips half of the implied ones.
motivation
211b889125682f2596f708be1e83b9_19
To illustrate the effectiveness of our new gold standard, we use its Wikinews development set to evaluate the currently available <cite>parser</cite> from (<cite>Fitzgerald et al., 2018</cite>) . For each predicate, <cite>the parser</cite> classifies every span for being an argument, independently of the other spans.
uses background
211b889125682f2596f708be1e83b9_20
To illustrate the effectiveness of our new gold standard, we use its Wikinews development set to evaluate the currently available <cite>parser</cite> from (<cite>Fitzgerald et al., 2018</cite>) . As expected, <cite>the parser</cite>'s recall against our gold is substantially lower than the 84.2 recall reported in (<cite>Fitzgerald et al., 2018</cite>) against Dense, due to the limited recall of Dense relative to our gold set.
uses background differences
211b889125682f2596f708be1e83b9_21
To illustrate the effectiveness of our new gold standard, we use its Wikinews development set to evaluate the currently available <cite>parser</cite> from (<cite>Fitzgerald et al., 2018</cite>) . Based on this inspection, <cite>the parser</cite> completely misses 23% of the 154 roles present in the gold-data, out of which, 17% are implied.
motivation
211b889125682f2596f708be1e83b9_22
As mentioned in the paper body, the <cite>Fitzgerald et al. parser</cite> generates redundant role questions and answers.
differences background motivation
22253d7b7cd43697b99909e09e7ebb_0
In <cite>[17]</cite> a dynamical model of language change was proposed, based on a spin glass model for syntactic parameters and language interactions.
background
22253d7b7cd43697b99909e09e7ebb_1
In the case of syntactic parameters behaving as independent variables, in the low temperature regime (see <cite>[17]</cite> for a discussion of the interpretation of the temperature parameter in this model) the dynamics converges rapidly towards an equilibrium state where all the spin variables corresponding to a given syntactic feature for the various languages align to the value most prevalent in the initial configuration.
background
22253d7b7cd43697b99909e09e7ebb_2
Using syntactic data from [6] , [7] , which record explicit entailment relations between different parameters, it was shown in <cite>[17]</cite> , for small graph examples, that in the presence of relations the dynamics settles on equilibrium states that are not necessarily given by completely aligned spins.
background
22253d7b7cd43697b99909e09e7ebb_3
When we interpret the dynamics of the model considered in <cite>[17]</cite> in terms of codes and the space of code parameters, the initial datum of the set of languages L at the vertices of the graph, with its given list of syntactic binary variables, determines a code C L .
uses
22253d7b7cd43697b99909e09e7ebb_4
In the presence of entailment relations between different syntactic variables, it was shown in <cite>[17]</cite> that the Hamiltonian should be modified by a term that introduces the relations as a Lagrange multiplier.
background
22253d7b7cd43697b99909e09e7ebb_5
Given an initial condition x_0 ∈ {0, 1}^{n_L} and the datum (J_e)_{e∈E(G_L)} of the strengths of the interaction energies along the edges, the same method used in <cite>[17]</cite> , based on the standard Metropolis-Hastings algorithm, can be used to study the dynamics in this setting, with a similar behavior.
uses background
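A generic sketch of one standard Metropolis-Hastings spin update of the kind referred to in the entry above; the energy function, the binary spin encoding, and all names are illustrative assumptions, not the specific Hamiltonian of <cite>[17]</cite>.

```python
import math
import random

def metropolis_step(x, energy, beta):
    """One standard Metropolis-Hastings update: flip a random binary spin and
    accept the flip with probability min(1, exp(-beta * delta_energy))."""
    site = random.choice(list(x.keys()))
    proposal = dict(x)
    proposal[site] = 1 - proposal[site]   # flip the spin at the chosen site
    delta = energy(proposal) - energy(x)  # change in the model's energy
    if delta <= 0 or random.random() < math.exp(-beta * delta):
        return proposal                   # accept the proposed flip
    return x                              # reject and keep the current state
```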
22253d7b7cd43697b99909e09e7ebb_6
Consider the very small example, with just two entailed syntactic variables and four languages, discussed in <cite>[17]</cite> , where the chosen languages are L = {ℓ_1, ℓ_2, ℓ_3, ℓ_4} = {English, Welsh, Russian, Bulgarian} and the two syntactic parameters are {x_1, x_2} = {StrongDeixis, StrongAnaphoricity}. Since we have an entailment relation, the possible values of the variables x_i are now ternary, x_i(ℓ) ∈ {0, −1, +1}, that is, we consider here codes C ⊂ F_3^n.
background
22253d7b7cd43697b99909e09e7ebb_7
One can see already in a very simple example, and using the dynamical system in the form described in <cite>[17]</cite> , that the dynamics in the space of code parameters now does not need to move towards the δ = 0 line.
background
22253d7b7cd43697b99909e09e7ebb_8
We consider in this case the same dynamical system used in <cite>[17]</cite> to model the case with entailment, which is a modification of the Ising model to a coupling of an Ising and a Potts model with q = 3 at the vertices of the graph.
uses
22253d7b7cd43697b99909e09e7ebb_9
In the cases with high temperature and either high or low entailment energy, it is shown in <cite>[17]</cite> that one can have different equilibrium states for the high entailment energy case and for the low entailment energy case.
background
22253d7b7cd43697b99909e09e7ebb_10
The example mentioned above is too simple and artificial to be significant, but we can analyze a more general situation, where we consider the full syntactic data of [6] , [7] , with all the entailment relations taken into account, and the same interaction energies along the edges as in <cite>[17]</cite> , taken from the data of [16] , which can be regarded as roughly proportional to a measure of the amount of bilingualism.
uses
22253d7b7cd43697b99909e09e7ebb_11
(Figure: dynamics in the space of code parameters, average distance.) The resulting behavior is a lot more complicated than in the simple examples discussed in <cite>[17]</cite> .
differences
22dc2a38e29a1f5ac55c9ac220782b_0
Recently, <cite>Vaswani et al. (2017)</cite> proposed the Transformer architecture for machine translation.
motivation background
22dc2a38e29a1f5ac55c9ac220782b_1
The concept of self-attention (Cheng et al., 2016; Parikh et al., 2016) , central to our proposed approach, has shown great promises in natural language processing; It produced state-of-the-art results for machine translation<cite> (Vaswani et al., 2017)</cite> .
similarities
22dc2a38e29a1f5ac55c9ac220782b_2
3 SANet: Self-Attention Network. Inspired by the Transformer architecture<cite> (Vaswani et al., 2017)</cite> , which performed machine translation without recurrent or convolutional layers, we propose the Self-Attention Network (SANet) architecture, targeting text classification instead.
differences
22dc2a38e29a1f5ac55c9ac220782b_3
One key difference between our approach and <cite>Vaswani et al. (2017)</cite> 's is that we only perform input-input attention with self-attention, as we do not have sequences as output but a text classification.
differences
22dc2a38e29a1f5ac55c9ac220782b_4
<cite>Vaswani et al. (2017)</cite> defined attention as a function whose input is a triplet of queries Q, keys K, and associated values V.
background
22dc2a38e29a1f5ac55c9ac220782b_5
In the case of self-attention, Q, K and V are linear projections of X. Thus, we define the dot-product attention as Attention(Q, K, V) = softmax(QK^T / √d_k) V<cite> (Vaswani et al., 2017)</cite> .
background
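A minimal NumPy sketch of dot-product self-attention as described in the entry above, with Q, K and V obtained as linear projections of the input X; the scaling by √d_k follows the scaled variant, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Dot-product self-attention: Q, K, V are linear projections of the input X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise attention scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of the values
```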
22dc2a38e29a1f5ac55c9ac220782b_6
We use the positional encoding vectors that were defined by <cite>Vaswani et al. (2017)</cite> as follows: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)).
similarities
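A sketch of the sinusoidal positional encodings quoted in the entry above; the function name and the assumption that d_model is even are ours.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings; assumes d_model is even."""
    pos = np.arange(seq_len)[:, None]         # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]     # (1, d_model / 2)
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)              # even dimensions
    pe[:, 1::2] = np.cos(angles)              # odd dimensions
    return pe
```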