This is an excerpt from my master thesis titled: “Semi-supervised morphological reinflection using rectified random variables”
Languages use suffixes and prefixes to convey context, stress, intonation, and grammatical meaning (such as subject-verb agreement). Such suffixes and prefixes belong to a more general class of entities, the meaningful sub-parts of a word, called morphemes. A language's morphology refers to the rules and processes by which morphemes are combined; this allows a word to express its syntactic categories and semantic meaning. For example, in English, a verb can have three tenses: past, present, and future. …
We implement a neural graph-based dependency parser inspired by those of Kiperwasser and Goldberg (2016) and Dozat and Manning (2017). We train and test our parser on the English and Hindi Treebanks from the Universal Dependencies Project, achieving an unlabeled attachment score (UAS) of 84.80% and a labeled attachment score (LAS) of 78.61% on the English corpus, and a UAS of 91.92% and an LAS of 83.94% on the Hindi corpus. Note: our UAS scores might be slightly inflated, since the arc from the root to itself always appears in both the predicted and the gold parse. This inflation is more pronounced on datasets with shorter sentences, since each sentence contributes fewer arcs. …
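As a quick illustration of how these metrics are computed (a minimal sketch; the function name and list-based inputs are mine, not taken from our actual evaluation code):

```python
def attachment_scores(gold_heads, pred_heads, gold_labels, pred_labels):
    """Compute UAS and LAS over one sentence (or a flattened corpus).

    UAS counts tokens whose predicted head index matches the gold head;
    LAS additionally requires the dependency label to match.
    """
    total = len(gold_heads)
    correct_heads = sum(g == p for g, p in zip(gold_heads, pred_heads))
    correct_labeled = sum(
        g == p and gl == pl
        for g, p, gl, pl in zip(gold_heads, pred_heads, gold_labels, pred_labels)
    )
    return correct_heads / total, correct_labeled / total
```

If the root's self-arc is included in both lists, it is always counted as correct, which is exactly the source of the slight UAS inflation noted above.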
We explore Multiplicative Normalizing Flows in Bayesian neural networks with different prior distributions over the network weights. The prior over the parameters can not only influence how the network behaves, but can also affect the uncertainty calibration and computational efficiency. The Bayesian framework also offers an added advantage of compression when sparsity inducing priors are used. We experiment with uniform, Cauchy, log uniform, Gaussian, and standard Gumbel priors on predictive accuracy and predictive uncertainty.
In recent years Deep Neural Networks (DNN) have revolutionized the field of artificial intelligence. However, vanilla DNNs have two major shortcomings: (a) over-fitting when limited data is available and, (b) overconfidence in predictions. …
In the latest release (v1.1), PyTorch officially introduced support for the much-loved visualization tool, TensorBoard. Although the integration is experimental and might change in the near future, I think this is awesome.
First, upgrade PyTorch from version 1.0.x:
pip install --upgrade torch
pip install tb-nightly
That’s it. You are good to go. Import the SummaryWriter and begin visualization :)
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(log_dir='./logs')
# Track variables (e.g. loss, kld, etc.) that change during training
writer.add_scalar('KL Divergence', kl_div)
writer.add_scalar('Total Loss', loss)
writer.add_text('Decoder output', 'Some output')
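In a real training loop you would typically also pass a global_step, so that TensorBoard plots the values over time instead of overwriting a single point. A minimal sketch (the loss here is a dummy value standing in for a real training loss):

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='./logs')
for step in range(100):
    loss = 1.0 / (step + 1)  # dummy stand-in for a real training loss
    writer.add_scalar('Total Loss', loss, global_step=step)
writer.close()  # flush pending events to disk
```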
To view the output, execute the following in a terminal,
tensorboard --logdir=./logs
Fire up your browser and point it to http://127.0.0.1:6006
For details, check out the documentation.
This problem really got me thinking. At first glance it seems pretty doable, but after two wrong submissions I realized my dynamic programming (DP) state was completely wrong! When I finally solved it, it made my day.
If you haven’t tried it, do so. You’ll have fun.
OK, straightforward: the question doesn’t ask us to maximize Harry’s path value. We must make sure he stays alive! (😛) The question asks for the minimum strength Harry needs at the start of the grid so that he can choose a path and collect the Sorcerer’s Stone (while staying alive, though that part is redundant). The regular DP of maximizing the path value doesn’t work. …
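The trick (assuming the problem is the classic minimum-initial-health grid DP, à la LeetCode’s Dungeon Game; the function name below is mine) is to run the DP backwards from the stone’s cell, computing the minimum strength needed upon *entering* each cell rather than the best value collected so far:

```python
def min_initial_strength(grid):
    """dp[i][j] = minimum strength needed when entering cell (i, j).

    Filled bottom-up from the destination: at each cell we need enough
    strength to survive the cell itself and still meet the cheaper of
    the two onward requirements, but never less than 1 (alive!).
    """
    n, m = len(grid), len(grid[0])
    INF = float('inf')
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[n][m - 1] = dp[n - 1][m] = 1  # must be alive after the last cell
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            need = min(dp[i + 1][j], dp[i][j + 1]) - grid[i][j]
            dp[i][j] = max(need, 1)
    return dp[0][0]
```

Maximizing the running total fails because a lucrative detour can hide a lethal dip in strength along the way; the backward formulation accounts for the worst moment on each path.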