Home

After the successful completion of NALOMA’20 (NAtural LOgic Meets MAchine Learning), NALOMA’21 seeks to continue the series and attract exciting contributions. The workshop aims to bridge the gap between ML/DL and symbolic/logic-based approaches to NLI, and it is perhaps the only workshop organized to do so. It will take place on June 16, 2021, during IWCS 2021, which is organized by the University of Groningen but will be held fully online due to the pandemic.

NALOMA’21 sets out to address two main issues in the NLI community. First, the approaches and systems currently used for NLI are too one-dimensional, and no fruitful dialog between them is promoted. One strand of research focuses on training large DL models that achieve what has been identified as “human performance”. With the world knowledge encapsulated in such models and their robust nature, these approaches can deal with diverse, large-scale data efficiently. However, it has been repeatedly shown that such models lack generalization power and are far from solving NLI. When presented with differently biased data or with complex inferences containing hard linguistic phenomena, they struggle to reach the baseline. Explicitly detecting and fixing these weaknesses is only partly possible, e.g., through appropriate datasets, because such models act as black boxes with low explainability. Another strand of research pursues more traditional approaches to reasoning, employing some kind of logic or semantic formalism. Such approaches excel in precision, especially on complex inferences involving hard linguistic phenomena, e.g., negation, quantifiers, and modals. However, they suffer from inadequate world knowledge and lower robustness, making it hard for them to compete with state-of-the-art models. Overall, current approaches to NLI are too one-dimensional: they are either purely DL-based or purely symbolic and do not attempt to combine the two worlds.

A second issue concerns datasets. Existing NLI datasets are either complex enough but too small for proper learning, e.g., the FraCaS or RTE datasets, or large enough but too easy to be claimed to represent human inference, e.g., SICK, SNLI, MNLI, etc. The larger datasets additionally suffer from artifacts and inconsistent or misleading annotations. There have been efforts to correct some of these mistakes, but such efforts often lead to different versions of the corpora, raising comparability issues. Even more interesting is the fact that such inconsistencies often derive from the nature of the NLI task itself, which is prone to inherent disagreements, reflecting the inherent variability of the human reasoning process. Thus, there is a need for a refinement of the NLI task, the establishment of common notions, and the creation of suitable corpora that not only include more diverse data and reliable annotations but also account for the inherent variability of the task. Last but not least, the datasets are mainly in English and are therefore likely to miss many linguistically interesting phenomena.

The NALOMA workshop addresses both of these issues: the one-dimensionality of existing approaches and the weaknesses of the datasets. It aims to bridge the gap between ML/DL and symbolic/logic-based approaches, and it contributes to current efforts to provide data that is more reliable, more representative of human inference, and more linguistically diverse. NALOMA seeks to raise awareness of the data-related issues in NLI and to discuss appropriate solutions. It is especially suitable for researchers interested in evaluating existing corpora and proposing new ones. The workshop places a special focus on the refinement of the NLI task and on ways to address its inherent variability.

Call for papers

This workshop invites submissions on any (theoretical or computational) topic concerning NLI, including but not limited to:

  • hybrid NLI systems integrating symbolic/logic-based methods with ML/DL approaches (particularly, approaches combining Natural Logic with ML/DL)
  • explainable models of NLI
  • opening the “black box” of NLI models
  • probabilistic semantics for NLI
  • downstream applications of NLI
  • creation, evaluation, and criticism of NLI datasets
  • theoretical notions and refinement of the NLI task to address inherent disagreements
  • comparison and contrast between human-level and machine-level work in NLI
  • using symbolic/logic-based methods for data cleaning and augmentation
  • NLI for languages other than English

We invite two types of submission:

  • Archival (long or short) papers should report on complete, original, and unpublished research. Accepted papers will be published in the workshop proceedings and appear in the ACL Anthology.
  • Extended abstracts may report on work in progress or on work that was recently published or accepted at a different venue. Extended abstracts will not be included in the workshop proceedings; unpublished work thus retains its unpublished status and can be submitted to another venue. This webpage will link to the accepted extended abstracts.

Both accepted papers and extended abstracts are expected to be presented at the workshop. Extended abstracts will be presented as talks or posters at the discretion of the program committee.

Authors must submit anonymized extended abstracts or papers by April 4 (extended from March 26). Both extended abstracts and papers must be formatted according to the IWCS style files or the Overleaf template. Extended abstracts should not contain an abstract section and may consist of up to 2 pages of content, plus unlimited references. Short and long papers may consist of up to 4 and 8 pages of content, respectively, plus unlimited references. Camera-ready versions of papers will be given one additional page of content so that reviewers’ comments can be taken into account.

Both extended abstracts and papers should be submitted via SoftConf.

Invited Speakers

Vered Shwartz, Allen Institute for AI (AI2) and University of Washington

Benjamin Van Durme, Johns Hopkins University and Microsoft Semantic Machines

Important Dates

Submission of papers & extended abstracts: April 4 (extended from March 26)

Notification: April 19

Final versions due: May 7

Workshop: June 16