LearnAut 2019

Call for papers

Learning models that define recursive computations, such as automata and formal grammars, are at the core of the field called Grammatical Inference (GI). The expressive power of these models and the complexity of the associated computational problems are major research topics within mathematical logic and computer science, spanning the communities that the Logic in Computer Science (LICS) conference brings together. Historically, there has been little interaction between the GI and LICS communities, though recently important results have started to bridge the gap between the two worlds, including applications of learning to formal verification and model checking, and (co-)algebraic formulations of automata and grammar learning algorithms.

The goal of this workshop is to bring together experts in logic who could benefit from grammatical inference tools, and researchers in grammatical inference who could find fruitful new applications for their methods in logic and verification.

We invite submissions of recent work, including preliminary research, related to the theme of the workshop. As is common at machine learning conferences and workshops, all accepted abstracts will be part of a poster session held during the workshop. Additionally, the Program Committee will select a subset of the abstracts for oral presentation. At least one author of each accepted abstract is expected to present it at the workshop. Note that participation in the poster session is voluntary for papers selected for oral presentation. Authors of high-quality submissions will be strongly encouraged to submit an extended version to an upcoming special issue of the Machine Learning Journal.

Topics of interest

  • Computational complexity of learning problems involving automata and formal languages.
  • Algorithms and frameworks for learning models representing language classes inside and outside the Chomsky hierarchy, including tree and graph grammars.
  • Learning problems involving models with additional structure, including numeric weights, inputs/outputs (as in transducers), register automata, timed automata, Markov reward and decision processes, and semi-hidden Markov models.
  • Logical and relational aspects of learning and grammatical inference.
  • Theoretical studies of learnable classes of languages/representations.
  • Relations between automata and recurrent neural networks.
  • Active learning of finite state machines and formal languages.
  • Methods for estimating probability distributions over strings, trees, graphs, or any data used as input for symbolic models.
  • Applications of learning to formal verification and (statistical) model checking.
  • Metrics and other error measures between automata or formal languages.

Submission instructions

Submissions in the form of extended abstracts must be at most 8 single-column pages long (plus at most two pages for the bibliography and possible appendices) and must use the JMLR/PMLR format. The LaTeX style file is available on CTAN.

We also accept submissions of work that has recently been published or is currently under review.

Submissions are handled through EasyChair.

Important dates

  • Submission deadline: April 6th, 2019 (extended from March 30th)
  • Notification of acceptance: April 25th, 2019
  • Early registration: April 22nd, 2019
  • Workshop: June 23rd, 2019