This package implements maximum-margin-based algorithms for learning Markov Network (MN) classifiers from examples. The package supports linearly parameterized MN classifiers with arbitrarily complex pairwise interactions between labels. Inferring the labels amounts to solving
$$\hat{\mathbf{y}} \in {\rm Arg}\max_{\mathbf{y}\in\cal{Y}^{\cal{V}}} \left(\sum_{v\in\cal{V}} \mathbf{w}_{y_v}^T \mathbf{x}_v + \sum_{(v,v')\in\cal{E}}g(y_v,y_{v'})\right )$$
where $\cal{V}$ and $\cal{E}$ are the nodes and edges of the network, $\mathbf{x}_v$ is the feature vector observed at node $v$, $\mathbf{w}_{y_v}$ is the weight vector associated with label $y_v$, and $g(y_v,y_{v'})$ scores the pairwise interaction between the labels of neighboring nodes.
- MANET implements the M3N algorithm [1][2] for learning MN classifiers from examples.
- MANET converts learning of an MN classifier into a convex problem using either the Linear Programming Margin-Rescaling (LP-MR) loss or the Markov Network Adversarial (MANA) loss. The convex problem can be solved by standard SGD or ADAM, both of which are implemented.
- MANET comes with MAP inference algorithms: the ADAG max-sum solver for generic neighborhood structures and the Viterbi algorithm for chains.
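To make the objective above concrete, here is a minimal NumPy sketch, independent of MANET's actual API, of the score being maximized for a chain-structured network, together with brute-force MAP inference over a tiny chain (all names below are illustrative, not part of the library):

```python
import numpy as np
from itertools import product

def chain_score(W, g, X, y):
    """Score of a labeling y for a chain-structured Markov network.

    W : (K, D) array, one weight vector w_k per label k (unary terms)
    g : (K, K) array, pairwise score g(y_v, y_v') for consecutive nodes
    X : (T, D) array, one feature vector x_v per node
    y : length-T sequence of labels in {0, ..., K-1}
    """
    unary = sum(W[y[t]] @ X[t] for t in range(len(y)))
    pairwise = sum(g[y[t], y[t + 1]] for t in range(len(y) - 1))
    return unary + pairwise

def brute_force_map(W, g, X):
    """MAP labeling by exhaustive enumeration (tiny chains only)."""
    K, T = W.shape[0], X.shape[0]
    return max(product(range(K), repeat=T),
               key=lambda y: chain_score(W, g, X, y))
```

In practice the enumeration is replaced by the solvers described below; the sketch only illustrates what they compute.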
- Python 3.x
- Linux; Tested on Ubuntu 18.04 LTS.
Install the required Python packages:

```shell
pip install numpy pyyaml pandas argparse tqdm cffi scipy scikit-learn matplotlib
```
Compile the CFFI interface for the C implementation of the ADAG max-sum solver:

```shell
cd manet/adag_solver/
./build_adag_module.sh
```
Modify your .profile or .bashrc by adding the path to the compiled ADAG solver and to the MANET root directory:

```shell
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:MANET_ROOT_DIR/manet/adag_solver/"
export PYTHONPATH="${PYTHONPATH}:MANET_ROOT_DIR/"
```
- Learning to predict sequences. This example shows how to learn an MN classifier that predicts label sequences from synthetic examples generated by an HMC. It demonstrates learning both from completely annotated examples and from examples with missing labels, and it illustrates all basic functions of the library.
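The learning principle behind the example above can be sketched without MANET's API: one subgradient step of the M3N objective with a margin-rescaling loss performs loss-augmented inference and then updates the parameters toward the ground-truth labeling. The sketch below uses brute-force loss-augmented inference on a tiny chain purely for illustration (MANET uses the LP relaxation or Viterbi instead), and all names are hypothetical:

```python
import numpy as np
from itertools import product

def m3n_sgd_step(W, g, X, y_true, lr=0.1):
    """One subgradient step on the margin-rescaling loss for a tiny chain.

    W : (K, D) unary weight vectors, g : (K, K) pairwise scores,
    X : (T, D) node features, y_true : ground-truth label sequence.
    """
    K, T = W.shape[0], X.shape[0]

    def score(y):
        s = sum(W[y[t]] @ X[t] for t in range(T))
        return s + sum(g[y[t], y[t + 1]] for t in range(T - 1))

    def hamming(y):
        return sum(a != b for a, b in zip(y, y_true))

    # loss-augmented MAP: argmax_y score(y) + Delta(y, y_true)
    y_hat = max(product(range(K), repeat=T),
                key=lambda y: score(y) + hamming(y))

    # subgradient update: push down the violator, push up the truth
    for t in range(T):
        W[y_hat[t]] -= lr * X[t]
        W[y_true[t]] += lr * X[t]
        if t < T - 1:
            g[y_hat[t], y_hat[t + 1]] -= lr
            g[y_true[t], y_true[t + 1]] += lr
    return W, g
```

After a step, the margin between the ground-truth labeling and the violating labeling grows, which is the mechanism the SGD/ADAM solvers in the library exploit at scale.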
- MAP inference. This example shows how to solve MAP inference in Markov networks: the ADAG solver handles generic neighborhood structures, while the Viterbi algorithm handles chains.
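For the chain case, the Viterbi algorithm mentioned above reduces MAP inference to dynamic programming. A minimal sketch, not tied to MANET's implementation, where the unary scores $U[t,k]=\mathbf{w}_k^T\mathbf{x}_t$ are precomputed:

```python
import numpy as np

def viterbi_map(U, g):
    """MAP labeling of a chain by dynamic programming.

    U : (T, K) unary scores U[t, k] = w_k^T x_t
    g : (K, K) pairwise scores for consecutive nodes
    Returns the maximizing label sequence as a list.
    """
    T, K = U.shape
    score = U[0].copy()                 # best prefix score per ending label
    back = np.zeros((T, K), dtype=int)  # backpointers
    for t in range(1, T):
        cand = score[:, None] + g       # cand[prev, cur]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + U[t]
    y = [int(score.argmax())]
    for t in range(T - 1, 0, -1):       # trace backpointers
        y.append(int(back[t, y[-1]]))
    return y[::-1]
```

Its cost is O(T K^2), in contrast to the exponential cost of enumerating all labelings.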
- Evaluation of the M3N algorithm with different proxy losses. This is an implementation of the experiments published in [2]. The goal is to evaluate the performance of MN classifiers learned by the M3N algorithm with two different proxies: the LP Margin-Rescaling loss and the Markov Network Adversarial loss. The proxy losses are evaluated on synthetically generated sequences and on the problem of learning symbolic and visual Sudoku solvers.
- [1] V. Franc, A. Yermakov. Learning Maximum Margin Markov Networks from Examples with Missing Labels. ACML 2021.
- [2] V. Franc, D. Prusa, A. Yermakov. Consistent and Tractable Algorithm for Markov Network Learning. ECML PKDD 2022.
- [3] T. Werner. A Linear Programming Approach to Max-sum Problem: A Review. PAMI 2007.
- [4] T. Werner. LP Relaxation Approach to MAP Inference in Markov Random Fields.
If you use the learning algorithms from MANET, please acknowledge [2].