This is the source code repository for the FactorBase system. The code implements the learn-and-join algorithm; see the algorithm paper ''Learning Graphical Models for Relational Data via Lattice Search''.
- Input: A relational schema hosted on a MySQL server.
- Output: A Bayesian network that shows probabilistic dependencies between the relationships and attributes represented in the database. Both network structure and parameters are computed by the system.
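To make the input concrete, the schema is an ordinary set of entity and relationship tables. The sketch below is a hypothetical mini-schema (table and column names are illustrative, loosely modeled on a university domain like the unielwin example; the actual example schemas ship in the examples folder):

```sql
-- Hypothetical input schema: two entity tables plus one relationship table,
-- each with descriptive attributes that become nodes in the learned Bayes net.
CREATE TABLE student (
  student_id   INT PRIMARY KEY,
  intelligence VARCHAR(10),      -- e.g. 'high' / 'low'
  ranking      VARCHAR(10)
);

CREATE TABLE course (
  course_id  INT PRIMARY KEY,
  difficulty VARCHAR(10),
  rating     VARCHAR(10)
);

CREATE TABLE registration (
  student_id   INT,
  course_id    INT,
  grade        VARCHAR(2),
  satisfaction VARCHAR(10),
  PRIMARY KEY (student_id, course_id),
  FOREIGN KEY (student_id) REFERENCES student (student_id),
  FOREIGN KEY (course_id)  REFERENCES course (course_id)
);
```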
One of the key computational problems in relational learning and inference is counting how many times a conjunctive condition is instantiated in a relational structure. FactorBase computes relational contingency tables, which store, for a given set of first-order terms/predicates, how many times the different value combinations of the terms are instantiated in the input database. Because this problem arises in virtually every relational data analysis task, we provide stand-alone code for computing contingency tables that can be used independently of our Bayesian network learning system.
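For intuition, the count stored for one value combination in a contingency table is what a grouped count query returns. Below is a minimal sketch over the hypothetical schema above; FactorBase builds these tables internally and also handles counts for value combinations where the relationship does not hold, which a plain join like this does not capture:

```sql
-- Counts of joint value combinations for the terms
-- intelligence(S), difficulty(C), grade(S,C), restricted to true registration links.
SELECT s.intelligence,
       c.difficulty,
       r.grade,
       COUNT(*) AS mult   -- number of (student, course) pairs with this value combination
FROM registration r
JOIN student s ON s.student_id = r.student_id
JOIN course  c ON c.course_id  = r.course_id
GROUP BY s.intelligence, c.difficulty, r.grade;
```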
- Our project website provides helpful resources, such as pointers to datasets, a gallery of learned models, and system tips.
- The tutorial explains the concepts underlying the algorithm and our other tools.
- Our system paper ''FactorBase: Multi-Relational Model Learning with SQL All The Way'' explains the system components.
Import data into a MySQL server
We provide example datasets in the examples folder. For debugging and testing, we recommend starting with the unielwin dataset, which is a small toy dataset.
We recommend using the MariaDB MySQL-compatible server. We have tested FactorBase on MariaDB version 10.4.28.
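For example, to load the unielwin example into a local MariaDB/MySQL server, you can source its SQL dump from a client session (the path below is illustrative; point it at wherever the dump sits in your checkout):

```sql
-- Run inside the mysql/mariadb command-line client.
-- SOURCE is a client command that executes all statements in the dump file.
CREATE DATABASE unielwin;
USE unielwin;
SOURCE examples/unielwin/unielwin.sql;   -- illustrative path to the example dump
```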
Install the program
First clone the project by running a command similar to the following:
git clone https://github.com/sfu-cl-lab/FactorBase.git
- Simple approach: use the precompiled .jar file factorbase-1.0.jar in the root directory.
- More complex approach: build from scratch. FactorBase and the other tools in the project can all be built with the following commands (make sure Maven is installed):
cd FactorBase/code
mvn install
After the above commands run successfully, an executable JAR file for FactorBase can be found at factorbase/target/factorbase-<version>-SNAPSHOT.jar, where the <version> field is the version of FactorBase that you have built.
Note: The Maven build runs some tests that expect a test database to be set up. The required setup file can be found here. If you just want to create the JAR file for FactorBase, run the following command instead:
cd FactorBase/code
mvn clean install -DskipTests
Update config.cfg with your own analysis according to the format explained here.
By default, the executable JAR file looks for the configuration file in the current directory (i.e., the directory from which you run the command). To specify a different configuration file when running FactorBase, use the parameter -Dconfig=<config-file>. For example:
java -Dconfig=../config.cfg -jar factorbase/target/factorbase-<version>-SNAPSHOT.jar
Point to the database that you want to analyse
Modify the configuration file with your own connection parameters according to the sample format explained in the image.
See our project website for an explanation of the options.
For the last row, you can set the global logger to one of these three levels:
- debug: show all log messages;
- info: show only info, warning, and error messages (no debug messages); this is the default;
- off: show no log messages.
Learn a Bayesian Network Structure
In the FactorBase folder, run:
java -jar factorbase/target/factorbase-<version>-SNAPSHOT.jar
where the <version> field is the version of FactorBase that you have built.
Note: For big databases, you may need to specify a larger Java heap size, for example:
java -Xmx8G -jar factorbase/target/factorbase-<version>-SNAPSHOT.jar
Inspect the Bayesian Network (BN)
We follow the BayesStore design philosophy, in which statistical objects are managed within the database.
- The network structure is stored in the table Final_Path_BayesNets of the <db>_BN database, where <db> is the model database specified in your configuration file. For nodes with no parents, see the view View_Final_Path_BayesNets.
- The conditional probability tables are stored in tables named <nodename>_CP of the <db>_BN database, where <db> is the model database specified in your configuration file and <nodename> is the name of the child node.
- The file Bif_<db>.xml contains a Bayes net specification that can be loaded into a Bayes net tool (see the next section). This file is written to the directory that contains your config.cfg file.
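As a concrete example, assuming the model database named in your configuration file is unielwin (so the model tables live in unielwin_BN), the learned structure and a conditional probability table can be browsed with queries like the following (the node name ranking is a hypothetical placeholder):

```sql
USE unielwin_BN;   -- assumes 'unielwin' is the model database from config.cfg

-- Parent/child edges of the learned network structure.
SELECT * FROM Final_Path_BayesNets;

-- Structure view that also lists nodes with no parents.
SELECT * FROM View_Final_Path_BayesNets;

-- Conditional probability table of one child node ('ranking' is illustrative).
SELECT * FROM `ranking_CP`;
```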
The learned BN structure can be exported from the database to support a number of other applications.
- The Bayesian Interchange Format (BIF) Generator produces an .xml file that can be loaded into a standard Bayesian network tool (such as the AIspace tool). This is the best way to visualize the learned graph structure.
- Queries over the learned Bayesian network can be used as a Statistical-Relational Model to estimate frequencies in the database, as explained here and in our paper on Modelling Relational Statistics With Bayes Nets.
- The table below shows the Bayesian network XML files learned from some datasets. The sql file is the MySQL dump of the relational schema, while the output is the Bayesian network in BIF/XML format.

| datasets | sql | BIF/XML |
| --- | --- | --- |
| unielwin | unielwin.sql | Bif_unielwin.xml |
| Mutagenesis_std | Mutagenesis_std.sql | Bif_Mutagenesis_std.xml |
| MovieLens_std | MovieLens_std.sql | Bif_MovieLens_std.xml |
- A Markov Logic Network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause).
- Convert the learned BN into an MLN by running java -jar MLNExporter.jar. For more details, see MLN_Generator.
- The learned BN structure defines a set of features that can be used to transform the information in relational data into a single-table format. The single table can then be loaded into standard machine learning tools. In the relational learning literature, this process is called Propositionalization (see the sketch after this list). See also the tutorial on the Relational Bayes Net Classifier.
- Feature Generation for Classification. Given a target predicate (DB column), this tool produces a single data table with relational features.
- Feature Generation for Outlier Detection. Given a target entity table/class, this tool produces a single data table with relational features.
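To make the single-table format concrete, here is a hedged sketch of a propositionalized table for a student-level target over the hypothetical schema above. It is not the tools' actual output format; it only illustrates how relational information can be summarized into aggregate feature columns, one row per target entity:

```sql
-- One row per target entity (student); relational neighbours are
-- summarized into aggregate feature columns for a single-table learner.
SELECT s.student_id,
       s.intelligence,
       s.ranking                    AS target_label,       -- hypothetical target column
       COUNT(r.course_id)           AS n_courses_taken,
       AVG(c.difficulty = 'high')   AS frac_hard_courses,  -- MySQL: boolean averages to a fraction
       SUM(r.grade IN ('A', 'A+'))  AS n_top_grades
FROM student s
LEFT JOIN registration r ON r.student_id = s.student_id
LEFT JOIN course c       ON c.course_id  = r.course_id
GROUP BY s.student_id, s.intelligence, s.ranking;
```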
After running the learn-and-join algorithm, the learned Bayesian network can be leveraged for various applications.
- Relational Classification. Given a target instance (a cell in the database), compute a probability for each possible value.
- Data Cleaning. Given a relational database, rank the database values according to their (im)probability.
- Exception Mining. Given a relational database and a target entity set, rank each entity according to how exceptional it is within its class. This tool implements our expected log-distance metric from our paper Model-based Outlier Detection for Object-Relational Data. Our approach fits within the general framework of exceptional model mining; see also the tutorial on Anomaly Detection.