At database creation, Datahike supports features that can be configured based on the application's requirements. As of version 0.2.0, the storage backend, schema flexibility, and time variance can be configured. Be aware: all of these features can be set at database creation but cannot be changed afterwards. You can still migrate the data to a new configuration.
Configuring Datahike is possible via the environ library by weavejester. You can use environment variables, Java system properties, or pass a config map as an argument.
The sources are resolved in following order:
- Environment variables
- Java system properties
- Argument to load-config
That means a config passed as an argument overrides Java system properties, and Java system properties override environment variables. Currently the default configuration map looks like this:
```clojure
{:store              {:backend :mem       ;; keyword
                      :id      "default"} ;; string
 :name               (generated)          ;; string
 :schema-flexibility :write               ;; keyword
 :keep-history?      true                 ;; boolean
 :attribute-refs?    false}               ;; boolean
```
If you are using a backend other than the built-in `:mem` or `:file`, please have a look at the README in the corresponding GitHub repository: the configuration is delegated to the backends, so you will find the configuration documentation there. Examples for the `:mem`, `:file`, and `:jdbc` backends are shown below. Please refer to the documentation of the environ library on how to use it. If you want to pass the configuration as environment variables or Java system properties, you need to name them as follows:
| Java system property | Environment variable |
|---|---|
| datahike.store.backend | DATAHIKE_STORE_BACKEND |
| datahike.store.username | DATAHIKE_STORE_USERNAME |
| datahike.schema.flexibility | DATAHIKE_SCHEMA_FLEXIBILITY |
| datahike.keep.history | DATAHIKE_KEEP_HISTORY |
| datahike.attribute.refs | DATAHIKE_ATTRIBUTE_REFS |
| datahike.name | DATAHIKE_NAME |
| etc. | |
Do not include the leading `:` of the keywords in these values; it is added automatically.
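For example, schema flexibility could be set either way (a sketch; note that the value is written without the leading `:`):

```
# as an environment variable
DATAHIKE_SCHEMA_FLEXIBILITY=read

# as a Java system property
-Ddatahike.schema.flexibility=read
```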
Each backend needs a different set of parameters; see the definitions below for further information. For simple and fast setup you can use the defaults, which create an in-memory database with ID `"default"`, schema flexibility on write, and history support:
```clojure
(require '[datahike.api :as d])

(d/create-database)
```
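To verify the setup end to end, here is a minimal sketch that connects to the freshly created default database and transacts a single entity (assuming `d/connect` also falls back to the default configuration when called without arguments; the `:user/name` attribute is purely illustrative):

```clojure
(require '[datahike.api :as d])

(d/create-database)
(def conn (d/connect))

;; With :schema-flexibility :write (the default) the schema must be declared first.
(d/transact conn [{:db/ident       :user/name
                   :db/valueType   :db.type/string
                   :db/cardinality :db.cardinality/one}])
(d/transact conn [{:user/name "Alice"}])

(d/q '[:find ?n :where [_ :user/name ?n]] @conn)
;; => #{["Alice"]}
```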
At the moment we support two different backends from within Datahike: in-memory and file-based. Additionally, JDBC databases like PostgreSQL are supported via an external library: datahike-jdbc.
- `<backend>`: `:mem`
- `id`: ID of the database
- example:

```clojure
{:store {:backend :mem
         :id "mem-example"}}
```

- via environment variables:

```
DATAHIKE_STORE_BACKEND=mem
DATAHIKE_STORE_CONFIG='{:id "mem-example"}'
```
- `<backend>`: `:file`
- `path`: absolute path to the storage folder
- example:

```clojure
{:store {:backend :file
         :path "/tmp/file-example"}}
```

- via environment variables:

```
DATAHIKE_STORE_BACKEND=file
DATAHIKE_STORE_CONFIG='{:path "/tmp/file-example"}'
```
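As a quick end-to-end check of the file backend, a minimal sketch using the example configuration from above:

```clojure
(require '[datahike.api :as d])

(def cfg {:store {:backend :file
                  :path    "/tmp/file-example"}})

(d/create-database cfg)
(def conn (d/connect cfg))
```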
- `<backend>`: `:jdbc`
- `dbtype`: JDBC-supported database type
- `user`: PostgreSQL user
- `password`: password for the PostgreSQL user
- `dbname`: name of the PostgreSQL database
- example:

```clojure
{:store {:backend :jdbc
         :dbtype "postgresql"
         :user "datahike"
         :password "datahike"
         :dbname "datahike"}}
```

- via environment variables:

```
DATAHIKE_STORE_BACKEND=jdbc
DATAHIKE_STORE_CONFIG='{:dbtype "postgresql" :user "datahike" :password "datahike" :dbname "datahike"}'
```
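Since the JDBC backend lives in the external datahike-jdbc library, it has to be on the classpath and required before the `:jdbc` backend is available. A minimal sketch, assuming the artifact coordinate `io.replikativ/datahike-jdbc` and the backend namespace `datahike-jdbc.core` (check the datahike-jdbc README for the current coordinates and version):

```clojure
;; deps.edn (version left as a placeholder; see the datahike-jdbc README):
;;   io.replikativ/datahike-jdbc {:mvn/version "..."}

(require '[datahike.api :as d]
         '[datahike-jdbc.core])  ;; loading this namespace registers the :jdbc backend

(def cfg {:store {:backend  :jdbc
                  :dbtype   "postgresql"
                  :user     "datahike"
                  :password "datahike"
                  :dbname   "datahike"}})

(d/create-database cfg)
(def conn (d/connect cfg))
```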
By default Datahike generates a name for your database. If you want to set the name yourself, just set it in your config. The name helps to identify the database you want to use when you run multiple Datahike databases in your application (as seen in datahike-server).
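For example (a sketch; the name `"my-db"` is just a placeholder):

```clojure
(d/create-database {:name  "my-db"
                    :store {:backend :mem
                            :id      "mem-example"}})
```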
By default the Datahike API uses a schema-on-write approach (`:schema-flexibility :write`) with strict value types that need to be defined in advance. If you are not sure what your data model will look like and you want to transact any kind of data into the database, you can set `:schema-flexibility` to `:read`. You may still add basic schema definitions like `:db/unique`, `:db/cardinality`, or `:db.type/ref` where this kind of structure is needed.
```clojure
(require '[datahike.api :as d])

(d/create-database {:schema-flexibility :read})
```
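With schema flexibility set to `:read`, data can be transacted without declaring attributes first. A minimal sketch (the `:any/attribute` name is purely illustrative):

```clojure
(def conn (d/connect {:schema-flexibility :read}))

;; No prior schema definition is required for this attribute.
(d/transact conn [{:any/attribute "works without a schema"}])
```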
Have a look at the schema documentation for more information.
Datahike has the capability to inspect and query historical data within temporal indices. If your application does not require any temporal data, you may set `:keep-history?` to `false`:
```clojure
(require '[datahike.api :as d])

(d/create-database {:keep-history? false})
```
Be aware: when deactivating the temporal index you may not use any temporal database views like `history`, `as-of`, or `since`.
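For reference, with history enabled (the default), these temporal views can be used like this (a sketch, assuming an existing connection `conn` and an illustrative `:user/name` attribute):

```clojure
;; Query across the full history of the database, including retracted values.
(d/q '[:find ?n :where [_ :user/name ?n]]
     (d/history @conn))

;; Query the database as of a given point in time.
(d/q '[:find ?n :where [_ :user/name ?n]]
     (d/as-of @conn #inst "2020-01-01"))
```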
Refer to the time variance documentation for more information.
Originally a fork of the DataScript project, Datahike used to always store attributes as plain keywords. This caused trouble for users switching from Datomic, as some queries were incompatible with Datahike due to the difference between the attribute storage systems. In Datomic, attributes are not stored directly as keywords; attributes are themselves entities that can be referred to by their entity ID. While this makes some translation between attributes and their IDs necessary, the big advantage of this approach is increased speed, since fast integer comparisons replace the slower keyword comparisons needed when attributes are stored directly.
You can enable this feature now as follows:
```clojure
(require '[datahike.api :as d])

(d/create-database {:attribute-refs? true})
```
Starting from version 0.3.0, the new hashmap configuration is encouraged, since it is more flexible than the previously used URI scheme. Datahike still supports the old configuration, so you don't need to migrate yourself. The configuration differences are as follows:
- optional parameters are added in the configuration map instead of being passed as separate optional arguments
- `:temporal-index` renamed to `:keep-history?`
- `:schema-on-read` renamed to `:schema-flexibility` with values `:read` and `:write`
- store configuration for backends moved into the `:store` attribute
- `:initial-tx` also added as an attribute in the configuration
- the store configuration is now more flexible, so it fits better with its backends
- all backend configuration remains the same except for `:mem`
- the naming attribute for the `:mem` backend is moved to `:id` from `:host` or `:path`
- optional `clojure.spec` validation has been added
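To illustrate the difference, a short sketch comparing the two styles for the file backend (the URI form shown reflects the pre-0.3.0 scheme; double-check against the older documentation if you still rely on it):

```clojure
;; Old URI scheme (still supported):
(d/create-database "datahike:file:///tmp/file-example")

;; New hashmap configuration:
(d/create-database {:store {:backend :file
                            :path    "/tmp/file-example"}})
```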