In the previous chapter we showed how to use ConfD with read-only operational data. In this chapter we will use the same APIs from libconfd.so to implement externally stored configuration data. This is the opposite of CDB; if CDB is used to store the configuration data, this chapter can be skipped.
We show how ConfD can use an external database as its data source. The external database can either be a full-fledged real database or something as simple as a text file.
The configuration of the network device is modeled by a YANG module. It describes the data model of the device and ConfD needs to populate the XML data tree with actual data.
If the ConfD built-in XML database (CDB) is used to hold all configuration data, ConfD will automatically read and write into that database. If, on the other hand, the actual configuration data is kept outside of ConfD we need user supplied code to provide ConfD with the actual data of the configuration.
Many standard UNIX applications read their configuration from a static file. If we want to integrate such an application into our network device, it may not be feasible to rewrite the application so that it reads its configuration from the device configuration database. In general we want to change the code of the application as little as possible.
Examples of such applications are abundant; this applies to virtually all open source applications commonly found on UNIX machines.
In order to integrate such an application into ConfD we must first write a YANG module which models the part of the application (the part of the application's configuration file) which we wish to be able to configure. Following that, we must write C code that can read, parse, manipulate, and write the configuration file in question, and finally we must connect that C code to ConfD.
We did precisely this exercise in Chapter 5, CDB - The ConfD XML Database; however, the solution from that chapter kept the actual configuration data in CDB, and the configuration file was generated from it. Thus, if the file was edited or otherwise changed externally, those changes would be overwritten the next time we regenerated the file. In this chapter we will show how to use the actual file as a database. That is, no configuration data is ever kept inside ConfD; the data resides outside ConfD.
Similar to how we managed operational data, we need to define a data model and annotate the model with a callpoint.
Assume that we wish to model a set of 'server' structures as in the following YANG module:
Example 7.1. A list of server structures
module smp {
  namespace "http://tail-f.com/ns/example/smp";
  prefix smp;

  import ietf-inet-types {
    prefix inet;
  }
  import tailf-common {
    prefix tailf;
  }

  /* A set of server structures */
  container servers {
    tailf:callpoint simplecp;
    list server {
      key name;
      max-elements 64;
      leaf name {
        type string;
      }
      leaf ip {
        type inet:ipv4-address;
        mandatory true;
      }
      leaf port {
        type inet:port-number;
        mandatory true;
      }
    }
  }
}
The callpoint called simplecp instructs ConfD that whenever it needs to populate the XML tree below simplecp, it must invoke callbacks in an external program which has registered itself with the name simplecp. The external programs use the API in libconfd.so to register themselves under different callpoints.
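Concretely, a data provider daemon hooks up to ConfD roughly as follows. This is a minimal sketch, not the complete example program: the socket setup is abbreviated, and the get_elem()/get_next() functions are the ones developed later in this chapter; see confd_lib_dp(3) for the full connection sequence.

/* Minimal registration sketch for the "simplecp" callpoint (assumptions:
 * the sockets are already created; get_elem()/get_next() are implemented
 * elsewhere, as later in this chapter). */
#include <string.h>
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <confd_lib.h>
#include <confd_dp.h>

/* data callbacks implemented later in this chapter */
extern int get_elem(struct confd_trans_ctx *tctx, confd_hkeypath_t *kp);
extern int get_next(struct confd_trans_ctx *tctx, confd_hkeypath_t *kp,
                    long next);

static struct confd_daemon_ctx *dctx;
static struct confd_data_cbs data;

static int setup_daemon(struct sockaddr_in *confd_addr,
                        int ctlsock, int workersock)
{
    confd_init("simplecp_daemon", stderr, CONFD_TRACE);
    dctx = confd_init_daemon("simplecp_daemon");

    /* one control socket plus at least one worker socket is required */
    if (confd_connect(dctx, ctlsock, CONTROL_SOCKET,
                      (struct sockaddr *)confd_addr,
                      sizeof(*confd_addr)) < 0)
        return CONFD_ERR;
    if (confd_connect(dctx, workersock, WORKER_SOCKET,
                      (struct sockaddr *)confd_addr,
                      sizeof(*confd_addr)) < 0)
        return CONFD_ERR;

    /* register the data callbacks under the callpoint from the data model */
    memset(&data, 0, sizeof(data));
    strcpy(data.callpoint, "simplecp");
    data.get_elem = get_elem;
    data.get_next = get_next;
    if (confd_register_data_cb(dctx, &data) != CONFD_OK)
        return CONFD_ERR;

    return confd_register_done(dctx);
}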
When we implemented the operational data callbacks we had to implement a set of callbacks for each callpoint. With external data we must do the same, but some additional callbacks must also be implemented. The data callbacks get_next(), get_elem(), get_object(), get_next_object(), find_next(), find_next_object(), num_instances(), and finally exists_optional() work precisely the same for external data as they do for operational data. Those callbacks are thus described in the previous chapter.
Additionally the following data callback functions are required for external data:
create() - This callback creates a new list entry. In the case of the smp.yang module above, this function needs to create a new empty "server" entry. Once the entry is created, it will be populated with values through a series of calls to set_elem().
remove() - This callback needs to remove an entire list entry and all its subelements.
set_elem() - This callback sets the value of a leaf.
Again, similar to the chapter on operational data, user sessions are created when a user logs in, and new transactions are created when an agent initiates an activity.
When dealing with operational data, the different phases are not interesting; thus we only had to implement the init() and finish() callbacks.
This section describes the states of a ConfD transaction and also which user callbacks need to be implemented in order to participate in the transaction.
In a device where ConfD is used to manage the configuration data there can be multiple sources of data. To use ConfD terminology: there can be several different daemons that connect to ConfD under different callpoints. Some callpoints may also be served by CDB.
Furthermore, a set of write operations may involve several of these daemons as well as CDB. In order to ensure that all participants perform the operations, ConfD orchestrates a two-phase commit protocol towards the different participants. Each NETCONF operation, such as edit-config, or each call to commit in the CLI, will be grouped into a ConfD transaction. If we store our data outside of ConfD - as will be described in this chapter - we must implement a number of callback functions in order to participate in the various states of the transaction.
An individual daemon may (or may not) implement the callbacks for the two-phase commit protocol. If there is only one daemon and CDB is not used at all, the two-phase commit protocol may be skipped. The reason for this is that when there is only one participant, the two-phase commit protocol is irrelevant.
Each NETCONF operation, i.e. each edit-config and so forth, will execute as one transaction. Thus transactions originating from NETCONF will be fairly short-lived entities whereas transactions originating from the CLI or the Web UI will be longer.
A daemon that wishes to participate in the two-phase commit transaction must implement a number of callback functions.
init() - As for operational data, from the daemon's point of view the init() callback is invoked when a transaction starts, but ConfD delays the actual invocation as an optimization. For a daemon providing configuration data, init() is invoked just before the first data-reading callback, or just before the trans_lock() callback (see below), whichever comes first.
When a transaction has started, it is in a state we refer to as READ. ConfD will, while the transaction is in the READ state, execute a series of read operations towards (possibly) different callpoints in the daemon. Any write operations performed by the management station are accumulated by ConfD and the daemon doesn't see them while in the READ state.
trans_lock() - This callback gets invoked by ConfD at the end of the transaction. ConfD has accumulated a number of write operations and will now initiate the final write phases. Once the trans_lock() callback has returned, the transaction is in the VALIDATE state. In the VALIDATE state, ConfD will (possibly) execute a number of read operations in order to validate the new configuration. Following the read operations for validation comes the invocation of either the write_start() or the trans_unlock() callback.
trans_unlock() - This callback gets invoked by ConfD if the validation failed or if the validation was done separately from the commit (e.g. by giving a validate command in the CLI). Depending on where the transaction originated, the behavior after a call to trans_unlock() differs. If the transaction originated from the CLI, the CLI reports to the user that the configuration is invalid and the transaction remains in the READ state, whereas if the transaction originated from a NETCONF client, the NETCONF operation fails and a NETCONF rpc error is reported to the NETCONF client/manager.
write_start() - If the validation succeeded, the write_start() callback will be called and the transaction enters the WRITE state. While in the WRITE state, a number of calls to the write callbacks set_elem(), create() and remove() will be performed. If the underlying database supports real atomic transactions, this is a good place to start such a transaction. The application should not modify the real running data here. If, later, the abort() callback is called, all write operations performed in this state must be undone.
prepare() - Once all write operations are executed, the prepare() callback is executed. This callback ensures that all participants have succeeded in writing all elements. The purpose of the callback is merely to indicate to ConfD that the daemon is ok and has not yet encountered any errors.
abort() - If any of the participants return an error or fail to reply in the prepare() callback, the remaining participants all get invoked in the abort() callback. All data written so far in this transaction should be disposed of.
commit() - If all participants successfully replied in their respective prepare() callbacks, all participants get invoked in their respective commit() callbacks. This is the place to make all data written by the write callbacks in the WRITE state permanent.
finish() - And finally, the finish() callback gets invoked at the end. This is a good place to deallocate any local resources for the transaction. The finish() callback can be called from several different states.
The following picture illustrates the conceptual state machine a ConfD transaction goes through.
All callbacks except the init() callback are optional. If a callback is not implemented, it is the same as a succeeding empty implementation such as:
int mycallback(struct confd_trans_ctx *tctx)
{
    return CONFD_OK;
}
In the following examples, we will initially not use these transactions at all. We will implement the init() callback only and let the other transaction callbacks be NULL.
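A minimal sketch of what that could look like, assuming a single worker socket (workersock) has been connected as in the previous chapter: only init() is filled in, and the remaining members of struct confd_trans_cbs are left as NULL.

static int workersock;   /* worker socket connected during daemon setup */

static int s_init(struct confd_trans_ctx *tctx)
{
    /* tell ConfD which worker socket this transaction should use */
    confd_trans_set_fd(tctx, workersock);
    return CONFD_OK;
}

static struct confd_trans_cbs trans;

static void register_trans_cbs(struct confd_daemon_ctx *dx)
{
    memset(&trans, 0, sizeof(trans));   /* all other callbacks stay NULL */
    trans.init = s_init;
    confd_register_trans_cb(dx, &trans);
}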
In this section we provide a commented example which manages actual configuration data. The idea is that ConfD runs the NETCONF agent and is entirely responsible for the candidate configuration and possibly runs the CLI and the Web UI. The application is responsible for maintaining and storing the configuration data.
An actual running version of this example can be found in the examples directory of a ConfD release under user_guide_examples/simple_no_trans.
The example system stores "servers" with name, ip, and port in a file. Our YANG module will be very simple; we have:
Example 7.2. The smp.yang module
module smp {
  namespace "http://tail-f.com/ns/example/smp";
  prefix smp;

  import ietf-inet-types {
    prefix inet;
  }
  import tailf-common {
    prefix tailf;
  }

  /* A set of server structures */
  container servers {
    tailf:callpoint simplecp;
    list server {
      key name;
      max-elements 64;
      leaf name {
        type string;
      }
      leaf ip {
        type inet:ipv4-address;
        mandatory true;
      }
      leaf port {
        type inet:port-number;
        mandatory true;
      }
    }
  }
}
To implement this we first need a small database. We choose to use a simple array of "server" structures, as in:
struct server {
    char name[256];
    struct in_addr ip;
    unsigned int port;
};

static struct server running_db[64];
static int num_servers = 0;
To create a new "server" in the database we add a new server structure to the array, as in:
static struct server *add_server(char *name)
{
    int i, j;

    for (i = 0; i < num_servers; i++) {
        if (strcmp(running_db[i].name, name) > 0) {
            /* found the position to add at, now shuffle the */
            /* remaining elems in the array one step         */
            for (j = num_servers; j > i; j--) {
                running_db[j] = running_db[j-1];
            }
            break;
        }
    }
    num_servers++;
    memset(&running_db[i], 0, sizeof(struct server));
    strcpy(running_db[i].name, name);
    return &running_db[i];
}

static struct server *new_server(char *name, char *ip, char *port)
{
    struct server *sp = add_server(name);

    sp->ip.s_addr = inet_addr(ip);
    sp->port = atoi(port);
    return sp;
}
We keep the array ordered according to the key (server name), since ConfD expects us to return entries in that order when traversing the list.
Note that at first glance it looks like this code may write off the end of the running_db array. But this is not the case, since the server list in the data model is defined with max-elements 64;. This means that ConfD guarantees that there are never more than 64 servers.
To search the database for a specific server we have:
/* Find a specific server */
static struct server *find_server(confd_value_t *v)
{
    int i;

    for (i = 0; i < num_servers; i++) {
        if (confd_svcmp(running_db[i].name, v) == 0)
            return &running_db[i];
    }
    return NULL;
}
Our find_server() function utilizes a strcmp()-like function from libconfd.so - the function confd_svcmp() compares a char* string value to a confd_value_t value. The type of the confd_value_t must obviously be either a string or a buffer.
The initialization code is very similar to the ARP example in the chapter on operational data, with the exception that here we must also register functions to write new data. We need to register a set_elem() callback which sets the value of a leaf element such as /servers/server{www}/ip.
We also need to register callback functions that can create a new "server" entry and delete old "server" entries. Thus we initialize our data callback structure struct confd_data_cbs as:
data.get_elem = get_elem;
data.get_next = get_next;
data.set_elem = set_elem;
data.create = create;
data.remove = doremove;
The get_elem() and get_next() callbacks can be implemented in a manner similar to how we implemented the corresponding callbacks for the ARP example. For example:
Example 7.3. get_next() callback for smp.yang
static int get_next(struct confd_trans_ctx *tctx,
                    confd_hkeypath_t *keypath, long next)
{
    confd_value_t v;

    if (next == -1) {  /* Get first key */
        if (num_servers == 0) {  /* Db is empty */
            confd_data_reply_next_key(tctx, NULL, -1, -1);
            return CONFD_OK;
        }
        CONFD_SET_STR(&v, running_db[0].name);
        confd_data_reply_next_key(tctx, &v, 1, 1);
        return CONFD_OK;
    }
    if (next == num_servers) {  /* Last elem */
        confd_data_reply_next_key(tctx, NULL, -1, -1);
        return CONFD_OK;
    }
    CONFD_SET_STR(&v, running_db[next].name);
    confd_data_reply_next_key(tctx, &v, 1, next+1);
    return CONFD_OK;
}
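A get_elem() implementation for smp.yang could look roughly as follows. This is a sketch along the lines of the ARP example; the smp_name/smp_ip/smp_port tags are assumed to come from the header generated by confdc from smp.yang.

static int get_elem(struct confd_trans_ctx *tctx, confd_hkeypath_t *keypath)
{
    confd_value_t v;
    /* keypath is e.g. /servers/server{www}/ip, in reversed order:
       v[0][0] is the leaf tag and v[1][0] is the key */
    struct server *s = find_server(&keypath->v[1][0]);

    if (s == NULL) {
        confd_data_reply_not_found(tctx);
        return CONFD_OK;
    }
    switch (CONFD_GET_XMLTAG(&keypath->v[0][0])) {
    case smp_name:
        CONFD_SET_STR(&v, s->name);
        break;
    case smp_ip:
        CONFD_SET_IPV4(&v, s->ip);
        break;
    case smp_port:
        CONFD_SET_UINT16(&v, s->port);
        break;
    default:
        confd_data_reply_not_found(tctx);
        return CONFD_OK;
    }
    confd_data_reply_value(tctx, &v);
    return CONFD_OK;
}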
The create callback is easy. The keypath passed to the create() callback will have the new key (last in the string) as the first element in the array. Recall that keypaths are passed in reversed order. For example, when ConfD wants to create a new server entry named, say, "smtp", the keypath will look like /servers/server{smtp}.
The data model can optionally specify default values. In smp.yang we didn't use that feature. For example, the "port" leaf was specified as:
leaf port {
  type inet:port-number;
  mandatory true;
}
and not as
leaf port {
  type inet:port-number;
  default 0;
}
Our C code needs to be able to create list entries in the database without any of the actual leaf values given. The key leafs will be given, but none of the values of the other leafs. ConfD will then set all the missing values using the set_elem() callback. Our create() callback looks like:
Example 7.4. create() callback for smp.yang
static int create(struct confd_trans_ctx *tctx, confd_hkeypath_t *keypath)
{
    confd_value_t *key = &keypath->v[0][0];

    add_server((char *)CONFD_GET_BUFPTR(key));
    return CONFD_OK;
}
In a similar manner, the remove() callback deletes a server entry.
Example 7.5. remove() callback for smp.yang
static int doremove(struct confd_trans_ctx *tctx, confd_hkeypath_t *keypath)
{
    int i, j;
    confd_value_t *key = &keypath->v[0][0];

    for (i = 0; i < num_servers; i++) {
        if (confd_svcmp(running_db[i].name, key) == 0) {
            /* found the elem to remove, now shift the */
            /* remaining elems in the array one step   */
            for (j = i+1; j < num_servers; j++) {
                running_db[j-1] = running_db[j];
            }
            num_servers--;
            return CONFD_OK;
        }
    }
    return CONFD_OK;
}
Finally, here is the set_elem() callback which is responsible for setting a leaf value. The code is:
Example 7.6. set_elem() callback for smp.yang
static int set_elem(struct confd_trans_ctx *tctx, confd_hkeypath_t *keypath,
                    confd_value_t *newval)
{
    confd_value_t *tag = &(keypath->v[0][0]);
    struct server *s = find_server(&(keypath->v[1][0]));

    if (s == NULL) {
        confd_trans_seterr(tctx, "no such server found");
        return CONFD_ERR;
    }
    switch (CONFD_GET_XMLTAG(tag)) {
    case smp_ip:
        s->ip = CONFD_GET_IPV4(newval);
        break;
    case smp_port:
        /* inet:port-number is a uint16, so use the uint16 accessor */
        s->port = CONFD_GET_UINT16(newval);
        break;
    default:
        return CONFD_ERR;
    }
    return CONFD_OK;
}
Note that there is no switch clause for smp_name - ConfD will never change key values by invoking set_elem() for key leafs. Changing keys can only be done by a combination of remove() and create() invocations, followed by set_elem() invocations for the non-key leafs in the created list entry.
In this section we introduce and use the transaction callbacks.
An actual running version of this example can be found in the examples directory of a ConfD release under user_guide_examples/simple_trans.
An application is invoked in trans_lock() when a transaction is committed or when a transaction is validated (e.g. by doing validate in the CLI), and the transaction enters the VALIDATE state. When the application is invoked in the trans_lock() callback, the following is guaranteed:
A sequence of callbacks will be invoked without delays. ConfD has accumulated a number of write operations and will execute them in sequence, without delays.
No callbacks for any other transactions towards the same data store will be executed between the invocation of trans_lock() and the invocation of finish() (or trans_unlock()). Thus all transactions towards a given data store are serialized once they reach the VALIDATE state.
After validation, either trans_unlock() or write_start() is invoked. trans_unlock() is called when the transaction is only being validated, and write_start() is called when the validation was done as the first part of a commit and the validation succeeded.
If the underlying database is a real database with real support for transactions, it is a very good idea to start such a native transaction in the call to write_start(). If that is not the case, the libconfd.so library provides support which makes it possible to accumulate the write operations without actually writing them.
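For illustration, a write_start() callback for such a database could look roughly like the sketch below. db_begin_transaction() is a hypothetical placeholder for whatever API the real database provides; the handle is stashed in the t_opaque user-data pointer of the transaction context so that prepare(), commit() and abort() can find it later.

static int t_write_start(struct confd_trans_ctx *tctx)
{
    void *native_tid = db_begin_transaction();   /* hypothetical call */

    if (native_tid == NULL) {
        confd_trans_seterr(tctx, "failed to start native transaction");
        return CONFD_ERR;
    }
    /* remember the native transaction handle for the later callbacks */
    tctx->t_opaque = native_tid;
    return CONFD_OK;
}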
In this example we save the database to a file for persistence:
Example 7.7. save() utility function
static int save(char *filename)
{
    FILE *fp;
    int i;

    if ((fp = fopen(filename, "w")) == NULL)
        return CONFD_ERR;
    for (i = 0; i < num_servers; i++) {
        fprintf(fp, "%s %s %d\n",
                running_db[i].name,
                inet_ntoa(running_db[i].ip),
                running_db[i].port);
    }
    fclose(fp);
    return CONFD_OK;
}
We instantiate all the transaction callbacks and do the appropriate thing in each callback. Since the database is just a simple array, the variable running_db, we choose to let the library libconfd.so accumulate the individual write operations by returning CONFD_ACCUMULATE from the write callbacks set_elem(), create() and remove().
The data will be copied into data structures in the library. The purpose of doing this is that we do not want to explicitly write into our local data structures in the write routines - rather we wish to delay this and perform the actual write operations in the prepare() callback.
Example 7.8. write callbacks using accumulate
static int set_elem(struct confd_trans_ctx *tctx, confd_hkeypath_t *keypath,
                    confd_value_t *newval)
{
    return CONFD_ACCUMULATE;
}

static int create(struct confd_trans_ctx *tctx, confd_hkeypath_t *keypath)
{
    return CONFD_ACCUMULATE;
}

static int doremove(struct confd_trans_ctx *tctx, confd_hkeypath_t *keypath)
{
    return CONFD_ACCUMULATE;
}
We are thus not doing anything at all in the write callbacks, except returning the value CONFD_ACCUMULATE. Note that this will store a complete copy of the keypath, and also of the new value if the operation is set_elem().
All the operations will be copied and kept in a linked list in the transaction context (struct confd_trans_ctx). In the prepare() callback we will loop through all the operations and perform them.
Remember the reason for implementing the two-phase commit protocol. There may be multiple daemons connected to ConfD, and a series of write operations, i.e. a transaction, may span several daemons. ConfD ensures that e.g. a commit from the CLI is either written in all of the connected daemons or in none - thus ensuring a consistent database.
Recall the picture depicting the state transitions:
The most complicated callback is prepare():
Example 7.9. prepare() callback using the accumulated write ops
static int t_prepare(struct confd_trans_ctx *tctx)
{
    struct server *s;
    struct confd_tr_item *item = tctx->accumulated;

    while (item) {
        confd_hkeypath_t *keypath = item->hkp;
        confd_value_t *leaf = &(keypath->v[0][0]);

        switch (item->op) {
        case C_SET_ELEM:
            s = find_server(&(keypath->v[1][0]));
            if (s == NULL)
                break;
            switch (CONFD_GET_XMLTAG(leaf)) {
            case smp_ip:
                s->ip = CONFD_GET_IPV4(item->val);
                break;
            case smp_port:
                /* inet:port-number is a uint16 */
                s->port = CONFD_GET_UINT16(item->val);
                break;
            }
            break;
        case C_CREATE:
            add_server((char *)CONFD_GET_BUFPTR(leaf));
            break;
        case C_REMOVE:
            remove_server(leaf);
            break;
        default:
            return CONFD_ERR;
        }
        item = item->next;
    }
    return save("running.prep");
}
The above code loops through all the struct confd_tr_item structs accumulated by the library in the accumulated field of the transaction context.
The accumulated write structs are defined as:
enum confd_tr_op {
    C_SET_ELEM = 1,
    C_CREATE = 2,
    C_REMOVE = 3,
    C_SET_CASE = 4,
    C_SET_ATTR = 5,
    C_MOVE_AFTER = 6
};
struct confd_tr_item {
    char *callpoint;
    enum confd_tr_op op;
    confd_hkeypath_t *hkp;
    confd_value_t *val;
    confd_value_t *choice;     /* only for set_case */
    u_int32_t attr;            /* only for set_attr */
    struct confd_tr_item *next;
};
If we had a real native database with real transaction support, we wouldn't have used the accumulation feature of the library at all - rather we would have started a native transaction in the write_start() callback.
Our example database is just an array and a file; thus we use the accumulation feature of the library.
In the prepare() callback we finally save the database to a file called running.prep - thus preparing to commit the changes we have made.
The corresponding abort() and commit() callbacks are easy:
Example 7.10. commit() and abort()
static int t_commit(struct confd_trans_ctx *tctx)
{
    if (rename("running.prep", "running.DB") == 0)
        return CONFD_OK;
    else
        return CONFD_ERR;
}

static int t_abort(struct confd_trans_ctx *tctx)
{
    restore("running.DB");
    unlink("running.prep");
    return CONFD_OK;
}
The restore() function reads a file and initializes the database (our array) from that file:
Example 7.11. Code to restore our array from a file
static int restore(char *filename)
{
    char buf[BUFSIZ];
    FILE *fp;

    if ((fp = fopen(filename, "r")) == NULL)
        return CONFD_ERR;
    num_servers = 0;
    while (fgets(&buf[0], BUFSIZ, fp) != NULL) {
        char *name, *ip, *port;

        if ((name = strtok(buf, " \t\r\n")) != NULL &&
            ((ip = strtok(NULL, " \t\r\n")) != NULL) &&
            ((port = strtok(NULL, " \t\r\n")) != NULL)) {
            printf("Loaded %s\n", name);
            new_server(name, ip, port);
        }
    }
    fclose(fp);
    return CONFD_OK;
}
Writable operational data is indicated in the YANG model as config false combined with tailf:writable true. This is typically used when an SNMP MIB has data that models an operation, like "reboot". For interfaces other than SNMP, such an operation should instead be modeled as an rpc or action.
Writable operational data must be implemented by callback functions, just like external configuration data, as described in Section 7.7, “External configuration data with transactions”.
When a transaction is started for operational data, the dbname field in struct confd_trans_ctx is CONFD_OPERATIONAL.
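A sketch of an init() callback that tells writable-operational transactions apart from ordinary configuration transactions could look like this (workersock is assumed to be the worker socket from the daemon setup):

static int oper_init(struct confd_trans_ctx *tctx)
{
    if (tctx->dbname == CONFD_OPERATIONAL) {
        /* writes in this transaction target writable operational data */
    } else {
        /* ordinary configuration transaction, e.g. towards running */
    }
    confd_trans_set_fd(tctx, workersock);
    return CONFD_OK;
}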
The NETCONF protocol has as one of its major features the concept of candidate commit with a timeout. The manager manipulates the candidate configuration and finally commits the candidate. This means that the candidate configuration is copied into the running database and thus becomes active.
If the commit operation is accompanied by a timeout, the semantics are that if the application has not received a confirming commit before the timeout, the previous running configuration should be copied back into running. The idea here is that if a configuration is somehow bad, an automatic rollback will occur.
There are several different usage scenarios in which ConfD supports this feature.
By far the easiest case is when the database is kept in the ConfD built-in XML database, CDB. In that case, candidate commit is supported natively by ConfD.
The next case is when the candidate configuration is managed by ConfD but the running configuration is kept outside ConfD. This is described here. The application needs to register three checkpoint callbacks in the database callback struct confd_db_cbs by means of the API call confd_register_db_cb().
The final case is when both the running and the candidate configuration are kept entirely outside of ConfD. Remember the ConfD transactions that get executed. When a new transaction is started, one of the fields in the transaction context, the dbname field, indicates which database the transaction is started for. If ConfD owns the candidate, no transactions will ever be created towards the candidate. If the application owns both running and the candidate (as configured in confd.conf), then transactions may be directed towards either running or candidate.
In the case where the candidate is owned by the application, the application needs to register six candidate callbacks in the database callback struct confd_db_cbs by means of the API call confd_register_db_cb().
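A sketch of such a registration is shown below. The member names are taken from struct confd_db_cbs as documented in confd_lib_dp(3), as best recalled here; the cand_*() functions are hypothetical application-provided implementations. Verify the struct members against your ConfD release.

/* application-provided candidate implementations (hypothetical, not shown) */
extern int cand_commit(struct confd_db_ctx *dbx, int timeout);
extern int cand_confirming_commit(struct confd_db_ctx *dbx);
extern int cand_reset(struct confd_db_ctx *dbx);
extern int cand_chk_not_modified(struct confd_db_ctx *dbx);
extern int cand_rollback_running(struct confd_db_ctx *dbx);
extern int cand_validate(struct confd_db_ctx *dbx);

static struct confd_db_cbs dbcbs;

static void register_candidate_cbs(struct confd_daemon_ctx *dx)
{
    memset(&dbcbs, 0, sizeof(dbcbs));
    dbcbs.candidate_commit            = cand_commit;
    dbcbs.candidate_confirming_commit = cand_confirming_commit;
    dbcbs.candidate_reset             = cand_reset;
    dbcbs.candidate_chk_not_modified  = cand_chk_not_modified;
    dbcbs.candidate_rollback_running  = cand_rollback_running;
    dbcbs.candidate_validate          = cand_validate;
    confd_register_db_cb(dx, &dbcbs);
}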
This mode of operation only makes sense if the external database can truly support the candidate callbacks. If that is not the case, it is better to let ConfD manage the candidate.
In this section we provide an example where ConfD owns the candidate datastore. The application needs to register the following callbacks.
add_checkpoint_running() - This callback must create a checkpoint of the current running configuration and store it in non-volatile memory. When the system restarts, it is the responsibility of the external application to check if there is a checkpoint available, and use the checkpoint instead of running.
del_checkpoint_running() - This function must delete a checkpoint created by add_checkpoint_running(). It is called by ConfD when a confirming commit is received.
activate_checkpoint_running() - This function should roll back running to the checkpoint created by add_checkpoint_running(). It is called by ConfD when the timer expires or if the user session expires. There can be at most one checkpoint live at a time.
Using our previous save() and restore() functions, the implementation of the checkpoint callbacks becomes very simple.
Example 7.12. checkpoint db callbacks
static int add_checkpoint_running(struct confd_db_ctx *db)
{
    return save("running.checkpoint");
}

static int del_checkpoint_running(struct confd_db_ctx *db)
{
    unlink("running.checkpoint");
    return CONFD_OK;
}

static int activate_checkpoint_running(struct confd_db_ctx *db)
{
    return restore("running.checkpoint");
}
Two things remain to be done. First, we need to register the checkpoint callbacks. Second, we need to look for the existence of a saved checkpoint when we initialize our database and, if it exists, running should be initialized from the checkpoint instead. Thus:
/* global variable */
static struct confd_db_cbs dbcbs;

...

int main()
{
    ...
    if ((restore("running.checkpoint")) != CONFD_OK)
        restore("running.DB");

    dbcbs.add_checkpoint_running = add_checkpoint_running;
    dbcbs.del_checkpoint_running = del_checkpoint_running;
    dbcbs.activate_checkpoint_running = activate_checkpoint_running;

    /* register the callbacks */
    confd_register_db_cb(dctx, &dbcbs);
    confd_register_done(dctx);
If the underlying database is a real database we would install database checkpoints instead of copying entire files back and forth.
If we choose to implement the checkpoint callbacks as above, we must obviously also configure ConfD accordingly. The relevant part of the datastores section in confd.conf is:
<candidate>
  <enabled>true</enabled>
  <implementation>confd</implementation>
</candidate>
And from the NETCONF section:
<capabilities>
  <candidate>
    <enabled>true</enabled>
  </candidate>
  <confirmed-commit>
    <enabled>true</enabled>
  </confirmed-commit>
</capabilities>
Finally, if we implement the database outside ConfD we may optionally choose to implement the lock() and unlock() callbacks.
This is only interesting if there exist additional locking mechanisms towards the database - such as an external CLI which can lock the database, or if the external database owns the candidate.
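What such lock()/unlock() callbacks could look like is sketched below, assuming a hypothetical external_db_lock()/external_db_unlock() pair provided by the external database; the callback signatures follow struct confd_db_cbs as documented in confd_lib_dp(3).

static int s_lock(struct confd_db_ctx *dbx, enum confd_dbname dbname)
{
    if (external_db_lock(dbname) != 0) {          /* hypothetical call */
        confd_db_seterr(dbx, "database is locked by another client");
        return CONFD_ERR;
    }
    return CONFD_OK;
}

static int s_unlock(struct confd_db_ctx *dbx, enum confd_dbname dbname)
{
    external_db_unlock(dbname);                   /* hypothetical call */
    return CONFD_OK;
}

/* registered together with the other db callbacks:
   dbcbs.lock = s_lock; dbcbs.unlock = s_unlock; */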
In this section we discuss some of the requirements that an external database must fulfill in order for ConfD to work properly. The reasons for choosing an external database as opposed to CDB may vary between projects. Some projects already have a database, and the managed object code is already tightly coupled to that database. Other projects may feel that the underlying database must have characteristics that CDB doesn't have. It is certainly the case that CDB is not the best choice for, for example, distributed replication of large amounts of state data. CDB is not a check-pointing database for application state replication.
The first and most important requirement ConfD places on an external database is that it can execute transactions. The transaction manager inside ConfD will collect all data for a transaction and, once the data has been validated, will send the data as a series of write operations to the data provider. It is the responsibility of the database to execute this series of write operations atomically: either they all get written or none of them are. External databases that do not support transactions can of course still be used, but then there is a risk of ending up with a corrupt configuration. Corruption will occur if:
Another data provider rejects the transaction - in this case ConfD will tell all data providers to abort. If there are no other data providers than the external database - this cannot happen.
ConfD dies while sending the write operations to the data provider, or the network connectivity between ConfD and the data provider breaks. If this happens, the data provider never gets the whole transaction. One way of partially addressing this problem is to make use of the CONFD_ACCUMULATE feature whereby all writes are accumulated inside the library. That way the data provider can at least be certain that it has the entire transaction prior to starting its own write session.
Furthermore, CDB has two important features: schema upgrade and subscriptions. An external database must at least address this functionality.
Schema upgrade. When the YANG data model files are changed, CDB has the old schema - and its associated data - stored. On upgrade, CDB transforms all the old data so that it adheres to the new schema. If CDB is not used, the equivalent functionality must be performed by the external database.
Subscriptions - when the configuration is changed - the applications, the consumers of configuration data, must somehow be notified of the configuration changes. If CDB is not used, this is now the task of the external database.
Finally, if an external database is used, we must provide a mapping in the data provider code from ConfD keypaths and values to entries in the external database. For example, if we use a simple key/value database it is possible to write general code that works for all possible keypaths. The key is a confd_hkeypath_t and the value is obviously a confd_value_t. The only problem is how to handle create() and remove() operations for a key/value database. In the case of a delete operation, all children must also be deleted. It is easy to find the children, since the schema is loaded in a data provider (through confd_load_schemas()); a key/value data provider would then have to follow the schema and delete all children.
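As a rough sketch of that idea, the code below looks up the schema node for a keypath with confd_find_cs_node() and walks its children; kv_delete_prefix() is a hypothetical operation in the external key/value store, and a real provider would recurse over the child keypaths rather than merely printing them.

#include <stdio.h>
#include <confd_lib.h>

static void delete_subtree(confd_hkeypath_t *kp)
{
    /* look up the schema node corresponding to the keypath */
    struct confd_cs_node *node = confd_find_cs_node(kp, kp->len);
    struct confd_cs_node *child;

    if (node == NULL)
        return;

    kv_delete_prefix(kp);                         /* hypothetical call */

    /* the schema (loaded via confd_load_schemas()) tells us which
       child nodes may exist below the deleted node */
    for (child = node->children; child != NULL; child = child->next) {
        printf("would also delete child node %s\n",
               confd_hash2str(child->tag));
    }
}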