This chapter describes the northbound NETCONF implementation in ConfD. As of this writing, the server supports the following specifications:
RFC 4741 - NETCONF Configuration Protocol
RFC 4742 - Using the NETCONF Configuration Protocol over Secure Shell (SSH)
RFC 5277 - NETCONF Event Notifications
RFC 5717 - Partial Lock Remote Procedure Call (RPC) for NETCONF
RFC 6020 - YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF)
RFC 6021 - Common YANG Data Types
RFC 6022 - YANG Module for NETCONF Monitoring
RFC 6241 - Network Configuration Protocol (NETCONF)
RFC 6242 - Using the NETCONF Protocol over Secure Shell (SSH)
RFC 6243 - With-defaults capability for NETCONF
RFC 6470 - NETCONF Base Notifications
RFC 6536 - NETCONF Access Control Model
RFC 6991 - Common YANG Data Types
RFC 7895 - YANG Module Library
RFC 7950 - The YANG 1.1 Data Modeling Language
For the <delete-config> operation specified in RFC 4741 / RFC 6241, only <url> with scheme "file" is supported for the <target> parameter - i.e. no data stores can be deleted. The concept of deleting a data store is not well defined, and at odds with the transaction-based configuration management of ConfD. To delete the entire contents of a data store, with full transactional support, a <copy-config> with an empty <config/> element for the <source> parameter can be used.
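For illustration, a minimal sketch of such a request is shown below, kept as a string constant in a small Python snippet so it can be reused from a test script (the message-id is arbitrary; any NETCONF client can send the equivalent XML):

# Sketch: clear the entire <running> datastore with full transactional
# support, using <copy-config> with an empty inline <config/> as source.
CLEAR_RUNNING_RPC = """\
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <copy-config>
    <target><running/></target>
    <source><config/></source>
  </copy-config>
</rpc>"""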
The ConfD NETCONF northbound API can be used by arbitrary NETCONF clients. A simple Python-based NETCONF client called netconf-console is shipped as source code in the distribution. See Section 15.8, “Using netconf-console” for details. Other NETCONF clients will work too, as long as they adhere to the NETCONF protocol. If you need a Java client, the open source client JNC can be used.
When integrating ConfD into larger OSS/NMS environments, the NETCONF API is a good choice of integration point.
The NETCONF server in ConfD supports all capabilities in both NETCONF 1.0 (RFC 4741) and NETCONF 1.1 (RFC 6241).
:writable-running
This capability is enabled by default. If the candidate
is used, this capability should be disabled in confd.conf(5). Additionally,
/confdConfig/datastores/running/access
should
be set to writable-through-candidate.
:candidate
The NETCONF server uses the candidate provided by the ConfD backplane. This can either be implemented in an external database, or using the built-in candidate support.
This capability is enabled by default. If the candidate is not used, this capability should be disabled in confd.conf(5).
:confirmed-commit
If the running
data store is
implemented as an external database, it has to support the
checkpoint functions (see Chapter 7, The external database API). If it doesn't support
checkpoints, this capability must be disabled. The
built-in CDB database supports checkpoints, and can thus
be used with this capability.
This capability is enabled by default. If the candidate is not used, this capability should be disabled in confd.conf(5).
ConfD supports both version 1.0 and 1.1 of this capability.
:rollback-on-error
This capability allows the client to set the <error-option> parameter to rollback-on-error. The other permitted values are stop-on-error (default) and continue-on-error. Note that the meaning of the word "error" in this context is not defined in the specification. Instead, the meaning of this word must be defined by the data model. Also note that if stop-on-error or continue-on-error is triggered by the server, it means that some parts of the edit operation succeeded, and some parts didn't. The error partial-operation must be returned in this case. If some other error occurs (i.e. an error not covered by the meaning of "error" above), the server generates an appropriate error message, and the data store is unaffected by the operation.
The ConfD server never allows partial configuration changes, since that might result in inconsistent configurations, and recovery from such a state can be very difficult for a client. This means that regardless of the value of the <error-option> parameter, ConfD will always behave as if it had the value rollback-on-error. So in ConfD, the meaning of the word "error" in stop-on-error and continue-on-error is something which can never happen.
This capability is enabled by default. It can be disabled in confd.conf(5), but this doesn't affect the server behavior, other than that the capability is not advertised.
:validate
This capability is enabled by default. It can be disabled in confd.conf(5). The only reason for disabling this capability would be if CDB is not used, and validation constraints are not specified in the YANG data models, and the underlying database does not support any form of validation.
ConfD supports both version 1.0 and 1.1 of this capability.
:startup
This capability is disabled by default. Enable this if
/confdConfig/datastores/startup
is enabled.
:url
The URL schemes supported are file, ftp, and sftp (SSH File Transfer Protocol). There is no standard URL syntax for the sftp scheme, but ConfD supports the syntax used by curl:
sftp://<user>:<password>@<host>/<path>
Note that the user name and password must be given for sftp URLs.
This capability is disabled by default, but can be enabled in confd.conf(5).
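As an illustration, a <copy-config> request that saves the running configuration to an sftp URL using the curl-style syntax above could look as follows (a sketch only; the user name, password, host and path are placeholders):

# Sketch: back up <running> over SFTP via the :url capability.
BACKUP_RUNNING_RPC = """\
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <copy-config>
    <target>
      <url>sftp://admin:secret@backup-host/cfg/running.xml</url>
    </target>
    <source><running/></source>
  </copy-config>
</rpc>"""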
:xpath
This capability is enabled by default, but can be disabled in confd.conf(5).
The NETCONF server supports XPath according to the W3C XPath 1.0 specification (http://www.w3.org/TR/xpath), except for the limitations listed below. There are several reasons for not supporting full conventional XPath or for diverging from it, including the following:
The operation is performed on an XML database, not an XML document.
The implementation context does not support the operation.
Immaturity of IETF specifications. This refers to the result returned for some queries.
An XPath expression evaluation may terminate without matches or with an error (returned as a NETCONF error). Upon one or more successful matches, the XPath output is returned as an XML tree summarizing the matched database information, similarly to a conventional NETCONF subtree filter.
The following XPath features are not available:
Variables are not supported, since the evaluation context binds no variables.
Some location step axes are not supported: preceding, following, preceding-sibling, following-sibling.
Some node tests are not supported: comment(), processing-instruction(). Note that these node types are not stored in the database.
The XPath root node is not available. Instead, evaluation begins from each exported namespace. This primarily affects the parent and ancestor axes.
XPath built-ins: id() is not supported, since the database does not store unique IDs.
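As an illustration of this capability, a <get-config> with an XPath filter might look as follows (a sketch only; the select expression is just an example, comparable to the netconf-console -x example shown later in this chapter):

# Sketch: retrieve only the NACM groups using an XPath filter.
GET_NACM_GROUPS_RPC = """\
<rpc message-id="103" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source><running/></source>
    <filter type="xpath"
            xmlns:nacm="urn:ietf:params:xml:ns:yang:ietf-netconf-acm"
            select="/nacm:nacm/nacm:groups"/>
  </get-config>
</rpc>"""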
The following optional standard capabilities are also supported:
:notification
ConfD implements the urn:ietf:params:netconf:capability:notification:1.0
capability, including support for the optional replay
feature.
This capability is disabled by default, but can be enabled in confd.conf(5).
See Section 15.7, “Notification Capability” for details.
:interleave
ConfD implements the urn:ietf:params:netconf:capability:interleave:1.0
capability, which allows the client to send RPCs while
a notification subscription is active.
This capability is disabled by default, but can be enabled in confd.conf(5).
:partial-lock
ConfD implements the urn:ietf:params:netconf:capability:partial-lock:1.0
capability, which allows the client to lock parts of the running
data store.
This capability is disabled by default, but can be enabled in confd.conf(5).
:with-defaults
ConfD implements the urn:ietf:params:netconf:capability:with-defaults:1.0
capability, which is used by the server to inform the
client how default values are handled by the server, and
by the client to control whether default values should be included in replies or not.
This capability is enabled by default, but can be disabled in confd.conf(5).
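For illustration, a <get-config> that asks the server to include all default values in the reply, using the <with-defaults> parameter defined in RFC 6243, could look as follows (a sketch; the requested mode must of course be supported by the server):

# Sketch: ask for default values to be reported explicitly (RFC 6243).
GET_CONFIG_REPORT_ALL_RPC = """\
<rpc message-id="104" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source><running/></source>
    <with-defaults
        xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-with-defaults">report-all</with-defaults>
  </get-config>
</rpc>"""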
:yang-library
ConfD implements the urn:ietf:params:netconf:capability:yang-library:1.0
capability, which informs the client that the server implements the YANG module library (RFC 7895), and reports the current module-set-id.
This capability is required by the YANG 1.1 specification RFC 7950, and cannot be disabled.
In addition to the standard capabilities ConfD also includes the
following optional, non-standard capabilities. They must be
explicitly enabled in confd.conf
(see confd.conf(5)) to be used.
actions
See Section 15.9, “Actions Capability” for details.
This capability should be enabled if actions are defined in the data model.
transactions
See Section 15.10, “Transactions Capability” for details.
This capability should be enabled if ConfD runs as a subagent.
proxy forwarding
See Section 15.11, “Proxy Forwarding Capability” for details. This capability should be enabled if ConfD runs as a proxy host.
inactive
See Section 15.12, “Inactive Capability” for details.
This capability should be enabled if ConfD is configured
to use attributes (see
/confdConfig/enableAttributes
in confd.conf(5)).
The server reports each data model namespace it has loaded as separate capabilities, according to the YANG specification.
The user can configure the server to make it report additional capability URIs.
All enabled NETCONF capabilities are advertised in the hello message that the server sends to the client.
A YANG module is supported by the NETCONF server if its fxs file is found in ConfD's loadPath, and if the fxs file is exported to NETCONF.
The following YANG modules are built-in, which means that their fxs files must not be present in the loadPath:
ietf-netconf
ietf-netconf-with-defaults
ietf-yang-library
ietf-yang-types
ietf-inet-types
All built-in modules except
ietf-netconf-with-defaults
are always
supported by the server. Support for
ietf-netconf-with-defaults
can be
controlled by a setting in confd.conf
.
All YANG version 1 modules supported by the server are advertised in the hello message, according to the rules defined in RFC 6020.
All YANG version 1 and version 1.1 modules supported by the server are advertised in the module list defined in ietf-yang-library.
If a YANG module (any version) is supported by the server, and its .yang or .yin file is found in the fxs file or in the loadPath, then the module is also advertised in the schema list defined in ietf-netconf-monitoring, made available for download with the RPC operation get-schema, and, if RESTCONF is enabled, also advertised in the schema leaf in ietf-yang-library. See Section 15.6, “Monitoring of the NETCONF Server”.
The NETCONF server natively supports the mandatory SSH transport, i.e., SSH is supported without the need for an external SSH daemon (such as sshd). It also supports integration with OpenSSH.
ConfD is delivered with a program
netconf-subsys which is an OpenSSH
subsystem program. It is invoked by the
OpenSSH daemon after successful authentication. It functions
as a relay between the ssh daemon and ConfD; it reads data
from the ssh
daemon from standard input, and writes the data to ConfD
over a loopback socket, and vice
versa. This program is delivered as source code in $CONFD_DIR/src/confd/netconf/netconf-subsys.c. It
can be modified to fit the needs of the application. For
example, it could be modified to read the group names for a
user from an external LDAP server.
When using OpenSSH, the users are authenticated by OpenSSH, i.e. the user names are not stored in ConfD. To use OpenSSH, compile the netconf-subsys program, and put the executable in e.g. /usr/local/bin. Then add the following line to the ssh daemon's config file, sshd_config:
Subsystem netconf /usr/local/bin/netconf-subsys
The connection from netconf-subsys to ConfD can be arranged in one of two different ways:
Make sure ConfD is configured to listen to TCP traffic on localhost, port 2023, and disable SSH in confd.conf (see confd.conf(5)). (Re)start sshd and ConfD. Or:
Compile netconf-subsys to use a connection to the IPC port instead of the NETCONF TCP transport (see the netconf-subsys.c source for details), and disable both TCP and SSH in confd.conf. (Re)start sshd and ConfD.
This method may be preferable, since it makes it possible to use the IPC Access Check (see Section 28.6.2, “Restricting access to the IPC port” ) to restrict the unauthenticated access to ConfD that is needed by netconf-subsys.
By default the netconf-subsys program sends the names of the UNIX groups the authenticated user belongs to. To test this, make sure that ConfD is configured to give access to the group(s) the user belongs to. The easiest way to test is to give access to all groups.
The server can also be configured to accept plain TCP traffic. This can be useful during development, for debugging purposes, but it can also be used to plug in any other transport protocol. The way this works is that some other daemon terminates the transport and authenticates the user. Then it connects to the NETCONF server over TCP (preferably over the loopback interface for security reasons) and relays the XML traffic to NETCONF.
In this case, the transport daemon will have to authenticate the user, and then tell the NETCONF server about it. This should be done as a header sent over the TCP socket before any other bytes are sent. The header looks like this:
[username;source;proto;uid;gid;supgids;homedir;group-list;]\n
username is the name of the authenticated user. source is the textual representation of the IPv4 or IPv6 address and port from which the user connected, with address and port separated by '/' (e.g. "10.0.0.1/1234"). proto is the name of the transport protocol the client used (e.g. "beep" or "ssh"). uid, gid, supgids and homedir are the UNIX user id, group id, supplementary group ids and home directory for this user. These four parameters are only used if the user invokes a NETCONF RPC which is implemented with an external program (see Section 15.5, “Extending the NETCONF Server”). group-list is a comma-separated list of group names for the user. This list should only be sent if the transport has the capability to determine which groups a user belongs to. If not, an empty list should be sent. In this case, the normal AAA mechanisms are used to determine group membership.
All NETCONF RPCs sent over this socket must use the framing protocol used by NETCONF over SSH.
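A minimal sketch of such a transport daemon in Python is shown below. It assumes that the NETCONF TCP transport is enabled on 127.0.0.1 port 2023 and that only NETCONF 1.0 end-of-message framing is used; the user data in the header is an example only.

import socket

EOM = "]]>]]>"

# [username;source;proto;uid;gid;supgids;homedir;group-list;]\n
# All values below are examples.
HEADER = "[admin;127.0.0.1/12345;tcp;1000;1000;100;/home/admin;admin;]\n"

HELLO = """<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>"""

GET = """<rpc message-id="1" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get/>
</rpc>"""

def recv_msg(sock):
    # Read one end-of-message framed NETCONF message (sketch only; data
    # received after the delimiter is simply discarded here).
    data = b""
    while EOM.encode() not in data:
        data += sock.recv(4096)
    return data.split(EOM.encode())[0].decode()

sock = socket.create_connection(("127.0.0.1", 2023))
sock.sendall(HEADER.encode())            # authentication header first
sock.sendall((HELLO + EOM).encode())     # then our framed <hello>
print(recv_msg(sock))                    # the server's <hello>
sock.sendall((GET + EOM).encode())       # any number of RPCs may follow
print(recv_msg(sock))                    # the <rpc-reply>
sock.close()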
The TCP socket is also used if we want to use a standard SSH daemon such as sshd instead of the built-in SSH implementation. Then we would configure sshd to invoke a special program for the "netconf" subsystem. This special program would connect to the TCP socket as described above. See more below.
ConfD itself is configured through a configuration file called confd.conf. In that file the following items are related to the NETCONF server. For a complete description of these parameters, please see the confd.conf(5) man page.
/confdConfig/logs/netconfLog
This log can be enabled in order to troubleshoot the netconf sessions.
/confdConfig/logs/netconfTraceLog
When this log is enabled, all NETCONF traffic to and from ConfD is stored in a file. This can be useful in order to understand and troubleshoot the NETCONF protocol interactions.
/confdConfig/aaa/sshServerKeyDir
This is where the built-in SSH server reads its ssh keys.
/confdConfig/aaa/pam/service
This is the name of the PAM service to be used by the built-in SSH server. Used only if PAM is enabled (which means an SSH user can log in with username and password).
/confdConfig/netconf/enabled
When set to "true", the NETCONF server is started.
/confdConfig/netconf/transport/ssh
Settings for the built-in SSH server, such as listen ip address and port.
/confdConfig/netconf/transport/tcp
Settings for the plain-text TCP transport, such as listen ip address and port.
/confdConfig/netconf/capabilities
Under this parameter, we can control which capabilities are reported by the server.
/confdConfig/netconf/capabilities/capability
This parameter can be given multiple times. It specifies a URI string which the NETCONF server will report as a capability in the hello message sent to the client.
When ConfD processes <get>, <get-config>, and <copy-config> requests, the resulting data set can be very large. To avoid buffering huge amounts of data, ConfD streams the reply to the client as it traverses the data tree and calls data provider functions to retrieve the data.
If a data provider fails to return the data it is supposed to
return, ConfD can take one of two actions.
Either it simply closes the NETCONF transport (default), or it
can reply with an inline rpc error and
continue to process the next data element. This behavior can
be controlled with the
/confdConfig/netconf/rpcErrors
configuration
parameter (see confd.conf(5)).
An inline error is always generated as a child element to the parent of the faulty element. For example, if an error occurs when retrieving the leaf element "mac-address" of an "interface" the error might be:
<interface>
  <name>atm1</name>
  <rpc-error xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
    <error-type>application</error-type>
    <error-tag>operation-failed</error-tag>
    <error-severity>error</error-severity>
    <error-message xml:lang="en">Failed to talk to hardware</error-message>
    <error-info>
      <bad-element>mac-address</bad-element>
    </error-info>
  </rpc-error>
  ...
</interface>
If a get_next
call fails in the
processing of a list, a reply might look
like this:
<interface>
  <!-- successfully retrieved list entry -->
  <name>eth0</name>
  <mtu>1500</mtu>
  <!-- more leafs here -->
</interface>
<rpc-error xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <error-type>application</error-type>
  <error-tag>operation-failed</error-tag>
  <error-severity>error</error-severity>
  <error-message xml:lang="en">Failed to talk to hardware</error-message>
  <error-info>
    <bad-element>interface</bad-element>
  </error-info>
</rpc-error>
NETCONF is an extensible protocol in the sense that new RPC
operations can be defined separately from the standard. The
NETCONF server in ConfD supports this through a simple API. New
operations are typically identified with a new capability. When
a new capability is implemented in this way, the name of the
capability should be added to the list of capabilities that the
NETCONF server sends in its initial <hello>
message. This list is defined in
confd.conf
(see confd.conf(5)).
New RPCs are defined in YANG modules.
An RPC can be implemented in three different ways:
As an executable program which is started by ConfD for each new RPC. The XML is passed as-is from ConfD to the program, and the resulting XML is generated by the program.
As an executable program which is started by ConfD for each new RPC. The XML is parsed by ConfD, and passed (in a certain format) on the command line to the program. ConfD generates an XML reply based on the result from the program.
As a C callback function. The application registers the callback with ConfD, and ConfD invokes the callback function when the RPC operation is received. ConfD parses the XML and passes it in a C data structure to the callback. ConfD generates an XML reply from the return value from the callback.
In this case, the RPC is implemented as an ordinary
executable program, which communicates with ConfD over
stdin/stdout. When ConfD invokes the program, it will pass
the entire XML operation on stdin. The program is
responsible for parsing the operation data and placing its
reply on stdout, and then terminate with exit status zero.
ConfD wraps this reply in a <rpc-reply>
element. Note that ConfD does not interpret the reply XML
sent by the program; it merely sends the data as-is to the
NETCONF client. Thus, it is the responsibility of the
program to produce a valid NETCONF XML reply. Note that an rpc reply MUST contain one of <ok/>, <data> or <rpc-error>.
A program can also be run in batch mode, which can be used to send asynchronous data to the client. In this case, the program does not exit after having replied to the original RPC. Instead it signals that the reply has been sent by sending a NUL byte to ConfD. ConfD will enter its main loop and listen for new requests from the client and data from the external programs. When data is received from one source, this source is handled, while the others are (potentially) blocked. The asynchronous data sent by the external program must be a complete self-contained XML chunk, followed by a single NUL byte. The program can exit at any time; the session towards the client is not terminated just because the program exits.
The maximum number of concurrently running batch processes can be set in confd.conf (see confd.conf(5)) using the parameter /confdConfig/netconf/maxBatchProcesses. The default is no limit.
Here's an example of an rpc operation defined in this way:
module math-rpc {
  namespace "http://example.com/math/1.0";
  prefix math;

  import tailf-common {
    prefix tailf;
  }

  rpc math {
    tailf:exec "/usr/bin/math" {
      tailf:raw-xml;
    }
  }
}
All these examples are available under
netconf_extensions/simple_rpc
in the examples distribution.
Now suppose that the following rpc is received by the NETCONF server:
Example 15.1. Example math rpc
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1"> <math xmlns="http://example.com/math/1.0"> <add> <operand>2</operand> <operand>3</operand> </add> </math> </rpc>
ConfD will invoke /usr/bin/math
and pass:
<math xmlns="http://example.com/math/1.0"> <add> <operand>2</operand> <operand>3</operand> </add> </math>
on stdin. The program will print the rpc-reply to stdout, and ConfD relays this data to the client.
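For illustration, a version of the /usr/bin/math program could be written in Python roughly as follows (a sketch only; it assumes ConfD closes stdin after writing the operation, handles only the <add> case, and omits error handling, so the shipped example may well differ):

#!/usr/bin/env python3
# Sketch of a tailf:raw-xml exec program: read the <math> operation from
# stdin, add the operands, and write a complete reply body to stdout.
# ConfD wraps whatever we print in an <rpc-reply> element.
import sys
import xml.etree.ElementTree as ET

NS = "http://example.com/math/1.0"

op = ET.fromstring(sys.stdin.read())
operands = [int(e.text) for e in op.findall(".//{%s}operand" % NS)]

sys.stdout.write(
    '<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">'
    '<result xmlns="%s">%d</result>'
    '</data>' % (NS, sum(operands)))
sys.exit(0)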
In this case, the RPC is implemented as an ordinary executable
program, with all XML parameters converted by ConfD into
command-line arguments to the program. If the program
terminates normally without producing any output on stdout,
ConfD replies with an <ok/>
rpc-reply. If the
program terminates normally and also generates data on stdout,
ConfD interprets this data and passes it with
<data>
tags. If the program terminates
abnormally without producing any data, a generic
operation-failed
error is returned. Finally, if the
program terminates abnormally and also generates data on
stdout, ConfD interprets this data as an rpc-error, and sends
the resulting XML to the client.
Here's an example of the same rpc operation as above defined this way:
module math-rpc {
  namespace "http://example.com/math/1.0";
  prefix math;

  import tailf-common {
    prefix tailf;
  }

  rpc math {
    tailf:exec "/usr/bin/math";
    input {
      choice op {
        container add {
          leaf-list operand {
            type int32;
            min-elements 2;
            max-elements 2;
          }
        }
        container sub {
          leaf-list operand {
            type int32;
            min-elements 2;
            max-elements 2;
          }
        }
      }
    }
    output {
      leaf result {
        type int32;
      }
    }
  }
}
Now suppose that the same RPC request as in Example 15.1 above is received. ConfD parses the XML and invokes the command as:
/usr/bin/math add __BEGIN operand 2 operand 3 add __END
In general, the XML is flattened, and each XML element generates two strings on the command line. If a container is received, the strings "elem-name" "__BEGIN" are generated. When the corresponding close element is received, "elem-name" "__END" are generated. An element with a value will generate "elem-name" "value". An empty element with no subelements will generate "elem-name" "__LEAF".
Next, the math program replies by printing on stdout:
result 5
The same translation rules apply to the result, and ConfD thus sends the following reply to the client:
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <data>
    <result xmlns="http://example.com/math/1.0">5</result>
  </data>
</rpc-reply>
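A sketch of what /usr/bin/math could look like in Python for this command-line style is shown below (the real program can be any executable; only the add and sub cases are handled and error handling is omitted):

#!/usr/bin/env python3
# Sketch of the command-line style exec program. ConfD passes the flattened
# XML as arguments, e.g.:  add __BEGIN operand 2 operand 3 add __END
# and the flattened reply ("result <value>") is printed on stdout.
import sys

args = sys.argv[1:]
op = args[0]                                   # "add" or "sub"
values = [int(args[i + 1])                     # every "operand <value>" pair
          for i, a in enumerate(args) if a == "operand"]

result = values[0] + values[1] if op == "add" else values[0] - values[1]
print("result %d" % result)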
In this case, the RPC is implemented as a callback function in C, with all XML parameters converted by ConfD into a C data structure.
Here's an example of the same rpc operation as above defined this way:
module math-rpc {
  namespace "http://example.com/math/1.0";
  prefix math;

  import tailf-common {
    prefix tailf;
  }

  rpc math {
    tailf:actionpoint "math";
    input {
      choice op {
        container add {
          leaf-list operand {
            type int32;
            min-elements 2;
            max-elements 2;
          }
        }
        container sub {
          leaf-list operand {
            type int32;
            min-elements 2;
            max-elements 2;
          }
        }
      }
    }
    output {
      leaf result {
        type int32;
      }
    }
  }
}
The code that implements this looks like this:
static void register_math(struct confd_daemon_ctx *dctx)
{
    struct confd_action_cbs acb;

    memset(&acb, 0, sizeof(acb));
    strcpy(acb.actionpoint, "math");
    acb.init = init_action;   /* this function is not shown here */
    acb.action = do_math;
    if (confd_register_action_cbs(dctx, &acb) != CONFD_OK)
        confd_fatal("Couldn't register action callbacks\n");
    if (confd_register_done(dctx) != CONFD_OK)
        confd_fatal("Failed to complete registration\n");
}

static int do_math(struct confd_user_info *uinfo,
                   struct xml_tag *name,
                   confd_hkeypath_t *kp,
                   confd_tag_value_t *params,
                   int nparams)
{
    confd_tag_value_t reply[1];
    int op1, op2, result;

    /* we know that we get exactly 4 parameters:
       add | sub BEGIN, operand 1, operand 2, add | sub END */
    op1 = CONFD_GET_INT32(CONFD_GET_TAG_VALUE(&params[1]));
    op2 = CONFD_GET_INT32(CONFD_GET_TAG_VALUE(&params[2]));
    switch (CONFD_GET_TAG_TAG(&params[0])) {
    case math_add:
        result = op1 + op2;
        break;
    case math_sub:
        result = op1 - op2;
        break;
    }
    CONFD_SET_TAG_INT32(&reply[0], math_result, result);
    confd_action_reply_values(uinfo, reply, 1);
    return CONFD_OK;
}
RFC 6022 - YANG Module for NETCONF Monitoring defines a YANG module, ietf-netconf-monitoring, for monitoring of the NETCONF server. It contains statistics objects such as number of RPCs received, status objects such as user sessions, and an operation to retrieve data models from the NETCONF server.
In order to use this data model with ConfD, the fxs file (ietf-netconf-monitoring.fxs) must be present in ConfD's loadPath. This fxs file is present in a development installation of ConfD.
This data model defines a new RPC operation, get-schema, which is used to retrieve YANG modules from the NETCONF server. ConfD will report the YANG modules for all fxs files that are reported as capabilities, and for which the corresponding YANG or YIN file is stored in the fxs file or found in the loadPath. If a file is found in the loadPath, it has priority over a file stored in the fxs file. Note that by default, the module and its submodules are stored in the fxs file by the compiler.
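For illustration, a <get-schema> request that downloads a module over NETCONF could look as follows (a sketch; the module name is an example, and the <version> and <format> parameters are optional):

# Sketch: fetch the YANG source of a module via ietf-netconf-monitoring.
GET_SCHEMA_RPC = """\
<rpc message-id="105" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-schema xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring">
    <identifier>ietf-netconf-monitoring</identifier>
    <format>yang</format>
  </get-schema>
</rpc>"""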
If the YANG (or YIN files) are copied into the loadPath, they can be stored as is or compressed with gzip. The filename extension MUST be ".yang", ".yin", ".yang.gz", or ".yin.gz".
Also available is a Tail-f specific data model, tailf-netconf-monitoring, which augments ietf-netconf-monitoring with additional data about files available for usage with the <copy-config> command with a file <url> source or target. /confdConfig/netconf/capabilities/url/enabled and /confdConfig/netconf/capabilities/url/file/enabled must both be set to true. If rollbacks are enabled, those files are listed as well, and they can be loaded using <copy-config>.
This data model also adds data about which notification streams are present in the system, and data about sessions that subscribe to the streams.
In order to use this data model with ConfD, the fxs file (tailf-netconf-monitoring.fxs) must be present in ConfD's loadPath. This fxs file is present in a development installation of ConfD.
These fxs files are available in the $CONFD_DIR/etc/confd directory, and the source for them is available in the $CONFD_DIR/src/confd/yang directory in the distribution. The Makefile in the latter directory can be modified as necessary, for example to compile the fxs files with a --export parameter to confdc.
This section describes how NETCONF notifications are implemented within ConfD, and how applications generate these events.
Central to NETCONF notifications is the concept of a stream. The stream serves two purposes. First, it works as a high-level filtering mechanism for the client. For example, if the client subscribes to notifications on the security stream, it can expect to get security related notifications only. Second, each stream may have its own log mechanism. For example, by keeping all debug notifications in a debug stream, they can be logged separately from the security stream.
ConfD has built-in support for the well-known stream NETCONF, defined in RFC 5277. ConfD supports the notifications defined in RFC 6470 - NETCONF Base Notifications on this stream. If the application needs to send any additional notifications on this stream, it can do so.
It is up to the application to define which additional streams it supports. In ConfD, this is done in confd.conf (see confd.conf(5)). Each stream must be listed, along with whether it supports replay or not. An example which defines two streams, security and debug:
<notifications>
  <eventStreams>
    <stream>
      <name>security</name>
      <description>Security related notifications</description>
      <replaySupport>true</replaySupport>
      <builtinReplayStore>
        <enabled>true</enabled>
        <dir>/var/log</dir>
        <maxSize>S10M</maxSize>
        <maxFiles>50</maxFiles>
      </builtinReplayStore>
    </stream>
    <stream>
      <name>debug</name>
      <description>Debug notifications</description>
      <replaySupport>true</replaySupport>
    </stream>
  </eventStreams>
</notifications>
The well-known stream NETCONF
does not have
to be listed, but if it isn't listed, it will not support
replay.
ConfD has built-in support for logging of notifications, i.e., if replay support has been enabled for a stream, ConfD automatically stores all notifications on disk, ready to be replayed should a NETCONF client ask for logged notifications. In the confd.conf fragment above, the security stream has been set up to use the built-in notification log/replay store. The replay store uses a set of wrapping log files on disk (of a certain number and size) to store the security stream notifications.
The reason for using a wrap log is to improve replay performance whenever a NETCONF client asks for notifications in a certain time range. Any problems with log files not being properly closed due to hard power failures etc. is also kept to a minimum, i.e., automatically taken care of by ConfD.
As an alternative to the builtin notification replay store the application can roll its own. This is described in the next sub-section.
If a stream supports replay, the logging and replay functionality can alternatively be implemented by the application. In order to do this, the application must register a set of callback functions with ConfD using the function confd_register_notification_stream(). The callbacks are get_log_times() and replay(). The first one is called by ConfD in order to find the earliest event time available in the log. The second one is invoked whenever a NETCONF client asks for a replay subscription. For full details on the notification API, please see the confd_lib_dp(3) manual page.
The following example is available in full source code form in
the examples directory. A single stream
interface
is used, and it supports replay.
/* The notification context (filled in by ConfD) for the live feed */
static struct confd_notification_ctx *live_ctx;
struct confd_notification_stream_cbs ncb;
memset(&ncb, 0, sizeof(ncb));
ncb.fd = workersock;
ncb.get_log_times = log_times;
ncb.replay = start_replay;
strcpy(ncb.streamname, "interface");
ncb.cb_opaque = NULL;
if (confd_register_notification_stream(dctx, &ncb, &live_ctx) != CONFD_OK) {
confd_fatal("Couldn't register stream %s\n", ncb.streamname);
}
if (confd_register_done(dctx) != CONFD_OK) {
confd_fatal("Failed to complete registration\n");
}
In this simple example, we keep the replay log in memory, in an array:
struct notif {
    struct confd_datetime eventTime;
    confd_tag_value_t *vals;
    int nvals;
};

/* Our replay buffer is kept in memory in this example. It's a circular
 * buffer of struct notif. */
#define MAX_BUFFERED_NOTIFS 4
static struct notif replay_buffer[MAX_BUFFERED_NOTIFS];
static unsigned int first_replay_idx = 0;
static unsigned int next_replay_idx = 0;
static struct confd_datetime replay_creation;
static int replay_has_aged_out = 0;
static struct confd_datetime replay_aged_time;
The get_log_times() callback simply returns the time of the first notification in the log:
static int log_times(struct confd_notification_ctx *nctx)
{
    struct confd_datetime *aged;

    if (replay_has_aged_out)
        aged = &replay_aged_time;
    else
        aged = NULL;
    return confd_notification_reply_log_times(nctx, &replay_creation, aged);
}
When a client asks for a replay subscription, ConfD invokes the replay() callback. The actual replay notifications must not be sent from the callback. In this example, the callback allocates a replay structure, and marks it as being active. The main loop will check for any active replays, and do the sending there.
#define MAX_REPLAYS 10
struct replay {
    int active;
    int started;
    unsigned int idx;
    struct confd_notification_ctx *ctx;
    struct confd_datetime start;
    struct confd_datetime stop;
    int has_stop;
};

/* Keeps track of active replays */
static struct replay replay[MAX_REPLAYS];

static int start_replay(struct confd_notification_ctx *nctx,
                        struct confd_datetime *start,
                        struct confd_datetime *stop)
{
    int rnum;

    for (rnum = 0; rnum < MAX_REPLAYS; rnum++) {
        if (!replay[rnum].active) {
            replay[rnum].active = 1;
            replay[rnum].started = 0;
            replay[rnum].idx = first_replay_idx;
            replay[rnum].ctx = nctx;
            replay[rnum].start = *start;
            if (stop) {
                replay[rnum].has_stop = 1;
                replay[rnum].stop = *stop;
            } else
                replay[rnum].has_stop = 0;  /* stop when caught up to live */
            return CONFD_OK;
        }
    }
    confd_notification_seterr(nctx, "Max no. of replay requests reached");
    return CONFD_ERR;
}
Before an application can send a notification, the notification must be defined in a YANG module. In this example, a notification linkDown is defined. The notification has a single parameter ifIndex:
notification linkDown {
  leaf ifIndex {
    type leafref {
      path "/interfaces/interface/ifIndex";
    }
    mandatory true;
  }
}
When the application sends a notification, it uses the function confd_notification_send().
static void send_notifdown(int index)
{
    confd_tag_value_t vals[3];
    int i = 0;

    CONFD_SET_TAG_XMLBEGIN(&vals[i], notif_linkDown, notif__ns); i++;
    CONFD_SET_TAG_UINT32(&vals[i], notif_ifIndex, index); i++;
    CONFD_SET_TAG_XMLEND(&vals[i], notif_linkDown, notif__ns); i++;
    send_notification(vals, i);
}

static void send_notification(confd_tag_value_t *vals, int nvals)
{
    int sz;
    struct confd_datetime now;
    struct notif *notif;

    getdatetime(&now);
    notif = &replay_buffer[next_replay_idx];
    if (notif->vals) {
        /* we're aging out this notification */
        replay_has_aged_out = 1;
        replay_aged_time = notif->eventTime;
        first_replay_idx = (first_replay_idx + 1) % MAX_BUFFERED_NOTIFS;
        free(notif->vals);
    }
    notif->eventTime = now;
    sz = nvals * sizeof(confd_tag_value_t);
    notif->vals = malloc(sz);
    memcpy(notif->vals, vals, sz);
    notif->nvals = nvals;
    next_replay_idx = (next_replay_idx + 1) % MAX_BUFFERED_NOTIFS;
    OK(confd_notification_send(live_ctx, &notif->eventTime,
                               notif->vals, notif->nvals));
}
The netconf-console
program is a simple NETCONF
client. It is delivered as Python source code and can be used
as-is or modified.
When ConfD has been started, we can use
netconf-console
to query the configuration of the
NETCONF Access Control groups:
$ netconf-console --get-config -x /nacm/groups
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <data>
    <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
      <groups>
        <group>
          <name>admin</name>
          <user-name>admin</user-name>
          <user-name>private</user-name>
        </group>
        <group>
          <name>oper</name>
          <user-name>oper</user-name>
          <user-name>public</user-name>
        </group>
      </groups>
    </nacm>
  </data>
</rpc-reply>
With the -x
flag an XPath expression can be
specified, in order to retrieve only data matching that
expression. This is a very convenient way to extract portions of
the configuration from the shell or from shell scripts.
This capability introduces one new rpc method which is used to invoke actions (methods) defined in the data model. When an action is invoked, the instance on which the action is invoked is explicitly identified by a hierarchy of configuration or state data.
Here's a simple example which resets an interface.
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <action xmlns="http://tail-f.com/ns/netconf/actions/1.0"> <data> <interfaces xmlns="http://example.com/interfaces/1.0"> <interface> <name>eth0</name> <reset/> </interface> </interfaces> </data> </action> </rpc> <rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <ok/> </rpc-reply>
The alternative is to use a specialized rpc method for each action. There are a couple of drawbacks with that:
The name of the action has to be unique within the namespace. With the generic action method, the name of the action is scoped by the element where the action is defined. For example, without a generic action, there might be two rpcs, 'reset-interface' and 'reset-server'. With the generic action, there are two 'reset' actions, scoped by 'interface' and 'server'.
Care must be taken to ensure that returned XML is unique within the namespace. Suppose the two methods 'reset-interface' and 'reset-server' return a 'status', but of different types. The elements must then be called something like 'reset-interface-status' and 'reset-server-status'.
With the generic action, it is easier to introduce intermediate NETCONF peers such as a master agent in a master-sub agent deployment. For example, suppose there are two subagents, one which handles interface 'eth0' and one which handles 'atm0'. When the hierarchy is explicit in the request, the master agent can dispatch to the correct subagent without any knowledge about the action parameters. On the other hand, if the master agent gets an rpc 'reset-interface', it will have to parse the parameters to figure out which subagent to send the request to.
The actions capability is identified by the following capability string:
http://tail-f.com/ns/netconf/actions/1.0
The <action> operation identifies the data instance where the action is invoked, the action name, and its parameters. If the action returns any result, it is scoped in the instance hierarchy in the reply.
data:
A hierarchy of configuration or state data as defined by one of the device's data models. The first part of the hierarchy defines which instance the action is invoked upon. Then comes the action name, and any parameters it might need.
Only one action can be executed within one rpc. If more than one action is present in the rpc, an error MUST be returned with an <error-tag> set to "bad-element".
An action that does not return any result value, replies with the standard <ok/>. If a result value is returned, it is encapsulated in the standard <data> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
Suppose we want to start a self-test on interface "eth0", and the test returns the run time (in seconds) of the test and the test status. In pseudo code:
myif = find_if("eth0")
(time, status) = myif.self_test(IMMEDIATELY)
Using the action RPC over NETCONF:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <action xmlns="http://tail-f.com/ns/netconf/actions/1.0"> <data> <interfaces xmlns="http://example.com/interfaces/1.0"> <interface> <name>eth0</name> <self-test> <when>immediately</when> </self-test> </interface> </interfaces> </data> </action> </rpc> <rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <data> <interfaces xmlns="http://example.com/interfaces/1.0"> <interface> <name>eth0</name> <self-test> <run-time>29</run-time> <status>ok</status> </self-test> </interface> </interfaces> </data> </action> </rpc-reply>
This XML Schema defines the new action rpc.
<?xml version="1.0" encoding="UTF-8"?> <xs:schema targetNamespace="http://tail-f.com/ns/netconf/actions/1.0" xmlns="http://tail-f.com/ns/netconf/actions/1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0" elementFormDefault="qualified" attributeFormDefault="unqualified" xml:lang="en"> <!-- <action> operation --> <xs:complexType name="ActionType"> <xs:complexContent> <xs:extension base="netconf:rpcOperationType"> <xs:sequence> <xs:element name="data" type="netconf:dataInlineType" /> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:element name="action" type="ActionType" substitutionGroup="netconf:rpcOperation"/> </xs:schema>
This capability introduces four new rpc methods which are used to control a two-phase commit transaction on the NETCONF server. The normal <edit-config> operation is used to write data in the transaction, but the modifications are not applied until an explicit <commit-transaction> is sent.
This capability is formally defined in the YANG module "tailf-netconf-transactions".
A typical sequence of operations looks like this:
   C                            S
   |                            |
   |    capability exchange     |
   |--------------------------->|
   |<-------------------------->|
   |                            |
   |    <start-transaction>     |
   |--------------------------->|
   |<---------------------------|
   |           <ok/>            |
   |                            |
   |       <edit-config>        |
   |--------------------------->|
   |<---------------------------|
   |           <ok/>            |
   |                            |
   |   <prepare-transaction>    |
   |--------------------------->|
   |<---------------------------|
   |           <ok/>            |
   |                            |
   |   <commit-transaction>     |
   |--------------------------->|
   |<---------------------------|
   |           <ok/>            |
   |                            |
The transactions capability is identified by the following capability string:
http://tail-f.com/ns/netconf/transactions/1.0
Starts a transaction towards a configuration datastore. There can be a single ongoing transaction per session at any time.
When a transaction has been started, the client can send any NETCONF operation, but any <edit-config> or <copy-config> operation sent from the client MUST specify the same <target> as the <start-transaction>, and any <get-config> MUST specify the same <source> as <start-transaction>.
If the server receives an <edit-config> or <copy-config> with another <target>, or a <get-config> with another <source>, an error MUST be returned with an <error-tag> set to "invalid-value".
The modifications sent in the <edit-config> operations are not immediately applied to the configuration datastore. Instead they are kept in the transaction state of the server. The transaction state is only applied when a <commit-transaction> is received.
The client sends a <prepare-transaction> when all modifications have been sent.
target:
Name of the configuration datastore towards which the transaction is started.
with-inactive:
If this parameter is given, the transaction will handle the "inactive" and "active" attributes. If given, it MUST also be given in the <edit-config> and <get-config> invocations in the transaction.
If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
If there is an ongoing transaction for this session already, an error MUST be returned with <error-app-tag> set to "bad-state".
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <start-transaction xmlns="http://tail-f.com/ns/netconf/transactions/1.0"> <target> <running/> </target> </start-transaction> </rpc> <rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <ok/> </rpc-reply>
Prepares the transaction state for commit. The server may reject the prepare request for any reason, for example due to lack of resources or if the combined changes would result in an invalid configuration datastore.
After a successful <prepare-transaction>, the next transaction related rpc operation must be <commit-transaction> or <abort-transaction>. Note that an <edit-config> cannot be sent before the transaction is either committed or aborted.
Care must be taken by the server to make sure that if <prepare-transaction> succeeds then the <commit-transaction> SHOULD not fail, since this might result in an inconsistent distributed state. Thus, <prepare-transaction> should allocate any resources needed to make sure the <commit-transaction> will succeed.
If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
If there is no ongoing transaction in this session, or if the ongoing transaction already has been prepared, an error MUST be returned with <error-app-tag> set to "bad-state".
Applies the changes made in the transaction to the configuration datastore. The transaction is closed after a <commit-transaction>.
If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
If there is no ongoing transaction in this session, or if the ongoing transaction has not yet been prepared, an error MUST be returned with <error-app-tag> set to "bad-state".
Aborts the ongoing transaction, and all pending changes are discarded. <abort-transaction> can be given at any time during an ongoing transaction.
If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
If there is no ongoing transaction in this session, an error MUST be returned with <error-app-tag> set to "bad-state".
The <edit-config> operation is modified so that if it is received during an ongoing transaction, the modifications are not immediately applied to the configuration target. Instead they are kept in the transaction state of the server. The transaction state is only applied when a <commit-transaction> is received.
Note that it doesn't matter if the <test-option> is 'set' or 'test-then-set' in the <edit-config>, since nothing is actually set when the <edit-config> is received.
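To summarize the capability, a client-side sketch of the complete sequence from the diagram above is shown below. The send_rpc() helper is hypothetical (it stands for whatever mechanism the client uses to send one framed <rpc> and read the <rpc-reply>), and the <edit-config> content is only an example.

# Sketch of the two-phase transaction sequence over an established session.
TNS = "http://tail-f.com/ns/netconf/transactions/1.0"

def transaction_demo(send_rpc):
    # 1. open the transaction towards <running>
    send_rpc('<start-transaction xmlns="%s">'
             '<target><running/></target>'
             '</start-transaction>' % TNS)
    # 2. send any number of <edit-config> requests against the same target;
    #    nothing is applied yet
    send_rpc('<edit-config>'
             '<target><running/></target>'
             '<config>'
             '<interfaces xmlns="http://example.com/interfaces/1.0">'
             '<interface><name>eth0</name><mtu>1500</mtu></interface>'
             '</interfaces>'
             '</config>'
             '</edit-config>')
    # 3. validate and allocate resources
    send_rpc('<prepare-transaction xmlns="%s"/>' % TNS)
    # 4. apply the accumulated changes (or <abort-transaction> to discard)
    send_rpc('<commit-transaction xmlns="%s"/>' % TNS)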
This XML Schema defines the new transaction rpcs.
<?xml version="1.0" encoding="UTF-8"?> <xs:schema targetNamespace="http://tail-f.com/ns/netconf/transactions/1.0" xmlns="http://tail-f.com/ns/netconf/transactions/1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0" elementFormDefault="qualified" attributeFormDefault="unqualified" xml:lang="en"> <!-- Type for <target> element --> <xs:complexType name="TargetType"> <xs:choice> <xs:element name="running"/> <xs:element name="startup"/> <xs:element name="candidate"/> </xs:choice> </xs:complexType> <!-- <start-transaction> operation --> <xs:complexType name="StartTransactionType"> <xs:complexContent> <xs:extension base="netconf:rpcOperationType"> <xs:sequence> <xs:element name="target" type="TargetType"/> <xs:element name="with-inactive" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:element name="start-transaction" type="StartTransactionType" substitutionGroup="netconf:rpcOperation"/> <xs:element name="prepare-transaction" substitutionGroup="netconf:rpcOperation"/> <xs:element name="commit-transaction" substitutionGroup="netconf:rpcOperation"/> <xs:element name="abort-transaction" substitutionGroup="netconf:rpcOperation"/> </xs:schema>
The Proxy Forwarding capability makes it possible to forward NETCONF requests to a target host through a proxy NETCONF server. It can be used in situations where a client does not have direct network access to a target host:
          +--------+
          | Client |
          +--------+
              |
              |
              |
          +--------+
          | Proxy  |
          | server |
          +--------+
            /    \
           /      \
          /        \
   +--------+  +--------+
   | Proxy  |  | Proxy  |
   | target |  | target |
   +--------+  +--------+
See RFC 2663 for a definition of a proxy. This RFC defines two terms "Application Level Gateway" (ALG) and "Proxy":
ALGs are similar to Proxies, in that, both ALGs and proxies facilitate Application specific communication between clients and servers. Proxies use a special protocol to communicate with proxy clients and relay client data to servers and vice versa. Unlike Proxies, ALGs do not use a special protocol to communicate with application clients and do not require changes to application clients.
A client that wants to set up a NETCONF session to a Proxy target first connects to the Proxy server, which advertises the "forward" capability. The client issues a <forward> RPC, with a <target> parameter which specifies which Proxy target to connect to. The Proxy server sets up a NETCONF connection to the Proxy target, and after successful authentication, replies with the Proxy target's capability list to the client. From this point, the session is established, and any data received by the Proxy server from any side is sent as-is (without interpretation) to the other side.
Client                        Server                        Target
   |                            |                             |
   |    capability exchange     |                             |
   |<-------------------------->|                             |
   |                            |                             |
   |         <forward>          |                             |
   |--------------------------->|                             |
   |<---------------------------|                             |
   |        <mechanisms>        |                             |
   |                            |                             |
   |         <forward>          |                             |
   |        <mechanism>         |                             |
   |--------------------------->|                             |
   |<---------------------------|                             |
   |        <challenge>         |                             |
   |                            |                             |
   |    <challenge-response>    |                             |
   |--------------------------->|                             |
   |                            |    connect+authenticate     |
   |                            |---------------------------->|
   |                            |<----------------------------|
   |                            |           <hello>           |
   |                            |       <capabilities>        |
   |<---------------------------|                             |
   |                            |                             |
   |           data             |                             |
   |--------------------------->|            data             |
   |                            |---------------------------->|
   |                            |                             |
   |                            |            data             |
   |           data             |<----------------------------|
   |<---------------------------|                             |
   |                            |                             |
First, the client constructs a <forward> rpc:
If the client does not know which authentication mechanism is supported by the Proxy server for the target, or if it wants to do automatic login, it sends a <forward> request without the "auth" parameter, and waits for a reply.
If the client knows which mechanism to use, it sends a <forward> request with the "auth/mechanism" parameter set, and waits for a reply.
The client MAY set the "auth/initial-response" parameter.
Then the client waits for a reply.
If the reply contains the "capabilities" parameter, the proxy connection is establihed.
If the reply contains the "challenge" parameter, the client sends a <challenge-response> RPC with the repsonse to the challenge, which it can get e.g. by prompting the user for credentials.
If the mechanism is PLAIN, the challenge is always empty.
After the <challenge-response> RPC is sent, the client continues from step (3).
If the reply contains the "sasl-failure" error, with the "failure" parameter set to "invalid-mechansim", the client continues from step (2).
If the reply contains the "sasl-failure" error, with the "failure" parameter set to "not-authorized", the client continues from step (1) or aborts.
Otherwise, the client interprets the error and aborts.
The procedure when the <forward> RPC is received is as follows:
The server looks up the value of the "target" parameter in the "proxy" list in the running configuration. If the target is not found, an "invalid-value" error is returned.
If the "auth" parameter is not present, and the server is configured to perform auto login, it extracts the current user's credentials from the session, and continues from step (8).
If the "auth" parameter is not present, and the server is not configured to do auto login, it replies with an error "sasl-authentication-needed", with a list of supported mechanisms.
If the "auth" parameter is present, the server verifies that the "mechanism" provided is supported by the server.
Currently, the supported mechanism is "PLAIN".
If the mechanism is not supported, the server replies with a "sasl-failure" error with the "failure" parameter set to "invalid-mechanism".
If the mechanism is supported, and the "initial-response" parameter is present, the server decodes the response according to the mechanism.
If the response could not be decoded, the server replies with an "sasl-failure" error with the "failure" parameter set to "incorrect-encoding".
If the response could be decoded, the server continues from step (8).
If the mechanism is supported, and the "initial-response" parameter is not present, the server replies with a "challenge" parameter.
For PLAIN, the challenge is empty.
The server now remembers the target and mechanism, and waits to receive a <challenge-response> RPC.
When the <challenge-response> RPC is received, the server decodes the "response" parameter as in (5).
If the response could be decoded, the server continues from step (8).
The server connects to the target with the given credentials.
If the connection fails due to communication problems, it replies with a "connection-failure" error.
If the server fails to authenticate with the given credentials, it replies with an "sasl-failure" error with the "failure" parameter set to "not-authorized".
If the connection succeeds, the server replies with the capabilities of the target, and enters the proxying mode.
In proxying mode, the server reads data from both the client and the target, and writes any data received to the other end, without interpreting the data. If any side of the connection is closed, the server closes the other side.
The proxy forwarding capability is identified by the following capability string:
http://tail-f.com/ns/netconf/forward/1.0
Starts a proxy forwarding connection to the given target, if all user credentials are given.
The server can be configured to automatically login to the target. In this case, the <forward> rpc does not contain any authentication parameters.
target:
Name of the target host to connect to. The name refers to an entry in the "proxy" list in the running configuration.
auth/mechanism:
Name of an SASL authentication mechanism to use. Currently the "PLAIN" mechanism is supported.
auth/initial-response:
If allowed by the selected mechanism, an initial response can be given. This saves one round-trip.
For the PLAIN mechanism, the response is a base64 encoded PLAIN "message" as defined in section 2 of RFC 4616. The optional "authzid" MUST NOT be present.
If the server was able to connect and authenticate to the target, it replies with the target's capability list, and the server then enters proxying mode.
If the server could not fully authenticate the client, it replies with a "challenge" element. The client should reply to the challenge with a <challenge-response> RPC.
If the server could not find the target, it replies with an "invalid-value" error.
If the client did not provide a mechanism, the server replies with a "sasl-authentication-needed" error, with a list of available mechanisms.
If the client provided an unsupported mechanism, the server replies with a "sasl-failure" error with the "failure" parameter set to"invalid-mechanism".
If the initial response could not be decoded, the server replies with a "sasl-failure" error with the "failure" parameter set to "incorrect-encoding".
If the server fails to connect to the target, it replies with a "connection-failure" error.
If the server fails to authenticate with the given credentials, it replies with an "sasl-failure" error with the "failure" parameter set to "not-authorized".
The proxy server is configured to do automatic login:
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1"> <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0"> <target>rne-141</target> </forward> </rpc> <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1"> <data> <capabilities xmlns="http://tail-f.com/ns/netconf/forward/1.0"> <capability>urn:ietf:params:netconf:base:1.0</capability> <capability> urn:ietf:params:netconf:capability:writable-running:1.0 </capability </capabilities> </data> </rpc-reply> <!- client is now successfully connected to rne-141 -->
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1"> <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0"> <target>rne-141</target> </forward> </rpc> <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1"> <rpc-error> <error-type>protocol</error-type> <error-tag>operation-failed</error-tag> <error-severity>error</error-severity> <error-app-tag>sasl-mechanisms</error-app-tag> <error-info> <mechanisms xmlns="http://tail-f.com/ns/netconf/forward/1.0"> <mechanism>PLAIN</mechanism> </mechansims> </error-info> </rpc-error> </rpc> <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2"> <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0"> <target>rne-141</target> <auth> <mechanism>PLAIN</mechanism> <initial-response>AGFkbWluAHNlY3JldA==</initial-response> </auth> </forward> </rpc>
The decoded initial response in the auth message is:
<NUL>admin<NUL>secret

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <data>
    <capabilities xmlns="http://tail-f.com/ns/netconf/forward/1.0">
      <capability>urn:ietf:params:netconf:base:1.0</capability>
    </capabilities>
  </data>
</rpc-reply>

<!-- client is now successfully connected to rne-141 -->
The same exchange, but the client supplies a bad password:

<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>rne-141</target>
  </forward>
</rpc>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <rpc-error>
    <error-type>protocol</error-type>
    <error-tag>operation-failed</error-tag>
    <error-severity>error</error-severity>
    <error-app-tag>sasl-authentication-needed</error-app-tag>
    <error-info>
      <mechanisms xmlns="http://tail-f.com/ns/netconf/forward/1.0">
        <mechanism>PLAIN</mechanism>
      </mechanisms>
    </error-info>
  </rpc-error>
</rpc-reply>

<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>rne-141</target>
    <auth>
      <mechanism>PLAIN</mechanism>
      <initial-response>AGFkbWluAGFlY3JldA==</initial-response>
    </auth>
  </forward>
</rpc>
The decoded initial response in the auth message is:
<NUL>admin<NUL>aecret   (bad password)

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <rpc-error>
    <error-type>protocol</error-type>
    <error-tag>operation-failed</error-tag>
    <error-severity>error</error-severity>
    <error-app-tag>sasl-failure</error-app-tag>
    <error-info>
      <failure xmlns="http://tail-f.com/ns/netconf/forward/1.0">
        <not-authorized/>
      </failure>
    </error-info>
  </rpc-error>
</rpc-reply>
Sent after receiving a challenge reply to the <forward> request. If it succeeds, the server will enter proxying mode.
response:
For the PLAIN mechanism, the response is a base64 encoded PLAIN "message" as defined in section 2 of RFC 4616. The optional "authzid" MUST NOT be present.
If the server was able to connect and authenticate to the target, it replies with the target's capability list, and the server then enters proxying mode.
If the server could not fully authenticate the client, it replies with a "challenge" element. The client should reply to the challenge with a <challenge-response> RPC.
If the response could not be decoded, the server replies with a "sasl-failure" error with the "failure" parameter set to "incorrect-encoding".
If the server fails to connect to the target, it replies with a "connection-failure" error.
If the server fails to authenticate with the given credentials, it replies with a "sasl-failure" error with the "failure" parameter set to "not-authorized".
Client needs to authenticate to the target:
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>rne-141</target>
  </forward>
</rpc>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <rpc-error>
    <error-type>protocol</error-type>
    <error-tag>operation-failed</error-tag>
    <error-severity>error</error-severity>
    <error-app-tag>sasl-authentication-needed</error-app-tag>
    <error-info>
      <mechanisms xmlns="http://tail-f.com/ns/netconf/forward/1.0">
        <mechanism>PLAIN</mechanism>
      </mechanisms>
    </error-info>
  </rpc-error>
</rpc-reply>

<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>rne-141</target>
    <auth>
      <mechanism>PLAIN</mechanism>
    </auth>
  </forward>
</rpc>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <data>
    <challenge xmlns="http://tail-f.com/ns/netconf/forward/1.0"/>
  </data>
</rpc-reply>

<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="3">
  <challenge-response xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <response>AGFkbWluAHNlY3JldA==</response>
  </challenge-response>
</rpc>
The decoded response in the auth message is:
<NUL>admin<NUL>secret

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="3">
  <data>
    <capabilities xmlns="http://tail-f.com/ns/netconf/forward/1.0">
      <capability>urn:ietf:params:netconf:base:1.0</capability>
    </capabilities>
  </data>
</rpc-reply>

<!-- client is now successfully connected to rne-141 -->
This XML Schema defines the new forwarding rpcs.
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema targetNamespace="http://tail-f.com/ns/netconf/forward/1.0"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns="http://tail-f.com/ns/netconf/forward/1.0"
           xmlns:fwd="http://tail-f.com/ns/netconf/forward/1.0"
           xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"
           elementFormDefault="qualified"
           attributeFormDefault="unqualified"
           xml:lang="en">

  <!-- <forward> operation -->
  <xs:element name="forward" substitutionGroup="nc:rpcOperation">
    <xs:complexType>
      <xs:complexContent>
        <xs:extension base="nc:rpcOperationType">
          <xs:sequence>
            <xs:element name="target" type="xs:string"/>
            <xs:element name="auth" minOccurs="0">
              <xs:complexType>
                <xs:sequence>
                  <xs:element name="mechanism" type="xs:string"/>
                  <xs:element name="initial-response" minOccurs="0"
                              type="xs:string"/>
                </xs:sequence>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
        </xs:extension>
      </xs:complexContent>
    </xs:complexType>
  </xs:element>

  <!-- <challenge-response> operation -->
  <xs:element name="challenge-response" substitutionGroup="nc:rpcOperation">
    <xs:complexType>
      <xs:complexContent>
        <xs:extension base="nc:rpcOperationType">
          <xs:sequence>
            <xs:element name="response" type="xs:string"/>
          </xs:sequence>
        </xs:extension>
      </xs:complexContent>
    </xs:complexType>
  </xs:element>

  <!-- reply to <forward> and <challenge-response> operations -->
  <xs:element name="capabilities">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="capability" minOccurs="0" maxOccurs="unbounded"
                    type="xs:anyURI"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <!-- reply to <forward> and <challenge-response> operations -->
  <xs:element name="challenge" type="xs:string"/>

  <!-- <error-info> content when <error-app-tag> is "sasl-failure" -->
  <xs:element name="failure">
    <xs:complexType>
      <xs:sequence>
        <xs:choice>
          <xs:element name="incorrect-encoding">
            <xs:complexType/>
          </xs:element>
          <xs:element name="invalid-authzid">
            <xs:complexType/>
          </xs:element>
          <xs:element name="invalid-mechanism">
            <xs:complexType/>
          </xs:element>
          <xs:element name="mechanism-too-weak">
            <xs:complexType/>
          </xs:element>
          <xs:element name="not-authorized">
            <xs:complexType/>
          </xs:element>
          <xs:element name="temporary-auth-failure">
            <xs:complexType/>
          </xs:element>
        </xs:choice>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <!-- <error-info> content when <error-app-tag> is
       "sasl-authentication-needed" -->
  <xs:element name="mechanisms">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="mechanism" minOccurs="0" maxOccurs="unbounded"
                    type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

</xs:schema>
This capability is used by the NETCONF server to indicate that it supports marking nodes as being inactive. A node that is marked as inactive exists in the data store, but is not used by the server. Any node can be marked as inactive.
To avoid confusing clients that do not understand this attribute, the client has to explicitly instruct the server to display and handle the inactive nodes. An inactive node is marked with an "inactive" XML attribute, and in order to make it active, the "active" XML attribute is used.
This capability is formally defined in the YANG module "tailf-netconf-inactive".
The inactive capability is identified by the following capability string:
http://tail-f.com/ns/netconf/inactive/1.0
A new parameter, <with-inactive>, is added to the <get>, <get-config>, <edit-config>, <copy-config>, and <start-transaction> operations.
The <with-inactive> element is defined in the http://tail-f.com/ns/netconf/inactive/1.0 namespace, and takes no value.
If this parameter is present in <get>, <get-config>, or <copy-config>, the NETCONF server will mark inactive nodes with the "inactive" attribute.
If this parameter is present in <edit-config> or <copy-config>, the NETCONF server will treat inactive nodes as existing, so that an attempt to create a node which is inactive will fail, and an attempt to delete a node which is inactive will succeed. Further, the NETCONF server accepts the "inactive" and "active" attributes in the data hierarchy, in order to make nodes inactive or active, respectively.
If the parameter is present in <start-transaction>, it MUST also be present in any <edit-config>, <copy-config>, <get>, or <get-config> operations within the transaction. If it is not present in <start-transaction>, it MUST NOT be present in any <edit-config> operation within the transaction.
The "inactive" and "active" attributes are defined in the http://tail-f.com/ns/netconf/inactive/1.0 namespace. The "inactive" attribute's value is the string "inactive", and the "active" attribute's value is the string "active".
This request creates an inactive interface:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <edit-config> <target> <running/> </target> <with-inactive xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/> <config> <top xmlns="http://example.com/schema/1.2/config"> <interface inactive="inactive"> <name>Ethernet0/0</name> <mtu>1500</mtu> </interface> </top> </config> </edit-config> </rpc> <rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <ok/> </rpc-reply>
This request shows the inactive interface:
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <get-config> <source> <running/> </source> <with-inactive xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/> </get-config> </rpc> <rpc-reply message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <data> <top xmlns="http://example.com/schema/1.2/config"> <interface inactive="inactive"> <name>Ethernet0/0</name> <mtu>1500</mtu> </interface> </top> </data> </rpc-reply>
This request shows that inactive data is not returned unless the client asks for it:
<rpc message-id="103" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <get-config> <source> <running/> </source> </get-config> </rpc> <rpc-reply message-id="103" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <data> </data> </rpc-reply>
This request activates the interface:
<rpc message-id="104" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <edit-config> <target> <running/> </target> <with-inactive xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/> <config> <top xmlns="http://example.com/schema/1.2/config"> <interface active="active"> <name>Ethernet0/0</name> </interface> </top> </config> </edit-config> </rpc> <rpc-reply message-id="104" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <ok/> </rpc-reply>
This capability is used by a NETCONF peer to inform the other peer about the NETCONF stack and NETCONF client. The receiving peer can use this information in log files etc.
The information a peer may advertise is:
vendor:
The vendor of the NETCONF stack.
product:
The NETCONF product.
version:
The version of the product.
client-identity:
The identity of the user starting the session. This parameter can be, for example, the local user name of the operator in the client tool.
All these parameters are free form strings, advertised as query parameters to the capability URI, in the <hello> message.
The identification capability is identified by the following capability string:
http://tail-f.com/ns/netconf/identification/1.0
This is an example of how a client might advertise its identification information. Whitespace is added to make the example more readable.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>
      urn:ietf:params:netconf:base:1.1
    </capability>
    <capability>
      http://tail-f.com/ns/netconf/identification/1.0?
        vendor=tail-f
        &product=ncs
        &version=1.8
        &client-identity=admin
    </capability>
  </capabilities>
</hello>
The Query API consists of a number of RPC operations to start queries, fetch chunks of the result from a query, restart a query, and stop a query.
In the installed release there are two YANG files named tailf-netconf-query.yang and tailf-common-query.yang that define these operations. An easy way to find the files is to run the following command from the top directory of the release installation:
$ find . -name tailf-netconf-query.yang
The API consists of the following operations:
start-query:
Start a query and return a query handle.
fetch-query-result:
Use a query handle to repeatedly fetch chunks of the result.
reset-query:
(Re)set where the next fetched result will begin from.
stop-query:
Stop (and close) the query.
In the following examples, the following data model is used:
container x {
  list host {
    key number;
    leaf number {
      type int32;
    }
    leaf enabled {
      type boolean;
    }
    leaf name {
      type string;
    }
    leaf address {
      type inet:ip-address;
    }
  }
}
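The query examples below return hosts named "One" and "Three", both with address 10.0.0.1. For concreteness, the running configuration can be thought of as containing something like the following; the key values and the disabled "Two" entry are illustrative assumptions, and the namespace is omitted:

<x>
  <host>
    <number>1</number>
    <enabled>true</enabled>
    <name>One</name>
    <address>10.0.0.1</address>
  </host>
  <host>
    <number>2</number>
    <enabled>false</enabled>
    <name>Two</name>
    <address>10.0.0.2</address>
  </host>
  <host>
    <number>3</number>
    <enabled>true</enabled>
    <name>Three</name>
    <address>10.0.0.1</address>
  </host>
</x>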
Here is an example of a start-query operation:
<start-query xmlns="http://tail-f.com/ns/netconf/query">
  <foreach>
    /x/host[enabled = 'true']
  </foreach>
  <select>
    <label>Host name</label>
    <expression>name</expression>
    <result-type>string</result-type>
  </select>
  <select>
    <expression>address</expression>
    <result-type>string</result-type>
  </select>
  <sort-by>name</sort-by>
  <limit>100</limit>
  <offset>1</offset>
</start-query>
An informal interpretation of this query is:
For each /x/host where enabled is true, select its name and address, and return the result sorted by name, in chunks of 100 results at a time.
Let us discuss the various pieces of this request.
The actual XPath query to run is specified by the foreach element. The example below will search for all /x/host nodes that have the enabled leaf set to true:
<foreach> /x/host[enabled = 'true'] </foreach>
Now we need to define what we want to have returned from the node set, by using one or more select sections. What to actually return is defined by the XPath expression.
We must also choose how the result should be represented. Basically, it can be the actual value or the path leading to the value. This is specified per select chunk. The possible result-types are: string, path, leaf-value, and inline.
The difference between string and leaf-value is somewhat subtle. In the case of string, the result will be processed by the XPath function string() (which, if the result is a node-set, will concatenate all the values). The leaf-value type will return the value of the first node in the result. As long as the result is a leaf node, string and leaf-value will return the same result. In the example above we are using string, as shown below. At least one result-type must be specified.
The result-type inline makes it possible to return the full sub-tree of data in XML format. The data will be enclosed in a data tag.
Finally, we can specify an optional label for a convenient way of labeling the returned data. In the example we have the following:
<select>
  <label>Host name</label>
  <expression>name</expression>
  <result-type>string</result-type>
</select>
<select>
  <expression>address</expression>
  <result-type>string</result-type>
</select>
The returned result can be sorted. This is expressed as XPath expressions, which in most cases are very simple and refer to the found node set. In this example we sort the result by the content of the name node:
<sort-by>name</sort-by>
To limit the maximum number of results in each chunk that fetch-query-result will return, we can set the limit element. The default is to get all results in one chunk.
<limit>100</limit>
With the offset element we can specify at which node we should start to receive the result. The default is 1, i.e., the first node in the resulting node-set.
<offset>1</offset>
Now, if we put the operation above in a file query.xml, we can send the request with the netconf-console command like this:
$ netconf-console --rpc query.xml
The result would look something like this:
<start-query-result>
  <query-handle>12345</query-handle>
</start-query-result>
The query handle (in this example "12345") must be used in all subsequent calls. To retrieve the result, we can now send:
<fetch-query-result xmlns="http://tail-f.com/ns/netconf/query">
  <query-handle>12345</query-handle>
</fetch-query-result>
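As with the start-query request, this rpc can be sent with netconf-console by saving it in a file, here hypothetically named fetch.xml:

$ netconf-console --rpc fetch.xml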
Which will result in something like the following:
<query-result xmlns="http://tail-f.com/ns/netconf/query">
  <result>
    <select>
      <label>Host name</label>
      <value>One</value>
    </select>
    <select>
      <value>10.0.0.1</value>
    </select>
  </result>
  <result>
    <select>
      <label>Host name</label>
      <value>Three</value>
    </select>
    <select>
      <value>10.0.0.1</value>
    </select>
  </result>
</query-result>
If we try to get more data with fetch-query-result, we might get more result entries in return, until no more data exists and we get an empty query result back:
<query-result xmlns="http://tail-f.com/ns/netconf/query"> </query-result>
If we want to go back in the "stream" of received data chunks and have them repeated, we can do that with the reset-query operation. In the example below we ask to get results starting from the 42nd result entry:
<reset-query xmlns="http://tail-f.com/ns/netconf/query">
  <query-handle>12345</query-handle>
  <offset>42</offset>
</reset-query>
Finally, when we are done we stop the query:
<stop-query xmlns="http://tail-f.com/ns/netconf/query">
  <query-handle>12345</query-handle>
</stop-query>
ConfD supports three kinds of meta-data on data nodes: tags, annotations, and inactive.
This feature is disabled by default, but can be enabled by setting /confdConfig/enableAttributes to true in confd.conf (see confd.conf(5)).
An annotation is a string which acts as a comment. Any data node present in the configuration can have an annotation. An annotation does not affect the underlying configuration, but can be set by a user to comment on what the configuration does.
An annotation is encoded as an XML attribute 'annotation' on any data node. To remove an annotation, set the 'annotation' attribute to an empty string.
Any configuration data node can have a set of tags. Tags are set by the user for data organization and filtering purposes. A tag does not affect the underlying configuration.
All tags on a data node are encoded as a space separated string in an XML attribute 'tags'. To remove all tags, set the 'tags' attribute to an empty string.
Annotation, tags, and inactive attributes can be present in <edit-config>, <copy-config>, <get-config>, and <get>. For example:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <edit-config> <target> <running/> </target> <config> <interfaces xmlns="http://example.com/ns/if"> <interface annotation="this is the management interface" tags=" important ethernet "> <name>eth0</name> ... </interface> </interfaces> </config> </edit-config> </rpc>
ConfD adds an additional namespace which is used to define elements that are included in the <error-info> element. This namespace also describes which <error-app-tag/> elements the server might generate as part of an <rpc-error/>.
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema targetNamespace="http://tail-f.com/ns/netconf/params/1.1"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xml:lang="en">

  <xs:annotation>
    <xs:documentation>
      Tail-f's namespace for additional error information.
      This namespace is used to define elements which are included in
      the 'error-info' element.

      The following are the app-tags used by the NETCONF agent:

      o  not-writable
         Means that an edit-config or copy-config operation was
         attempted on an element which is read-only
         (i.e. non-configuration data).

      o  missing-element-in-choice
         Like the standard error missing-element, but generated when
         one of a set of elements in a choice is missing.

      o  pending-changes
         Means that a lock operation was attempted on the candidate
         database, and the candidate database has uncommitted changes.
         This is not allowed according to the protocol specification.

      o  url-open-failed
         Means that the URL given was correct, but that it could not
         be opened. This can e.g. be due to a missing local file, or
         bad ftp credentials. An error message string is provided in
         the <error-message> element.

      o  url-write-failed
         Means that the URL given was opened, but write failed. This
         could e.g. be due to lack of disk space. An error message
         string is provided in the <error-message> element.

      o  bad-state
         Means that an rpc is received when the session is in a state
         which doesn't accept this rpc. An example is
         <prepare-transaction> before <start-transaction>.
    </xs:documentation>
  </xs:annotation>

  <xs:element name="bad-keyref">
    <xs:annotation>
      <xs:documentation>
        This element will be present in the 'error-info' container
        when 'error-app-tag' is "instance-required".
      </xs:documentation>
    </xs:annotation>
    <xs:complexType>
      <xs:sequence>
        <xs:element name="bad-element" type="xs:string">
          <xs:annotation>
            <xs:documentation>
              Contains an absolute XPath expression pointing to the
              element whose value refers to a non-existing instance.
            </xs:documentation>
          </xs:annotation>
        </xs:element>
        <xs:element name="missing-element" type="xs:string">
          <xs:annotation>
            <xs:documentation>
              Contains an absolute XPath expression pointing to the
              missing element referred to by 'bad-element'.
            </xs:documentation>
          </xs:annotation>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name="bad-instance-count">
    <xs:annotation>
      <xs:documentation>
        This element will be present in the 'error-info' container
        when 'error-app-tag' is "too-few-elements" or
        "too-many-elements".
      </xs:documentation>
    </xs:annotation>
    <xs:complexType>
      <xs:sequence>
        <xs:element name="bad-element" type="xs:string">
          <xs:annotation>
            <xs:documentation>
              Contains an absolute XPath expression pointing to an
              element which exists in too few or too many instances.
            </xs:documentation>
          </xs:annotation>
        </xs:element>
        <xs:element name="instances" type="xs:unsignedInt">
          <xs:annotation>
            <xs:documentation>
              Contains the number of existing instances of the element
              referred to by 'bad-element'.
            </xs:documentation>
          </xs:annotation>
        </xs:element>
        <xs:choice>
          <xs:element name="min-instances" type="xs:unsignedInt">
            <xs:annotation>
              <xs:documentation>
                Contains the minimum number of instances that must
                exist in order for the configuration to be consistent.
                This element is present only if 'app-tag' is
                'too-few-elems'.
              </xs:documentation>
            </xs:annotation>
          </xs:element>
          <xs:element name="max-instances" type="xs:unsignedInt">
            <xs:annotation>
              <xs:documentation>
                Contains the maximum number of instances that can
                exist in order for the configuration to be consistent.
                This element is present only if 'app-tag' is
                'too-many-elems'.
              </xs:documentation>
            </xs:annotation>
          </xs:element>
        </xs:choice>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:attribute name="annotation" type="xs:string">
    <xs:annotation>
      <xs:documentation>
        This attribute can be present on any configuration data node.
        It acts as a comment for the node. The annotation does not
        affect the underlying configuration data.
      </xs:documentation>
    </xs:annotation>
  </xs:attribute>

  <xs:attribute name="tags" type="xs:string">
    <xs:annotation>
      <xs:documentation>
        This attribute can be present on any configuration data node.
        It is a space separated string of tags for the node. The tags
        of a node do not affect the underlying configuration data, but
        can be used by a user for data organization and data
        filtering.
      </xs:documentation>
    </xs:annotation>
  </xs:attribute>

</xs:schema>