ConfD supports a master/subagent concept similar to that found in e.g. AgentX (RFC 2741). The idea is that there is one master agent running on a managed device. It terminates the northbound interfaces such as NETCONF and CLI. The master agent is connected to a set of subagents which provide instrumentation of the subsystems.
A subagent has its own data store, separate from the master agent. A subagent is an essential part of the system, i.e. if the master agent cannot talk to the subagent, this is handled as a data provider failure.
Subagents may be used in a chassis based system when some of the blades may also ship as standalone products. In this case it is desirable to have identical software on the blade regardless of whether the blade sits in a chassis or is shipped as a standalone product.
Subagents are also the right choice when there is a need to integrate software that already has a management interface of its own. In this case, it is desirable not to change that code, but still make it appear as an integrated part of the entire chassis. A typical usage scenario is when there is an existing standalone product that also should be part of a chassis solution.
Subagents are not the right choice for supporting field replaceable units (FRU), such as interface cards. In this case, it is recommended to have the software on the FRU connect to a ConfD running on a management processor through the normal ConfD C-APIs.
In ConfD, NETCONF is used as the master-to-subagent protocol. The subagent only has to provide a NETCONF interface, while the master agent can provide any set of northbound interfaces, for example only CLI and Web UI. This is accomplished in ConfD by separating the northbound agents from the data providers. Somewhat simplified, the subagents are viewed and handled as any other data provider.
Authentication and authorization (access control) is done by the master agent. This means that access control rules are configured at the master agent, and checked at runtime by the master. The subagent should be configured to allow full access to the user that the master agent uses for the connections.
The following picture illustrates how a chassis based system internally consists of three different subsystems.
Subagents are used when the management station should perceive the system as a whole; thus subagents can be viewed as an internal implementation detail, not visible from the outside.
Another common architecture is that of configuring one ConfD instance to be able to proxy configuration traffic explicitly to one or more other managed devices.
This architecture is applicable in situations similar to those for subagents, with the exception that the outside management station must have explicit knowledge about the internal subsystems. It is common to build chassis based systems that consist of several subsystems that are themselves devices. The internal devices are typically not reachable on the external network; they are attached to a network internal to the chassis, hence the need for a proxy solution.
The subagents are registered at the master agent. Information about each subagent is written into the master agent's confd.conf file. The subagent configuration is marked as "reloadable", see Section 28.4, “Configuring ConfD”, so it is possible for the application to easily use parts of the subagent configuration in its own configuration.
Once a subagent is enabled, it will be viewed as an essential part of the system, i.e. if the master agent cannot talk to the subagent, this is handled as a data provider failure. This means that operations like <edit-config> will fail if the master cannot contact an enabled subagent. ConfD sends an event to the application when it detects communication failures with subagents. The event is described in the confd_lib_events(3) man page. The application can choose to disable or remove the subagent if it wants to, either by modifying the master agent's confd.conf file and running confd --reload, or by directly changing the configuration parameters (see Section 28.4, “Configuring ConfD”).
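As a hedged sketch of how an application could consume these events, the program below connects to ConfD's notification API as described in confd_lib_events(3). The CONFD_NOTIF_SUBAGENT_INFO flag and the details of the notification contents are assumptions to verify against the man page of your release:

/* Minimal sketch: subscribe to subagent status events via the
 * confd_lib_events(3) API. CONFD_NOTIF_SUBAGENT_INFO is assumed;
 * verify the flag and struct names against your ConfD release. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#include <confd_lib.h>
#include <confd_events.h>

int main(void)
{
    struct sockaddr_in addr;
    int sock = socket(PF_INET, SOCK_STREAM, 0);

    confd_init("subagent-monitor", stderr, CONFD_SILENT);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    addr.sin_port = htons(CONFD_PORT);

    /* Ask ConfD to deliver subagent notifications on this socket */
    if (confd_notifications_connect(sock, (struct sockaddr *)&addr,
                                    sizeof(addr),
                                    CONFD_NOTIF_SUBAGENT_INFO) != CONFD_OK) {
        fprintf(stderr, "failed to connect to ConfD\n");
        return 1;
    }

    for (;;) {
        struct confd_notification notif;
        if (confd_read_notification(sock, &notif) != CONFD_OK)
            return 1;
        /* Here the application could disable the failing subagent,
         * e.g. by updating confd.conf and running "confd --reload". */
    }
}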
The registration information needed per subagent is:
Currently supported transports are SSH and TCP. TCP is non-standard, but since the traffic is unencrypted, it is more efficient.
For SSH, specify the username and password that ConfD will use when connecting to the subagent.
For TCP, the ConfD specific TCP header described in the NETCONF chapter is used. This means that the user name and groups have to be defined for the subagent.
An XPath expression which defines where in the master agent's data hierarchy the subagent's data is registered, for example /config/blade[id="3"]/config/ospf, or just /.
Each subagent registers a set of top-elements from one or more namespaces. These nodes will be mounted at the registration path at the master agent.
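Concretely, a single subagent entry in the master agent's confd.conf ties all of this together: a name, a transport with authentication data, and a mount path plus the mounted nodes. The fragment below anticipates Example 26.6 later in this chapter:

<subagent>
  <name>A</name>
  <enabled>true</enabled>
  <tcp>
    <ip>10.0.0.1</ip>
    <port>2023</port>
    <confdAuth>
      <user>admin</user>
      <group>admin</group>
    </confdAuth>
  </tcp>
  <mount xmlns:sa="http://example.com/smtp/1.0">
    <path>/system/services/service[name="smtp1"]</path>
    <node>sa:smtp-config</node>
  </mount>
</subagent>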
The data model that the subagent registers must be available at the master agent, in the form of a .fxs file in the normal load path. This .fxs file must be compiled with the flag --subagent MountPath before it is loaded in the master agent. This option tells the master agent that this namespace is handled by a subagent. MountPath is the same as the registration path in confd.conf, but without any instance selectors.
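For example, if the registration path in confd.conf is /system/services/service[name="smtp1"], the MountPath passed to confdc is the same path with the instance selector dropped:

$ confdc -c --subagent /system/services/service -o smtp.fxs smtp.yang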
Here is a step-by-step example of how to add three subagents, called A, B and C, to a master agent. We will assume that A and B each implement one instance of some service: A implements the SMTP service, and B implements IMAP and POP. Subagent C implements the equipment subsystem. The idea is that there might be more than one SMTP or IMAP service, but only a single equipment subsystem.
If a client talks directly to A, it will get the following data:
Example 26.1. smtp subagent data
<smtp-config xmlns="http://example.com/smtp/1.0">
  <enabled>true</enabled>
  ...
</smtp-config>
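As a hedged sketch, a smtp.yang module producing this data could look as follows; everything beyond the namespace, the smtp-config container, and the enabled leaf is hypothetical:

module smtp {
  namespace "http://example.com/smtp/1.0";
  prefix smtp;

  container smtp-config {
    leaf enabled {
      type boolean;
    }
    // ... further SMTP configuration ...
  }
}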
If a client talks directly to B, it will get the following data:
Example 26.2. imap and pop subagent data
<imap-config xmlns="http://example.com/imap/2.1">
  <enabled>true</enabled>
  ...
</imap-config>
<pop-config xmlns="http://example.com/pop/1.2">
  <enabled>true</enabled>
  ...
</pop-config>
If a client talks directly to C, it will get the following data:
Example 26.3. Equipment subagent data
<config xmlns="http://example.com/equipment/2.1">
  <chassis>
    ...
  </chassis>
</config>
At the master agent, we want the following data:
Example 26.4. master agent data
<system xmlns="http://example.com/service/3.3">
  <services>
    <service>
      <name>smtp1</name>
      <type>smtp</type>
      <smtp-config xmlns="http://example.com/smtp/1.0">
        <enabled>true</enabled>
        ...
      </smtp-config>
    </service>
    <service>
      <name>imap1</name>
      <type>imap</type>
      <imap-config xmlns="http://example.com/imap/2.1">
        <enabled>true</enabled>
        ...
      </imap-config>
    </service>
    <service>
      <name>pop1</name>
      <type>pop</type>
      <pop-config xmlns="http://example.com/pop/1.2">
        <enabled>true</enabled>
        ...
      </pop-config>
    </service>
  </services>
</system>
<config xmlns="http://example.com/equipment/2.1">
  <chassis>
    ...
  </chassis>
</config>
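For reference, here is a hedged sketch of the master agent's own YANG module implied by Example 26.4. The module name, prefix and leaf types are assumptions; only the node structure is taken from the data above:

module service {
  namespace "http://example.com/service/3.3";
  prefix svc;

  container system {
    container services {
      list service {
        key name;
        leaf name { type string; }
        leaf type { type string; }
        // Subagent top nodes (e.g. smtp:smtp-config) are
        // mounted under each service entry by the master.
      }
    }
  }
}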
The first thing to do at the master agent is to compile the YANG modules:
Example 26.5. Compile the YANG modules at the master
$ confdc -c --subagent /system/services/service -o smtp.fxs smtp.yang
$ confdc -c --subagent /system/services/service -o imap.fxs imap.yang
$ confdc -c --subagent /system/services/service -o pop.fxs pop.yang
$ confdc -c --subagent / -o equip.fxs equip.yang
Next, we put the following into confd.conf:
Example 26.6. Master agent's confd.conf
<subagents>
  <enabled>true</enabled>
  <subagent>
    <name>A</name>
    <enabled>true</enabled>
    <tcp>
      <ip>10.0.0.1</ip>
      <port>2023</port>
      <confdAuth>
        <user>admin</user>
        <group>admin</group>
      </confdAuth>
    </tcp>
    <mount xmlns:sa="http://example.com/smtp/1.0">
      <path>/system/services/service[name="smtp1"]</path>
      <node>sa:smtp-config</node>
    </mount>
  </subagent>
  <subagent>
    <name>B</name>
    <enabled>true</enabled>
    <tcp>
      <ip>10.0.0.2</ip>
      <port>2023</port>
      <confdAuth>
        <user>admin</user>
        <group>admin</group>
      </confdAuth>
    </tcp>
    <mount xmlns:imap="http://example.com/imap/2.1"
           xmlns:pop="http://example.com/pop/1.2">
      <path>/system/services/service[name="imap1"]
            /system/services/service[name="pop1"]</path>
      <node>imap:imap-config pop:pop-config</node>
    </mount>
  </subagent>
  <subagent>
    <name>C</name>
    <enabled>true</enabled>
    <tcp>
      <ip>127.0.0.1</ip>
      <port>2043</port>
      <confdAuth>
        <user>admin</user>
        <group>admin</group>
      </confdAuth>
    </tcp>
    <mount xmlns:sa="http://example.com/equipment/2.1">
      <path>/</path>
      <node>sa:config</node>
    </mount>
  </subagent>
</subagents>
Note that the instances /system/services/service[name="smtp1"], /system/services/service[name="imap1"], and /system/services/service[name="pop1"] must be created in the database at the master agent before the subagent will be used.
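For instance, the smtp1 service instance could be created with a standard NETCONF <edit-config> toward the master agent. This is a sketch that assumes the running datastore is writable; imap1 and pop1 are created analogously:

<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <edit-config>
    <target>
      <running/>
    </target>
    <config>
      <system xmlns="http://example.com/service/3.3">
        <services>
          <service>
            <name>smtp1</name>
            <type>smtp</type>
          </service>
        </services>
      </system>
    </config>
  </edit-config>
</rpc>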
Some of the capabilities the master agent advertises must be supported among all subagents. For example, in order for the master agent to advertise the startup capability, all subagents must support it. Some other capabilities can be handled entirely in the master agent, and can be advertised independently of the subagents.
:writable-running, :startup, :confirmed-commit, :validate
These capabilities can be advertised by the master agent if all subagents support them.
:candidate
This capability can be advertised by the master agent if all subagents support it. In this case, the master ConfD must be configured with /confdConfig/datastores/candidate/implementation set to external in confd.conf (see the configuration fragment after this list).
:rollback-on-error
This capability can be advertised by the master agent if all subagents support the http://tail-f.com/ns/netconf/transactions/1.0 capability. One exception: if exactly one subagent does not support the 'transactions' capability (while zero or more other agents do), the capability can still be advertised, provided that this single agent itself supports :rollback-on-error. For more information on the 'transactions' capability, see Section 15.10, “Transactions Capability”.
:xpath
This capability can be advertised by the master agent independently of the subagents. The subagents do not have to support XPath.
:url
This capability can be advertised by the master agent independently of the subagents. The subagents do not have to support the :url capability.
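As mentioned under :candidate above, enabling an external candidate implementation is done in confd.conf. The fragment below is a sketch derived directly from the /confdConfig/datastores/candidate/implementation path; it goes inside the confdConfig root element (verify against the confd.conf(5) man page for your release):

<datastores>
  <candidate>
    <implementation>external</implementation>
  </candidate>
</datastores>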
ConfD can be configured to proxy NETCONF traffic and CLI sessions. The configuration of the proxies resides in confd.conf. The proxy configuration is marked as "reloadable", see Section 28.4, “Configuring ConfD”, so it is possible for the application to easily use parts of the proxy configuration in its own configuration.
As an example, assume we have a chassis system with two internal boards that reside on a chassis internal network that is not reachable from the outside. We still want the operators to be able to configure the boards, thus we instruct ConfD to proxy network traffic to the internal boards. An example configuration snippet (from confd.conf) could be:
Example 26.7. Proxy configuration
<proxyForwarding>
  <enabled>true</enabled>
  <autoLogin>true</autoLogin>
  <proxy>
    <target>board-1</target>
    <address>10.10.0.1</address>
    <netconf>
      <ssh>
        <port>830</port>
      </ssh>
    </netconf>
    <cli>
      <ssh>
        <port>22</port>
      </ssh>
    </cli>
  </proxy>
  <proxy>
    <target>board-2</target>
    <address>10.10.0.2</address>
    <netconf>
      <ssh>
        <port>830</port>
      </ssh>
    </netconf>
    <cli>
      <ssh>
        <port>22</port>
      </ssh>
    </cli>
  </proxy>
</proxyForwarding>

<netconf>
  <capabilities>
    <forward>
      <enabled>true</enabled>
    </forward>
    <!-- other capabilities here ... -->
  </capabilities>
  <!-- more netconf config here ... -->
</netconf>
The above instructs ConfD to proxy forward CLI traffic and NETCONF traffic from the "Management interface host" (MIH) to the "Internal hosts" (IH). Both types of traffic must be explicitly initiated by the operator.
We define two internal hosts to which we wish to proxy traffic. Each internal host has a symbolic name which is used by both the CLI operator and the NETCONF client.
For all internal hosts we define whether we want to attempt auto login or not. If the ConfD internal SSH server was used in the original connection to the management interface host, be it NETCONF or CLI, ConfD has access to the clear text password. In that case an SSH connection attempt will be made with the same username/password pair as the original connection. If that fails, the NETCONF session will fail with an error, whereas the CLI will prompt for a new password. If ConfD does not have access to the SSH password for the original connection to the management interface host, a password must be explicitly supplied by the CLI operator/NETCONF client.
It is of course also possible to arrange private/public keys on the chassis host in such a way that passwords are never used.
The CLI user must explicitly initiate SSH connections to the internal hosts using the built-in "forward" command in the CLI. The single argument of the "forward" command is the string defined as "target" in confd.conf. The SSH connection to the target will be made with the same user ID as the original CLI connection.
admin@chassis> forward [TAB]
Possible completions:
  board-1 - 10.10.0.1:22
  board-2 - 10.10.0.2:22
admin@chassis> forward board-1
admin@board-1> id
user = admin(2), gid=3, groups=admin, gids=
[ok][2008-08-15 12:14:41]
admin@board-1> ^D
Connection to board-1 closed
[ok][2008-08-15 12:14:58]
admin@chassis>
The above (Juniper style CLI) shows a session where the CLI operator connects the CLI to an internal host (board-1).
ConfD publishes a new "proxy forwarding" NETCONF capability. If the management station issues the forward command, ConfD relays this connection to the IH. The proxy forwarding capability is defined in the NETCONF chapter.
If the command succeeds, any messages arriving in this session would subsequently be forwarded to the target device without any analysis on the forwarding device. This channel is also open to NETCONF notifications sent from the IH. This goes on until the session is closed.
A NETCONF session that connects to board-1 and asks for the dhcp configuration could look like this:
Example 26.8. Agent replies with forward capability
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
    <capability>http://tail-f.com/ns/netconf/forward/1.0</capability>
  </capabilities>
</hello>
Example 26.9. Manager issues forward rpc to board-1
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>board-1</target>
  </forward>
</rpc>
Example 26.10. Manager issues command
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <get>
    <filter>
      <dhcp xmlns="http://tail-f.com/ns/example/dhcpd/1.0"/>
    </filter>
  </get>
</rpc>
This last get request will be forwarded to the IH by the MIH. Finally the manager issues a close-session request, whereby the manager will have the original SSH connection back to the MIH.
Example 26.11. close-session
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="3">
  <close-session/>
</rpc>
When ConfD at the MIH sees the "forward" command, ConfD looks up the IH identity in its configuration which provides a mapping to the appropriate IP address. ConfD then establishes an SSH connection to the IH.
The "forward" command may require authentication from the user. This happens if ConfD is not configured to do automatic login to the IH, or if automatic login fails. In this case, the reply will be 'not-authorized'.
The authentication protocol is SASL (RFC 4422), using the XML mapping defined for XMPP (RFC 3920). ConfD supports the PLAIN authentication mechanism (RFC 4616).
On successful completion of the "forward" command, the IH's capabilities are returned in the "rpc-reply".
When the IH or management station closes the connection, either normally or in error, the MIH terminates the forwarding of that session.
It should be noted that the management station may choose to open a single SSH session to the MIH and utilize the SSH channel concept to establish multiple NETCONF sessions under a single SSH session. The NETCONF sessions could be directed to the MIH as well as any IH. This is an optimization that saves memory for the rather large SSH session state on the management station. For more information on SSH channels, see section 5 of RFC 4254.
The MIH will however have to establish full SSH sessions to each IH as forward requests come in from the management station.
This is the simplest example: the manager sends a "forward" rpc and receives the capabilities of the IH.
Example 26.12. Auto login
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>board-1</target>
  </forward>
</rpc>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <data>
    <capabilities>
      <capability>urn:ietf:params:netconf:base:1.0</capability>
    </capabilities>
  </data>
</rpc-reply>
Here the client sends a "forward" rpc and receives an error:
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>board-1</target>
  </forward>
</rpc>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <rpc-error>
    <error-type>protocol</error-type>
    <error-tag>operation-failed</error-tag>
    <error-severity>error</error-severity>
    <error-app-tag>sasl-mechanisms</error-app-tag>
    <error-info>
      <mechanisms xmlns="http://tail-f.com/ns/netconf/forward/1.0">
        <mechanism>PLAIN</mechanism>
      </mechanisms>
    </error-info>
  </rpc-error>
</rpc-reply>
The error indicates that the client needs to authenticate. This is done using the SASL protocol.
Example 26.13. Forward rpc with auth data
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>board-1</target>
    <auth>
      <mechanism>PLAIN</mechanism>
      <initial-response>AGFkbWluAHNlY3JldA==</initial-response>
    </auth>
  </forward>
</rpc>
The decoded initial response in the auth message is:
[NUL]admin[NUL]secret
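This is the standard SASL PLAIN layout, authzid NUL authcid NUL password, with an empty authzid. The encoded value can be reproduced from a shell:

$ printf '\0admin\0secret' | base64
AGFkbWluAHNlY3JldA==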
Finally, the client receives the capabilities of the IH:
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <data>
    <capabilities>
      <capability>urn:ietf:params:netconf:base:1.0</capability>
    </capabilities>
  </data>
</rpc-reply>
The client is now successfully connected to board-1.
Similar to the example above, but the client sends a bad password, as in:
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <forward xmlns="http://tail-f.com/ns/netconf/forward/1.0">
    <target>board-1</target>
    <auth>
      <mechanism>PLAIN</mechanism>
      <initial-response>AGFkbWluAGFlY3JldA==</initial-response>
    </auth>
  </forward>
</rpc>
The decoded initial response in the auth message is:
[NUL]admin[NUL]aecret
An error is received:
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="2">
  <rpc-error>
    <error-type>protocol</error-type>
    <error-tag>operation-failed</error-tag>
    <error-severity>error</error-severity>
    <error-app-tag>sasl-failure</error-app-tag>
    <error-info>
      <failure xmlns="http://tail-f.com/ns/netconf/forward/1.0">
        <not-authorized/>
      </failure>
    </error-info>
  </rpc-error>
</rpc-reply>