Repository : ssh://git@open-mesh.org/doc
On branches: backup-redmine/2017-07-13,master
>---------------------------------------------------------------
commit 3b81d67cf3992abefb9161df4ffa3849b3dda8bd
Author: Antonio Quartulli <a@unstable.cc>
Date: Wed May 18 16:16:12 2011 +0000
doc: batman-adv/Client-announcement
>---------------------------------------------------------------
3b81d67cf3992abefb9161df4ffa3849b3dda8bd
batman-adv/Client-announcement.textile | 113 ++++++++++-----------------------
1 file changed, 32 insertions(+), 81 deletions(-)
diff --git a/batman-adv/Client-announcement.textile b/batman-adv/Client-announcement.textile
index 3c279792..23f24387 100644
--- a/batman-adv/Client-announcement.textile
+++ b/batman-adv/Client-announcement.textile
@@ -1,115 +1,71 @@
h1. Client announcement.
-B.A.T.M.A.N.-Advanced is a Layer2 mesh routing protocol and, as any other mesh
-protocol, it has to announce any kind of client on its level that wants to
-access the mesh network. In case of Layer3 protocols, clients simply are represented by IP
-addresses, while in this case, as you can guess, clients are represented by MAC
-addresses.
-In case of enslaving the mesh interface into an ethernet bridge, together with
-another device, all the packet's source MAC addresses are recognised as
-belonging to clients (mesh interface's MAC address too).
+B.A.T.M.A.N.-Advanced is a Layer2 mesh routing protocol and, like any other mesh protocol, it has to announce the clients on its layer that want to access the mesh network. In Layer3 protocols clients are simply represented by IP addresses, while in this case, as you can guess, clients are represented by MAC addresses.
+If the mesh interface is enslaved into an ethernet bridge together with another device, the source MAC addresses of all packets are recognised as belonging to clients (the mesh interface's MAC address too).
h2. The local translation table
-Every client MAC address that is recognised through the mesh interface will be stored
-in a node local table called "local translation table" which will contain all
-the clients the node is currently serving. This table is the information a
-node has to spread among the network in order to make clients reachable. This
-because when a node wants to contact a particular client, thank to this
-information, it knows the originator it has to send the data to.
-Each node local table has a particular attribute: the translation table
-version number (ttvn). The value of this attribute represents the version of
-the table: each time the node decide to spread the table around, if something
-happened since last spread (a client has been added/removed), the ttvn is
-incremented by one.
-In this way, two tables belonging to the same node can be chronologically
-ordered and it is moreover possible to decide whether they are different or not
-without checking all the entries.
-Moreover the translation table version number if a new OGM field and it will
-contain the originator ttvn value at the moment of sending.
+Every client MAC address that is recognised through the mesh interface is stored in a node-local table called the "local translation table", which contains all the clients the node is currently serving. This table is the information a node has to spread across the network in order to make its clients reachable: thanks to this information, a node that wants to contact a particular client knows which originator it has to send the data to.
+Each node-local table has a particular attribute: the translation table version number (ttvn). The value of this attribute represents the version of the table: each time the node decides to spread the table, if something happened since the last spread (a client has been added/removed), the ttvn is incremented by one.
+In this way, two tables belonging to the same node can be chronologically ordered and it is moreover possible to decide whether they differ without checking all the entries. Moreover, the translation table version number is a new OGM field which contains the originator's ttvn value at the moment of sending.
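To make this concrete, here is a minimal sketch of how a local table and its ttvn could be modelled; the names (local_entry, local_table, tt_local_commit) are invented for the example and are not the actual batman-adv structures:
<pre><code>#include <stdint.h>
#include <stdbool.h>

#define ETH_ALEN 6

/* one locally served client (illustrative, not the kernel struct) */
struct local_entry {
	uint8_t addr[ETH_ALEN];      /* client MAC address */
	struct local_entry *next;
};

/* the node's local translation table */
struct local_table {
	struct local_entry *entries; /* clients currently served by this node */
	uint8_t ttvn;                /* translation table version number */
	bool changed;                /* client added/removed since last spread */
};

/* called right before an OGM is sent: the ttvn is bumped only if the
 * table changed since the previous spread event */
static void tt_local_commit(struct local_table *tt)
{
	if (tt->changed) {
		tt->ttvn++;          /* uint8_t, so it simply wraps around */
		tt->changed = false;
	}
}</code></pre>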
h2. The global translation table
-Every node in the network has to store all the other node local tables. To
-achieve this, another table is needed: the "global translation table". It is a
-set of entries where each of them contains the client MAC address and a pointer
-to the originator that is currently announcing it.
+Every node in the network has to store all the other nodes' local tables. To achieve this, another table is needed: the "global translation table". It is a set of entries, each of which contains a client MAC address and a pointer to the originator that is currently announcing it.
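Conceptually, a global table entry therefore boils down to a (client MAC, originator) pair. A hedged sketch with invented names:
<pre><code>#include <stdint.h>

#define ETH_ALEN 6

struct orig_node;                      /* the announcing originator */

/* illustrative sketch of one global translation table entry */
struct global_entry {
	uint8_t addr[ETH_ALEN];        /* client MAC address */
	struct orig_node *orig_node;   /* originator serving this client */
};</code></pre>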
h2. Updating the tables
-At boot time, every node will have an empty local table and empty global one.
-Its ttvn will be initialised to 0. The OGM sending represents the local table
+At boot time, every node has an empty local table and an empty global one. Its ttvn is initialised to 0. The sending of an OGM represents the local table
spread event.
-At this point, on each OGM sending, if something changed in the local table
-since the last event, the ttvn is incremented by one and the list of the
-changes is appended to the OGM message. On the receiver side when receiving a
-new OGM, the node can use the new ttvn field to detect any change in the
-originator's local table. If so the receiver node will use the appended
-changes to update its global translation table.
-
-In case of missing OGM, a query mechanism has been provided. A node will
-detect the missing information using the ttvn field: in case of gap the node
+At this point, on each OGM sending, if something changed in the local table since the last event, the ttvn is incremented by one and the list of changes is appended to the OGM message. On the receiver side, when a new OGM arrives, the node can use the new ttvn field to detect any change in the originator's local table. If so, the receiving node will use the appended changes to update its global translation table.
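As an illustration of this rule (a simplified model, not the kernel code), the receiver's decision can be reduced to comparing the stored ttvn with the one carried by the OGM:
<pre><code>#include <stdint.h>
#include <stddef.h>

/* what the receiver does with the TT information carried by an OGM */
enum tt_action {
	TT_NO_CHANGE,      /* same ttvn: the global table is already up to date */
	TT_APPLY_CHANGES,  /* ttvn advanced by one and changes are attached */
	TT_SEND_REQUEST,   /* gap detected (or changes missing): use TT_REQUEST */
};

static enum tt_action tt_update_on_ogm(uint8_t stored_ttvn, uint8_t ogm_ttvn,
				       size_t n_appended_changes)
{
	if (ogm_ttvn == stored_ttvn)
		return TT_NO_CHANGE;

	/* uint8_t arithmetic wraps, so a ttvn rollover is handled as well */
	if ((uint8_t)(stored_ttvn + 1) == ogm_ttvn && n_appended_changes > 0)
		return TT_APPLY_CHANGES;

	return TT_SEND_REQUEST;
}</code></pre>
For instance, a node storing ttvn 5 for an originator and receiving an OGM with ttvn 7 cannot rely on the appended changes (they only describe the last step) and falls back to the query mechanism described below.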
+
+In case of a missing OGM, a query mechanism is provided. A node detects the missing information using the ttvn field: in case of a gap the node
will ask for the needed information using a TT_REQUEST message.
-In particulal a node can issue for two different information:
+In particular a node can ask for two different pieces of information:
- The originator's full local table
- The last set of changes the originator sent within the OGM.
-This distination is done using the TT_FULL_TABLE bit of the bitwise **flag
-field** in the TT_QUERY packet.
+This distinction is made using the TT_FULL_TABLE bit of the bitwise **flag field** in the TT_QUERY packet.
-The originator that receives the TT_REQUEST message will reply with a
-TT_RESPONSE to which the node will append the requested data.
+The originator that receives the TT_REQUEST message will reply with a TT_RESPONSE to which it appends the requested data.
-In particular the TT_REQUEST/TT_REPONSE messages are two subtypes of the TT_QUERY
-message which has the following fields (only fields related to the TT
-mechanism have been reported):
+In particular, the TT_REQUEST/TT_RESPONSE messages are two subtypes of the TT_QUERY message, which has the following fields (only the fields related to the TT mechanism are reported):
<pre><code>uint8_t flags;
uint8_t src[ETH_ALEN];
uint8_t ttvn;
uint16_t tt_data;</code></pre>
-The flag field is used to distinguish between a TT_REQUEST and a TT_RESPONSE
-and to inform whether the TT_QUERY message is asking for/carrying a full local
-table or only the last OGM transtable buffer.
+The flag field is used to distinguish between a TT_REQUEST and a TT_RESPONSE and to indicate whether the TT_QUERY message is asking for/carrying a full local table or only the last OGM transtable buffer.
The ttvn field indicates the ttvn the message is asking for/replying with.
-So, in case of OGM buffer request, this value will be set to the ttvn
-associated to the buffer the node needs.
+So, in case of an OGM buffer request, this value is set to the ttvn associated with the buffer the node needs.
The tt_data field has different content in case of TT_REQUEST or TT_RESPONSE.
-If the message is a TT_REQUEST, this field is set to the tt_crc value (for
-details about this value, please read the TT consistency section) that the
-requesting node received by means of the OGM from the destination (of this
-TT_REQUEST).
-If the message is a TT_RESPONSE then this field is set to the number of
-entries the message is carrying. This information is needed to let the receiver
+If the message is a TT_REQUEST, this field is set to the tt_crc value (for details about this value, please read the TT consistency section) that the requesting node received by means of the OGM from the destination of this TT_REQUEST.
+If the message is a TT_RESPONSE, this field is set to the number of entries the message is carrying. This information is needed to let the receiver
node correctly handle the appended buffer.
-In case of unavailability of the last OGM transtable buffer the node will
-answer with the full table.
+In case the last OGM transtable buffer is not available, the node will answer with the full table.
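To tie the fields together, here is a hedged sketch of how a TT_REQUEST and a TT_RESPONSE could be filled in; the flag constants below are placeholders, the real values are defined in the batman-adv sources:
<pre><code>#include <stdint.h>
#include <string.h>

#define ETH_ALEN 6

/* placeholder flag values: the real constants live in the kernel sources */
#define TT_REQUEST     (1 << 0)
#define TT_RESPONSE    (1 << 1)
#define TT_FULL_TABLE  (1 << 2)

/* TT-related part of the TT_QUERY packet, as listed above */
struct tt_query {
	uint8_t  flags;
	uint8_t  src[ETH_ALEN];
	uint8_t  ttvn;
	uint16_t tt_data;
};

/* request the information leading to 'wanted_ttvn'; 'crc' is the tt_crc the
 * requesting node received via the destination's OGM */
static void tt_build_request(struct tt_query *q, const uint8_t *my_addr,
			     uint8_t wanted_ttvn, uint16_t crc, int full_table)
{
	q->flags = TT_REQUEST | (full_table ? TT_FULL_TABLE : 0);
	memcpy(q->src, my_addr, ETH_ALEN);
	q->ttvn = wanted_ttvn;
	q->tt_data = crc;           /* in a request: the expected tt_crc */
}

/* in a response, tt_data carries the number of appended entries instead */
static void tt_build_response(struct tt_query *q, const uint8_t *my_addr,
			      uint8_t ttvn, uint16_t n_entries, int full_table)
{
	q->flags = TT_RESPONSE | (full_table ? TT_FULL_TABLE : 0);
	memcpy(q->src, my_addr, ETH_ALEN);
	q->ttvn = ttvn;
	q->tt_data = n_entries;     /* in a response: how many entries follow */
}</code></pre>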
h2. TT_REQUEST forward breaking:
-To avoid unicast storming in case of multiple TT_REQUEST traversing the
-network, each node on the path of such message will inspect it and decide
-whether it has the correct information to answer with. To check this, the
-intermediate node has to inspect the TT_REQUEST and compare the ttvn and the
-tt_data (that is the tt_crc in this case) fields with its own information. If
-the destination's ttvn it knows is equal to the requested the ttvn and the
-tt_crc matches as well, then the intermediate node can directly reply to the
-request (with the full table of the destination or the last OGM buffer if
-needed and possible). If something didn't match, the node will forward the
-packet to the nexthop in the path to the destination (as a simple unicast
+To avoid unicast storming in case of multiple TT_REQUESTs traversing the network, each node on the path of such a message will inspect it and decide whether it has the correct information to answer with. To check this, the intermediate node has to inspect the TT_REQUEST and compare the ttvn and the tt_data (that is, the tt_crc in this case) fields with its own information. If the destination's ttvn it knows is equal to the requested ttvn and the tt_crc matches as well, then the intermediate node can directly reply to the request (with the full table of the destination or the last OGM buffer, if needed and possible). If something does not match, the node will forward the packet to the next hop on the path to the destination (as a simple unicast
packet).
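A rough model of this check (names invented for the sketch): the intermediate node compares its cached view of the destination with the values carried by the request and only answers when both match:
<pre><code>#include <stdint.h>
#include <stdbool.h>

/* what an intermediate node remembers about the request's destination,
 * learnt from the destination's OGMs */
struct dest_info {
	uint8_t  ttvn;     /* destination's last known ttvn */
	uint16_t tt_crc;   /* destination's last known table CRC */
};

/* an intermediate node may answer a TT_REQUEST itself only when both the
 * requested ttvn and the tt_crc match its own view of the destination;
 * otherwise the request is forwarded as a plain unicast packet */
static bool tt_can_answer_locally(const struct dest_info *d,
				  uint8_t req_ttvn, uint16_t req_crc)
{
	return d->ttvn == req_ttvn && d->tt_crc == req_crc;
}</code></pre>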
h2. TT consistency:
-The tt_crc field has been added to the struct orig_node. This field is
-computed for a generic originator O as "the xor of all the crc16 value on
-each tt_global_entry->addr field of those entries pointing to O":
+The tt_crc field has been added to struct orig_node. This field is computed for a generic originator O as "the XOR of all the crc16 values of the tt_global_entry->addr fields of those entries pointing to O".
+
**Pseudocode**:
<pre><code>tt_global_crc(orig_node O) {
res = 0;
@@ -119,15 +75,10 @@ for each tt_global_entry:
endif
endfor
return res</code></pre>
-Moreover the same field has been added to struct bat_priv (a local tt_crc).
-It is computed in the same way as before but using the tt_local_entry->addr
-field of all the local entries. As it is possible to guess, bat_priv->tt_crc
-of a generic node A has to be equal to orig_node_A->tt_crc on all the other
-nodes.
-The tt_crc field is also added to the OGM packet, in this way the node sin the
-network can check whether their global tables are consistent or not. In case
-of mismatch, the full table is recovered through a TT_REQUEST.
+Moreover, the same field has been added to struct bat_priv (a local tt_crc). It is computed in the same way as before, but using the tt_local_entry->addr field of all the local entries. As one can guess, bat_priv->tt_crc of a generic node A has to be equal to orig_node_A->tt_crc on all the other nodes. The tt_crc field is also added to the OGM packet, so that the nodes in the network can check whether their global tables are consistent or not. In case of mismatch, the full table is recovered through a TT_REQUEST.
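As a sketch of how this check could look in C (the entry layout and helper names are invented, and crc16() is assumed to be provided, e.g. by the kernel's own crc16 helper), the node recomputes the global CRC for the originator and compares it with the value advertised in the OGM:
<pre><code>#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define ETH_ALEN 6

struct orig_node;                      /* forward declaration only */

/* illustrative global table entry, matching the pseudocode above */
struct global_entry {
	uint8_t addr[ETH_ALEN];
	const struct orig_node *orig_node;
};

/* stand-in for a CRC16 routine (assumed to be provided elsewhere) */
uint16_t crc16(uint16_t crc, const uint8_t *buf, size_t len);

/* XOR of the crc16 of every client currently announced by 'orig' */
static uint16_t tt_global_crc(const struct global_entry *tab, size_t n,
			      const struct orig_node *orig)
{
	uint16_t res = 0;
	size_t i;

	for (i = 0; i < n; i++)
		if (tab[i].orig_node == orig)
			res ^= crc16(0, tab[i].addr, ETH_ALEN);
	return res;
}

/* consistency check against the tt_crc carried by the originator's OGM:
 * a mismatch means this node's view is stale and a full-table TT_REQUEST
 * is needed */
static bool tt_needs_full_request(const struct global_entry *tab, size_t n,
				  const struct orig_node *orig,
				  uint16_t ogm_tt_crc)
{
	return tt_global_crc(tab, n, orig) != ogm_tt_crc;
}</code></pre>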
h2. TT structures in detail: