The DHT Protocol as a Companion to the BitTorrent Protocol


Let's analyze a BT magnet-link search engine that tops the rankings for average indexing speed (rankings compiled and updated daily).

It uses a distributed architecture:

Front-end client: ASP.NET MVC with EF 6.0

Back-end Java system:

* J2EE core components: JSP, Servlet, JDBC, EJB, JNDI
* Data interchange: XML
* Front-end pages: HTML, DHTML, Vue, and the Ext open-source framework
* Controller layer: Spring MVC
* Business logic layer: Spring core
* Persistence layer: MyBatis, Hibernate
* Middleware: EJB 2.0
* Operating system: Windows Server 2008
* Databases: DB2, Oracle
* Application servers: JBoss AS 7, Tomcat
* Development tools: WebSphere Studio Application Developer (WSAD), Eclipse
* Search: ES 6.22 + Solr dual engines

.NET system: built on Enterprise Library 5.0 + EF 6.0, combined with a WCF architecture.

How it works:

The back end is a crawler system written in Java.

It has collected more than 80 million info-hash records.

 

Implementing a DHT network crawler

How the DHT protocol works, with analysis of the key points:

To build a DHT crawler you first have to understand DHT thoroughly, so that you know which algorithm to apply where. For the details of the DHT protocol and some important reference articles, see reference 1 at the end of this article.

The DHT protocol, as a companion to the BitTorrent protocol, is great fun. Its main purpose is to locate peers and resources when a BT download actually starts. A traditional network needs a central server to host torrents and resource records, which not only wastes server capacity but also creates a single point of failure. The DHT network decentralizes this: at any moment some nodes in the network are alive, and by querying those live nodes you can join the DHT network yourself.

Implementing a crawler for the DHT protocol takes three steps. Step one is obtaining resource identifiers (the infohash: 160 bits, i.e. 20 bytes, which can be encoded as a 40-character hexadecimal string). Step two is confirming that these infohashes are valid. Step three is downloading the torrent file for each valid infohash, which yields the complete description of the resource.

Step one happens when other nodes send the crawler get_peers requests defined in the DHT protocol; step two happens when other nodes send the crawler announce_peer requests. Step three can be done in several ways: you can download the torrent directly from a torrent-hosting site by infohash, or fetch it from the node that sent the announce_peer; how exactly is up to your own crawler.

The main operations in the DHT protocol:

 

This layer talks to external nodes over UDP and wraps the requests and responses for the four basic operations.

ping: checks whether a node is "alive".

A crawler uses ping in two places: when initializing the routing table, and when verifying that a node is still alive.

find_node: asks a node to look up other nodes.

A crawler likewise uses find_node in two places: when initializing the routing table, and when checking whether a bucket is still alive.

get_peers: asks a node to look up a resource.

When another node sends the crawler a get_peers request, the crawler not only responds like a normal node, it also treats the request's info_hash as an opportunity to get to know as many more nodes as possible. In the protocol, the step that follows get_peers is announce_peer, but since a crawler must not announce_peer (see below), get_peers effectively degenerates into a find_node operation.

announce_peer: notifies a node that you have started downloading a resource.

A crawler must never use announce_peer: doing so amounts to announcing a fake resource, and the other side can easily tell from context that the announcement is fake and ban you.
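
To make the wire format of these operations concrete, here is a minimal sketch of sending the simplest one, a ping, as a KRPC message over UDP (the full format is specified in the appendix below). It uses only the Python standard library; the bencoder is hand-rolled, and the bootstrap address is only illustrative.

```python
import socket

def bencode(obj):
    """Minimal bencoder covering the types KRPC messages use."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, str):
        obj = obj.encode()
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        return (b"d"
                + b"".join(bencode(k) + bencode(v) for k, v in sorted(obj.items()))
                + b"e")
    raise TypeError(type(obj))

MY_ID = b"abcdefghij0123456789"  # any 20-byte value serves as a node ID
query = {"t": "aa", "y": "q", "q": "ping", "a": {"id": MY_ID}}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(bencode(query), ("router.bittorrent.com", 6881))  # well-known bootstrap node
print(sock.recvfrom(4096))  # bencoded response dict, or socket.timeout
```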

Several points in the DHT protocol deserve clarification:

1. Nodes and infohashes both use 160-bit identifiers. 160 bits means the ID space holds 2^160 = 1461501637330902918203684832716283019655932542976 values, a 49-digit decimal number. A space that large is ample room for your host's node and any amount of resource information.

2. Each node keeps a routing table, and each routing table is made up of K-buckets: a bucket holds at most K nodes, 8 by default. The buckets are stored in a structure similar to a prefix tree; in this implementation a routing table whose buckets hold 8 nodes contains at most 160 - 4 = 156 K-buckets.

3. The DHT protocol places every infohash at a position in this space, so there is a notion of distance between two infohashes, defined by XOR: infohash1 xor infohash2. IDs that share more high-order bits are closer; otherwise they are farther apart, and this makes the distance between two nodes very fast to compute. What is the distance for? In a DHT network, the closer a resource's infohash is to a node's ID, the more likely that node holds information about the resource. Why? Because everyone uses the same distance metric to recursively query the nodes nearest the resource, and any such node that responds will eventually receive an announce message; nodes close to a resource's infohash therefore have a higher probability of learning that infohash.

4. Given this metric, lookups in a DHT proceed in jumps, rapidly crossing distant buckets to approach the target bucket. The lookup can take large strides when far away but only small steps when close, because each node's routing table stores more nodes the closer they are to the node itself.

5. Crawling a DHT network is not easy. Unlike an ordinary web crawler, which actively fetches whatever resources it sees, a DHT crawler receives resources passively (via get_peers and announce_peer), so it must work differently: it behaves like a normal node, answering other nodes' queries and collecting the data in the queries and responses it receives; that is the whole job. The only lever the crawler has is to get to know as many other nodes as possible, so that more of them come to it with questions.

6. Some ask: if I enlarge the capacity K of my crawler's buckets, will I receive more resources? Not really. As analyzed above, a DHT crawler's most important information sources are all passive; you cannot enlarge other nodes' K, so the farther away a node is, the lower the probability it stores you, and correspondingly the lower the probability it ever queries you.

The main components (the real implementation is more complex and contains other modules; only the main ones are listed here):

   DHT crawler:

This is the crawler's main logic. To simplify threading, it forms a producer-consumer pair with the server, playing the consumer, and it reuses the server's port.

Its main job is initialization, including initializing the routing table and issuing the initial requests. It also handles every incoming message event. Thanks to the producer-consumer model, its operations are essentially single-threaded, which removes a whole class of problems and should also be faster than locking everywhere (strictly speaking the locking just moves into the queue, but for a producer that never stops producing, a ring buffer can raise throughput considerably).
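
A minimal sketch of that producer-consumer arrangement, using only the standard library; handle_message is a placeholder for the crawler's real dispatch logic.

```python
import queue
import threading

events = queue.Queue()  # server threads produce (msg, addr); the crawler consumes

def handle_message(msg, addr):
    # Placeholder: a real crawler decodes the KRPC dict here and dispatches
    # to its ping/find_node/get_peers/announce_peer handlers.
    print("event from", addr)

def crawler_loop():
    # Single consumer thread: every routing-table mutation happens here, so
    # the table itself needs no locks; the Queue is the only synchronized point.
    while True:
        msg, addr = events.get()
        handle_message(msg, addr)

threading.Thread(target=crawler_loop, daemon=True).start()
# The server side just enqueues raw packets as they arrive:
# events.put((packet, addr))
```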

   DHT server:

This is the crawler's server side. A node in a DHT network is not just a client but also a server, so a server must play the producer role. Initially each consumer had its own producer, but it turned out that IO multiplexing can deliver the same message events, which drastically reduces the number of threads in the system. If the client side permits, it should be organized the same way; the system should then run much faster. (Not yet verified.)
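
A sketch of that IO-multiplexed server using the standard-library selectors module: one thread watches every DHT socket and feeds the consumer queue from the previous sketch. The port numbers are arbitrary.

```python
import queue
import selectors
import socket

events = queue.Queue()  # shared with the crawler's consumer thread

def serve(ports=(6881, 6882, 6883)):
    # One selector multiplexes all sockets instead of one thread per port.
    sel = selectors.DefaultSelector()
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", port))
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)
    while True:
        for key, _ in sel.select():
            data, addr = key.fileobj.recvfrom(4096)
            events.put((data, addr))  # produce; the crawler thread consumes
```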

   DHT route table:

Handles routing-table operations.

The routing table supports the following operations:

init: runs when the routing table is first created. Two cases:

1. If the table was initialized before and its data was saved, just read the saved data back in.

2. If it was never initialized, it must be bootstrapped first.

To bootstrap, you need an entry point: to join the network you must know some node i already in it, add i to your routing table, and then send i a find_node query for your own node ID. The clever part is that, in theory, a bounded number of queries will find nodes very close to you (i.e. the search converges after a certain number of steps). The purpose of find_node is to populate your table as early as possible and to make other nodes on the network aware of you; if nobody knows you, nobody sends you messages, which means you cannot collect the information you want.
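
A sketch of that bootstrap convergence, assuming a send_find_node callback (hypothetical signature) that wraps one actual KRPC find_node round trip:

```python
def xor_distance(a: bytes, b: bytes) -> int:
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def bootstrap(my_id, seed_nodes, send_find_node, rounds=8, alpha=8):
    # seed_nodes: [(node_id, (ip, port)), ...] -- the entry point(s) "i".
    # send_find_node(addr, target) -> [(node_id, addr), ...] is assumed to
    # send one find_node query and return the decoded "nodes" list.
    known = dict(seed_nodes)
    best = min(xor_distance(n, my_id) for n in known)
    for _ in range(rounds):
        # Ask the alpha closest nodes we currently know for our own ID.
        for nid in sorted(known, key=lambda n: xor_distance(n, my_id))[:alpha]:
            for new_id, addr in send_find_node(known[nid], my_id):
                known.setdefault(new_id, addr)
        now_best = min(xor_distance(n, my_id) for n in known)
        if now_best >= best:
            break  # converged: this round found nothing strictly closer
        best = now_best
    return known
```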

search: an important method that locates the bucket containing a given infohash; it is called by the various delegating methods below.

findNodes: returns the k nodes in the routing table closest to a given infohash.

getPeer: checks whether the queried resource has peers (i.e. whether anyone is downloading it, which is to say whether anyone has announced it).

announcePeer: notifies the table that a resource is being downloaded.

   DHT bucket:

activeNode: the logic here has several branches:

 

1. Find the bucket that covers the node to be added. If the bucket is not full, add the node.

2. If the bucket is full, check whether it contains the crawler's own node. If it does not, discard the candidate node.

3. If the bucket does contain the crawler's own node, split the bucket evenly in two (sketched below).
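
A minimal sketch of this insert-or-split logic, under a hypothetical Bucket class with IDs handled as integers (not the author's actual code):

```python
K = 8  # bucket capacity

class Bucket:
    def __init__(self, lo, hi):  # covers node IDs in [lo, hi)
        self.lo, self.hi = lo, hi
        self.nodes = {}          # node_id (int) -> (ip, port)

    def covers(self, nid):
        return self.lo <= nid < self.hi

def active_node(buckets, my_id, nid, addr):
    b = next(bk for bk in buckets if bk.covers(nid))
    if len(b.nodes) < K:
        b.nodes[nid] = addr    # 1. bucket not full: just add
    elif not b.covers(my_id):
        return                 # 2. full and doesn't cover us: discard candidate
    else:
        # 3. full and covers our own ID: split the range evenly and retry.
        mid = (b.lo + b.hi) // 2
        lo, hi = Bucket(b.lo, mid), Bucket(mid, b.hi)
        for n, a in b.nodes.items():
            (lo if lo.covers(n) else hi).nodes[n] = a
        i = buckets.index(b)
        buckets[i:i + 1] = [lo, hi]
        active_node(buckets, my_id, nid, addr)

# Usage: buckets = [Bucket(0, 2**160)]; active_node(buckets, my_id, nid, addr)
```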

The remaining methods, such as locateNode, replaceNode, updateNode, and removeNode, need no individual explanation.

   DHT torrent parser:  

Parses the important fields out of a BT torrent file: name, size, and the file list (sub-file names and sizes). This is straightforward: just decode the file with bencode.
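
A sketch of such a parser with a hand-rolled bdecoder (enough of bencode to read real torrents; the field names follow the BitTorrent metainfo format):

```python
def bdecode(data: bytes, i: int = 0):
    """Minimal bencode decoder for ints, strings, lists, and dicts."""
    c = data[i:i + 1]
    if c == b"i":                          # integer: i<digits>e
        j = data.index(b"e", i)
        return int(data[i + 1:j]), j + 1
    if c in (b"l", b"d"):                  # list or dict: l...e / d...e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            v, i = bdecode(data, i)
            items.append(v)
        if c == b"l":
            return items, i + 1
        return dict(zip(items[::2], items[1::2])), i + 1
    j = data.index(b":", i)                # string: <length>:<bytes>
    n = int(data[i:j])
    return data[j + 1:j + 1 + n], j + 1 + n

def parse_torrent(path):
    with open(path, "rb") as f:
        meta, _ = bdecode(f.read())
    info = meta[b"info"]
    name = info[b"name"].decode("utf-8", "replace")
    if b"files" in info:                   # multi-file torrent
        files = [(b"/".join(f[b"path"]).decode("utf-8", "replace"), f[b"length"])
                 for f in info[b"files"]]
        return name, sum(size for _, size in files), files
    return name, info[b"length"], []       # single-file torrent
```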

   Utils:

distance: computes the distance between two resources; in Kademlia this is a xor b.
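
In code this is a one-liner over the 20-byte IDs, and the bucket index falls out of the same number (a sketch):

```python
def distance(a: bytes, b: bytes) -> int:
    # Kademlia XOR metric over 20-byte IDs/infohashes; smaller means closer.
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def bucket_index(a: bytes, b: bytes) -> int:
    # Number of leading bits the two IDs share, i.e. the prefix-tree depth.
    d = distance(a, b)
    return 160 if d == 0 else 160 - d.bit_length()
```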

 

To raise the difficulty I chose Python, a language I did not know well. Progress was step by step, but I came away impressed by how concise and powerful Python is. The implementation surfaced many interesting problems. For example, how should all the buckets of a routing table be stored? I came up with several schemes, and to save memory I even planned to store them directly in a bit array plus a dict, but gave that up because the eventual operations on it looked neither convenient nor intuitive and thus error-prone. The structure I settled on was a prefix tree, which indeed made every operation painless.

On timeouts, such as bucket timeouts and node timeouts, I kept looking for an approach that is efficient yet reasonably elegant. You could make a synchronous call and wait for it to time out, but that is clearly inefficient, especially since I was not using many threads: one blocked call would block every event on that port. So the operation has to be asynchronous, but asynchronous operations make it hard to control exactly when the timeout fires. I could check for expiry every time an event arrives, but that is also wasteful and inefficient. What remains is the approach Tomcat takes: add a monitoring thread. Ideally this monitor is global and watches the timeouts of every transaction in every crawler. Note too that mishandled timeouts easily leave memory unreclaimed, i.e. memory leaks, and whether the timeout thread interferes with other threads also deserves careful checking.
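
A sketch of such a global monitor, assuming pending transactions are identified by their KRPC transaction ID and register a timeout callback; the heap ordering and half-second resolution are arbitrary choices:

```python
import heapq
import itertools
import threading
import time

class TimeoutMonitor:
    """One global thread watching every pending transaction in every crawler."""

    def __init__(self, resolution=0.5):
        self.heap = []                 # (deadline, seq, txn_id, on_timeout)
        self.seq = itertools.count()   # tie-breaker so callbacks never get compared
        self.lock = threading.Lock()
        self.resolution = resolution
        threading.Thread(target=self._run, daemon=True).start()

    def watch(self, txn_id, seconds, on_timeout):
        with self.lock:
            heapq.heappush(self.heap,
                           (time.time() + seconds, next(self.seq), txn_id, on_timeout))

    def _run(self):
        while True:
            fired, now = [], time.time()
            with self.lock:
                while self.heap and self.heap[0][0] <= now:
                    fired.append(heapq.heappop(self.heap))
            for _, _, txn_id, cb in fired:
                cb(txn_id)             # drop the pending state here, or it leaks
            time.sleep(self.resolution)
```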

At first the timeout handling was wrong and produced a ping storm: after running for a while most buckets were full, and if you follow the protocol literally you find the vast majority of events are pings confirming whether some node is still OK, so most of the CPU goes to handling pings and ping responses. On reflection, checking node liveness turns out to be unnecessary for a crawler: node state only exists to hand good nodes to whoever asks. Given that, each arriving node can simply replace the oldest node in its bucket, so the bucket always holds the freshest nodes.

The search algorithm also puzzled me for a while. In short, the goal of searching is not really to find resources but to meet the nodes that can store you. Why "can store you"? Because the farther a node is from you, the fewer of its buckets cover your region, so getting into distant nodes' buckets is relatively hard; the search should therefore mainly target nearby nodes. Distant nodes may still store you, for instance while they are bootstrapping or when nodes in their buckets time out, but the probability is lower. So the search should neither query indiscriminately nor be strictly limited to the neighborhood; it is a trade-off. Lacking a better idea, for now I let far searches happen with some probability while near searches always happen.

There is also the question of search speed. The structure of a DHT network means the nodes any single node knows are necessarily a limited set of nearby nodes, so the number of resources one node can collect in a given period is bounded. You should therefore deploy multiple nodes crawling in parallel; the haul then scales largely with the number of nodes deployed.

The last spot worth optimizing is the findNodes method. The old approach pulled every node out of a bucket, sorted them, and returned the top K. That does a lot of unnecessary work: this is the classic top-N problem, and sorting is clearly wasteful. The operation runs so frequently that even with few stored nodes in total ((160 - 4) * 8), it adds measurable time. The algorithm I adopted comes from a paper, 《可扩展的 DHT 网络爬虫设计和优化》 ("Design and Optimization of a Scalable DHT Crawler"). Its basic identity is IDi = IDj xor 2^(160 - i): knowing IDi and i gives you IDj, and knowing IDi and IDj gives you i. With it you can jump straight to the buckets near a given bucket A (the nodes in the bucket nearest A in tree depth are the next-closest to A), which beats scanning and sorting everything.
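
A sketch of a findNodes built on that identity, under a hypothetical table.bucket_for(id) accessor: flipping progressively higher-order bits of the target (IDj = IDi xor 2^(160 - i) with i decreasing) visits candidate buckets in increasing-distance order, so no global sort is needed.

```python
def find_nodes(table, target: int, k: int = 8):
    # table.bucket_for(nid) -> covering bucket, with a .nodes dict (assumed API).
    out, seen = [], set()
    for i in range(160, 0, -1):
        # IDj = target xor 2**(160 - i); smallest flips come first, so candidate
        # buckets are produced from nearest to farthest.
        bucket = table.bucket_for(target ^ (1 << (160 - i)))
        if id(bucket) in seen:
            continue  # several low-order flips land in the same wide bucket
        seen.add(id(bucket))
        out.extend(bucket.nodes.items())
        if len(out) >= k:
            break
    return out[:k]
```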

  

References:

1. The DHT protocol: http://www.bittorrent.org/beps/bep_0005.html (Chinese translation: http://gobismoon.blog.163.com/blog/static/5244280220100893055533/)

2. A DHT-protocol network crawler: http://codemacro.com/2013/05/19/crawl-dht/

3. An analysis of how the DHT protocol works; very good, recommended reading: http://blog.sina.com.cn/s/blog_5384aaf00100a88k.html

 

Appendix: BEP 5, the DHT Protocol specification, reproduced below for reference.

BitTorrent uses a "distributed sloppy hash table" (DHT) for storing peer contact information for "trackerless" torrents. In effect, each peer becomes a tracker. The protocol is based on Kademlia [1] and is implemented over UDP.

Please note the terminology used in this document to avoid confusion. A "peer" is a client/server listening on a TCP port that implements the BitTorrent protocol. A "node" is a client/server listening on a UDP port implementing the distributed hash table protocol. The DHT is composed of nodes and stores the location of peers. BitTorrent clients include a DHT node, which is used to contact other nodes in the DHT to get the location of peers to download from using the BitTorrent protocol.

Overview

Each node has a globally unique identifier known as the "node ID." Node IDs are chosen at random from the same 160-bit space as BitTorrent infohashes[2]. A "distance metric" is used to compare two node IDs or a node ID and an infohash for "closeness." Nodes must maintain a routing table containing the contact information for a small number of other nodes. The routing table becomes more detailed as IDs get closer to the node's own ID. Nodes know about many other nodes in the DHT that have IDs that are "close" to their own but have only a handful of contacts with IDs that are very far away from their own.

In Kademlia, the distance metric is XOR and the result is interpreted as an unsigned integer: distance(A, B) = |A xor B|. Smaller values are closer.

When a node wants to find peers for a torrent, it uses the distance metric to compare the infohash of the torrent with the IDs of the nodes in its own routing table. It then contacts the nodes it knows about with IDs closest to the infohash and asks them for the contact information of peers currently downloading the torrent. If a contacted node knows about peers for the torrent, the peer contact information is returned with the response. Otherwise, the contacted node must respond with the contact information of the nodes in its routing table that are closest to the infohash of the torrent. The original node iteratively queries nodes that are closer to the target infohash until it cannot find any closer nodes. After the search is exhausted, the client then inserts the peer contact information for itself onto the responding nodes with IDs closest to the infohash of the torrent.

The return value for a query for peers includes an opaque value known as the "token." For a node to announce that its controlling peer is downloading a torrent, it must present the token received from the same queried node in a recent query for peers. When a node attempts to "announce" a torrent, the queried node checks the token against the querying node's IP address. This is to prevent malicious hosts from signing up other hosts for torrents. Since the token is merely returned by the querying node to the same node it received the token from, the implementation is not defined. Tokens must be accepted for a reasonable amount of time after they have been distributed. The BitTorrent implementation uses the SHA1 hash of the IP address concatenated onto a secret that changes every five minutes and tokens up to ten minutes old are accepted.
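
A sketch of that token scheme as described: SHA1 over the requester's IP plus a secret rotated every five minutes, accepting the current and the previous secret (standard library only).

```python
import hashlib
import os
import time

SECRET_LIFETIME = 300            # five minutes, per the paragraph above
_secrets = [os.urandom(8)]       # [current] or [current, previous]
_last_rotation = time.time()

def _rotate_if_due():
    global _last_rotation
    if time.time() - _last_rotation >= SECRET_LIFETIME:
        _secrets.insert(0, os.urandom(8))
        del _secrets[2:]         # keep two secrets -> tokens valid ~10 minutes
        _last_rotation = time.time()

def make_token(ip: str) -> bytes:
    _rotate_if_due()
    return hashlib.sha1(ip.encode() + _secrets[0]).digest()

def check_token(ip: str, token: bytes) -> bool:
    _rotate_if_due()
    return any(hashlib.sha1(ip.encode() + s).digest() == token for s in _secrets)
```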

Routing Table

Every node maintains a routing table of known good nodes. The nodes in the routing table are used as starting points for queries in the DHT. Nodes from the routing table are returned in response to queries from other nodes.

Not all nodes that we learn about are equal. Some are "good" and some are not. Many nodes using the DHT are able to send queries and receive responses, but are not able to respond to queries from other nodes. It is important that each node's routing table must contain only known good nodes. A good node is a node that has responded to one of our queries within the last 15 minutes. A node is also good if it has ever responded to one of our queries and has sent us a query within the last 15 minutes. After 15 minutes of inactivity, a node becomes questionable. Nodes become bad when they fail to respond to multiple queries in a row. Nodes that we know are good are given priority over nodes with unknown status.

The routing table covers the entire node ID space from 0 to 2^160. The routing table is subdivided into "buckets" that each cover a portion of the space. An empty table has one bucket with an ID space range of min=0, max=2^160. When a node with ID "N" is inserted into the table, it is placed within the bucket that has min <= N < max. An empty table has only one bucket so any node must fit within it. Each bucket can only hold K nodes, currently eight, before becoming "full." When a bucket is full of known good nodes, no more nodes may be added unless our own node ID falls within the range of the bucket. In that case, the bucket is replaced by two new buckets each with half the range of the old bucket and the nodes from the old bucket are distributed among the two new ones. For a new table with only one bucket, the full bucket is always split into two new buckets covering the ranges 0..2^159 and 2^159..2^160.

When the bucket is full of good nodes, the new node is simply discarded. If any nodes in the bucket are known to have become bad, then one is replaced by the new node. If there are questionable nodes in the bucket that have not been seen in the last 15 minutes, the least recently seen node is pinged. If the pinged node responds then the next least recently seen questionable node is pinged until one fails to respond or all of the nodes in the bucket are known to be good. If a node in the bucket fails to respond to a ping, it is suggested to try once more before discarding the node and replacing it with a new good node. In this way, the table fills with stable long running nodes.

Each bucket should maintain a "last changed" property to indicate how "fresh" the contents are. When a node in a bucket is pinged and it responds, or a node is added to a bucket, or a node in a bucket is replaced with another node, the bucket's last changed property should be updated. Buckets that have not been changed in 15 minutes should be "refreshed." This is done by picking a random ID in the range of the bucket and performing a find_nodes search on it. Nodes that are able to receive queries from other nodes usually do not need to refresh buckets often. Nodes that are not able to receive queries from other nodes usually will need to refresh all buckets periodically to ensure there are good nodes in their table when the DHT is needed.

Upon inserting the first node into its routing table and when starting up thereafter, the node should attempt to find the closest nodes in the DHT to itself. It does this by issuing find_node messages to closer and closer nodes until it cannot find any closer. The routing table should be saved between invocations of the client software.

BitTorrent Protocol Extension

The BitTorrent protocol has been extended to exchange node UDP port numbers between peers that are introduced by a tracker. In this way, clients can get their routing tables seeded automatically through the download of regular torrents. Newly installed clients who attempt to download a trackerless torrent on the first try will not have any nodes in their routing table and will need the contacts included in the torrent file.

Peers supporting the DHT set the last bit of the 8-byte reserved flags exchanged in the BitTorrent protocol handshake. A peer receiving a handshake indicating that the remote peer supports the DHT should send a PORT message. It begins with byte 0x09 and has a two-byte payload containing the UDP port of the DHT node in network byte order. Peers that receive this message should attempt to ping the node on the received port and IP address of the remote peer. If a response to the ping is received, the node should attempt to insert the new contact information into their routing table according to the usual rules.

Torrent File Extensions

A trackerless torrent dictionary does not have an "announce" key. Instead, a trackerless torrent has a "nodes" key. This key should be set to the K closest nodes in the torrent generating client's routing table. Alternatively, the key could be set to a known good node such as one operated by the person generating the torrent. Please do not automatically add "router.bittorrent.com" to torrent files or automatically add this node to clients' routing tables.

nodes = [["<host>", <port>], ["<host>", <port>], ...]
nodes = [["127.0.0.1", 6881], ["your.router.node", 4804]]

KRPC Protocol

The KRPC protocol is a simple RPC mechanism consisting of bencoded dictionaries sent over UDP. A single query packet is sent out and a single packet is sent in response. There is no retry. There are three message types: query, response, and error. For the DHT protocol, there are four queries: ping, find_node, get_peers, and announce_peer.

A KRPC message is a single dictionary with two keys common to every message and additional keys depending on the type of message. Every message has a key "t" with a string value representing a transaction ID. This transaction ID is generated by the querying node and is echoed in the response, so responses may be correlated with multiple queries to the same node. The transaction ID should be encoded as a short string of binary numbers, typically 2 characters are enough as they cover 2^16 outstanding queries. The other key contained in every KRPC message is "y" with a single character value describing the type of message. The value of the "y" key is one of "q" for query, "r" for response, or "e" for error.

Contact Encoding

Contact information for peers is encoded as a 6-byte string. Also known as "Compact IP-address/port info" the 4-byte IP address is in network byte order with the 2 byte port in network byte order concatenated onto the end.

Contact information for nodes is encoded as a 26-byte string. Also known as "Compact node info" the 20-byte Node ID in network byte order has the compact IP-address/port info concatenated to the end.
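
Decoding both compact forms is mechanical; a sketch using only the standard library:

```python
import struct

def decode_peers(blob: bytes):
    # Compact IP-address/port info: 4-byte IPv4 address then 2-byte port,
    # both in network byte order, 6 bytes per peer.
    return [(".".join(str(x) for x in blob[i:i + 4]),
             struct.unpack(">H", blob[i + 4:i + 6])[0])
            for i in range(0, len(blob), 6)]

def decode_nodes(blob: bytes):
    # Compact node info: 20-byte node ID followed by the 6-byte peer form.
    return [(blob[i:i + 20],) + decode_peers(blob[i + 20:i + 26])[0]
            for i in range(0, len(blob), 26)]

# decode_peers(b"\x7f\x00\x00\x01\x1a\xe1") -> [("127.0.0.1", 6881)]
```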

Queries

Queries, or KRPC message dictionaries with a "y" value of "q", contain two additional keys; "q" and "a". Key "q" has a string value containing the method name of the query. Key "a" has a dictionary value containing named arguments to the query.

Responses

Responses, or KRPC message dictionaries with a "y" value of "r", contain one additional key "r". The value of "r" is a dictionary containing named return values. Response messages are sent upon successful completion of a query.

Errors

Errors, or KRPC message dictionaries with a "y" value of "e", contain one additional key "e". The value of "e" is a list. The first element is an integer representing the error code. The second element is a string containing the error message. Errors are sent when a query cannot be fulfilled. The following table describes the possible error codes:

| Code | Description |
|------|-------------|
| 201  | Generic Error |
| 202  | Server Error |
| 203  | Protocol Error, such as a malformed packet, invalid arguments, or bad token |
| 204  | Method Unknown |

Example Error Packets:

generic error = {"t":"aa", "y":"e", "e":[201, "A Generic Error Ocurred"]}
bencoded = d1:eli201e23:A Generic Error Ocurrede1:t2:aa1:y1:ee

DHT Queries

All queries have an "id" key and value containing the node ID of the querying node. All responses have an "id" key and value containing the node ID of the responding node.

ping

The most basic query is a ping. "q" = "ping" A ping query has a single argument, "id", whose value is a 20-byte string containing the sender's node ID in network byte order. The appropriate response to a ping has a single key "id" containing the node ID of the responding node.

arguments:  {"id" : "<querying nodes id>"}

response: {"id" : "<queried nodes id>"}

Example Packets

ping Query = {"t":"aa", "y":"q", "q":"ping", "a":{"id":"abcdefghij0123456789"}}
bencoded = d1:ad2:id20:abcdefghij0123456789e1:q4:ping1:t2:aa1:y1:qe
Response = {"t":"aa", "y":"r", "r": {"id":"mnopqrstuvwxyz123456"}}
bencoded = d1:rd2:id20:mnopqrstuvwxyz123456e1:t2:aa1:y1:re

find_node

Find node is used to find the contact information for a node given its ID. "q" == "find_node" A find_node query has two arguments, "id" containing the node ID of the querying node, and "target" containing the ID of the node sought by the queryer. When a node receives a find_node query, it should respond with a key "nodes" and value of a string containing the compact node info for the target node or the K (8) closest good nodes in its own routing table.

arguments:  {"id" : "<querying nodes id>", "target" : "<id of target node>"}

response: {"id" : "<queried nodes id>", "nodes" : "<compact node info>"}

Example Packets

find_node Query = {"t":"aa", "y":"q", "q":"find_node", "a": {"id":"abcdefghij0123456789", "target":"mnopqrstuvwxyz123456"}}
bencoded = d1:ad2:id20:abcdefghij01234567896:target20:mnopqrstuvwxyz123456e1:q9:find_node1:t2:aa1:y1:qe
Response = {"t":"aa", "y":"r", "r": {"id":"0123456789abcdefghij", "nodes": "def456..."}}
bencoded = d1:rd2:id20:0123456789abcdefghij5:nodes9:def456...e1:t2:aa1:y1:re

get_peers

Get peers associated with a torrent infohash. "q" = "get_peers" A get_peers query has two arguments, "id" containing the node ID of the querying node, and "info_hash" containing the infohash of the torrent. If the queried node has peers for the infohash, they are returned in a key "values" as a list of strings. Each string contains "compact" format peer information for a single peer. If the queried node has no peers for the infohash, a key "nodes" is returned containing the K nodes in the queried node's routing table closest to the infohash supplied in the query. In either case a "token" key is also included in the return value. The token value is a required argument for a future announce_peer query. The token value should be a short binary string.

arguments:  {"id" : "<querying nodes id>", "info_hash" : "<20-byte infohash of target torrent>"}

response: {"id" : "<queried nodes id>", "token" :"<opaque write token>", "values" : ["<peer 1 info string>", "<peer 2 info string>"]}

or: {"id" : "<queried nodes id>", "token" :"<opaque write token>", "nodes" : "<compact node info>"}

Example Packets:

get_peers Query = {"t":"aa", "y":"q", "q":"get_peers", "a": {"id":"abcdefghij0123456789", "info_hash":"mnopqrstuvwxyz123456"}}
bencoded = d1:ad2:id20:abcdefghij01234567899:info_hash20:mnopqrstuvwxyz123456e1:q9:get_peers1:t2:aa1:y1:qe
Response with peers = {"t":"aa", "y":"r", "r": {"id":"abcdefghij0123456789", "token":"aoeusnth", "values": ["axje.u", "idhtnm"]}}
bencoded = d1:rd2:id20:abcdefghij01234567895:token8:aoeusnth6:valuesl6:axje.u6:idhtnmee1:t2:aa1:y1:re
Response with closest nodes = {"t":"aa", "y":"r", "r": {"id":"abcdefghij0123456789", "token":"aoeusnth", "nodes": "def456..."}}
bencoded = d1:rd2:id20:abcdefghij01234567895:nodes9:def456...5:token8:aoeusnthe1:t2:aa1:y1:re

announce_peer

Announce that the peer, controlling the querying node, is downloading a torrent on a port. announce_peer has four arguments: "id" containing the node ID of the querying node, "info_hash" containing the infohash of the torrent, "port" containing the port as an integer, and the "token" received in response to a previous get_peers query. The queried node must verify that the token was previously sent to the same IP address as the querying node. Then the queried node should store the IP address of the querying node and the supplied port number under the infohash in its store of peer contact information.

There is an optional argument called implied_port whose value is either 0 or 1. If it is present and non-zero, the port argument should be ignored and the source port of the UDP packet should be used as the peer's port instead. This is useful for peers behind a NAT that may not know their external port, and for peers supporting uTP, which accept incoming connections on the same port as the DHT port.

arguments:  {"id" : "<querying nodes id>",
  "implied_port": <0 or 1>,
  "info_hash" : "<20-byte infohash of target torrent>",
  "port" : <port number>,
  "token" : "<opaque token>"}

response: {"id" : "<queried nodes id>"}

Example Packets:

announce_peer Query = {"t":"aa", "y":"q", "q":"announce_peer", "a": {"id":"abcdefghij0123456789", "implied_port": 1, "info_hash":"mnopqrstuvwxyz123456", "port": 6881, "token": "aoeusnth"}}
bencoded = d1:ad2:id20:abcdefghij01234567899:info_hash20:mnopqrstuvwxyz1234564:porti6881e5:token8:aoeusnthe1:q13:announce_peer1:t2:aa1:y1:qe
Response = {"t":"aa", "y":"r", "r": {"id":"mnopqrstuvwxyz123456"}}
bencoded = d1:rd2:id20:mnopqrstuvwxyz123456e1:t2:aa1:y1:re
 
 