Abstract
Internet usage is experiencing a rapid evolution from host-to-host communication to content dissemination. To embrace this trend, content-centric networking (CCN) has been proposed as a promising future Internet architecture. In CCN, end hosts communicate based on named content rather than the IP addresses of the hosts holding the content. Content routers serve as the underlying infrastructure that carries the high-speed exchange of content request messages and content return messages. Unlike the stateless forwarding model of an IP router, a content router has a far more sophisticated data plane with the capability of in-network caching. Although in-network caching allows content requests to be satisfied directly by routers rather than by end hosts, thus reducing network congestion, it impairs the classic "end-to-end" principle of network design by pushing extra state into the intermediary nodes of the network. This extra state places a performance burden on the content router's data plane and makes CCN packet forwarding a hot research topic. A content router contains three major components, namely the forwarding information base (FIB), the pending interest table (PIT) and the content store (CS). Owing to performance issues such as longest prefix match over an unbounded name space, the expensive memory cost of explosive routing state, and the frequent updates of pending state, many recent works focus on optimizing the FIB and the PIT. Interestingly, however, although it is quite obvious that the CS also suffers a performance penalty, it seems to have been unintentionally ignored by the research community. The FIB, PIT and CS are organized as a three-stage pipeline, and a pipeline generally runs only as fast as its slowest stage; therefore, to improve the overall performance of the content router, we should first identify the performance bottleneck and then eliminate that exact bottleneck with a targeted design. In this work, to prevent "blind optimization", we first build a mathematical model based on open queuing networks; the quantitative analysis identifies the CS as the performance bottleneck of the whole router. Currently, the state-of-the-art work adopts the skip list as the data structure of the CS. However, due to its O(log n) lookup complexity, the classic skip list still has performance problems when handling high-speed network traffic. Inspired by the temporal and spatial locality that is widely present in network traffic, we propose the locality-principle skip list to improve the performance of the CS. In the new design, considering that a newly arrived content request may share the same name prefix with previously arrived requests, and that the requested content chunks usually reside in adjacent positions, …
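The abstract names the classic skip list's O(log n) lookup as the CS bottleneck and proposes a locality-principle skip list, but the record does not spell out the exact structure. As a rough illustration only, the Python sketch below shows one common way to exploit request locality in a skip list: remembering a "finger" to the node hit by the previous lookup and letting a nearby follow-up request resume from it at the bottom level, falling back to the normal top-down search otherwise. The class name LocalitySkipList, the finger mechanism and the walk bound are assumptions made here for illustration, not the data structure actually proposed in the paper.

```python
import random

class Node:
    def __init__(self, key, value, level):
        self.key = key
        self.value = value
        self.forward = [None] * level          # forward[i]: next node at level i

class LocalitySkipList:
    """Classic skip list plus a 'finger' shortcut for nearby lookups (illustrative sketch)."""
    MAX_LEVEL = 16
    P = 0.5
    FINGER_WALK_LIMIT = 8                      # bound on the level-0 walk from the finger

    def __init__(self):
        self.head = Node(None, None, self.MAX_LEVEL)   # sentinel head node
        self.level = 1
        self.finger = self.head                # node reached by the previous successful lookup

    def _random_level(self):
        lvl = 1
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key, value):
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, value, lvl)
        for i in range(lvl):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, key):
        """Plain top-down search from the head: O(log n) expected."""
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        if node and node.key == key:
            self.finger = node                 # remember the hit for the next lookup
            return node.value
        return None

    def locality_search(self, key):
        """If the key lies just after the previous hit (temporal/spatial locality of
        content names, e.g. consecutive chunks of one object), resume from the finger
        at level 0; otherwise fall back to the regular top-down search."""
        node = self.finger
        if node is not self.head and node.key <= key:
            for _ in range(self.FINGER_WALK_LIMIT + 1):
                if node.key == key:
                    self.finger = node
                    return node.value
                nxt = node.forward[0]
                if nxt is None or nxt.key > key:
                    return None                # neighbors bracket the key: it is absent
                node = nxt
        return self.search(key)

# Example: consecutive chunk requests for the same content benefit from the finger.
cs = LocalitySkipList()
for i in range(100):
    cs.insert(f"/video/a/chunk{i:03d}", b"data")
assert cs.locality_search("/video/a/chunk010") == b"data"   # full search, sets the finger
assert cs.locality_search("/video/a/chunk011") == b"data"   # short level-0 walk from the finger
```

Under such locality, a hit that closely follows a previous nearby hit costs only a few pointer steps instead of a full O(log n) descent, while a distant key or a miss degenerates to the classic search.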
Authors
潘恬
黄韬
张雪贝
PAN Tian; HUANG Tao; ZHANG Xue-Bei (State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876; Beijing Advanced Innovation Center for Future Internet Technology, Beijing 100124)
Source
《计算机学报》
EI
CSCD
Peking University Core Journal (北大核心)
2018, No. 9, pp. 2029-2043 (15 pages)
Chinese Journal of Computers
Funding
National High Technology Research and Development Program of China (863 Program) (2015AA016101)
National Natural Science Foundation of China (61501042, 61702049)
China Postdoctoral Science Foundation (2016M590068)
Beijing Nova Program (Z151100000315078)
Keywords
content-centric networking
content router
queuing networks
skip list
locality
search algorithm