【openstack】【cinder】Offload rbd’s copy_volume_to_image function from host to ceph cluster

Currently, creating an image from a volume goes through cinder upload-to-image. The flow is: first export the rbd image to a local file on the cinder host (driving local disk I/O utilization high), then call glance to upload that file to the Glance backend (consuming management network bandwidth). The whole process is very time-consuming.
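
For context, this flow is triggered by cinder upload-to-image. A minimal python-cinderclient sketch; the credentials, endpoint, and volume ID are placeholders:

    from cinderclient import client

    # Placeholder credentials and endpoint; substitute real values.
    cinder = client.Client('2', 'admin', 'secret', 'admin',
                           'http://controller:5000/v2.0')

    # Equivalent to `cinder upload-to-image`; the rbd driver then stages
    # the volume locally before glance receives the data.
    cinder.volumes.upload_to_image(volume='<volume-id>',
                                   force=False,
                                   image_name='my-image',
                                   container_format='bare',
                                   disk_format='raw')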

The official spec describes this in detail: https://specs.openstack.org/openstack/cinder-specs/specs/liberty/optimze-rbd-copy-volume-to-image.html

The spec proposes two approaches, described below.

Preconditions:

  1. Cinder's and Glance's storage backends are in the same ceph cluster (see the fsid-check sketch after this list for how that is decided)
  2. If they are not, keep the current approach: first export the rbd image to a local file on the cinder host, then call the glance upload API to push the file to the Glance storage backend
  3. If they are, use the rbd.Image.copy method to copy the volume from the volume pool to the image pool
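
Whether two services share a cluster comes down to comparing cluster fsids. A minimal sketch with the python rados bindings; the conffile paths are assumptions for illustration:

    import rados

    def get_fsid(conffile):
        # Connect to the cluster described by conffile and read its fsid.
        cluster = rados.Rados(conffile=conffile)
        cluster.connect()
        try:
            return cluster.get_fsid()
        finally:
            cluster.shutdown()

    # Same cluster if and only if the fsids match (paths are examples).
    same_cluster = (get_fsid('/etc/ceph/ceph.conf') ==
                    get_fsid('/etc/ceph/ceph-glance.conf'))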

Method 1: use copy.
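
A minimal sketch of method 1 with the python rbd bindings; the pool and image names ('volumes', 'images', 'volume-src', 'image-dst') are placeholders, not the driver's actual values:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        src_ioctx = cluster.open_ioctx('volumes')  # assumed cinder pool
        dst_ioctx = cluster.open_ioctx('images')   # assumed glance pool
        try:
            with rbd.Image(src_ioctx, 'volume-src', read_only=True) as src:
                # librbd copies the image into the destination pool directly;
                # no temp file on the host and no glance HTTP upload.
                src.copy(dst_ioctx, 'image-dst')
        finally:
            src_ioctx.close()
            dst_ioctx.close()
    finally:
        cluster.shutdown()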

Method 2: use clone. Specifically (a sketch follows this list):

  1. Create a volume snapshot and protect it
  2. Clone a child image from the snapshot
  3. Flatten the child image
  4. Unprotect the snapshot, then delete it
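
A minimal sketch of the four clone steps, with the same placeholder pool, image, and snapshot names:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        src_ioctx = cluster.open_ioctx('volumes')
        dst_ioctx = cluster.open_ioctx('images')
        try:
            with rbd.Image(src_ioctx, 'volume-src') as src:
                src.create_snap('glance-snap')     # 1. create a snapshot
                src.protect_snap('glance-snap')    #    and protect it
            # 2. clone a child image from the snapshot (requires the
            #    layering feature on the parent image).
            rbd.RBD().clone(src_ioctx, 'volume-src', 'glance-snap',
                            dst_ioctx, 'image-dst')
            with rbd.Image(dst_ioctx, 'image-dst') as child:
                child.flatten()                    # 3. flatten the child
            with rbd.Image(src_ioctx, 'volume-src') as src:
                src.unprotect_snap('glance-snap')  # 4. unprotect the snap,
                src.remove_snap('glance-snap')     #    then delete it
        finally:
            src_ioctx.close()
            dst_ioctx.close()
    finally:
        cluster.shutdown()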

According to testing, method 1 takes less time (see the spec for details).


The concrete code change can look like the following (Liberty):

cinder/volume/drivers/rbd.py

def copy_volume_to_image(self, context, volume, image_service, image_meta):
    # Offload rbd's copy_volume_to_image function from host to ceph cluster.
    if self.configuration.rbd_copy_volume_to_glance:
        # 'rbd_copy_volume_to_glance' in cinder.conf controls whether
        # the in-cluster copy path is used.
        glance_rbd_ceph_fsid = self.configuration.glance_rbd_ceph_fsid
        if self._get_fsid() == glance_rbd_ceph_fsid:
            # glance and cinder are backed by the same ceph cluster.
            LOG.debug("Glance backend is in the same ceph cluster, "
                      "try to copy volume %s to glance.", volume['name'])
            image_id = image_meta['id']
            src_pool = self.configuration.rbd_pool
            src_img_name = volume['name']
            dst_pool = self.configuration.glance_rbd_pool
            dst_img_name = image_id
            self._copy(src_pool, src_img_name, dst_pool, dst_img_name)
            # Optionally create a snapshot of the destination image here.
            return
        else:
            LOG.debug("Glance backend is in a different ceph cluster.")

    # 'rbd_copy_volume_to_glance' is False, or glance's ceph cluster
    # differs from cinder's: fall back to the original export/upload path.
    LOG.debug("Try to upload volume %s to glance.", volume['name'])
    tmp_dir = self._image_conversion_dir()
    tmp_file = os.path.join(tmp_dir,
                            volume['name'] + '-' + image_meta['id'])
    with fileutils.remove_path_on_error(tmp_file):
        args = ['rbd', 'export',
                '--pool', self.configuration.rbd_pool,
                volume['name'], tmp_file]
        args.extend(self._ceph_args())
        self._try_execute(*args)
        image_utils.upload_volume(context, image_service,
                                  image_meta, tmp_file)
    os.unlink(tmp_file)

# The newly added _copy method:

def _copy(self, src_pool, src_img_name, dst_pool, dst_img_name):
    """Copy an rbd image from one pool to another within the cluster.

    :param src_pool:     source rbd image's pool
    :param src_img_name: source rbd image's name
    :param dst_pool:     dest rbd image's pool
    :param dst_img_name: dest rbd image's name
    """
    LOG.debug('copying %(pool)s/%(img)s to %(dst_pool)s/%(dst_img)s',
              dict(pool=src_pool, img=src_img_name,
                   dst_pool=dst_pool, dst_img=dst_img_name))
    src_name = utils.convert_str(src_img_name)
    dest_name = utils.convert_str(dst_img_name)
    src_pool = utils.convert_str(src_pool)
    dest_pool = utils.convert_str(dst_pool)
    with RBDVolumeProxy(self, src_name, pool=src_pool,
                        read_only=True) as vol:
        with RADOSClient(self, dest_pool) as dest_client:
            vol.copy(dest_client.ioctx, dest_name, features=vol.features())
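
The code above reads three configuration options that do not exist in the upstream driver (rbd_copy_volume_to_glance, glance_rbd_ceph_fsid, glance_rbd_pool), so the patch also has to register them. A sketch of what that registration could look like with oslo.config; the defaults and help strings here are assumptions:

    from oslo_config import cfg

    # Hypothetical definitions for the new options used above; in the real
    # patch they would be appended to RBD_OPTS in cinder/volume/drivers/rbd.py
    # so that self.configuration.<option> resolves at runtime.
    rbd_glance_opts = [
        cfg.BoolOpt('rbd_copy_volume_to_glance',
                    default=False,
                    help='Copy volumes to glance inside the ceph cluster '
                         'instead of exporting and re-uploading them.'),
        cfg.StrOpt('glance_rbd_ceph_fsid',
                   help='fsid of the ceph cluster backing glance.'),
        cfg.StrOpt('glance_rbd_pool',
                   default='images',
                   help='rbd pool used by the glance backend.'),
    ]

With these registered, the feature can be toggled per backend section in cinder.conf.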


