CT Image File Formats
Reposted from https://blog.csdn.net/Acmer_future_victor/article/details/106428407
CT images are stored in the DICOM format, which can be processed with pydicom and which carries a large amount of metadata in DICOM tags. The code below shows how to read some of these tags.
# __author: Y
# date: 2019/12/10
import os

import numpy as np
import pydicom
import scipy.ndimage
import SimpleITK as sitk

# Load the pixel data with SimpleITK
def loadFile(filename):
    ds = sitk.ReadImage(filename)
    image_array = sitk.GetArrayFromImage(ds)
    frame_num, width, height = image_array.shape
    print('frame_num:%s, width:%s, height:%s' % (frame_num, width, height))
    return image_array, frame_num, width, height

# Use pydicom to extract patient information from the DICOM tags
def loadFileInformation(filename):
    information = {}
    ds = pydicom.dcmread(filename)
    information['PatientID'] = ds.PatientID
    information['PatientName'] = ds.PatientName
    information['PatientBirthDate'] = ds.PatientBirthDate
    information['PatientSex'] = ds.PatientSex
    information['StudyID'] = ds.StudyID
    information['StudyDate'] = ds.StudyDate
    information['StudyTime'] = ds.StudyTime
    information['InstitutionName'] = ds.InstitutionName
    information['Manufacturer'] = ds.Manufacturer
    information['NumberOfFrames'] = ds.NumberOfFrames
    print(information)
    return information

loadFile('../000000.dcm')
loadFileInformation('abdominallymphnodes-26828')
A CT image is produced by scanning the body with X-rays, exploiting the fact that different tissues and organs absorb X-rays differently. The 3D volume is built up from many axial slices and can be viewed along three directions: the axial, coronal, and sagittal planes. The following code uses pydicom to read the slices of a CT scan stored as .dcm files.
def load_scan(path):
    # Read every slice in the directory
    slices = [pydicom.dcmread(os.path.join(path, s)) for s in os.listdir(path)]
    # Sort by ImagePositionPatient[2] (the z coordinate); otherwise the
    # slices come back in arbitrary order
    slices.sort(key=lambda x: float(x.ImagePositionPatient[2]))
    # Derive the slice thickness from the gap between the first two slices
    try:
        slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
    except Exception:
        slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)
    for s in slices:
        s.SliceThickness = slice_thickness
    return slices
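The sort key is best taken as a float: `ImagePositionPatient[2]` is usually a decimal coordinate, and truncating it to `int` can reorder slices whose z positions differ by less than 1 mm. A small self-contained sketch of the sorting and thickness logic, using hypothetical stand-in objects instead of real pydicom datasets:

```python
import numpy as np

# Hypothetical stand-ins for pydicom datasets, carrying only the tag we sort on
class FakeSlice:
    def __init__(self, z):
        self.ImagePositionPatient = [0.0, 0.0, z]

slices = [FakeSlice(z) for z in (7.5, 2.5, 5.0)]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
z_order = [s.ImagePositionPatient[2] for s in slices]  # ascending z

# Slice thickness from the gap between the first two sorted slices
thickness = np.abs(z_order[0] - z_order[1])
```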
To distinguish different organs more easily, the raw pixel values must be converted to CT numbers in Hounsfield units (HU): HU = pixel * RescaleSlope + RescaleIntercept, where RescaleSlope and RescaleIntercept are two tags stored in the DICOM file. The implementation is shown below.
def get_pixels_hu(slices):
    image = np.stack([s.pixel_array for s in slices])
    # Convert to int16 (some scanners already store int16); this is safe
    # because CT values always stay well below 32k
    image = image.astype(np.int16)  # e.g. image.shape = (666, 512, 512)
    # Pixels outside the circular scan field are padded with a fixed value
    # of -2000; set them to 0. The intercept is usually -1024, so air ends
    # up at approximately 0 after rescaling.
    image[image == -2000] = 0
    # Convert to Hounsfield units: HU = pixel * RescaleSlope + RescaleIntercept
    for slice_number in range(len(slices)):
        intercept = slices[slice_number].RescaleIntercept
        slope = slices[slice_number].RescaleSlope
        if slope != 1:
            image[slice_number] = slope * image[slice_number].astype(np.float64)
            image[slice_number] = image[slice_number].astype(np.int16)
        image[slice_number] += np.int16(intercept)
    return np.array(image, dtype=np.int16)
After converting pixel values to HU, a window level and window width can be set to highlight particular tissues and organs. Every tissue has a characteristic CT value (or range of values); to examine a specific tissue, set the window level to its CT value. The window width is the range of CT values the displayed image covers, and the window level is the midpoint of the window's upper and lower bounds. A CT display divides the window width into 16 gray levels: with a window width of 80 HU, for example, 80/16 = 5 HU per gray level, so only tissues whose CT values differ by more than 5 HU can be told apart on the image. The window level/width code is shown below.
def get_window_size(organ_name):
    if organ_name == 'lung':
        # lung: WW 1500-2000, WL -450 to -600
        center = -500
        width = 2000
    elif organ_name == 'abdomen':
        # abdomen: WW 300-500, WL 30-50
        center = 40
        width = 500
    elif organ_name == 'bone':
        # bone: WW 1000-1500, WL 250-350
        center = 300
        width = 1500
    elif organ_name == 'lymph':
        # lymph / soft tissue: WW 300-500, WL 40-60
        center = 50
        width = 300
    elif organ_name == 'mediastinum':
        # mediastinum: WW 250-350, WL 30-50
        center = 40
        width = 350
    else:
        raise ValueError('unknown organ name: %s' % organ_name)
    return center, width
def setDicomCenWid(slices, organ_name):
    img = slices
    center, width = get_window_size(organ_name)
    win_min = (2 * center - width) / 2.0 + 0.5
    win_max = (2 * center + width) / 2.0 + 0.5
    dFactor = 255.0 / (win_max - win_min)
    # Map the window [win_min, win_max] linearly onto [0, 255]
    img = ((img - win_min) * dFactor).astype(np.int16)
    # Clip everything outside the window to the display range
    img[img < 0] = 0
    img[img > 255] = 255
    return img
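The window mapping can be checked against a small synthetic volume. This sketch (my own helper, not part of the original post) applies the same linear window transform with NumPy and clips to the 8-bit display range:

```python
import numpy as np

def apply_window(hu, center, width):
    # Map [center - width/2, center + width/2] linearly onto [0, 255]
    lo = center - width / 2.0
    hi = center + width / 2.0
    disp = (hu - lo) * (255.0 / (hi - lo))
    return np.clip(disp, 0, 255).astype(np.uint8)

# Air (-1000 HU), soft tissue (40 HU), dense bone (1000 HU) under an
# abdomen-like window (WL 40, WW 400)
vol = np.array([[-1000, 40, 1000]], dtype=np.int16)
win = apply_window(vol, center=40, width=400)
# Air is clipped to 0, the level maps to mid-gray, bone saturates at 255
```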
The voxel spacing of a CT scan differs between scans and between the in-plane and through-plane directions, which is bad for CNN training, so the volume needs to be resampled. The code below resamples the image to an isotropic spacing of [1, 1, 1].
def resample(image, slice, new_spacing=[1, 1, 1]):
    # Current spacing: [slice thickness, row spacing, column spacing]
    spacing = map(float, ([slice.SliceThickness] + [slice.PixelSpacing[0], slice.PixelSpacing[1]]))
    spacing = np.array(list(spacing))
    resize_factor = spacing / new_spacing
    new_real_shape = image.shape * resize_factor
    new_shape = np.round(new_real_shape)
    # Recompute the factor so the output shape is a whole number of voxels
    real_resize_factor = new_shape / image.shape
    new_spacing = spacing / real_resize_factor
    image = scipy.ndimage.zoom(image, real_resize_factor, mode='nearest')
    return image, new_spacing
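The shape arithmetic in `resample` can be verified on a synthetic volume. This sketch assumes a scan with 2.5 mm slice thickness and 0.7 mm in-plane pixel spacing (made-up numbers) and checks the output shape after zooming to 1 mm isotropic spacing:

```python
import numpy as np
from scipy import ndimage

image = np.zeros((4, 8, 8), dtype=np.int16)
spacing = np.array([2.5, 0.7, 0.7])      # assumed [thickness, row, col] in mm
new_spacing = np.array([1.0, 1.0, 1.0])  # target isotropic spacing

resize_factor = spacing / new_spacing
new_shape = np.round(image.shape * resize_factor)          # [10, 6, 6]
real_resize_factor = new_shape / image.shape
resampled = ndimage.zoom(image, real_resize_factor, mode='nearest')
```

Rounding the target shape and recomputing the zoom factor guarantees the output has integer dimensions, at the cost of the final spacing deviating slightly from exactly 1 mm in-plane.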
Finally, to help network training, the data is usually normalized, for example with min-max (0-1) normalization.
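As a concrete sketch of min-max normalization: HU values are commonly clipped to a fixed range first (the bounds below, -1000 to 400 HU, are a typical choice for lung work, not values from the original post) and then scaled to [0, 1]:

```python
import numpy as np

MIN_HU, MAX_HU = -1000.0, 400.0  # assumed clipping bounds

def normalize(hu):
    # Scale [MIN_HU, MAX_HU] to [0, 1] and clip everything outside the range
    x = (hu - MIN_HU) / (MAX_HU - MIN_HU)
    return np.clip(x, 0.0, 1.0)

vol = np.array([-2000.0, -1000.0, 400.0, 1000.0])
norm = normalize(vol)  # out-of-range values saturate at 0 or 1
```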