A First Look at the Linux Interrupt System

I recently needed to use MSI interrupts, so I went looking for material on Linux interrupts online. There is plenty of it, but organizing it into a coherent system is difficult. LDD3 does touch on interrupts, but it never gets to MSI, so I had to do some studying of my own.

Today I read the MSI-HOWTO.txt document in the kernel source and now have some idea of how MSI is used under Linux, but it is still too shallow to put to work. I then skimmed some source code and decided to start from the basics so the material would stick. This post covers how the Linux kernel manages and stores interrupt service routines.

---------------------------------------------------------------------

1. Key interfaces

LDD puts it this way: "the kernel keeps a registry of interrupt lines, similar to the registry of I/O ports. A module is expected to request an interrupt channel (or IRQ, for interrupt request) before using it and to release it when it is done."

Setting aside how the system walks the devices during initialization, those two sentences boil down to two interface functions:

extern int __must_check
request_irq(unsigned int irq, irq_handler_t handler, unsigned long flags,
            const char *name, void *dev);
extern void free_irq(unsigned int, void *);

As the names suggest, these two functions request and release an IRQ, respectively.
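
To make the pair concrete, here is a minimal sketch of how a driver typically uses them; the my_dev structure, the "mydev" name and the helper functions are all invented for illustration:

#include <linux/interrupt.h>

struct my_dev {
        int irq;                /* assigned by the bus/platform code */
        void __iomem *regs;     /* device registers (hypothetical) */
};

/* Primary handler: runs in interrupt context and must not sleep. */
static irqreturn_t my_handler(int irq, void *dev_id)
{
        struct my_dev *dev = dev_id;
        /* talk to dev->regs: acknowledge the device, queue deferred work, ... */
        return IRQ_HANDLED;
}

static int my_dev_setup_irq(struct my_dev *dev)
{
        /* dev doubles as dev_id so free_irq() can find this action later */
        return request_irq(dev->irq, my_handler, IRQF_SHARED, "mydev", dev);
}

static void my_dev_teardown_irq(struct my_dev *dev)
{
        free_irq(dev->irq, dev);        /* must match the dev_id passed above */
}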

Look a little closer, though, and request_irq turns out to be a thin wrapper; its definition is:

static inline int __must_check
request_irq(unsigned int irq, irq_handler_t handler, unsigned long flags,
            const char *name, void *dev)
{
        return request_threaded_irq(irq, handler, NULL, flags, name, dev);
}

So the function that actually does the work of requesting an IRQ is request_threaded_irq. A quick search shows it lives in kernel/irq/manage.c.
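
It is worth noting what the extra thread_fn parameter buys: a driver can ask for a threaded interrupt directly, leaving the primary handler NULL so the kernel substitutes irq_default_primary_handler and runs the real work in a kernel thread. A hedged sketch, continuing the hypothetical my_dev example from above (IRQF_ONESHOT keeps the line masked until the thread finishes):

/* Thread function: runs in process context, so it may sleep
 * (take mutexes, do I2C/SPI transfers, ...). */
static irqreturn_t my_thread_fn(int irq, void *dev_id)
{
        struct my_dev *dev = dev_id;
        /* do the slow part of servicing the interrupt here */
        return IRQ_HANDLED;
}

static int my_dev_setup_threaded_irq(struct my_dev *dev)
{
        /* NULL primary handler: the kernel fills in irq_default_primary_handler */
        return request_threaded_irq(dev->irq, NULL, my_thread_fn,
                                    IRQF_ONESHOT, "mydev", dev);
}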

2. Following request_threaded_irq

Here is request_threaded_irq in full:

int request_threaded_irq(unsigned int irq, irq_handler_t handler,
                         irq_handler_t thread_fn, unsigned long irqflags,
                         const char *devname, void *dev_id)
{
        struct irqaction *action;
        struct irq_desc *desc;
        int retval;

        /*
         * Sanity-check: shared interrupts must pass in a real dev-ID,
         * otherwise we'll have trouble later trying to figure out
         * which interrupt is which (messes up the interrupt freeing
         * logic etc).
         */
        if ((irqflags & IRQF_SHARED) && !dev_id)
                return -EINVAL;

        desc = irq_to_desc(irq);
        if (!desc)
                return -EINVAL;

        if (desc->status & IRQ_NOREQUEST)
                return -EINVAL;

        if (!handler) {
                if (!thread_fn)
                        return -EINVAL;
                handler = irq_default_primary_handler;
        }

        action = kzalloc(sizeof(struct irqaction), GFP_KERNEL);
        if (!action)
                return -ENOMEM;

        action->handler = handler;
        action->thread_fn = thread_fn;
        action->flags = irqflags;
        action->name = devname;
        action->dev_id = dev_id;

        chip_bus_lock(irq, desc);
        retval = __setup_irq(irq, desc, action);
        chip_bus_sync_unlock(irq, desc);

        if (retval)
                kfree(action);

#ifdef CONFIG_DEBUG_SHIRQ
        if (!retval && (irqflags & IRQF_SHARED)) {
                /*
                 * It's a shared IRQ -- the driver ought to be prepared for it
                 * to happen immediately, so let's make sure....
                 * We disable the irq to make sure that a 'real' IRQ doesn't
                 * run in parallel with our fake.
                 */
                unsigned long flags;

                disable_irq(irq);
                local_irq_save(flags);

                handler(irq, dev_id);

                local_irq_restore(flags);
                enable_irq(irq);
        }
#endif
        return retval;
}

Setting aside the sanity checks, the function's main job is to look up the irq_desc for the requested line, allocate and fill in an irqaction, and then hand both to __setup_irq for further processing; that is all it takes to register the IRQ. So we have good reason to believe these two structs are the core data structures the kernel uses to manage IRQs, and it is worth looking at what they contain.

struct irq_desc {
        unsigned int irq;
        struct timer_rand_state *timer_rand_state;
        unsigned int *kstat_irqs;
#ifdef CONFIG_INTR_REMAP
        struct irq_2_iommu *irq_2_iommu;
#endif
        irq_flow_handler_t handle_irq;
        struct irq_chip *chip;
        struct msi_desc *msi_desc;
        void *handler_data;
        void *chip_data;
        struct irqaction *action;       /* IRQ action list */
        unsigned int status;            /* IRQ status */

        unsigned int depth;             /* nested irq disables */
        unsigned int wake_depth;        /* nested wake enables */
        unsigned int irq_count;         /* For detecting broken IRQs */
        unsigned long last_unhandled;   /* Aging timer for unhandled count */
        unsigned int irqs_unhandled;
        raw_spinlock_t lock;
#ifdef CONFIG_SMP
        cpumask_var_t affinity;
        const struct cpumask *affinity_hint;
        unsigned int node;
#ifdef CONFIG_GENERIC_PENDING_IRQ
        cpumask_var_t pending_mask;
#endif
#endif
        atomic_t threads_active;
        wait_queue_head_t wait_for_threads;
#ifdef CONFIG_PROC_FS
        struct proc_dir_entry *dir;
#endif
        const char *name;
} ____cacheline_internodealigned_in_smp;

irq_desc is really a structure meant to be the element of an array. The irq field is the familiar IRQ number: for every IRQ a device is granted, an irq_desc has to be filled in, and the kernel keeps it in the array it maintains. Among the fields to be filled in, irq_chip and irqaction are the two structs that help most in understanding the design.
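
For the common non-sparse configuration that "array" view is quite literal. A simplified sketch of roughly what the kernel does when CONFIG_SPARSE_IRQ is not set (compare kernel/irq/handle.c in this kernel generation; field initialization is omitted):

struct irq_desc irq_desc[NR_IRQS];      /* one statically allocated slot per IRQ */

struct irq_desc *irq_to_desc(unsigned int irq)
{
        /* request_threaded_irq() above starts from exactly this lookup */
        return (irq < nr_irqs) ? irq_desc + irq : NULL;
}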

struct irq_chip {
        const char *name;
        unsigned int (*startup)(unsigned int irq);
        void (*shutdown)(unsigned int irq);
        void (*enable)(unsigned int irq);
        void (*disable)(unsigned int irq);

        void (*ack)(unsigned int irq);
        void (*mask)(unsigned int irq);
        void (*mask_ack)(unsigned int irq);
        void (*unmask)(unsigned int irq);
        void (*eoi)(unsigned int irq);

        void (*end)(unsigned int irq);
        int (*set_affinity)(unsigned int irq, const struct cpumask *dest);
        int (*retrigger)(unsigned int irq);
        int (*set_type)(unsigned int irq, unsigned int flow_type);
        int (*set_wake)(unsigned int irq, unsigned int on);

        void (*bus_lock)(unsigned int irq);
        void (*bus_sync_unlock)(unsigned int irq);

        /* Currently used only by UML, might disappear one day.*/
#ifdef CONFIG_IRQ_RELEASE_METHOD
        void (*release)(unsigned int irq, void *dev_id);
#endif
        /*
         * For compatibility, ->typename is copied into ->name.
         * Will disappear.
         */
        const char *typename;
};

This struct mainly defines the interface through which the system manages one IRQ at the hardware level: enabling, masking, acknowledging, and so on.
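
Here is a hedged sketch of how an interrupt-controller driver might fill in a few of these callbacks; the register behaviour and all mychip_* names are invented purely for illustration:

static void mychip_mask(unsigned int irq)
{
        /* set the bit for this line in the controller's mask register */
}

static void mychip_unmask(unsigned int irq)
{
        /* clear the mask bit so the line can raise interrupts again */
}

static void mychip_ack(unsigned int irq)
{
        /* clear the pending bit so the next edge/level can be latched */
}

static struct irq_chip mychip = {
        .name   = "MYCHIP",
        .ack    = mychip_ack,
        .mask   = mychip_mask,
        .unmask = mychip_unmask,
};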

struct irqaction {
        irq_handler_t handler;
        unsigned long flags;
        const char *name;
        void *dev_id;
        struct irqaction *next;
        int irq;
        struct proc_dir_entry *dir;
        irq_handler_t thread_fn;
        struct task_struct *thread;
        unsigned long thread_flags;
};

In this struct, handler is the interrupt handler itself and *next points to the next irqaction, so irqactions live on a linked list. Putting it together: each IRQ corresponds to one irq_desc; the irq_desc holds the irq_chip that manages interrupt enabling at the hardware level, and it also maintains a list of irqactions.
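
One practical consequence of that list: on a shared line every handler must check whether its own device actually raised the interrupt and return IRQ_NONE if it did not, so the next action on the list gets its turn. A sketch, reusing the hypothetical my_dev from earlier and an invented status helper:

static irqreturn_t my_shared_handler(int irq, void *dev_id)
{
        struct my_dev *dev = dev_id;

        if (!my_dev_irq_pending(dev))   /* hypothetical "did we interrupt?" check */
                return IRQ_NONE;        /* not ours; let the next irqaction try */

        /* acknowledge and service our device */
        return IRQ_HANDLED;
}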

From the material I have read, when the system handles an interrupt it uses the interrupt number to find the entry in the irq_desc array and calls its handle_irq; handle_irq uses chip to control the hardware (acknowledge, mask/unmask) and then walks the irqaction list, invoking each interrupt handler in turn.
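
The list walk itself is short. Roughly what it looks like, condensed from handle_IRQ_event() in kernel/irq/handle.c of this kernel generation (statistics, randomness contribution and thread wake-ups omitted):

irqreturn_t handle_IRQ_event(unsigned int irq, struct irqaction *action)
{
        irqreturn_t ret, retval = IRQ_NONE;

        do {
                ret = action->handler(irq, action->dev_id);
                retval |= ret;          /* any IRQ_HANDLED marks the line as handled */
                action = action->next;
        } while (action);

        return retval;
}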

Looking back, then, requesting an IRQ really amounts to building an irqaction entry, and freeing one amounts to removing the irqaction entry that is no longer needed.

3. How the interrupt system is initialized

The function that initializes the interrupt system lives in arch/x86/kernel/irqinit.c and is defined as follows:

void __init init_IRQ(void)
{
        int i;

        /*
         * On cpu 0, Assign IRQ0_VECTOR..IRQ15_VECTOR's to IRQ 0..15.
         * If these IRQ's are handled by legacy interrupt-controllers like PIC,
         * then this configuration will likely be static after the boot. If
         * these IRQ's are handled by more mordern controllers like IO-APIC,
         * then this vector space can be freed and re-used dynamically as the
         * irq's migrate etc.
         */
        for (i = 0; i < legacy_pic->nr_legacy_irqs; i++)
                per_cpu(vector_irq, 0)[IRQ0_VECTOR + i] = i;

        x86_init.irqs.intr_init();
}

The function itself is simple; the question it leaves open is: what is x86_init?

In arch/x86/include/asm/x86_init.h we find that x86_init is a structure of type x86_init_ops, and its irqs member is a structure of type x86_init_irqs:

struct x86_init_irqs {
        void (*pre_vector_init)(void);
        void (*intr_init)(void);
        void (*trap_init)(void);
};

The default initial value of x86_init is found in arch/x86/kernel/x86_init.c:

struct x86_init_ops x86_init __initdata = {
        ...
        .irqs = {
                .pre_vector_init        = init_ISA_irqs,
                .intr_init              = native_init_IRQ,
                .trap_init              = x86_init_noop,
        },
        ...
};
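
The point of routing init_IRQ() through this ops table is that a platform can swap in its own routines before init_IRQ() runs; Xen, for instance, replaces .intr_init with xen_init_IRQ in arch/x86/xen/enlighten.c. A sketch of the pattern, with a purely hypothetical platform:

static void __init myplatform_intr_init(void)
{
        /* set up the platform's own interrupt controller instead of the
         * PIC/IO-APIC path taken by native_init_IRQ() */
}

/* called from early platform setup code, before init_IRQ() runs */
static void __init myplatform_setup(void)
{
        x86_init.irqs.intr_init = myplatform_intr_init;
}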

To find these functions we are back in the irqinit.c we started from. First, the intr_init called earlier, i.e. native_init_IRQ:

void __init native_init_IRQ(void)
{
        int i;

        /* Execute any quirks before the call gates are initialised: */
        x86_init.irqs.pre_vector_init();

        apic_intr_init();

        /*
         * Cover the whole vector space, no vector can escape
         * us. (some of these will be overridden and become
         * 'special' SMP interrupts)
         */
        for (i = FIRST_EXTERNAL_VECTOR; i < NR_VECTORS; i++) {
                /* IA32_SYSCALL_VECTOR could be used in trap_init already. */
                if (!test_bit(i, used_vectors))
                        set_intr_gate(i, interrupt[i - FIRST_EXTERNAL_VECTOR]);
        }

        if (!acpi_ioapic)
                setup_irq(2, &irq2);

#ifdef CONFIG_X86_32
        /*
         * External FPU? Set up irq13 if so, for
         * original braindamaged IBM FERR coupling.
         */
        if (boot_cpu_data.hard_math && !cpu_has_fpu)
                setup_irq(FPU_IRQ, &fpu_irq);

        irq_ctx_init(smp_processor_id());
#endif
}

Here pre_vector_init corresponds to init_ISA_irqs, whose main job is the initial setup of the legacy irq_desc entries:

void __init init_ISA_irqs(void)
{
        int i;

#if defined(CONFIG_X86_64) || defined(CONFIG_X86_LOCAL_APIC)
        init_bsp_APIC();
#endif
        legacy_pic->init(0);

        /*
         * 16 old-style INTA-cycle interrupts:
         */
        for (i = 0; i < legacy_pic->nr_legacy_irqs; i++) {
                struct irq_desc *desc = irq_to_desc(i);

                desc->status = IRQ_DISABLED;
                desc->action = NULL;
                desc->depth = 1;

                set_irq_chip_and_handler_name(i, &i8259A_chip,
                                              handle_level_irq, "XT");
        }
}
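
set_irq_chip_and_handler_name() is what ties this back to the irq_desc fields from section 2. Condensed from kernel/irq/chip.c (locking and sanity checks omitted), it roughly amounts to:

struct irq_desc *desc = irq_to_desc(irq);

desc->chip = &i8259A_chip;              /* the hardware-level operations */
desc->handle_irq = handle_level_irq;    /* the flow handler called on entry */
desc->name = "XT";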

Once the data structures are initialized, what remains is allocating the hardware resources; I won't dig into that here.

Reposted from: http://www.cnblogs.com/garychen2272/archive/2011/02/25/1964176.html
