Semantic Kernel: Integrating a Local Model via LocalAI

Llama 2 is an open-source large language model released by Meta. This article uses LocalAI to host Llama 2 and walks through integrating Semantic Kernel (SK) with a locally deployed large model.
SK supports a wide range of large models, although the official samples mostly target GPT-3.5+ on OpenAI and Azure OpenAI Service. Today we will look at how to connect SK to a locally deployed open-source model, using the MIT-licensed open-source project LocalAI: https://github.com/go-skynet/LocalAI
LocalAI is a local inference framework that exposes a RESTful API compatible with the OpenAI API specification. It lets you run LLMs (and other models) locally on consumer-grade hardware or on your own servers, supports multiple model families in the ggml format, and does not require a GPU. LocalAI uses C++ bindings for speed; it is built on llama.cpp, gpt4all, rwkv.cpp, ggml, whisper.cpp (for audio transcription) and bert.cpp (for embeddings).
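
Because the API is OpenAI-compatible, you can smoke-test a LocalAI deployment with a plain HTTP client before wiring up SK. Below is a minimal sketch; the host, port and model name (llama-2-7b-chat) are assumptions about your local setup, so adjust them to match how you registered the model in LocalAI.

// Minimal sketch: call LocalAI's OpenAI-compatible chat endpoint directly.
// localhost:8080 and the model name are assumptions; adjust to your setup.
using System.Net.Http.Json;

using var http = new HttpClient();
var response = await http.PostAsJsonAsync("http://localhost:8080/v1/chat/completions", new
{
    model = "llama-2-7b-chat",
    messages = new[] { new { role = "user", content = "Hello! Who are you?" } }
});
Console.WriteLine(await response.Content.ReadAsStringAsync());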

To deploy LocalAI, follow the official Getting Started guide. LocalAI presents our locally deployed model through an OpenAI-compatible endpoint, so SK can reach it through its OpenAI connector. All we have to do is point the OpenAI endpoint at LocalAI, which we can accomplish with a custom HttpClient handler such as the one below:

internal class OpenAIHttpclientHandler : HttpClientHandler
{
    private readonly KernelSettings _kernelSettings;

    public OpenAIHttpclientHandler(KernelSettings settings)
    {
        this._kernelSettings = settings;
    }

    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Rewrite chat completion requests so they go to the LocalAI endpoint
        // configured in KernelSettings instead of api.openai.com.
        if (request.RequestUri!.LocalPath == "/v1/chat/completions")
        {
            UriBuilder uriBuilder = new(request.RequestUri)
            {
                Scheme = this._kernelSettings.Scheme,
                Host = this._kernelSettings.Host,
                Port = this._kernelSettings.Port
            };
            request.RequestUri = uriBuilder.Uri;
        }

        return await base.SendAsync(request, cancellationToken);
    }
}

That completes the groundwork; next we assemble all the components so they work together. Open Visual Studio Code and create a C# project named sk-csharp-hello-world, with the following Program.cs:

using System.Reflection;
using config;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;
using Plugins;

var kernelSettings = KernelSettings.LoadSettings();
var handler = new OpenAIHttpclientHandler(kernelSettings);

IKernelBuilder builder = Kernel.CreateBuilder();
builder.Services.AddLogging(c => c.SetMinimumLevel(LogLevel.Information).AddDebug());
builder.AddChatCompletionService(kernelSettings, handler);
builder.Plugins.AddFromType<LightPlugin>();
Kernel kernel = builder.Build();

// Load prompt from embedded resource
using StreamReader reader = new(Assembly.GetExecutingAssembly().GetManifestResourceStream("prompts.Chat.yaml")!);
KernelFunction prompt = kernel.CreateFunctionFromPromptYaml(
    reader.ReadToEnd(),
    promptTemplateFactory: new HandlebarsPromptTemplateFactory()
);

// Create the chat history
ChatHistory chatMessages = [];

// Loop until we are cancelled
while (true)
{
    // Get user input
    System.Console.Write("User > ");
    chatMessages.AddUserMessage(Console.ReadLine()!);

    // Get the chat completions
    OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new();
    var result = kernel.InvokeStreamingAsync<StreamingChatMessageContent>(
        prompt,
        arguments: new KernelArguments(openAIPromptExecutionSettings) {
            { "messages", chatMessages }
        });

    // Stream the chat completions to the console as they arrive
    ChatMessageContent? chatMessageContent = null;
    await foreach (var content in result)
    {
        if (chatMessageContent == null)
        {
            System.Console.Write("Assistant > ");
            chatMessageContent = new ChatMessageContent(
                content.Role ?? AuthorRole.Assistant,
                content.ModelId!,
                content.Content!,
                content.InnerContent,
                content.Encoding,
                content.Metadata);
        }
        else
        {
            chatMessageContent.Content += content;
        }
        System.Console.Write(content);
    }
    System.Console.WriteLine();

    // Append the assistant's full reply to the history
    chatMessages.Add(chatMessageContent!);
}

First, we import the namespaces needed to make everything work: the block of using directives at the top of Program.cs.
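
The first statement then loads the configuration. KernelSettings comes from the sample's config namespace; the sketch below is inferred from how the settings are used in this article (the handler reads Scheme/Host/Port, the extension method reads the service and model values), so the real class in the repo may differ in detail.

using Microsoft.Extensions.Configuration;

namespace config;

// Sketch of the settings class, inferred from its usage in this article.
internal class KernelSettings
{
    public string ServiceType { get; set; } = string.Empty;  // e.g. Azure OpenAI, OpenAI, LocalAI
    public string ServiceId { get; set; } = string.Empty;
    public string DeploymentId { get; set; } = string.Empty; // Azure OpenAI deployment name
    public string Endpoint { get; set; } = string.Empty;     // Azure OpenAI endpoint
    public string ModelId { get; set; } = string.Empty;
    public string ApiKey { get; set; } = string.Empty;       // LocalAI accepts any non-empty key
    public string? OrgId { get; set; }

    // Target that OpenAIHttpclientHandler redirects chat completions to.
    public string Scheme { get; set; } = "http";
    public string Host { get; set; } = "localhost";
    public int Port { get; set; } = 8080;

    public static KernelSettings LoadSettings() =>
        new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: false)
            .Build()
            .Get<KernelSettings>()!;
}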

Then we create a kernel builder instance (obtained through a factory method rather than a constructor), which will help us shape our kernel.

IKernelBuilder builder = Kernel.CreateBuilder();

Do we want to know what is happening at every moment? Yes, we do! So let's add logging to the kernel; that is what the AddLogging call in Program.cs does.

Whether we want Microsoft's AI models via Azure OpenAI or OpenAI, or the local model we integrated through LocalAI, we can register them with our kernel. That is what the AddChatCompletionService extension method does:

internal static class ServiceCollectionExtensions
{
    /// <summary>
    /// Adds a chat completion service to the list. It can be either an OpenAI or Azure OpenAI backend service.
    /// </summary>
    /// <param name="kernelBuilder"></param>
    /// <param name="kernelSettings"></param>
    /// <exception cref="ArgumentException"></exception>
    internal static IKernelBuilder AddChatCompletionService(this IKernelBuilder kernelBuilder, KernelSettings kernelSettings, HttpClientHandler handler)
    {
        switch (kernelSettings.ServiceType.ToUpperInvariant())
        {
            case ServiceTypes.AzureOpenAI:
                kernelBuilder = kernelBuilder.AddAzureOpenAIChatCompletion(kernelSettings.DeploymentId, endpoint: kernelSettings.Endpoint, apiKey: kernelSettings.ApiKey, serviceId: kernelSettings.ServiceId, kernelSettings.ModelId);
                break;

            case ServiceTypes.OpenAI:
                kernelBuilder = kernelBuilder.AddOpenAIChatCompletion(modelId: kernelSettings.ModelId, apiKey: kernelSettings.ApiKey, orgId: kernelSettings.OrgId, serviceId: kernelSettings.ServiceId);
                break;

            case ServiceTypes.HunyuanAI:
                kernelBuilder = kernelBuilder.AddOpenAIChatCompletion(modelId: kernelSettings.ModelId, apiKey: kernelSettings.ApiKey, httpClient: new HttpClient(handler));
                break;

            case ServiceTypes.LocalAI:
                // The custom handler reroutes requests from api.openai.com to LocalAI.
                kernelBuilder = kernelBuilder.AddOpenAIChatCompletion(modelId: kernelSettings.ModelId, apiKey: kernelSettings.ApiKey, httpClient: new HttpClient(handler));
                break;

            default:
                throw new ArgumentException($"Invalid service type value: {kernelSettings.ServiceType}");
        }

        return kernelBuilder;
    }
}
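
Program.cs also registers a native plugin with builder.Plugins.AddFromType<LightPlugin>(). The plugin itself lives in the sample repository's Plugins folder; the sketch below follows SK's standard [KernelFunction] pattern, and its method names and behavior are illustrative rather than the repo's exact code.

using System.ComponentModel;
using Microsoft.SemanticKernel;

namespace Plugins;

// Sketch of a native SK plugin; the real LightPlugin may differ in detail.
public class LightPlugin
{
    public bool IsOn { get; set; } = false;

    [KernelFunction, Description("Gets the state of the light.")]
    public string GetState() => IsOn ? "on" : "off";

    [KernelFunction, Description("Changes the state of the light.")]
    public string ChangeState(bool newState)
    {
        IsOn = newState;
        Console.WriteLine($"[Light is now {GetState()}]");
        return GetState();
    }
}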

Finally, we start a chat loop using SK's streaming invocation, InvokeStreamingAsync, as shown at the end of Program.cs. The messages argument is rendered by the Handlebars template in the embedded resource prompts.Chat.yaml.
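
Here is a minimal sketch of what that prompt file can look like, using SK's YAML prompt schema; the actual file in the repo may differ.

name: Chat
description: Chat with the assistant.
template_format: handlebars
template: |
  {{#each messages}}
  <message role="{{role}}">{{content}}</message>
  {{/each}}
input_variables:
  - name: messages
    description: The chat history.
    is_required: true

Run the program and you can chat with the locally hosted model from the console.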


The sample source code for this article: https://github.com/geffzhang/sk-csharp-hello-world

