譯:Building a Text Editor in the Times of AI 在AI時代構建文本編輯器

原文:https://zed.dev/blog/building-a-text-editor-in-times-of-ai
作者:Thorsten Ball
譯者:Claude 3 Opus
發佈時間:03/26/24

Building a Text Editor in the Times of AI
在AI時代構建文本編輯器

This is my fifth conversation with Zed's three co-founders Nathan, Max, and Antonio. You can read the previous one here.
這是我與Zed三位聯合創始人Nathan、Max和Antonio的第五次對話。你可以在這裏閱讀之前的對話。

This time I had to address the elephant in the room: AI. I wanted to know how each of the founders found their way to using AI, how they use it today, and how they would like to use it. We also talked about the nitty-gritty of the current implementation of AI features in Zed and what this year will bring for Zed in regards to AI. I also had to ask: is building an editor in times of AI not ignoring the sign of the times?
這一次,我不得不正視房間裏的那頭大象:AI。我想知道每位創始人是如何找到使用AI的方法的,他們今天如何使用它,以及他們想如何使用它。我們還討論了目前在Zed中實現AI功能的細節,以及今年Zed在AI方面的計劃。我還不得不問:在AI時代構建編輯器,難道不是對時代跡象的無視嗎?

What follows is an editorialized transcript of an hour-long conversation. I tried to preserve intent and meaning as much as possible, while getting rid of the uhms, the likes, the you-knows, and the pauses and course-corrections that make up an in-depth conversation.
以下是一段長達一小時的對話的編輯版記錄。我儘量保留原意,同時去掉了"嗯"、"那個"(like)、"你知道"這類口頭語,以及深入對話中難免出現的停頓和自我修正。

(You can watch the full conversation on our YouTube channel.)
(你可以在我們的YouTube頻道上觀看完整的對話。)

Thorsten: When did you first use AI for programming? Do you remember?
Thorsten:你第一次使用AI編程是什麼時候?還記得嗎?

Nathan: I used it pretty early on. I think my first really eye-opening experience with it was using ChatGPT when it first came out and just having it do really basic stuff. I think I defined geometry, like a geometry library in it, sort of just for fun though. And I was blown away that I could even do those really basic things like defining a point and a circle and an area function and all these things that it was doing at the time. Things have gotten a lot more sophisticated since that moment, but that was kind of like this mind-blowing moment for me.
Nathan:我相當早就開始使用了。我想我第一次真正大開眼界的經歷,是在ChatGPT剛出來的時候用它做一些非常基礎的事情。我想我在裏面定義了幾何,差不多是一個幾何庫,只是爲了好玩。它居然能做那些非常基礎的事情,比如定義一個點、一個圓、一個面積函數,以及它當時做的所有這些事情,我當時就震驚了。從那時起,這項技術已經先進了許多,但對我來說,那是一個讓人大開眼界的時刻。

Thorsten: Was it mind-blowing or were you skeptical?

Thorsten:是讓人大開眼界還是讓你感到懷疑?

Nathan: It was mind-blowing. I don't understand the general like hate and skepticism toward AI that so many programmers have.
Nathan:那是讓人大開眼界。我不理解很多程序員對AI的普遍討厭和懷疑。

I remember in college, I studied natural language processing and I worked for my professor, right after school. He was the head of SRI doing like classic AI, Jerry Hobbs. I remember how fascinated I was with the idea of like, what is meaning? What is language? How does it work? And studying these combinatory categorial grammar mechanisms, where it was like, we define grammar as this directional lambda calculus formalism and, you know, I was really curious and fascinated by all of that but also came away frustrated because at the time it was... a language model was the dumbest thing ever. It was based on the frequency of tokens or something and you couldn't get anything out of it.
我記得在大學時,我學習自然語言處理,畢業後就爲我的教授工作。他是SRI做經典AI的負責人,Jerry Hobbs。我記得我當時對"什麼是意義?什麼是語言?它是如何運作的?"這些問題有多着迷。我研究了那些組合範疇語法(combinatory categorial grammar)機制,它們把語法定義成一種有方向的lambda演算形式系統。我對這一切非常好奇和着迷,但也感到沮喪,因爲當時的語言模型簡直是世界上最愚蠢的東西。它基於詞元出現的頻率之類的東西,你從中什麼也得不到。

So just the idea that I could sit there and in English ask it to do anything and have it do anything at all to me is mind-blowing. Right then and there. That's amazing. That's a freaking miracle that I never would have anticipated being good. So why everybody's not blown away by that fact that this exists in our world is beyond me. I just don't get it. It pisses me off, kind of, that people are so, so close-minded about it. Like yeah, you drove a Lamborghini into my driveway, but I don't like the color.

所以,光是"我可以坐在那裏,用英語讓它做任何事情,而它真的能做出來"這一點,對我來說就是令人震撼的。就在那個當下。那太了不起了。那簡直是個奇蹟,我從來沒想到它會這麼好。所以,爲什麼不是每個人都被"這東西存在於我們的世界裏"這個事實所震撼,這超出了我的理解範圍。我就是不明白。人們對此如此封閉,這多少讓我有點生氣。就好像:是的,你把一輛蘭博基尼開進了我的車道,但我不喜歡它的顏色。

It's just this fixation on negativity and what's wrong and what it can't do instead of being amazed and blown away by what it can. And I guess that's just the personality difference between me and people that are like that. I am always looking at the glass half-full and I'm always looking at what's exciting. Now, I never bought a single NFT, right? Just to be clear. So I get that we, in technology, we have these hype cycles and it can get a little exhausting and you're like rolling your eyes at the latest hype cycle and people in your Twitter timeline in all capital letters talking about how this changes everything and is game changing. But I think in this case, it's actually pretty freaking amazing that we have this technology. Okay, I'll stop ranting.
這只是對消極和錯誤的執着,而不是對它的能力感到驚奇和震撼。我想這只是我和那些人之間的性格差異。我總是看到杯子是半滿的,我總是在尋找令人興奮的東西。現在,我從來沒買過一個NFT,要說清楚。所以我明白,在科技領域,我們有這些炒作週期,這可能有點令人厭煩。你會對最新的炒作週期感到不耐煩,人們在你的Twitter時間線裏用全大寫字母說這改變了一切,具有顛覆性。但我認爲,在這種情況下,我們擁有這項技術真的相當了不起。好了,我就說到這兒。

Thorsten: It's funny that you mentioned natural language processing because I come from the other side of the fence. I studied philosophy and I studied philosophy of language. Then when ChatGPT came out, everybody was saying that it doesn't "understand." And I was sitting there thinking: what does "understanding" even mean? How do you understand things? What is meaning? So, I was on the other side of the fence, also thinking that things aren't that easy and that this is super fascinating.
Thorsten:你提到自然語言處理很有趣,因爲我來自另一個陣營。我學的是哲學,研究語言哲學。當ChatGPT出來的時候,大家都說它不"理解"。而我坐在那裏想:"理解"到底是什麼意思?你是如何理解事物的?什麼是意義?所以,我站在另一邊,也認爲事情沒有那麼簡單,這超級有趣。

Antonio: I used ChatGPT right after — I don't know, I think Nathan prompted us to use it. I'm not an AI skeptic or anything — I'm amazed and I also use AI for non-coding tasks — but I've never had an eye-opening experience, I don't know.
Antonio:我在ChatGPT剛出來後就用了--我不確定,我想是Nathan提示我們去用的。我不是AI懷疑論者什麼的--我很驚歎,我也用AI做非編碼任務--但我從來沒有過那種大開眼界的體驗,我不知道。

One thing I struggle a lot with, with AI, is what I do every day. I write code every day for multiple hours a day and I write it in Rust and in this pretty complex code base. And so my first use case for it was to try to use it in our code base. And every time I try to do that there's always some friction.
我在AI方面遇到的一個很大的困難就是我每天要做的事情。我每天要寫好幾個小時的代碼,用Rust寫,在這個相當複雜的代碼庫裏。所以我第一個用例就是嘗試在我們的代碼庫中使用它。每次我嘗試那樣做的時候總會有一些摩擦。

But one thing that I really like and where I think it really shines is when it comes to generating complex pieces of code. Basically, there are certain patterns in code, right? But you can't really express those in regular expressions or by using Cmd-D to set up multi-cursors, but AI is really good at it. You can just say "okay, I want to apply this refactoring to these five functions" and I can just explain it in a way that I couldn't explain it with any tool like regex. There's a lot of interesting potential.
但我真正喜歡的一點,我認爲它真的很出色的地方,是在生成複雜的代碼片段時。基本上,代碼中有某些模式,對吧?但你不能用正則表達式或者用Cmd-D來設置多個光標來表達這些,但AI在這方面真的很擅長。你可以說 "好的,我想把這個重構應用到這五個函數上",我可以用一種我無法用正則表達式解釋的方式來解釋它。這裏有很多有趣的可能性。

Thorsten: Sounds like there was a bit of a disappointment moment.
Thorsten:聽起來有點失望。

Antonio: Yeah. I don't know whether this thing hasn't seen enough Rust. Maybe that's a problem. But there's also a problem of how we integrate with it probably, right? Where we don't give it enough context. I think the problem of just feeding it the right... One thing that I've started to learn only recently is that crafting that context is essential.

Antonio:是的。我不知道這東西是不是沒見過足夠多的Rust。也許那是個問題。但我們與它集成的方式可能也是一個問題,對吧?我們沒有給它足夠的上下文。我認爲,問題就在於給它餵的內容要對......我最近纔開始學到的一點是,精心構造上下文是至關重要的。

And you really need to kind of express it right. The machine really needs to understand what you're trying to do. Especially in a complex code base where you have, in your brain, like 50 references, but the machine can't know that. How could it? So, yeah, part of my disappointment is just the integration with the AI, not the tooling per se, but just like,
你真的需要以正確的方式表達它。機器真的需要理解你想做什麼。特別是在一個複雜的代碼庫中,你的大腦裏可能有50個引用,但機器不可能知道。它怎麼可能知道呢?所以,是的,我的部分失望只是與AI的集成,而不是工具本身。

Nathan: We're not there yet, yeah.
Nathan:我們還沒到那一步,是的。

Max: Yeah, the difference between using Copilot in the Zed code base — which I still do sometimes, but I wouldn't call it game changer for me — and then using it with some, say, JavaScript script that is a single file where all the context is there and the job of the script is to minimize a random test failure by reducing the steps or something, and it needs to read a bunch of files and invoke some shell commands, etc. The difference is large and in the latter case, the single JavaScript file, Copilot just knocks it out of the park.
Max:是的,在Zed代碼庫中使用Copilot--我有時還是會這樣做,但我不會說它對我來說是什麼改變遊戲規則的東西--和在某個JavaScript腳本裏使用它是有區別的,比如說,一個單一文件,所有上下文都在裏面,腳本的工作是通過減少步驟之類的方式,把一個隨機的測試失敗最小化,它需要讀一堆文件、調用一些shell命令,等等。兩者的差別很大,在後一種情況下,也就是那個單一的JavaScript文件,Copilot的表現簡直驚豔。

So if we can get it to behave like that in our day-to-day work, when we're working on a codebase with hundreds of thousands of lines, there's a lot of potential there.
所以,如果我們能讓它在我們日常處理這個幾十萬行代碼庫的工作中也有那樣的表現,那就有很大的潛力。

Thorsten: That's what I noticed in our code base when I use the chat assistant. I often thought, "oh, if you could only see the inlay hints, if you could see the types, then you wouldn't give me this answer." But yes, integration.

Thorsten:這就是我在我們的代碼庫中使用聊天助手時注意到的。我經常想,"哦,如果你能看到內嵌提示,如果你能看到類型,那你就不會給我這個答案了。"但是,是的,集成。

Nathan: And that's, again, our failing too. The most successful times I've ever had with it are times when I'm synthesizing together things that are already in the model's training data. I love that mode.

Nathan:再說一次,這也是我們的失敗。我用它最成功的時候,都是在綜合那些已經在模型的訓練數據中的東西。我喜歡那種模式。

A lot of GPUI2's renderer I wrote just in the assistant panel purely from going "yo, I need a renderer that integrates with the Metal APIs. It's written in Rust." It wasn't perfect but it was way faster than me configuring all these graphics pipelines and stuff. That's not something I've done a ton.
我用助手面板寫了很多GPUI2的渲染器,純粹是從 "喲,我需要一個與Metal API集成的渲染器。它是用Rust寫的。"它並不完美,但比我配置所有這些圖形管道等要快得多。那不是我做過很多的事情。

I love just like distilling something I need, out of the latent space of one of these models, where I'm providing a few parameters but it's mostly in the weights. But I'm guiding what's in the weights to sort of give me like this StackOverflow-on-acid type thing, where the knowledge is out there. I just need it in a certain shape and I want to guide it.

我只是喜歡從這些模型的潛在空間中提取我需要的東西,我提供一些參數,但大部分都在權重中。但我在指導權重中的內容,以某種方式給我這種StackOverflow加強版的東西,知識就在那裏。我只是需要它以某種形狀,我想指導它。

So I was playing with Claude this weekend in the bath, right? And I literally wrote an entire file index that used like lock free maps to store file paths, interpreted all the FS events coming out of the FS events API. It did everything asynchronously. You know, I wrote randomized tests for it, had a fake file system implementation and I was in the bath, right, on my phone. I didn't have a single moment where I was writing a curly brace. Now, I never ran the thing that it produced, but I reviewed it with my eyes and while it may have had a few issues here or there, it was a very legit implementation that this thing wrote of something that took Antonio, Max and I days, days and days of continuous investment to work on. My knowledge of like having solved it before helped me guide it, but I don't know, there's almost some way in which it changes the scale of what you can do quickly.

所以這個週末我是在浴缸裏玩Claude的,對吧?我真的寫出了一個完整的文件索引,用無鎖映射來存儲文件路徑,解析從FS events API裏出來的所有文件系統事件。它的一切都是異步完成的。你知道,我還爲它寫了隨機化測試,有一個假的文件系統實現,而我是在浴缸裏,用手機做的。我沒有一刻是在親手敲大括號。當然,我從來沒有運行過它生成的東西,但我用眼睛審查了一遍,雖然這裏那裏可能有一些小問題,但它寫出的是一個非常像樣的實現,而那個東西曾經花了Antonio、Max和我好幾天、連續好幾天的投入才做出來。我以前解決過這個問題的經驗幫助我去引導它,但我說不清,它在某種程度上改變了你能快速做成的事情的規模。

And then sometimes it just falls flat on its face. For the simplest thing.
而且有時它在最簡單的事情上也會徹底失敗。

Thorsten: ChatGPT came out November 2022, right? When we all should have bought NVIDIA stock. Since then, did you adjust to AI and adjust how you use it? For example, people who use Copilot, they say they adjust to it and kind of leave some comments where they want to guide Copilot. Or did any of you ever get into the whole prompt engineering thing? Or did you reduce when you use it, after figuring out what it can and can't do?
Thorsten: ChatGPT是2022年11月出來的,對吧?那時我們都應該買英偉達的股票。從那時起,你們有沒有適應AI,調整使用它的方式?例如,使用Copilot的人,他們說他們適應了它,會留下一些評論來指導Copilot。或者你們有沒有進入整個提示工程的事情?或者在弄清楚它能做什麼和不能做什麼之後,你減少了使用它的頻率?

Nathan: I don't really use Copilot for what it's worth. I find it annoying. It's in my face. I never was into running tests automatically on save either. I always just want to... I don't know. I prefer to interact with the AI more in a chat modality. So I'm really looking forward to the time we're about to invest, to get more into that context window.
Nathan:我其實並不真的使用Copilot。我覺得它很煩人。它就在我面前。我也從來不喜歡在保存時自動運行測試。我總是隻想......我不知道。我更喜歡在聊天模式下與AI互動。所以我真的很期待我們即將投入的時間,進一步進入那個上下文窗口。

I just find Copilot to be kind of dumb. I don't know. Because they have to be able to invoke it on every keystroke, they have to use a dumber model. And so I guess I just prefer more using a smarter model, but being more deliberate in how I'm using it. But I'm not married to that perspective. I think maybe some UX tweaks on Copilot could change my relationship, but I don't know. I guess I've been willing to sort of use it and even have my interaction with it be slower or less effective sometimes in the name of investing and learning how to use it.

我只是覺得Copilot有點傻。我不知道。因爲他們必須能在每次按鍵時都調用它,所以不得不用一個更傻的模型。所以我想我更傾向於使用一個更聰明的模型,但在怎麼使用它上更加深思熟慮。不過我並不固守這個看法。我想Copilot在用戶體驗上做一些調整,也許會改變我和它的關係,但我不確定。我想我一直願意去用它,甚至願意讓我和它的互動有時更慢或者效率更低,就當是在投入和學習如何使用它。

And yeah, like at the time it saved me on certain really hard things, like writing a procedural macro to enumerate all the Tailwind classes for GPUI. It kind of taught me how to write proc macros because I didn't know how.
是的,在某些真正困難的事情上,它節省了我的時間,比如編寫一個過程宏來爲GPUI枚舉所有的Tailwind類。它教會了我如何編寫過程宏,因爲我不知道如何編寫。

Thorsten: Exactly a year ago, I was at a conference and I was meeting programmer friends and we were all talking about ChatGPT and some of them were saying, "oh, it doesn't know anything. I just tried this and it doesn't know anything." But the queries or the prompts they used, they looked like the prompts people used 20 years ago with Google. Back when you still had this keyword search, people would type in, "where can I get a hot dog?" But that's not how it worked back then. One friend of mine, though, he said, "you know what I use it for? I use it like an assistant. I use it like an intern." So essentially, when he's debugging something and he needs a little program to reproduce the bug, he says to the AI, "Can you write me a little HTTP server that doesn't set a connection timeout" or something like that. Because he knows where the shortcomings are. And I think that a lot of us have had this over the past year, we started to get a feel for where the shortcomings are and adjust our use to it. So I was curious whether you had any of these moments.
Thorsten:正好一年前,我參加了一個會議,遇到了一些程序員朋友,我們都在談論ChatGPT,其中一些人說,"哦,它什麼都不知道。我剛試過這個,它什麼都不知道。"但他們使用的查詢或提示,看起來就像20年前人們用谷歌時使用的提示。那時你還有關鍵詞搜索,人們會輸入 "哪裏可以買到熱狗?"但那時候它不是這樣工作的。不過,我的一個朋友說,"你知道我用它做什麼嗎?我把它當作助手用。我把它當作實習生用。"基本上,當他在調試一些東西,需要一個小程序來重現bug時,他會對AI說,"你能給我寫一個HTTP服務器,不設置連接超時 "之類的。因爲他知道缺點在哪裏。我想我們很多人在過去一年裏都有過這樣的經歷,我們開始體會到缺點在哪裏,並調整我們的使用方式。所以我很好奇你們是否有過這樣的時刻。

Max: I have one in my day-to-day life. I use ChatGPT a lot instead of Google. And I've learned to say, "now, don't hedge this by saying, 'it depends'. I'm aware. Just tell me, give me an answer", so that ChatGPT doesn't say, "there are many possible responses to this."

Max:在我的日常生活中,我經常使用ChatGPT代替谷歌。我已經學會說,"現在,不要用'這取決於'來回避這個問題。我知道。告訴我,給我一個答案",這樣ChatGPT就不會說,"對此有很多可能的回答。"

But I think I have a lot to learn about what to do in the programming world still. There's probably a lot of knowledge out there that I just haven't adopted into my workflow yet for prompting the assistant.
但我認爲在編程世界裏,我還有很多東西要學。可能有很多知識我還沒有采用到我的工作流程中,用於提示助手。

Nathan: I think I have the advantage of just being not as good of a raw programmer as Max or Antonio. A lot of times when I'm pairing, I take more of a navigator role in the interaction. And so I just reach for AI more because I'm just not as fast at cranking out code. And so I think it's less frustrating to me.
Nathan:我認爲我的優勢在於,論寫代碼的原始功力,我不如Max或Antonio。很多時候,當我結對編程時,我在互動中更多地扮演領航員的角色。所以我更多地求助於AI,因爲我敲代碼沒有那麼快。所以我想,對我來說它沒那麼令人沮喪。

Thorsten: When did you decide "we have to add this to Zed"? Was it being swept up in the hype and everybody asking for it, or was there a specific thing, or time when you said, "no, I need this in the editor."

Thorsten:你們什麼時候決定 "我們必須把它加入Zed"?是被炒作所淹沒,每個人都在要求它,還是有什麼具體的事情,或者是你們什麼時候說,"不,我需要它在編輯器裏。"

Nathan: For me there's Copilot and then there's the Assistant. So Copilot, everybody asks for it. And I was like, "oh, I wanna see what it's like to work with this". But then I ended up not using it a lot. But for the other one, the assistant, it was just that I was using GPT4 and they were rate-limiting me. So then I was going into the SDK or the playground and writing text in a fricking web browser. And I'm just like, this is driving me crazy. I wanna write text in Zed.
Nathan:對我來說,有Copilot,然後還有助手。所以Copilot,每個人都要求它。我想,"哦,我想看看和它一起工作是什麼樣子的"。但後來我最終沒有經常使用它。但對於另一個,助手,那只是因爲我在使用GPT4,而他們在限制我的使用速率。所以我進入SDK或playground,在一個網絡瀏覽器中寫文本。我就像,這快把我逼瘋了。我想在Zed裏寫文本。

And, I mean, that's what the assistant is right now. It's kind of pretty bare bones. It's like an API, it's like an OpenAI API request editor almost, one that isn't annoying to use from a text editing perspective. That's kind of where things are at right now, which isn't where they need to stay. We have a lot of work to do on it, but that's the thought process.
而且,我的意思是,這就是助手現在的樣子。它有點非常簡陋。它就像一個API,幾乎就像一個OpenAI API請求編輯器,從文本編輯的角度來看,使用起來並不令人討厭。這就是事情目前的狀態,這不是它們需要停留的地方。我們在這方面還有很多工作要做,但這就是思考過程。

Thorsten: I kind of want to go into the weeds a little and ask you about the inline assist in Zed. Context for whoever's watching or listening or reading: you can select text in Zed, you can then hit ctrl-enter and you send a message along with the selected text and some more context to the AI and you can ask it to "change the return type of this" or whatever, "reorder this" or "use a macro", something like that. What then happens when the request comes back from the AI, is that you can see it type the text or change it and you can see it change word by word. It doesn't look like it's just shoving the text into the buffer. So I'm curious, what happens when the LLM request comes back and says, here's a snippet of code?

Thorsten:我有點想深入探討一下,問你關於Zed中的內聯輔助。爲了給觀看或聽或閱讀的人一些背景:你可以在Zed中選擇文本,然後按ctrl-enter,你就會把選中的文本和一些更多的上下文以消息的形式發送給AI,你可以要求它 "改變這個的返回類型 "或者其他什麼,"重新排序這個 "或 "使用宏",諸如此類。當請求從AI返回時,會發生什麼,你可以看到它輸入文本或改變它,你可以看到它一個字一個字地改變。看起來不像是把文本塞進緩衝區。所以我很好奇,當LLM請求回來說,這裏有一個代碼片段時,會發生什麼?

Antonio: Basically, we implemented a custom version of the Needleman-Wunsch algorithm. There are several algorithms for fuzzy finding and they all stem from this dynamic programming algorithm, which is essentially about finding the lowest cost path from point A, the origin, which is where both strings start, and the end, which is where both strings end. So we're kind of doing this like diff, the streaming diff, because typically diff is this lossy function where you need to have like both texts entirely, but the problem is that the AI streams the response, chunk by chunk. But we don't want to wait for the entire response to come back before diffing. So we kind of have this like slightly modified version of Needleman, in which we try to favor insertions and deletions and we kind of look ahead a little bit and have a different cost function. That lets us produce these streaming edits. It's a pretty fun project.
Antonio:基本上,我們實現了Needleman-Wunsch算法的自定義版本。有幾種模糊查找的算法,它們都源於這種動態規劃算法,本質上是找到從A點,也就是起點,也就是兩個字符串的起點,到終點,也就是兩個字符串的終點,成本最低的路徑。所以我們在做這種diff,流式diff,因爲通常diff是這種有損函數,你需要完整地擁有兩個文本,但問題是AI是分塊流式傳輸響應的。但我們不想等整個響應回來再做diff。所以我們有這種稍微修改過的Needleman版本,我們試圖偏好插入和刪除,我們稍微向前看一點,有一個不同的成本函數。這讓我們能夠生成這些流式編輯。這是一個相當有趣的項目。
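
To illustrate the dynamic-programming idea Antonio describes (a minimal sketch only, not Zed's implementation): align the original text with the prefix of the AI response streamed so far, allowing only matches, insertions, and deletions, then walk the cost matrix back to recover an edit script. The real streaming version would additionally hold back edits near the end of the streamed prefix and use a tuned cost function.

爲了說明Antonio描述的動態規劃思路(這只是一個最小化的示意,並非Zed的實現):把原始文本與AI目前已流式返回的前綴對齊,只允許匹配、插入和刪除,然後沿代價矩陣回溯出一個編輯序列。真正的流式版本還會對靠近流式前綴末尾的編輯先按下不表,並使用調校過的代價函數。

```rust
// Minimal, illustrative Needleman-Wunsch-style alignment between the original
// text and the prefix of the AI response streamed so far. Not Zed's code;
// it only demonstrates the dynamic-programming idea described above.

#[derive(Debug)]
enum Edit {
    Keep(char),
    Delete(char),
    Insert(char),
}

// Align `old` with the streamed prefix `new`, allowing only keeps, deletions,
// and insertions (unit cost). A streaming variant would re-run this as chunks
// arrive and defer emitting edits close to the end of `new`.
fn align(old: &[char], new: &[char]) -> Vec<Edit> {
    let (n, m) = (old.len(), new.len());
    // dp[i][j] = minimal cost of aligning old[..i] with new[..j].
    let mut dp = vec![vec![0u32; m + 1]; n + 1];
    for i in 0..=n {
        dp[i][0] = i as u32;
    }
    for j in 0..=m {
        dp[0][j] = j as u32;
    }
    for i in 1..=n {
        for j in 1..=m {
            let keep = if old[i - 1] == new[j - 1] { dp[i - 1][j - 1] } else { u32::MAX };
            let delete = dp[i - 1][j] + 1; // drop a character of the original
            let insert = dp[i][j - 1] + 1; // take a character from the response
            dp[i][j] = keep.min(delete).min(insert);
        }
    }
    // Walk the matrix back from (n, m) to (0, 0) to recover the edit script.
    let (mut i, mut j) = (n, m);
    let mut edits = Vec::new();
    while i > 0 || j > 0 {
        if i > 0 && j > 0 && old[i - 1] == new[j - 1] && dp[i][j] == dp[i - 1][j - 1] {
            edits.push(Edit::Keep(old[i - 1]));
            i -= 1;
            j -= 1;
        } else if i > 0 && dp[i][j] == dp[i - 1][j] + 1 {
            edits.push(Edit::Delete(old[i - 1]));
            i -= 1;
        } else {
            edits.push(Edit::Insert(new[j - 1]));
            j -= 1;
        }
    }
    edits.reverse();
    edits
}

fn main() {
    let old: Vec<char> = "fn area(r: f32) -> f32".chars().collect();
    let new: Vec<char> = "fn area(radius: f64) -> f64".chars().collect();
    for edit in align(&old, &new) {
        println!("{:?}", edit);
    }
}
```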

Thorsten: So did you build this specifically for the inline assist? I assumed it's code that's also used in the collaboration features, no?
Thorsten:所以這是你們專門爲內聯輔助構建的嗎?我還以爲這是協作功能裏也會用到的代碼,不是嗎?

Antonio: No. What we tried at first actually was to have the AI use function calling to give us the edits, as opposed to, asking for a response and the AI just spitting it out, top to bottom. The initial attempt was like, "okay, just give us the precise edits that, you know, you want us to apply". But what we found out pretty early on was that it wasn't working very reliably. It was kind of tricky to have it produce precise locations.
Antonio:不是。我們最初嘗試的實際上是讓AI使用函數調用來給我們編輯,而不是要求響應,AI只是從上到下吐出來。最初的嘗試是,"好的,只要給我們你想讓我們應用的精確編輯就行了"。但我們很早就發現,它工作得不太可靠。讓它產生精確的位置有點棘手。

It's really good at understanding what you're trying to do as a whole, but it's very hard to have it say, "okay, at point three, you know, row three, column two, I want to insert, delete, you know, five characters and insert, you know, these other six characters".
它真的很擅長理解你想做的事情作爲一個整體,但很難讓它說,"好的,在第三點,你知道,第三行,第二列,我要插入,刪除,你知道,五個字符,插入,你知道,這另外六個字符"。

So we went back to the drawing board and we said it's good at spitting out text, so let's just have it write what you wanted, and that's where Nathan's idea came in.
所以我們推倒重來,我們說它擅長吐出文本,那就乾脆讓它直接寫出你想要的東西,Nathan的想法就是在這裏派上用場的。

Nathan: And Antonio's algorithmic chops actually making it happen. Yeah.
Nathan:還有Antonio的算法能力真正實現了它。是的。

Thorsten: How, it's pretty reliable, right?
Thorsten:它相當可靠,對吧?

Antonio: Thanks. Yeah.
Antonio:謝謝。是的。

Nathan: Sometimes it overdraws. It... I don't know. It's not always reliable for me. I think that has to do with our prompting maybe. There's a lot of exploration to do here. I'll ask it to write the documentation for a function and it'll rewrite the function. That drives me crazy.

Nathan:它有時會做過頭。它......我也說不好。對我來說,它並不總是可靠。我覺得這可能和我們的提示詞有關。這裏還有很多需要探索的地方。我讓它爲一個函數寫文檔,它卻把函數重寫了。這讓我抓狂。

Thorsten: The prompting, sure, but the actual text insertion — every time I see these words light up, I'm like, what's going on here? How do they do this? How long did it take to implement this? I'm curious.
Thorsten:提示,當然,但實際的文本插入--每次我看到這些字亮起,我就在想,這裏發生了什麼?他們是怎麼做到的?實現這個需要多長時間?我很好奇。

Antonio: Half a day. Yeah, I remember a day. Yeah, something like that.

Antonio:半天。是的,我記得是一天。是的,差不多就是這樣。

Thorsten: No way.
Thorsten:不會吧。

Nathan: But to be fair, we had already really explored and just needed a little bit of push for the path matching. That took a little more time, wrapping our brains around it. And I think more of it stuck for you, Antonio, to put it that way.
Nathan:但公平地說,我們已經真正探索過了,只是需要一點推動來進行路徑匹配。這花了更多的時間,我們的大腦需要適應它。我想更多的東西留在了你那裏,Antonio,就這麼說吧。

Antonio: Hahaha!
Antonio:哈哈哈!

Nathan: Cause, yeah, traversing that dynamic programming matrix still kind of boggles my mind a little
Nathan:因爲,是的,遍歷那個動態規劃矩陣仍然讓我有點困惑

Thorsten: Half a day — you could have said a week just to make me feel better.
Thorsten:半天--你本可以說一週,只是爲了讓我感覺好點。

Antonio: A week, yeah a week, no.
Antonio:一週,是的,一週,不是。

Max: Hahaha.
Max:哈哈哈。

Thorsten: So right now in Zed we have the inline assist, we have the chat assistant, which you can use to just write Markdown, and you can talk to multiple models. What's next? What's on the roadmap?
Thorsten:所以現在在Zed中,我們有內聯輔助,我們有聊天助手,你可以用它來寫Markdown,你可以與多個模型對話。接下來是什麼?路線圖上有什麼?

Nathan: A big piece is just getting more context into what the assistant sees. Transitioning it away from an API client to starting to pull in more context. Kyle, who's been contracting with us, has a branch where we're pulling in the current file. Obviously we want more mechanisms for pulling context in, not only the current file, but all the open files, and you can dial in the context, opt in or out, and so on.
Nathan:很重要的一部分,就是讓助手能看到更多上下文。把它從一個API客戶端,轉變爲開始引入更多上下文的東西。一直以合同方式和我們合作的Kyle有一個分支,我們在裏面引入了當前文件。顯然我們想要更多引入上下文的機制,不只是當前文件,還有所有打開的文件,而且你可以調節上下文,選擇加入或排除某些內容,等等。

But then also doing tool-calling where I can talk to the assistant and have it help me craft my context, if that makes sense. Also having it interact with the language server, but also using tree-sitter to sort of traverse dependencies of files that we're pulling in so that we make sure that we have all this in the context window. Of course, context sizes have gone up a lot, which makes this all a lot easier, because we can be more greedy in terms of what we're pulling in.
但接下來還要做工具調用,我可以與助手交談,讓它幫我製作上下文,如果這有意義的話。還要讓它與語言服務器交互,但也要使用tree-sitter來遍歷我們引入的文件的依賴關係,以確保我們在上下文窗口中擁有所有這些。當然,上下文的大小已經大大增加了,這使得這一切變得容易多了,因爲我們可以在引入的內容方面更加貪婪。
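
To make the "being greedy within a budget" idea concrete, here is a minimal, hypothetical sketch — the types and function names are invented for illustration and are not Zed's API — that packs the current file and then other open files into a prompt until a budget derived from the context window runs out.

爲了把"在預算內儘量貪婪"這個思路說得更具體,下面是一個最小化的假設性示意--其中的類型和函數名都是爲了說明而虛構的,並非Zed的API--它先放入當前文件,再放入其他打開的文件,直到由上下文窗口決定的預算用完爲止。

```rust
// Hypothetical sketch of greedy context assembly: not Zed's API, just an
// illustration of packing the current file plus other open files into a
// fixed budget derived from the model's context window.

struct OpenFile {
    path: String,
    contents: String,
}

// Build a prompt prefix from the current file first, then other open files,
// stopping once the character budget (a stand-in for a token budget) is spent.
fn assemble_context(current: &OpenFile, others: &[OpenFile], budget_chars: usize) -> String {
    let mut context = String::new();
    let mut remaining = budget_chars;
    for file in std::iter::once(current).chain(others.iter()) {
        // Label each file so the model knows where a snippet came from.
        let header = format!("// File: {}\n", file.path);
        let cost = header.len() + file.contents.len() + 1;
        if cost > remaining {
            break; // budget exhausted; a smarter version would truncate or rank files
        }
        context.push_str(&header);
        context.push_str(&file.contents);
        context.push('\n');
        remaining -= cost;
    }
    context
}

fn main() {
    let current = OpenFile {
        path: "src/editor.rs".into(),
        contents: "fn main() { println!(\"editing\"); }".into(),
    };
    let others = vec![OpenFile {
        path: "src/diff.rs".into(),
        contents: "pub fn diff() {}".into(),
    }];
    println!("{}", assemble_context(&current, &others, 4_000));
}
```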

So that's a big dimension, populating that context window more intelligently, but also giving the assistant tool calls that it can use to write a command in the terminal. I don't know if I want to give it the ability to hit enter, you know, but like, at least write it in and stage it and shift my focus over there so that I can run something. I could get help with whatever random bash incantation I might want to run. Having the assistant escape that little box and reach out and interact with other parts of the editor.
所以這是一個重要的維度,更智能地填充上下文窗口,但也給助手工具調用,它可以用來在終端中寫命令。我不知道我是否想給它按回車的能力,你知道,但至少把它寫進去,把它放在那裏,把我的注意力轉移到那裏,這樣我就可以運行一些東西。我可以得到幫助,運行任何我想運行的隨機bash咒語。讓助手逃離那個小盒子,伸出手來與編輯器的其他部分交互。

That's all really low-hanging fruit that I think we need to pick. That's what's next for me. And then we're also like experimenting with alternative completion providers, for the Copilot style experience. We'll see where that goes. It's still kind of early days there.
這些都是我認爲我們需要去摘的、唾手可得的果實。這就是我接下來要做的。然後我們也在嘗試其他的補全提供方,用於Copilot風格的體驗。我們會看看它的發展。那邊還處於相當早期的階段。

Max: I'm excited about another dimension of the feature set. Right now, all the stuff we were just talking about, that system is very local. You select a block of code and its output is directed into that location.
Max:我對功能集的另一個維度感到興奮。現在,我們剛剛談論的所有東西,那個系統都是非常局部的。你選擇一塊代碼,它的輸出被定向到那個位置。

But being able to — just like code actions in Zed — say "inline this function into all callers" and get a multi-buffer opened up that says "I changed here, I changed here, I changed here. Do you want to save this or undo it?" I can then go look at what it did.
但能夠像Zed中的代碼操作一樣說 "把這個函數內聯到所有調用者",然後打開一個多緩衝區,說 "我在這裏改了,我在這裏改了,我在這裏改了。你想保存還是撤銷?"然後我可以去看它做了什麼。

I want to be able to say, "extract a struct out of this that isn't in this crate, that's in a subcrate and depend on it in all these usages of this crate so they don't have to depend on all this other stuff" and then have it go, "here, I changed your Cargo.toml, I created a crate, I changed this, I did these sort of more complex transformations to various pieces of code in your code base. You wanna save this or undo it?"
我想能夠說,"從這裏面提取出一個結構體,它不放在這個crate裏,而是放在一個子crate裏,並且在這個crate用到它的所有地方都依賴那個子crate,這樣它們就不必依賴其他所有這些東西了",然後讓它回答,"好,我修改了你的Cargo.toml,我創建了一個crate,我修改了這個,我對你代碼庫裏的各處代碼做了這些更復雜的轉換。你想保存還是撤銷?"

I think that's gonna be a really powerful way of letting it do more stuff while keeping control. I think the multi-buffer is a good way to go to the user and ask "you want to apply all these transformations that I just made?"

我認爲這將是一種非常強大的方式,讓它在保持控制的同時做更多的事情。我認爲多緩衝區是一個很好的方式,可以問用戶 "你想應用我剛剛做的所有這些轉換嗎?"

Nathan: Speaking of multi-buffer, another really low-hanging fruit thing is invoking the model in parallel on multiple selections. When I pull up a multi-buffer full of compile errors that are all basically the same stupid manipulation that I needed to do, it'd be great to just apply an LLM prompt to every single one of those errors in parallel.
Nathan:說到多緩衝區,另一個真正唾手可得的東西是在多個選擇上並行調用模型。當我打開一個充滿編譯錯誤的多緩衝區,而這些錯誤基本上都是我需要做的相同的愚蠢操作時,如果能並行地對每一個錯誤應用LLM提示,那就太好了。

Thorsten: Low-hanging fruits are everywhere — you could add AI to every text input basically, adding autocomplete or generation or whatsoever. There was an example last week, when I talked with somebody who wanted to use an LLM in the project search input where you could use the LLM to generate regex for you. That's cool, but at the same time, I thought, wouldn't the better step actually be to have a proper keyword search instead of having the LLM translate to a regex? I'm wondering whether there isn't the possibility of being trapped in a local maximum by going for the low-hanging fruit.
Thorsten:到處都是唾手可得的果實--你基本上可以在每個文本輸入中添加AI,添加自動完成或生成等功能。上週有個例子,我和一個人交談,他想在項目搜索輸入中使用LLM,你可以用LLM爲你生成正則表達式。這很酷,但與此同時,我在想,實際上更好的步驟難道不是進行適當的關鍵詞搜索,而不是讓LLM轉換爲正則表達式嗎?我在想,是否有可能因爲追求唾手可得的果實而陷入局部最大值。

Max: Meaning, like, how much of the current programming tool paradigm, like, regex search do we even want to keep? Or do we say that we don't even need that feature anymore?
Max:意思是,比如,我們到底還想保留多少當前的編程工具範式,比如正則表達式搜索?或者我們說我們甚至不再需要那個功能了?

Thorsten: Something like that, yeah. A year ago, everybody was adding AI to every field and obviously things changed and people now say, "this is not a good use case for that", and you're now also saying you want it to have access to files, and so on. How do you think about that? What do you see as the next big milestone?

Thorsten:對,差不多就是這樣。一年前,每個人都在每個領域添加AI,顯然情況發生了變化,人們現在說,"這不是一個好的用例",你現在也說你希望它能訪問文件,等等。你怎麼看這個問題?你認爲下一個重要的里程碑是什麼?

Nathan: Well, I guess I have different perspectives. There's a couple different things I want to respond to that with. One is we experimented with semantic search over the summer and the initial thing was that we generated all these embeddings with OpenAI's embedding API, which is designed for text and prose. I think less for code, that's at least my understanding, maybe people can comment on the YouTube video and tell me I'm wrong. So I don't know how good embedding models are in general for code, but what I did find is that with this initial experiment, that was literally that you would start typing your query and we would just show you the matching files or like the file and line number. And I was just using that a ton for navigation that it was just really useful to be able to mash keys.
Nathan:嗯,我想我有不同的看法。我想用幾個不同的角度來回應這個問題。一個是我們在夏天嘗試了語義搜索,最初的想法是我們用OpenAI的嵌入API生成了所有這些嵌入,這個API是爲文本和散文設計的。我認爲對代碼來說不太合適,至少這是我的理解,也許人們可以在YouTube視頻下評論告訴我我錯了。所以我不知道嵌入模型對代碼到底有多好,但我發現,通過這個初始實驗,你開始輸入查詢,我們就會向你顯示匹配的文件或文件和行號。我大量使用它進行導航,它真的很有用,可以快速按鍵。

It was better than the fuzzy finder and better than cmd-t, which is the symbol search on the language server. Because at least with rust-analyzer on our code base, that can be really slow. So I used it as kind of this just quick, convenient navigation tool.
它比模糊查找器好,也比cmd-t好,後者是語言服務器上的符號搜索。因爲至少在我們的代碼庫上使用rust-analyzer,那可能會非常慢。所以我把它用作這種快速、方便的導航工具。

But then I was not super involved and we pivoted that prototype-modal-experience into a feature of our search. And then I just stopped using it because of the friction of using that was too high. And the quality of the results that we were getting at least then, wasn't really high enough. I want to get back to that, restoring that modal fuzzy navigation experience of using semantics to quickly jump to different areas. But it's not like a search result, not quite, it's more like this quick thing. So that's one thing.
但後來我沒有太多參與,我們把那個原型模態體驗轉變成了我們搜索的一個功能。然後我就不再使用它了,因爲使用它的摩擦太大了。而且我們當時至少得到的結果質量還不夠高。我想回到那種狀態,恢復使用語義快速跳轉到不同區域的模糊導航體驗。但它不像搜索結果,不完全是,它更像是這種快速的東西。所以這是一個方面。
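
For readers curious what the lookup half of such a semantic search might look like, here is a minimal sketch — the types are hypothetical and this is not Zed's implementation — that ranks pre-computed embeddings of code spans by cosine similarity to a query embedding.

如果讀者好奇這種語義搜索的查詢端大概是什麼樣子,下面是一個最小化的示意--這些類型是假設的,並非Zed的實現--它按與查詢嵌入的餘弦相似度,對預先計算好的代碼片段嵌入進行排序。

```rust
// Hypothetical sketch of the lookup half of semantic search: given precomputed
// embeddings for spans of code, rank them by cosine similarity to the query
// embedding. Illustrative only; not Zed's implementation.

struct IndexedSpan {
    path: String,
    line: usize,
    embedding: Vec<f32>,
}

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
}

// Return the `top_k` spans most similar to the query embedding.
fn search(index: &[IndexedSpan], query: &[f32], top_k: usize) -> Vec<(f32, String, usize)> {
    let mut scored: Vec<_> = index
        .iter()
        .map(|span| (cosine_similarity(&span.embedding, query), span.path.clone(), span.line))
        .collect();
    // Sort descending by similarity score.
    scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap_or(std::cmp::Ordering::Equal));
    scored.truncate(top_k);
    scored
}

fn main() {
    let index = vec![
        IndexedSpan { path: "src/render.rs".into(), line: 10, embedding: vec![0.9, 0.1, 0.0] },
        IndexedSpan { path: "src/input.rs".into(), line: 42, embedding: vec![0.1, 0.8, 0.2] },
    ];
    for (score, path, line) in search(&index, &[1.0, 0.0, 0.0], 1) {
        println!("{path}:{line} ({score:.2})");
    }
}
```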

But the other thing I want to say is like, I'm skeptical, I guess, of... I was skeptical of AI in general until I was proven wrong. So I want to be careful to be humble about the possibilities of what can be done. But, in general, where I'm at right now is that I really want to still see what is going on. I don't want a lot of shit like happening behind the scenes on my behalf where it's writing a regex and then running it because I don't have enough confidence that's going to work well.

但我想說的另一件事是,我對......我猜我對AI普遍持懷疑態度,直到事實證明我錯了。所以我想謹慎地對可能做到的事情保持謙遜。但是,總的來說,我現在的立場是,我真的還想看看正在發生什麼。我不希望在幕後代表我發生很多事情,比如它在寫一個正則表達式然後運行它,因爲我沒有足夠的信心認爲它會工作得很好。

So until I get that confidence I like the idea of there being this very visible hybrid experience. The AI is helping me use the algorithmic, traditional tools. Even OpenAI has the code interpreter, right? They're not trying to get the LLM to add numbers. They just shell out to Python. And so I think giving the AI access to these more algorithmic traditional tools is like where I want to go.
所以在我獲得這種信心之前,我喜歡有這種非常可見的混合體驗的想法。AI在幫助我使用算法化的、傳統的工具。甚至OpenAI也有代碼解釋器,對吧?他們不是想讓LLM來加數字。他們只是把它交給Python。所以我認爲給AI訪問這些更算法化的傳統工具的權限,是我想要的方向。

Thorsten: Do you have any thoughts on the context windows? When you have a large context window, you would think all of the problems are solved, right? Just shove the whole codebase in. But then you also have to upload a lot of code and it takes a while longer until the response come back. Any thoughts on this trade off between context size and latency?
Thorsten:關於上下文窗口,你有什麼想法嗎?當你有一個大的上下文窗口時,你會認爲所有的問題都解決了,對吧?把整個代碼庫都塞進去。但是你也要上傳很多代碼,需要更長的時間才能得到響應。對這種上下文大小和延遲之間的權衡,你有什麼想法嗎?

Nathan: I'm still wrapping my brain around what causes the additional latency when the context size grows larger. In my mental model of a transformer, I don't understand why it takes longer, but I can see practically that it does. So yeah, I guess, I'm revealing my ignorance here.
Nathan:我還在思考當上下文大小增加時,是什麼導致了額外的延遲。在我對transformer的心理模型中,我不明白爲什麼它需要更長的時間,但實際上我可以看到它確實如此。所以,是的,我想,我在這裏暴露了我的無知。

But to me it seems like giving it everything is a recipe for maybe giving it too much and confusing it. Although my understanding is that this is also improving, they're getting less confused now by noise and the needle-in-the-haystack problem. That's what I saw from Gemini, I'm still kind of waiting to get my API access. But what I saw was that it's very good at kind of plucking out details that matter among the sea of garbage.
但對我來說,給它一切可能會給它太多並讓它感到困惑。儘管我的理解是,這也在改進,它們現在不會被噪音和大海撈針問題搞得那麼糊塗。這就是我從Gemini看到的,我還在等待獲得API訪問權限。但我看到的是,它非常擅長從垃圾堆中挑選出重要的細節。

I don't know, that wasn't a very coherent thought other than it seems to me that we need to think about how to curate context for a while longer. And the times when I've interacted with models and been most successful has been either when I'm, again, like drawing from the weights, the latent space of that model, and very little needed in the context window because the problem I'm solving is sort of out there in the ether. Or I really set it up with the specific things that it needs to be successful.
我不知道,這不是一個很連貫的想法,除了在我看來,我們需要再考慮一段時間如何策劃上下文。我與模型互動並取得最大成功的時候,要麼是我再次從模型的權重、潛在空間中提取信息,而上下文窗口中幾乎不需要什麼,因爲我要解決的問題就在那裏。要麼我真的用它需要成功的特定東西來設置它。

But to be fair, I think we have a lot to learn in this space. Yeah.
但公平地說,我認爲我們在這個領域還有很多要學的。是的。

Thorsten: I asked because you said you used the fuzzy-search when you had it within reach, but once there was a little bit more friction you stopped using it. And I noticed, speaking of large context windows, that I already get impatient when I have to wait for ChatGPT sometimes. "Come on, skip the intro, give me the good stuff." With large context windows, I wonder whether I would rather skip asking when I know that the answer's gonna take 20 seconds to come back, or 10 seconds, or whatever it is.
Thorsten:我之所以問,是因爲你說當模糊搜索在你觸手可及的時候你會使用它,但一旦有一點摩擦你就不再使用它了。我注意到,說到大的上下文窗口,有時我必須等待ChatGPT時我已經變得不耐煩了。"來吧,跳過簡介,給我好東西。"對於大的上下文窗口,我不知道當我知道答案要花20秒或10秒或無論多長時間才能回來時,我是否寧願跳過提問。

Nathan: Yeah, I think the higher the latency, the more I'm going to expect out of what it responds with. I mean, I was just having a great time in the bath, while I waited for Claude to respond. I took a deep breath and felt the warm water on my body, you know, and then by the time it responds, I'm just reading it.
Nathan:是的,我認爲延遲越高,我對它的迴應就期望越大。我的意思是,我剛纔在洗澡時玩得很開心,等待Claude迴應的時候。我深吸一口氣,感受到溫水在我身上,你知道,等它迴應的時候,我只是在讀它。

Thorsten: I think you should redo this with a control group that also codes in the bath but without AI. Maybe the results are the same. It sounds like a fantastic bath. Let me ask some controversial questions... When I said I'm going to join Zed, people asked me, "oh, a text editor? Do we even have to write code two years from now, with AI?" What do you think about that? Do you think we will still type program language syntax into Zed in five years, or do you think that how we program will fundamentally change?

Thorsten:我覺得你應該重做這個實驗,加一個對照組:也在浴缸裏寫代碼,但不用AI。也許結果是一樣的。聽起來那次泡澡棒極了。讓我問幾個有爭議的問題......當我說我要加入Zed時,有人問我,"哦,一個文本編輯器?有了AI,兩年後我們還需要自己寫代碼嗎?"你們怎麼看?你們認爲五年後我們還會往Zed裏敲編程語言的語法嗎,還是說我們編程的方式會發生根本性的改變?

Nathan: Yeah, it's a good question. I mean, I've tweeted out that it's kind of ironic that as soon as AI can write me a code editor, I won't need a code editor. But as of yet, it's not yet possible to sit down and say, build me a code editor written in Rust with GPU accelerated graphics. I don't know. I don't think AI is there yet.
Nathan:是啊,這是個好問題。我的意思是,我發過推特說,諷刺的是,一旦AI可以給我寫一個代碼編輯器,我就不需要代碼編輯器了。但到目前爲止,還不可能坐下來說,給我建一個用Rust編寫的、有GPU加速圖形的代碼編輯器。我不知道。我認爲AI還沒有達到那個程度。

Now maybe that's the only product complex enough. Maybe the only thing that AI can't build is a code editor, but I'm skeptical right now. Maybe Ray Kurzweil is right and we're all just going to be like uploading our brains into the cloud and I just don't know. All I know is things are changing fast, but what seems true right now is at the very least I'm going to want supervisory access, like that Devon demo.
現在,也許這是唯一一個足夠複雜的產品。也許AI唯一造不出來的東西就是代碼編輯器,但我現在對此持懷疑態度。也許Ray Kurzweil是對的,我們最終都要把大腦上傳到雲端,我不知道。我所知道的是,事情變化得很快,但目前看來至少有一點是真的:我還是想要監督權限,就像那個Devon演示那樣。

To me, a potential outcome is that editing code ends up feeling, for a while, like that Devon demo but with an amazing UX for having a human programmer involved in that loop, guiding that process so that we're not just spamming an LLM with brute force attempts. Instead there's this feedback loop of the LLM taking access and the human being involved has to correct that or guide that. Yeah, so it becomes this like human LLM collaboration, but the human is still involved.

對我來說,一個可能的結果是,在一段時間裏,編輯代碼的感覺會像那個Devon演示,但配上一個出色的用戶體驗,讓人類程序員參與到那個循環中,引導那個過程,這樣我們就不是在用蠻力嘗試去轟炸LLM。相反,會有這樣一個反饋循環:LLM獲得訪問權限去做事,而參與其中的人必須糾正或引導它。是的,所以它變成了這種人類與LLM的協作,但人類仍然參與其中。

If that ends up not being true, yeah, I guess we don't need a code editor anymore. I don't know how far away that is, if it's ever gonna be here.
如果最後這不是真的,是的,我想我們不再需要代碼編輯器了。我不知道那還有多遠,如果它真的會到來的話。

They've been telling me for a long, long time that I'm gonna be riding around in these self-driving taxis and I've done it a couple times. But I will say, the taxi refused to park where we actually were in San Francisco. So we had to walk in pouring rain to the place where they pick us up. My mind is freaking blown that a car is automatically driving me, picking me up and driving me somewhere else, and at the same time, I'm a little annoyed that I'm walking through the rain right now to the place where it stopped. It sort of feels like the same thing happens with LLMs, right?
他們很長很長時間以來一直在告訴我,我會乘坐這些自動駕駛的出租車,我也這樣做過幾次。但我要說,出租車拒絕在我們實際所在的舊金山停車。所以我們不得不冒着傾盆大雨走到他們接我們的地方。我的思想被顛覆了,一輛車在自動駕駛我,接我,把我送到別的地方,同時,我有點惱火,我現在要冒雨走到它停下的地方。感覺就像LLM也發生了同樣的事情,對吧?

Who knows what's gonna happen, but for the moment, I like creating software. I don't need to type the code out myself, but I do feel like I'd like to be involved more than just sitting down to a Google search box and being like, go be a code editor.

誰知道會發生什麼,但就目前而言,我喜歡創造軟件。我不需要親手把代碼敲出來,但我確實覺得,我想要的參與度,不只是坐到一個谷歌搜索框前說一句"去,變出一個代碼編輯器"。

Max: I'm bullish on code still being a thing for quite a while longer. I think it goes back to what Nathan said about the AI expanding the set of things that you can build, in a shorter amount of time, it makes it easier to explore a bigger space of ideas, because it's cheaper.
Max:我看好代碼在相當長的一段時間內仍然會存在。我認爲這要回到Nathan說的:AI擴大了你在更短時間內能構建的東西的範圍,它讓探索更大的想法空間變得更容易,因爲成本更低了。

I think there will be code that it won't be anyone's job anymore to write, but that's boring code anyway.
我認爲會有一些代碼不再是任何人的工作去編寫,但反正那是無聊的代碼。

But I think it's just gonna make it possible to have more code because it's cheaper to maintain it, it's cheaper to create it, rewrite it if we want a new version. There'll be all kinds of things that weren't possible before. Like right now, banks aren't able to deliver like good websites, and I think there may be a day where a bank could have a good website. There'll be software that is, for whatever reason, infeasible to deliver right now. It will be feasible to finally deliver. And I think this is going to be code and I'm still going to want to look at it sometimes.
但我認爲它只會讓擁有更多代碼成爲可能,因爲維護它更便宜,創建它更便宜,如果我們想要新版本,重寫它也更便宜。將會有各種以前不可能的事情。就像現在,銀行無法提供好的網站,我認爲可能會有一天,銀行可以擁有一個好網站。將會有一些軟件,由於某種原因,現在無法交付。最終將可以交付。我認爲這將是代碼,我有時仍然會想看看它。

Nathan: Yeah, it's an incredible commentary on the power of human incentives and the corruption of the banking system that a bank having a good website is the day before we achieve AGI.

Nathan:是啊,這是對人類激勵的力量和銀行體系腐敗的一個令人難以置信的評論,銀行擁有一個好網站,是我們實現AGI前一天的事。

Max: Ha ha ha ha.
Max: 哈哈哈哈。

Antonio: If you look at Twitter right now, it's like every post is saying AGI is coming out next month. I don't know. I don't really know. The honest answer for me is that I don't know. It's possible. That's one thing that annoys me about AI, just how opaque some of these things are.
Antonio:如果你現在看Twitter,就像每個帖子都在說下個月AGI就要出來了。我不知道。我真的不知道。對我來說,誠實的回答是我不知道。這是有可能的。這是AI中讓我煩惱的一點,只是這些東西中有些是多麼不透明。

In the past, with technology in general, if there were hard problems or complicated things, I could sit down and at least try to understand them and maybe even create them. With AI, unless you want to do something with ChatGPT or Claude, you have to spend millions of dollars. That part, I don't love that.
在過去,對於技術來說,如果有困難的問題或複雜的事情,我可以坐下來,至少嘗試去理解它們,甚至可能創造它們。對於AI,除非你想用ChatGPT或Claude做點什麼,否則你必須花上百萬美元。那部分,我不喜歡。

That's where my doubts come from, because it's very possible that engineers and researchers from these companies are right there with AGI, right there with super human intelligence, but how much of it is hype? I don't know.
這就是我的疑慮所在,因爲很可能這些公司的工程師和研究人員真的已經快做出AGI了,快做出超越人類的智能了,但其中有多少是炒作?我不知道。

Thorsten: Here's a question that I'd love your thoughts on. I use ChatGPT a lot to do the "chores" of programming, some CSS stuff, or some JavaScript, or I use it to generate a Python script for me to talk to the Google API, and it saves me a full day of headaches and trying to find out where to put the OAuth token and whatnot. But with lower-level programming, say async Rust, you can see how it starts to break down. You can see that this other thing seems relatively easy for the AI but this other thing, something happens there. And what I'm wondering is, is that a question of scale? Did it just see more JavaScript? Did it see more HTML than systems Rust code because it scraped the whole internet?
Thorsten:這裏有一個問題,我很想聽聽你們的想法。我經常用ChatGPT來做編程裏的"雜務",一些CSS的東西,或者一些JavaScript,或者讓它給我生成一個Python腳本去調用Google的API,它幫我省下了一整天的頭痛,省去了琢磨OAuth令牌該放在哪裏之類的麻煩。但對於更底層的編程,比如異步Rust,你能看到它是怎麼開始露怯的。你能看到這件事對AI來說似乎相對容易,而另一件事上就不太對勁了。我想知道的是,這是不是規模的問題?它是不是隻是見過更多的JavaScript?是不是因爲它抓取了整個互聯網,所以見過的HTML比系統級的Rust代碼更多?

Max: I think solving problems that have been solved a lot of times, that require a slight tweak — I think it's great it works that way. Those are boring things because they've been solved a lot of times and I think the LLM is great at knocking those out. And some of the stuff that we do, which has been solved — I'm not going to say we're doing things that have never been done before every day — but a lot of the stuff we're doing day-to-day has not been solved that many times in the world. And it's fun. That's why I like doing it. So I'm not that upset that the LLM can't totally do it for me. But when I do stuff that is super standard, I love that the LLM can just complete it, just solve it.
Max:我認爲,多次解決過的問題,需要稍作調整的問題--我認爲它能這樣工作很好。這些都是無聊的事情,因爲它們已經解決了很多次,我認爲LLM很擅長解決這些問題。我們做的一些事情,已經解決了--我不是說我們每天都在做以前從未做過的事情--但我們日常工作中的很多事情,在世界上還沒有被解決過那麼多次。而且很有趣。這就是我喜歡做這件事的原因。所以我並不那麼沮喪,LLM不能完全爲我做這件事。但當我做一些超級標準的東西時,我喜歡LLM可以直接完成它,解決它。

Nathan: I want the LLM to be able to do as much as it possibly can for me. But yeah, I do think that it hasn't seen a lot of Rust. I mean, I've talked to people in the space that have just stated that. Like they were excited, "oh, you're open sourcing Zed? I'm excited to get more training data in Rust." And I'm like, "me too", other than, you know, competitors just sitting down and saying, "build me a fast code editor" and then it's already learned how to do that and all this work comes to nothing. I don't know.
Nathan:我希望LLM能爲我做盡可能多的事情。但是,我確實認爲它沒有看到很多Rust。我的意思是,我和這個領域的人交談過,他們就是這麼說的。比如他們很興奮,"哦,你要開源Zed了?我很高興能獲得更多的Rust訓練數據。"我也是這麼想的,除了,你知道,競爭對手坐下來說,"給我建一個快速的代碼編輯器",然後它已經學會了如何做,所有這些工作都白費了。我不知道。

But also if that were true, ultimately I'm just excited about advancing the state of human progress. So if the thing I'm working on ends up being irrelevant, maybe I should go work on something else. I mean, that'd be disappointing, I would like to be successful... Anyway, I don't know how I got on that tangent.

但如果這是真的,最終我只是對推進人類進步的狀態感到興奮。所以如果我正在做的事情最終變得無關緊要,也許我應該去做別的事情。我的意思是,那會令人失望,我希望能成功......不管怎樣,我不知道我是怎麼說到這個問題上的。

But writing Python with it, which I don't want to write but I need to because I want to look at histograms of frame times and compare them? Thank you. I had no interest in writing that and it did a great job and I'm good.
但用它寫Python,我不想寫,但我需要寫,因爲我想看看幀時間的直方圖並比較它們?謝謝。我對寫這個沒有興趣,它做得很好,我很滿意。

Antonio: There's also another meta point, which I guess we didn't really discuss. Even in a world where the AI can generate a code editor, at some point you have to decide how do you want this feature to work? And I guess the AI could help you with that, but I guess there'll be a human directing that and understanding what the requirements are and what are you even trying to do, right?
Antonio:還有另一個元點,我想我們沒有真正討論過。即使在AI可以生成代碼編輯器的世界裏,在某個時候你也必須決定你希望這個功能如何工作?我猜AI可以幫你做到這一點,但我想會有一個人來指導它,理解需求是什麼,你究竟想做什麼,對吧?

Maybe that also gets wiped out by AGI at some point, but I don't know. Code at the end of the day is just an expression of ideas, yeah, and the knowledge that's within the company or within a group of individuals.

也許這在某個時候也會被AGI抹去,但我不知道。代碼歸根結底只是思想的表達,是的,是公司內部或一羣人內部的知識。

I'm excited about AI in the context of collaboration. I think that would be like a really good angle for Zed as a collaboration platform.
我對協作背景下的AI感到興奮。我認爲這對於Zed作爲一個協作平臺來說會是一個很好的角度。

We've talked about querying tree-sitter for certain functions or the language server for references and that's some context you can give the AI. But what about all the conversations that happened? Like what about — going back to our previous interview — if it's true that code is a distilled version of all the conversations that have taken place, well, that's great context for the AI to help you write that code.
我們已經討論過查詢tree-sitter獲取某些函數或語言服務器獲取引用,這是你可以給AI的一些上下文。但是所有發生過的對話呢?就像--回到我們之前的採訪--如果代碼真的是所有已經發生的對話的精華版本,那麼,這對於AI幫助你編寫代碼來說是非常好的上下文。

Nathan: And we can capture that context with voice-to-text models or do multi-modal shit.
Nathan:我們可以用語音轉文本模型捕獲這些上下文,或者做多模態的事情。

I mean, my real dream is to just create an AI simulation of Max or Antonio — GPtonio, you know? I love coding with them because it lets me kind of float along at a higher level of abstraction a lot of times. I don't know, maybe I'm just being lazy. I should be typing more, but sometimes I feel like when I'm typing, I get in the way or whatever. I just love pairing and I love being that navigator and being engaged. So a multimodal model that could talk to me and write code as it's talking and hear what I'm saying. And I can also get in there and type here and there. That'd be amazing. That's not the same thing as collaboration though.
我的意思是,我真正的夢想就是創建一個Max或Antonio的AI模擬--GPtonio,你知道嗎?我喜歡和他們一起編碼,因爲它讓我在很多時候能在更高的抽象層次上漂浮。我不知道,也許我只是懶惰。我應該打字打得更多,但有時我覺得當我打字時,我會礙事或什麼的。我就是喜歡結對編程,喜歡當導航員,喜歡投入其中。所以一個多模態模型可以和我說話,在說話的同時寫代碼,聽我說什麼。我也可以進去,到處打字。那就太棒了。但這和協作不是一回事。

But it would learn from watching us collaborate. That's my main thing. You know, yeah.
但它會從觀察我們的協作中學習。這是我的主要想法。你知道,是的。

Thorsten: You could train the AI based on all the edits that Antonio has done over the last year or something. And all the conversations.
Thorsten:你可以根據Antonio過去一年左右所做的所有編輯來訓練AI。還有所有的對話。

Antonio: Right, why did those edits take place? What was the reasoning? Why is it better to do things this way and not that way? What's the internal knowledge, the implicit knowledge that's not written down anywhere? We have it just because of shared context. Just sharing that context with the AI.
Antonio:對,爲什麼要進行這些編輯?推理是什麼?爲什麼用這種方式做事更好,而不是那種方式?什麼是內部知識,哪些是沒有寫在任何地方的隱性知識?我們有這些知識只是因爲共享的背景。只是與AI分享這個背景。

Nathan: When we started Zed, we always had this idea that wouldn't it be cool to just put my cursor on a character and say, show me that character being written. This idea that there was all this conversation and context tied to all this code and we could store it and make it easily accessible. But even that mode is like, it's a lot of shit to comb through, right? So having a tool that could be really good at distilling that down and presenting it in an intelligent way, that's amazing. And that's a really exciting development for collaboration, bringing all this data, that we could potentially capture but aren't really yet, to our fingertips in a new way.
Nathan:當我們開始Zed的時候,我們總是有這樣一個想法,如果我把光標放在一個字符上,說,給我看看這個字符是如何寫成的,那不是很酷嗎。有這樣一個想法,所有這些對話和上下文都與所有這些代碼聯繫在一起,我們可以存儲它並使它易於訪問。但即使是那種模式,也有很多事情要梳理,對吧?所以有一個工具可以很好地提煉它並以智能的方式呈現出來,那就太棒了。這對協作來說是一個非常令人興奮的發展,以一種新的方式將所有這些我們可能捕獲但實際上還沒有捕獲的數據帶到我們的指尖。

Thorsten: One last question, a look into the future. Do you know what Andy Warhol said about Coca-Cola? That it's the same whether the president drinks it or you drink it, that there's no premium Coca-Cola and normal Coca-Cola, but that everybody gets the same. Programming is like that to me. I can write code online with the best of the world and it looks exactly the same on GitHub. My avatar is next to their avatar. It's a level playing field. I can put my website up and it's also thorstenball.com, right next to the websites of large companies. There's no exclusive club. And what I keep thinking about is that all you ever had to have to program was a computer, even a RaspberryPi is enough. But now with AI and LLMs, suddenly things have started to become really expensive. To play in the AI game, you need a lot of money, or GPUs. Do you think about that? Do you see the shift or do you think that it's always been the case that if you wanted to have like a multi-million-users scalable web service you had to have money?
Thorsten:最後一個問題,展望未來。你知道安迪·沃霍爾對可口可樂說過什麼嗎?不管是總統喝它還是你喝它,都是一樣的,沒有高級可口可樂和普通可口可樂之分,但每個人都能得到一樣的東西。編程對我來說就是這樣。我可以和世界上最優秀的人一起在網上寫代碼,在GitHub上看起來一模一樣。我的頭像就在他們的頭像旁邊。這是一個公平的競爭。我可以把我的網站放上去,它也是thorstenball.com,就在大公司網站的旁邊。沒有排他性俱樂部。我一直在想的是,你要編程所需要的只是一臺電腦,即使是樹莓派也足夠了。但現在有了AI和LLM,突然之間事情開始變得非常昂貴。要玩AI遊戲,你需要很多錢,或者GPU。你考慮過這個嗎?你看到這種轉變了嗎,或者你認爲如果你想擁有一個可擴展的、擁有數百萬用戶的網絡服務,你總是需要錢?

Antonio: It might also just be that today the cost of the hardware is because we're not there yet technologically, right? Things have gotten a lot cheaper in CPU land, there's so many of them now. So, I could see a world in which things become a commodity because they become so cheap.
Antonio:也可能只是因爲我們在技術上還沒有達到那個程度,硬件的成本纔會這麼高,對吧?在CPU領域,事情已經變得便宜了很多,現在有這麼多CPU。所以,我可以想象一個世界,在這個世界裏,事物因爲變得如此便宜而成爲商品。

Nathan: Think about the late 70s, right? Ken Thompson and Dennis Ritchie, writing Unix on a teletype printer, hooked to a DEC PDP-11 that was up to the ceiling of my room in here. Right? And talk about the democracy of access to compute. That wasn't always the case. It just seemed like we were in an era where compute ceased to be the limiting factor on innovation for a really long time. But maybe that's true again now and who knows where it's going.
Nathan:想想70年代末,對吧?Ken Thompson和Dennis Ritchie在一臺電傳打字機上寫Unix,它連着一臺有我這個房間天花板那麼高的DEC PDP-11。對吧?再說說獲取算力的民主化,那可不是一直都有的。只是感覺我們曾處在一個時代,在很長一段時間裏,算力不再是創新的限制因素。但也許現在它又成了限制因素,誰知道接下來會怎樣。

Thorsten: That's beautiful. Nice.
Thorsten:那真是太美了。很好。
