Note: You can obtain the sources presented here at https://github.com/ms1963/goedit/
An Entertaining Journey Through Terminal UI Programming and LLM Integration
INTRODUCTION: WHY BUILD ANOTHER TEXT EDITOR?
Let me tell you a story. It was a dark and stormy night, and I was SSH'd into a remote server with nothing but vim and my wits. I needed to write some code, but I also wanted AI assistance. Opening a browser? Not an option. Using an IDE? Too heavy. What I needed was something simple, something minimal, something that could run anywhere Go runs and give me AI superpowers right in my terminal.
That's when the idea for GoEdit was born.
GoEdit is not trying to be the next Visual Studio Code or Emacs. It's not even trying to be nano or vim. GoEdit is a minimal text editor with one killer feature: built-in AI assistance that doesn't freeze your editor while thinking. You can keep typing while the AI ponders your question. It's like having a very patient assistant who never interrupts you.
The beauty of GoEdit lies in its simplicity. The entire editor is about one thousand lines of Go code spread across four files. You can read it, understand it, and modify it in an afternoon. It compiles to a single binary that runs on Windows, Linux, and macOS without any dependencies except for a terminal emulator. No Python virtual environments, no Node modules folder larger than your operating system, no Docker containers. Just pure, simple Go.
WHY OLLAMA FOR THE LLM SERVER?
You might wonder why we chose Ollama as our LLM backend instead of calling OpenAI's API or running some Python-based solution. The answer is beautifully simple: Ollama runs locally, it's fast, it's free, and it doesn't send your code to the cloud.
Think about it. When you're editing sensitive code or personal documents, do you really want every keystroke and every question sent to a server somewhere on the internet? With Ollama, your data stays on your machine. The AI runs locally. Your secrets remain secret.
Ollama also provides a clean REST API that's trivial to integrate with. No complex authentication flows, no API keys to manage, no rate limits to worry about. You just send a POST request with your prompt and get back a JSON response. It's so simple that our entire Ollama client implementation is less than two hundred lines of code.
Plus, Ollama supports multiple models. Want to use Llama 2 for general text? Done. Need CodeLlama for programming assistance? Just switch the model flag. Prefer Mistral for its speed? Go for it. The same simple API works with all of them.
THE ARCHITECTURE: KEEPING IT SIMPLE
Before we dive into the code, let's talk about the overall architecture. GoEdit consists of four main components, each in its own file:
First, we have the main editor logic in main.go. This handles the user interface, keyboard input, screen rendering, and the main event loop. It's the conductor of our little orchestra.
Second, there's the buffer implementation in buffer.go. This manages the actual text data, handles file operations, and implements undo and redo functionality. Think of it as the memory of our editor.
Third, we have a tiny cursor module in cursor.go. This is so simple it's almost embarrassing, but it keeps our code organized. The cursor just tracks a row and column position.
Fourth, and most exciting, is our Ollama client in ollama.go. This handles all communication with the Ollama server, including the tricky bits like cancellation and error handling.
The key architectural decision we made was to keep the AI integration non-blocking. When you ask the AI a question, the editor doesn't freeze. You can keep typing, editing, saving files, whatever you want. The AI works in the background, and when it's done, it politely notifies you via the status bar. This is achieved using Go's goroutines and channels, which make concurrent programming almost pleasant.
STEP ONE: SETTING UP THE PROJECT STRUCTURE
Let's start our journey by creating the basic project structure. Open your terminal and create a new directory for our editor. Navigate into it and initialize a Go module.
The first thing we need to do is tell Go that we're creating a new module. This is like giving our project a name and telling Go how to manage its dependencies. We'll call our module "goedit" because, well, it's a Go editor.
go mod init goedit
This creates a go.mod file that will track our dependencies. Speaking of dependencies, we only need one: tcell. This is a fantastic library for building terminal user interfaces in Go. It handles all the messy details of terminal control codes, keyboard input, and screen rendering across different operating systems.
To add tcell to our project, we run:
go get github.com/gdamore/tcell/v2
Now create four empty files: main.go, buffer.go, cursor.go, and ollama.go. These will hold our code. Your directory should now look like this:
goedit/
    go.mod
    go.sum
    main.go
    buffer.go
    cursor.go
    ollama.go
The go.sum file was created automatically when we added tcell. It contains cryptographic checksums of our dependencies to ensure reproducible builds.
STEP TWO: THE CURSOR - STARTING SIMPLE
Let's begin with the simplest component: the cursor. A cursor in a text editor is just a position, defined by a row and a column. That's it. We could make it more complex, but why? Simplicity is our friend.
Open cursor.go and let's write our cursor implementation. We'll define a struct to hold the position and a constructor function to create new cursors.
package main

type Cursor struct {
    Row int
    Col int
}

func NewCursor() *Cursor {
    return &Cursor{Row: 0, Col: 0}
}
That's the entire cursor implementation. Seriously. We don't need methods for moving the cursor because the editor logic will handle that by directly modifying the Row and Col fields. This is a deliberate choice to keep things simple and transparent.
You might be thinking, "Wait, shouldn't we encapsulate the fields and provide getter and setter methods?" In many languages, yes. But Go encourages simplicity. If a struct is simple and its fields are self-explanatory, just make them public. Don't add complexity for the sake of theoretical purity.
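To make that concrete, here is a hedged sketch of the style in action. The editor mutates Row and Col directly; `clampCol` is a hypothetical helper invented for this example (it is not part of GoEdit), showing the kind of bounds-fixing the editor logic does after a move:

```go
package main

import "fmt"

// Cursor mirrors the struct above: just a bare position.
type Cursor struct {
    Row int
    Col int
}

// clampCol is a hypothetical helper illustrating the style GoEdit uses:
// mutate the public fields directly, then clamp to the valid range.
func clampCol(c *Cursor, lineLen int) {
    if c.Col > lineLen {
        c.Col = lineLen
    }
    if c.Col < 0 {
        c.Col = 0
    }
}

func main() {
    c := &Cursor{Row: 2, Col: 10}
    clampCol(c, 5) // cursor moved to a line only 5 characters long
    fmt.Println(c.Row, c.Col) // 2 5
}
```

No getters, no setters, no ceremony: the struct is data, and the logic that owns it adjusts it in place.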
STEP THREE: THE BUFFER - MANAGING TEXT DATA
Now we get to something more interesting: the buffer. The buffer is where we store the actual text being edited. It needs to handle multiple lines, support insertions and deletions, manage undo and redo, and save and load files.
Let's start with the basic structure. A buffer is essentially a slice of strings, where each string represents one line of text. We'll also track whether the buffer has been modified, the filename, and stacks for undo and redo operations.
package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

const maxUndoLevels = 50

type Buffer struct {
    lines     []string
    filename  string
    modified  bool
    undoStack []BufferState
    redoStack []BufferState
}

type BufferState struct {
    lines     []string
    cursorRow int
    cursorCol int
}
The BufferState struct captures a snapshot of the buffer at a particular moment, including the cursor position. This allows us to restore not just the text, but also where the cursor was when the user hits undo.
Now let's implement the constructor. When creating a new buffer, we need to either load an existing file or start with an empty buffer. If the file doesn't exist, that's okay - we'll just start fresh.
func NewBuffer(filename string) (*Buffer, error) {
    b := &Buffer{
        lines:     []string{""},
        filename:  filename,
        modified:  false,
        undoStack: make([]BufferState, 0, maxUndoLevels),
        redoStack: make([]BufferState, 0, maxUndoLevels),
    }
    if filename != "" {
        if err := b.Load(); err != nil {
            if !os.IsNotExist(err) {
                return nil, err
            }
        }
    }
    return b, nil
}
Notice how we initialize the buffer with one empty line. This is important because we always want at least one line to exist. An empty buffer isn't truly empty - it has one empty line. This simplifies a lot of our logic later because we never have to check if the buffer is completely empty.
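The editor code later calls `LineCount` and `GetLine` on the buffer; those accessors aren't listed in this section, but the one-empty-line invariant makes them trivial. The definitions below are plausible sketches consistent with how the render code uses them, not the verbatim GoEdit source:

```go
package main

import "fmt"

// A stripped-down Buffer carrying only what this sketch needs.
type Buffer struct {
    lines []string
}

// LineCount never returns 0, because the buffer invariant guarantees
// at least one (possibly empty) line.
func (b *Buffer) LineCount() int {
    return len(b.lines)
}

// GetLine returns the requested line, or "" for an out-of-range row,
// so callers never have to bounds-check first.
func (b *Buffer) GetLine(row int) string {
    if row < 0 || row >= len(b.lines) {
        return ""
    }
    return b.lines[row]
}

func main() {
    b := &Buffer{lines: []string{""}}
    fmt.Println(b.LineCount()) // 1, never 0
    fmt.Println(b.GetLine(5))  // out of range: empty string, no panic
}
```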
Loading a file is straightforward. We open it, read it line by line, and store each line in our slice. We use a scanner because it handles different line ending styles automatically.
func (b *Buffer) Load() error {
    file, err := os.Open(b.filename)
    if err != nil {
        return err
    }
    defer file.Close()
    b.lines = []string{}
    scanner := bufio.NewScanner(file)
    const maxCapacity = 1024 * 1024
    buf := make([]byte, maxCapacity)
    scanner.Buffer(buf, maxCapacity)
    for scanner.Scan() {
        b.lines = append(b.lines, scanner.Text())
    }
    if err := scanner.Err(); err != nil {
        return err
    }
    if len(b.lines) == 0 {
        b.lines = []string{""}
    }
    b.modified = false
    return nil
}
We raise the scanner's maximum token size to one megabyte (the default is only 64 KiB). A line longer than that makes the scanner return an error instead of allocating without bound, so a file containing one gigantic line can't exhaust our editor's memory. It's a simple safeguard that could save us from embarrassment.
Saving is a bit more careful. We don't want to lose data if something goes wrong during the save operation, so we use the classic technique of writing to a temporary file first, then atomically renaming it.
func (b *Buffer) Save() error {
    if b.filename == "" {
        return fmt.Errorf("no filename specified")
    }
    tempFile := b.filename + ".tmp"
    file, err := os.Create(tempFile)
    if err != nil {
        return err
    }
    writer := bufio.NewWriter(file)
    for i, line := range b.lines {
        if i > 0 {
            if _, err := writer.WriteString("\n"); err != nil {
                file.Close()
                os.Remove(tempFile)
                return err
            }
        }
        if _, err := writer.WriteString(line); err != nil {
            file.Close()
            os.Remove(tempFile)
            return err
        }
    }
    if err := writer.Flush(); err != nil {
        file.Close()
        os.Remove(tempFile)
        return err
    }
    if err := file.Close(); err != nil {
        os.Remove(tempFile)
        return err
    }
    if err := os.Rename(tempFile, b.filename); err != nil {
        os.Remove(tempFile)
        return err
    }
    b.modified = false
    return nil
}
This might seem overly cautious, but imagine if the disk filled up halfway through writing. Without the temporary file approach, we'd corrupt the original file. With it, the original stays intact and we just delete the incomplete temporary file.
Now let's implement the basic editing operations. Inserting a character is simple: split the line at the cursor position, insert the character, and join it back together.
func (b *Buffer) InsertChar(row, col int, ch rune) {
    if row < 0 || row >= len(b.lines) {
        return
    }
    line := b.lines[row]
    if col < 0 {
        col = 0
    }
    if col > len(line) {
        col = len(line)
    }
    newLine := line[:col] + string(ch) + line[col:]
    b.lines[row] = newLine
    b.modified = true
}
Deleting a character is slightly trickier. If we're in the middle of a line, we just remove the character before the cursor. But if we're at the beginning of a line, we need to join this line with the previous one.
func (b *Buffer) DeleteChar(row, col int) {
    if row < 0 || row >= len(b.lines) {
        return
    }
    if col > 0 {
        line := b.lines[row]
        if col > len(line) {
            col = len(line)
        }
        b.lines[row] = line[:col-1] + line[col:]
        b.modified = true
    } else if row > 0 {
        prevLine := b.lines[row-1]
        currentLine := b.lines[row]
        b.lines[row-1] = prevLine + currentLine
        b.lines = append(b.lines[:row], b.lines[row+1:]...)
        b.modified = true
    }
}
Inserting a newline splits the current line at the cursor position. The part before the cursor stays on the current line, and the part after becomes a new line.
func (b *Buffer) InsertNewline(row, col int) {
    if row < 0 || row >= len(b.lines) {
        return
    }
    line := b.lines[row]
    if col < 0 {
        col = 0
    }
    if col > len(line) {
        col = len(line)
    }
    before := line[:col]
    after := line[col:]
    b.lines[row] = before
    newLines := make([]string, len(b.lines)+1)
    copy(newLines, b.lines[:row+1])
    newLines[row+1] = after
    copy(newLines[row+2:], b.lines[row+1:])
    b.lines = newLines
    b.modified = true
}
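To watch the splicing logic work outside the editor, here is a small self-contained exercise of a trimmed-down buffer. It reproduces InsertChar and InsertNewline exactly as defined above, carrying only the fields the example needs:

```go
package main

import "fmt"

// A trimmed-down Buffer with two of the editing operations from the
// article, reproduced so this example runs on its own.
type Buffer struct {
    lines    []string
    modified bool
}

func (b *Buffer) InsertChar(row, col int, ch rune) {
    if row < 0 || row >= len(b.lines) {
        return
    }
    line := b.lines[row]
    if col < 0 {
        col = 0
    }
    if col > len(line) {
        col = len(line)
    }
    b.lines[row] = line[:col] + string(ch) + line[col:]
    b.modified = true
}

func (b *Buffer) InsertNewline(row, col int) {
    if row < 0 || row >= len(b.lines) {
        return
    }
    line := b.lines[row]
    if col < 0 {
        col = 0
    }
    if col > len(line) {
        col = len(line)
    }
    before, after := line[:col], line[col:]
    b.lines[row] = before
    newLines := make([]string, len(b.lines)+1)
    copy(newLines, b.lines[:row+1])
    newLines[row+1] = after
    copy(newLines[row+2:], b.lines[row+1:])
    b.lines = newLines
    b.modified = true
}

func main() {
    b := &Buffer{lines: []string{""}}
    for i, ch := range "hi!" { // type three characters
        b.InsertChar(0, i, ch)
    }
    b.InsertNewline(0, 2)      // press Enter after "hi"
    fmt.Println(b.lines)       // b.lines is now ["hi", "!"]
}
```

Note that the byte-index arithmetic here (like `range` over a string) is only exact for ASCII text; multi-byte runes would need rune-aware column handling.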
For undo and redo, we need to save the buffer state before each modification. The SaveState method creates a snapshot of the current buffer.
func (b *Buffer) SaveState(cursorRow, cursorCol int) {
    linesCopy := make([]string, len(b.lines))
    copy(linesCopy, b.lines)
    state := BufferState{
        lines:     linesCopy,
        cursorRow: cursorRow,
        cursorCol: cursorCol,
    }
    b.undoStack = append(b.undoStack, state)
    if len(b.undoStack) > maxUndoLevels {
        b.undoStack = b.undoStack[1:]
    }
    b.redoStack = b.redoStack[:0]
}
When the user hits undo, we pop a state from the undo stack, push the current state onto the redo stack, and restore the old state.
func (b *Buffer) Undo() (int, int, bool) {
    if len(b.undoStack) == 0 {
        return 0, 0, false
    }
    currentState := BufferState{
        lines: make([]string, len(b.lines)),
    }
    copy(currentState.lines, b.lines)
    b.redoStack = append(b.redoStack, currentState)
    state := b.undoStack[len(b.undoStack)-1]
    b.undoStack = b.undoStack[:len(b.undoStack)-1]
    b.lines = make([]string, len(state.lines))
    copy(b.lines, state.lines)
    b.modified = true
    return state.cursorRow, state.cursorCol, true
}
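Redo is the mirror image, and it isn't listed here. A sketch that matches the Undo above (a plausible reconstruction, not the verbatim GoEdit source) pops from the redo stack, pushes the current lines onto the undo stack, and restores the popped snapshot:

```go
package main

import "fmt"

// Minimal copies of the types from buffer.go, so this sketch runs alone.
type BufferState struct {
    lines     []string
    cursorRow int
    cursorCol int
}

type Buffer struct {
    lines     []string
    modified  bool
    undoStack []BufferState
    redoStack []BufferState
}

// Redo mirrors Undo: pop a snapshot from the redo stack, push the
// current lines onto the undo stack, restore the popped state.
func (b *Buffer) Redo() (int, int, bool) {
    if len(b.redoStack) == 0 {
        return 0, 0, false
    }
    current := BufferState{lines: make([]string, len(b.lines))}
    copy(current.lines, b.lines)
    b.undoStack = append(b.undoStack, current)
    state := b.redoStack[len(b.redoStack)-1]
    b.redoStack = b.redoStack[:len(b.redoStack)-1]
    b.lines = make([]string, len(state.lines))
    copy(b.lines, state.lines)
    b.modified = true
    return state.cursorRow, state.cursorCol, true
}

func main() {
    b := &Buffer{
        lines:     []string{"old"},
        redoStack: []BufferState{{lines: []string{"new"}, cursorCol: 3}},
    }
    row, col, ok := b.Redo()
    fmt.Println(ok, row, col, b.lines[0]) // true 0 3 new
}
```

One subtlety worth noticing: the Undo shown above snapshots only the lines (not the cursor) when pushing onto the redo stack, so a redo restored from that snapshot would put the cursor back at the origin unless the cursor is also recorded there.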
STEP FOUR: THE OLLAMA CLIENT - TALKING TO THE AI
Now we get to the exciting part: integrating with Ollama. Our Ollama client needs to do several things. It needs to send prompts to the Ollama server, receive responses, handle errors gracefully, support cancellation, and check if Ollama is running and if the requested model is available.
Let's start with the basic structure. The client needs to know the Ollama server URL and which model to use. We'll also create an HTTP client with a reasonable timeout.
package main

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "strings"
    "time"
)

type OllamaClient struct {
    baseURL string
    model   string
    client  *http.Client
}

func NewOllamaClient(baseURL, model string) *OllamaClient {
    if baseURL == "" {
        baseURL = "http://localhost:11434"
    }
    if model == "" {
        model = "llama2"
    }
    return &OllamaClient{
        baseURL: baseURL,
        model:   model,
        client: &http.Client{
            Timeout: 300 * time.Second,
        },
    }
}
We set a five-minute timeout for HTTP requests. This might seem long, but remember that LLM inference can take a while, especially on slower hardware or with larger models. We'd rather wait than fail prematurely.
The Ollama API expects JSON requests with a specific structure. We define structs to represent these requests and responses.
type GenerateRequest struct {
    Model  string `json:"model"`
    Prompt string `json:"prompt"`
    Stream bool   `json:"stream"`
}

type GenerateResponse struct {
    Model         string `json:"model"`
    Response      string `json:"response"`
    Done          bool   `json:"done"`
    Context       []int  `json:"context,omitempty"`
    TotalDuration int64  `json:"total_duration,omitempty"`
    Error         string `json:"error,omitempty"`
}
We set Stream to false because we want to receive the complete response at once rather than streaming it token by token. Streaming would be more interactive, but it would also complicate our code significantly.
Now for the main generation function. This is where the magic happens. We need to support cancellation, which means using Go's context package.
func (c *OllamaClient) GenerateWithCancel(prompt string, cancel <-chan bool) (string, error) {
    if prompt == "" {
        return "", fmt.Errorf("empty prompt")
    }
    reqBody := GenerateRequest{
        Model:  c.model,
        Prompt: prompt,
        Stream: false,
    }
    jsonData, err := json.Marshal(reqBody)
    if err != nil {
        return "", fmt.Errorf("failed to marshal request: %w", err)
    }
    url := fmt.Sprintf("%s/api/generate", c.baseURL)
    ctx, ctxCancel := context.WithCancel(context.Background())
    defer ctxCancel()
    if cancel != nil {
        go func() {
            select {
            case <-cancel:
                ctxCancel()
            case <-ctx.Done():
            }
        }()
    }
    req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonData))
    if err != nil {
        return "", fmt.Errorf("failed to create request: %w", err)
    }
    req.Header.Set("Content-Type", "application/json")
    resp, err := c.client.Do(req)
    if err != nil {
        if ctx.Err() == context.Canceled {
            return "", fmt.Errorf("cancelled")
        }
        if strings.Contains(err.Error(), "connection refused") {
            return "", fmt.Errorf("cannot connect to Ollama (is it running?)")
        }
        return "", fmt.Errorf("request failed: %w", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        body, _ := io.ReadAll(resp.Body)
        return "", fmt.Errorf("ollama returned status %d: %s", resp.StatusCode, string(body))
    }
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        if ctx.Err() == context.Canceled {
            return "", fmt.Errorf("cancelled")
        }
        return "", fmt.Errorf("failed to read response: %w", err)
    }
    var genResp GenerateResponse
    if err := json.Unmarshal(body, &genResp); err != nil {
        return "", fmt.Errorf("failed to parse response: %w", err)
    }
    if genResp.Error != "" {
        return "", fmt.Errorf("ollama error: %s", genResp.Error)
    }
    response := strings.TrimSpace(genResp.Response)
    if response == "" {
        return "", fmt.Errorf("received empty response from model")
    }
    return response, nil
}
The cancellation mechanism works by creating a context that can be cancelled. We launch a goroutine that watches the cancel channel. If it receives a signal, it cancels the context, which aborts the HTTP request. This allows the user to press Escape and stop a long-running AI request.
We also need helper functions to check if Ollama is running and if the requested model is available. These are simple HTTP GET requests to Ollama's tags endpoint.
func (c *OllamaClient) IsAvailable() bool {
    url := fmt.Sprintf("%s/api/tags", c.baseURL)
    client := &http.Client{Timeout: 2 * time.Second}
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
    if err != nil {
        return false
    }
    resp, err := client.Do(req)
    if err != nil {
        return false
    }
    defer resp.Body.Close()
    return resp.StatusCode == http.StatusOK
}
func (c *OllamaClient) CheckModel() error {
    url := fmt.Sprintf("%s/api/tags", c.baseURL)
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
    if err != nil {
        return fmt.Errorf("failed to create request: %w", err)
    }
    resp, err := c.client.Do(req)
    if err != nil {
        return fmt.Errorf("cannot connect to Ollama: %w", err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return fmt.Errorf("failed to read response: %w", err)
    }
    var result struct {
        Models []struct {
            Name string `json:"name"`
        } `json:"models"`
    }
    if err := json.Unmarshal(body, &result); err != nil {
        return fmt.Errorf("failed to parse response: %w", err)
    }
    modelFound := false
    for _, model := range result.Models {
        if strings.HasPrefix(model.Name, c.model) {
            modelFound = true
            break
        }
    }
    if !modelFound {
        availableModels := make([]string, 0, len(result.Models))
        for _, model := range result.Models {
            availableModels = append(availableModels, model.Name)
        }
        return fmt.Errorf("model '%s' not found. Available: %s",
            c.model, strings.Join(availableModels, ", "))
    }
    return nil
}
STEP FIVE: THE EDITOR - BRINGING IT ALL TOGETHER
Now we reach the heart of our editor: the main editor logic. This is where we handle keyboard input, render the screen, manage different modes, and coordinate with the AI.
The editor has several modes. Normal mode is for regular editing. Find mode is for searching text. Goto mode is for jumping to a specific line. LLM mode is for entering prompts for the AI. Filename mode is for entering a filename when saving a new file.
Let's start with the editor structure:
package main

import (
    "flag"
    "fmt"
    "log"
    "os"
    "path/filepath"
    "strings"
    "sync"
    "time"

    "github.com/gdamore/tcell/v2"
)

type Editor struct {
    screen       tcell.Screen
    buffer       *Buffer
    cursor       *Cursor
    offsetRow    int
    offsetCol    int
    width        int
    height       int
    statusMsg    string
    mode         EditorMode
    findQuery    string
    clipboard    string
    llmClient    *OllamaClient
    llmPrompt    string
    llmResponse  string
    inputBuffer  string
    quitAttempts int
    aiMutex      sync.Mutex
    aiInProgress bool
    aiCancel     chan bool
}

type EditorMode int

const (
    ModeNormal EditorMode = iota
    ModeFind
    ModeGoto
    ModeLLM
    ModeFilename
)
The aiMutex protects the aiInProgress flag from race conditions. The aiCancel channel is used to signal cancellation to the AI goroutine. These are crucial for our non-blocking AI integration.
Creating a new editor involves initializing the screen, loading the buffer, and setting up the initial state.
func NewEditor(filename, ollamaURL, model string) (*Editor, error) {
    screen, err := tcell.NewScreen()
    if err != nil {
        return nil, fmt.Errorf("failed to create screen: %w", err)
    }
    if err := screen.Init(); err != nil {
        return nil, fmt.Errorf("failed to initialize screen: %w", err)
    }
    screen.EnableMouse()
    screen.EnablePaste()
    screen.Clear()
    buffer, err := NewBuffer(filename)
    if err != nil {
        screen.Fini()
        return nil, fmt.Errorf("failed to create buffer: %w", err)
    }
    width, height := screen.Size()
    if height < 3 {
        screen.Fini()
        return nil, fmt.Errorf("terminal too small (need at least 3 lines)")
    }
    return &Editor{
        screen:       screen,
        buffer:       buffer,
        cursor:       NewCursor(),
        width:        width,
        height:       height - 2,
        statusMsg:    "Ctrl+Q: Quit | Ctrl+S: Save | Ctrl+L: AI | Ctrl+F: Find",
        mode:         ModeNormal,
        llmClient:    NewOllamaClient(ollamaURL, model),
        aiInProgress: false,
        aiCancel:     make(chan bool, 1),
    }, nil
}
The main event loop is beautifully simple. We poll for events, handle them, and render the screen. That's it.
func (e *Editor) Run() error {
    if e.screen == nil {
        return fmt.Errorf("screen not initialized")
    }
    defer e.screen.Fini()
    e.render()
    for {
        ev := e.screen.PollEvent()
        if ev == nil {
            continue
        }
        if !e.handleEvent(ev) {
            return nil
        }
        e.render()
    }
}
Event handling dispatches to different functions based on the event type. For keyboard events, we check if Escape was pressed to cancel AI requests, then delegate to the appropriate mode handler.
func (e *Editor) handleEvent(ev tcell.Event) bool {
    switch ev := ev.(type) {
    case *tcell.EventResize:
        e.width, e.height = ev.Size()
        if e.height < 3 {
            e.height = 3
        }
        e.height -= 2
        e.screen.Sync()
        return true
    case *tcell.EventKey:
        if ev.Key() == tcell.KeyEscape {
            e.aiMutex.Lock()
            if e.aiInProgress {
                select {
                case e.aiCancel <- true:
                default:
                }
                e.aiInProgress = false
                e.statusMsg = "AI request cancelled"
                e.aiMutex.Unlock()
                if e.mode == ModeLLM {
                    e.mode = ModeNormal
                }
                return true
            }
            e.aiMutex.Unlock()
        }
        return e.handleKey(ev)
    }
    return true
}
Normal mode handles most of the editing commands. Let's look at a few interesting ones. The Ctrl+L handler checks if Ollama is available before entering LLM mode. This prevents the user from entering a prompt only to discover that Ollama isn't running.
case tcell.KeyCtrlL:
    if err := e.checkOllamaSetup(); err != nil {
        e.statusMsg = fmt.Sprintf("AI unavailable: %v", err)
        return true
    }
    e.mode = ModeLLM
    e.inputBuffer = ""
    e.statusMsg = "Ask AI: "
The Ctrl+K handler inserts the AI response at the cursor position. It calculates where the cursor should end up after the insertion, which is tricky when the response contains multiple lines.
case tcell.KeyCtrlK:
    if e.llmResponse != "" {
        oldRow := e.cursor.Row
        oldCol := e.cursor.Col
        e.buffer.InsertText(e.cursor.Row, e.cursor.Col, e.llmResponse)
        lines := strings.Split(e.llmResponse, "\n")
        if len(lines) > 1 {
            e.cursor.Row = oldRow + len(lines) - 1
            if e.cursor.Row < 0 {
                e.cursor.Row = 0
            }
            lastLine := lines[len(lines)-1]
            e.cursor.Col = len(lastLine)
        } else {
            e.cursor.Col = oldCol + len(e.llmResponse)
        }
        e.ensureCursorValid()
        e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
        e.statusMsg = "AI response inserted at cursor"
    } else {
        e.statusMsg = "No AI response available. Use Ctrl+L to ask AI first"
    }
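The buffer's InsertText method isn't listed in the buffer section. A sketch consistent with the cursor arithmetic above (split the text on newlines, splice into the line at the cursor) could look like this; treat it as a plausible reconstruction, not the verbatim GoEdit source:

```go
package main

import (
    "fmt"
    "strings"
)

type Buffer struct {
    lines    []string
    modified bool
}

// InsertText splices a possibly multi-line string into the buffer at
// (row, col), the behavior the Ctrl+K cursor math above relies on.
func (b *Buffer) InsertText(row, col int, text string) {
    if row < 0 || row >= len(b.lines) {
        return
    }
    line := b.lines[row]
    if col < 0 {
        col = 0
    }
    if col > len(line) {
        col = len(line)
    }
    before, after := line[:col], line[col:]
    parts := strings.Split(text, "\n")
    if len(parts) == 1 {
        // Single-line insert: splice into the current line.
        b.lines[row] = before + text + after
    } else {
        // Multi-line insert: the first part joins "before", the last
        // part keeps "after", and the middle parts become new lines.
        parts[0] = before + parts[0]
        parts[len(parts)-1] = parts[len(parts)-1] + after
        newLines := make([]string, 0, len(b.lines)+len(parts)-1)
        newLines = append(newLines, b.lines[:row]...)
        newLines = append(newLines, parts...)
        newLines = append(newLines, b.lines[row+1:]...)
        b.lines = newLines
    }
    b.modified = true
}

func main() {
    b := &Buffer{lines: []string{"ab"}}
    b.InsertText(0, 1, "X\nY") // insert a two-line response between a and b
    fmt.Println(b.lines)       // [aX Yb]
}
```

With this splicing, the cursor lands at the end of the last inserted part, which is exactly what the `len(lastLine)` arithmetic in the Ctrl+K handler assumes.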
The most complex part of the editor is the asynchronous AI request handling. When the user enters a prompt and presses Enter in LLM mode, we launch a goroutine to handle the request. This goroutine runs independently of the main editor loop, allowing the user to continue editing.
func (e *Editor) askLLMAsync() {
    if e.llmPrompt == "" {
        e.statusMsg = "No prompt entered"
        return
    }
    e.aiMutex.Lock()
    if e.aiInProgress {
        e.aiMutex.Unlock()
        e.statusMsg = "AI request already in progress. Press Esc to cancel current request"
        return
    }
    e.aiInProgress = true
    e.aiMutex.Unlock()
    if !e.llmClient.IsAvailable() {
        e.aiMutex.Lock()
        e.aiInProgress = false
        e.statusMsg = "Cannot connect to Ollama. Is it running? Try: ollama serve"
        e.aiMutex.Unlock()
        return
    }
    if err := e.llmClient.CheckModel(); err != nil {
        e.aiMutex.Lock()
        e.aiInProgress = false
        e.statusMsg = fmt.Sprintf("Model error: %s", err.Error())
        e.aiMutex.Unlock()
        return
    }
    e.statusMsg = "Processing AI request... (Press Esc to cancel)"
    go func() {
        prompt := e.llmPrompt
        done := make(chan bool, 1)
        var response string
        var err error
        go func() {
            response, err = e.llmClient.GenerateWithCancel(prompt, e.aiCancel)
            done <- true
        }()
        select {
        case <-done:
        case <-time.After(90 * time.Second):
            select {
            case e.aiCancel <- true:
            default:
            }
            err = fmt.Errorf("request timeout (90s)")
        }
        e.aiMutex.Lock()
        defer e.aiMutex.Unlock()
        e.aiInProgress = false
        if err != nil {
            errMsg := err.Error()
            if errMsg == "cancelled" {
                e.statusMsg = "AI request cancelled by user"
            } else if strings.Contains(errMsg, "cannot connect") {
                e.statusMsg = "Cannot connect to Ollama. Run: ollama serve"
            } else {
                if len(errMsg) > 70 {
                    errMsg = errMsg[:67] + "..."
                }
                e.statusMsg = fmt.Sprintf("AI error: %s", errMsg)
            }
            e.llmResponse = ""
            return
        }
        if response == "" {
            e.statusMsg = "AI returned empty response. Try rephrasing your prompt"
            e.llmResponse = ""
            return
        }
        e.llmResponse = response
        preview := response
        preview = strings.ReplaceAll(preview, "\n", " ")
        preview = strings.TrimSpace(preview)
        if len(preview) > 60 {
            preview = preview[:57] + "..."
        }
        if preview == "" {
            preview = "[Response ready]"
        }
        responseLines := strings.Count(response, "\n") + 1
        e.statusMsg = fmt.Sprintf("AI response ready (%d lines). Preview: %s | Press Ctrl+K to insert", responseLines, preview)
    }()
}
Notice how we use nested goroutines and channels to implement a timeout. The outer goroutine launches the actual AI request in another goroutine, then waits for either the request to complete or a timeout to occur. This ensures that even if Ollama hangs, our editor won't freeze forever.
Rendering the screen is straightforward but tedious. We clear the screen, draw the text lines, render the status bar, and position the cursor.
func (e *Editor) render() {
    if e.screen == nil {
        return
    }
    e.screen.Clear()
    e.ensureCursorValid()
    if e.cursor.Row < e.offsetRow {
        e.offsetRow = e.cursor.Row
    }
    if e.cursor.Row >= e.offsetRow+e.height && e.height > 0 {
        e.offsetRow = e.cursor.Row - e.height + 1
    }
    if e.offsetRow < 0 {
        e.offsetRow = 0
    }
    for y := 0; y < e.height; y++ {
        row := y + e.offsetRow
        if row >= e.buffer.LineCount() {
            e.drawString(0, y, "~", tcell.StyleDefault.Foreground(tcell.ColorBlue))
            continue
        }
        line := e.buffer.GetLine(row)
        e.drawString(0, y, line, tcell.StyleDefault)
    }
    e.renderStatusBar()
    screenY := e.cursor.Row - e.offsetRow
    screenX := e.cursor.Col
    if screenX >= e.width {
        screenX = e.width - 1
    }
    if screenX < 0 {
        screenX = 0
    }
    if screenY >= e.height {
        screenY = e.height - 1
    }
    if screenY < 0 {
        screenY = 0
    }
    e.screen.ShowCursor(screenX, screenY)
    e.screen.Show()
}
The status bar uses different colors to indicate the editor's state. When an AI request is in progress, we use a yellow background to make it obvious.
func (e *Editor) renderStatusBar() {
    y := e.height
    if y < 0 {
        return
    }
    style := tcell.StyleDefault.
        Background(tcell.ColorWhite).
        Foreground(tcell.ColorBlack)
    e.aiMutex.Lock()
    inProgress := e.aiInProgress
    e.aiMutex.Unlock()
    if inProgress {
        style = tcell.StyleDefault.
            Background(tcell.ColorYellow).
            Foreground(tcell.ColorBlack)
    }
    for x := 0; x < e.width; x++ {
        if y >= 0 && y < e.height+2 {
            e.screen.SetContent(x, y, ' ', nil, style)
        }
        if y+1 >= 0 && y+1 < e.height+2 {
            e.screen.SetContent(x, y+1, ' ', nil, style)
        }
    }
    statusMsg := e.statusMsg
    if len(statusMsg) > e.width && e.width > 3 {
        statusMsg = statusMsg[:e.width-3] + "..."
    }
    e.drawString(0, y, statusMsg, style)
    modMark := ""
    if e.buffer.modified {
        modMark = " [+]"
    }
    filename := e.buffer.filename
    if filename == "" {
        filename = "[No Name]"
    } else {
        filename = filepath.Base(filename)
        if len(filename) > 20 {
            filename = filename[:17] + "..."
        }
    }
    info := fmt.Sprintf("%s%s | Ln %d/%d | Col %d",
        filename, modMark, e.cursor.Row+1, e.buffer.LineCount(), e.cursor.Col+1)
    if len(info) > e.width && e.width > 0 {
        info = info[:e.width]
    }
    if y+1 >= 0 && y+1 < e.height+2 {
        e.drawString(0, y+1, info, style)
    }
}
STEP SIX: THE MAIN FUNCTION - COMMAND LINE INTERFACE
Finally, we need a main function to tie everything together. This parses command line arguments, creates the editor, and runs it.
func main() {
    ollamaURL := flag.String("ollama", "http://localhost:11434", "Ollama API URL")
    model := flag.String("model", "llama2", "LLM model to use")
    showVersion := flag.Bool("version", false, "Show version")
    showHelp := flag.Bool("help", false, "Show help")
    flag.Parse()
    if *showVersion {
        fmt.Printf("GoEdit v1.0.0\n")
        os.Exit(0)
    }
    if *showHelp {
        printHelp()
        os.Exit(0)
    }
    var filename string
    if flag.NArg() > 0 {
        filename = flag.Arg(0)
        absPath, err := filepath.Abs(filename)
        if err == nil {
            filename = absPath
        }
    }
    ed, err := NewEditor(filename, *ollamaURL, *model)
    if err != nil {
        log.Fatalf("Failed to create editor: %v", err)
    }
    if err := ed.Run(); err != nil {
        log.Fatalf("Editor error: %v", err)
    }
}
The help function prints usage information:
func printHelp() {
    fmt.Println("GoEdit - A minimal yet powerful terminal text editor")
    fmt.Println("\nUsage:")
    fmt.Println("  goedit [options] [filename]")
    fmt.Println("\nOptions:")
    fmt.Println("  -ollama string   Ollama API URL (default: http://localhost:11434)")
    fmt.Println("  -model string    LLM model to use (default: llama2)")
    fmt.Println("  -version         Show version")
    fmt.Println("  -help            Show this help")
    fmt.Println("\nKeyboard Shortcuts:")
    fmt.Println("  Ctrl+S  Save file")
    fmt.Println("  Ctrl+Q  Quit")
    fmt.Println("  Ctrl+F  Find text")
    fmt.Println("  Ctrl+G  Go to line")
    fmt.Println("  Ctrl+L  Ask LLM (AI Assistant)")
    fmt.Println("  Ctrl+K  Insert LLM response at cursor")
    fmt.Println("  Esc     Cancel AI request")
}
STEP SEVEN: BUILDING AND RUNNING
Now that we have all the code, let's build it. Open your terminal in the goedit directory and run:
go mod tidy
This ensures all dependencies are properly recorded in go.mod and go.sum. Then build the editor:
go build -o goedit
On Windows, you might want to add the .exe extension:
go build -o goedit.exe
To run the editor, you first need to have Ollama installed and running. Visit ollama.com and download the installer for your operating system. After installing, pull a model:
ollama pull llama2
Start the Ollama server:
ollama serve
Now you can run GoEdit:
goedit myfile.txt
Or with a specific model:
goedit -model codellama mycode.py
Try pressing Ctrl+L to ask the AI a question. Type something like "Write a hello world function in Python" and press Enter. The editor stays responsive while the AI thinks. When it's done, press Ctrl+K to insert the response.
WHAT WE LEARNED
Building GoEdit taught us several valuable lessons. First, simplicity is powerful. By keeping our scope limited and our code straightforward, we created something that actually works and is easy to understand. We didn't try to build the next Emacs. We built a minimal editor that does a few things well.
Second, Go's concurrency primitives make non-blocking operations surprisingly easy. Goroutines and channels allowed us to implement asynchronous AI requests without drowning in callback hell or promise chains. The code is almost linear and easy to follow.
Third, local AI is practical. With Ollama, we can have AI assistance without sending our data to the cloud, without paying per request, and without worrying about rate limits. The future of AI might be more local than we think.
Fourth, terminal UIs are still relevant. Not everything needs to be a web app or an Electron application. Sometimes a simple terminal program is exactly what you need, especially when working on remote servers or in resource-constrained environments.
POSSIBLE IMPROVEMENTS
GoEdit is minimal by design, but there are many ways it could be extended. Syntax highlighting would make it more useful for code editing. Multiple file tabs would allow editing several files simultaneously. Integration with the system clipboard would make it easier to move text between GoEdit and other applications.
The AI integration could be enhanced with streaming responses, allowing the user to see the AI's output as it's generated. We could add a history of AI conversations, making it easier to refine prompts. We could even implement AI-powered code completion.
But remember, every feature added is a feature that needs to be maintained, tested, and documented. Sometimes the best feature is the one you don't add.
CONCLUSION
We started this journey wanting to build a simple text editor with AI superpowers. We ended up with exactly that: a minimal, functional editor that runs anywhere Go runs and provides AI assistance without freezing or blocking. The entire implementation is about one thousand lines of readable Go code.
GoEdit proves that you don't need a massive framework or a complex architecture to build something useful. You don't need to send your data to the cloud to get AI assistance. You don't need a gigabyte of dependencies to create a text editor.
Sometimes, simple is better. Sometimes, local is better. Sometimes, less is more.
Now go forth and edit some text. And when you need help, just press Ctrl+L and ask the AI. It's right there, running on your machine, ready to assist.
GoEdit - Complete Source Code
File: cursor.go
package main
type Cursor struct {
Row int
Col int
}
func NewCursor() *Cursor {
return &Cursor{Row: 0, Col: 0}
}
File: buffer.go
package main
import (
"bufio"
"fmt"
"os"
"strings"
)
const maxUndoLevels = 50
type Buffer struct {
lines []string
filename string
modified bool
undoStack []BufferState
redoStack []BufferState
}
type BufferState struct {
lines []string
cursorRow int
cursorCol int
}
func NewBuffer(filename string) (*Buffer, error) {
b := &Buffer{
lines: []string{""},
filename: filename,
modified: false,
undoStack: make([]BufferState, 0, maxUndoLevels),
redoStack: make([]BufferState, 0, maxUndoLevels),
}
if filename != "" {
if err := b.Load(); err != nil {
if !os.IsNotExist(err) {
return nil, err
}
}
}
return b, nil
}
func (b *Buffer) Load() error {
file, err := os.Open(b.filename)
if err != nil {
return err
}
defer file.Close()
b.lines = []string{}
scanner := bufio.NewScanner(file)
const maxCapacity = 1024 * 1024
buf := make([]byte, maxCapacity)
scanner.Buffer(buf, maxCapacity)
for scanner.Scan() {
b.lines = append(b.lines, scanner.Text())
}
if err := scanner.Err(); err != nil {
return err
}
if len(b.lines) == 0 {
b.lines = []string{""}
}
b.modified = false
return nil
}
func (b *Buffer) Save() error {
if b.filename == "" {
return fmt.Errorf("no filename specified")
}
tempFile := b.filename + ".tmp"
file, err := os.Create(tempFile)
if err != nil {
return err
}
writer := bufio.NewWriter(file)
for i, line := range b.lines {
if i > 0 {
if _, err := writer.WriteString("\n"); err != nil {
file.Close()
os.Remove(tempFile)
return err
}
}
if _, err := writer.WriteString(line); err != nil {
file.Close()
os.Remove(tempFile)
return err
}
}
if err := writer.Flush(); err != nil {
file.Close()
os.Remove(tempFile)
return err
}
if err := file.Close(); err != nil {
os.Remove(tempFile)
return err
}
if err := os.Rename(tempFile, b.filename); err != nil {
os.Remove(tempFile)
return err
}
b.modified = false
return nil
}
func (b *Buffer) GetLine(row int) string {
if row < 0 || row >= len(b.lines) {
return ""
}
return b.lines[row]
}
func (b *Buffer) LineCount() int {
return len(b.lines)
}
func (b *Buffer) GetText() string {
return strings.Join(b.lines, "\n")
}
func (b *Buffer) InsertChar(row, col int, ch rune) {
if row < 0 || row >= len(b.lines) {
return
}
line := b.lines[row]
if col < 0 {
col = 0
}
if col > len(line) {
col = len(line)
}
newLine := line[:col] + string(ch) + line[col:]
b.lines[row] = newLine
b.modified = true
}
func (b *Buffer) DeleteChar(row, col int) {
if row < 0 || row >= len(b.lines) {
return
}
if col > 0 {
line := b.lines[row]
if col > len(line) {
col = len(line)
}
b.lines[row] = line[:col-1] + line[col:]
b.modified = true
} else if row > 0 {
prevLine := b.lines[row-1]
currentLine := b.lines[row]
b.lines[row-1] = prevLine + currentLine
b.lines = append(b.lines[:row], b.lines[row+1:]...)
b.modified = true
}
}
func (b *Buffer) DeleteCharForward(row, col int) {
if row < 0 || row >= len(b.lines) {
return
}
line := b.lines[row]
if col >= 0 && col < len(line) {
b.lines[row] = line[:col] + line[col+1:]
b.modified = true
}
}
func (b *Buffer) InsertNewline(row, col int) {
if row < 0 || row >= len(b.lines) {
return
}
line := b.lines[row]
if col < 0 {
col = 0
}
if col > len(line) {
col = len(line)
}
before := line[:col]
after := line[col:]
b.lines[row] = before
newLines := make([]string, len(b.lines)+1)
copy(newLines, b.lines[:row+1])
newLines[row+1] = after
copy(newLines[row+2:], b.lines[row+1:])
b.lines = newLines
b.modified = true
}
func (b *Buffer) DeleteLine(row int) {
if row < 0 || row >= len(b.lines) {
return
}
if len(b.lines) == 1 {
b.lines[0] = ""
} else {
b.lines = append(b.lines[:row], b.lines[row+1:]...)
}
b.modified = true
}
func (b *Buffer) AppendToLine(row int, text string) {
if row < 0 || row >= len(b.lines) {
return
}
b.lines[row] += text
b.modified = true
}
func (b *Buffer) InsertText(row, col int, text string) {
if row < 0 || row >= len(b.lines) {
return
}
line := b.lines[row]
if col < 0 {
col = 0
}
if col > len(line) {
col = len(line)
}
lines := strings.Split(text, "\n")
if len(lines) == 1 {
b.lines[row] = line[:col] + text + line[col:]
} else {
before := line[:col]
after := line[col:]
b.lines[row] = before + lines[0]
newLines := make([]string, len(b.lines)+len(lines)-1)
copy(newLines, b.lines[:row+1])
for i := 1; i < len(lines)-1; i++ {
newLines[row+i] = lines[i]
}
newLines[row+len(lines)-1] = lines[len(lines)-1] + after
copy(newLines[row+len(lines):], b.lines[row+1:])
b.lines = newLines
}
b.modified = true
}
func (b *Buffer) SaveState(cursorRow, cursorCol int) {
linesCopy := make([]string, len(b.lines))
copy(linesCopy, b.lines)
state := BufferState{
lines: linesCopy,
cursorRow: cursorRow,
cursorCol: cursorCol,
}
b.undoStack = append(b.undoStack, state)
if len(b.undoStack) > maxUndoLevels {
b.undoStack = b.undoStack[1:]
}
b.redoStack = b.redoStack[:0]
}
func (b *Buffer) Undo() (int, int, bool) {
if len(b.undoStack) == 0 {
return 0, 0, false
}
currentState := BufferState{
lines: make([]string, len(b.lines)),
}
copy(currentState.lines, b.lines)
b.redoStack = append(b.redoStack, currentState)
state := b.undoStack[len(b.undoStack)-1]
b.undoStack = b.undoStack[:len(b.undoStack)-1]
b.lines = make([]string, len(state.lines))
copy(b.lines, state.lines)
b.modified = true
return state.cursorRow, state.cursorCol, true
}
func (b *Buffer) Redo() (int, int, bool) {
if len(b.redoStack) == 0 {
return 0, 0, false
}
currentState := BufferState{
lines: make([]string, len(b.lines)),
}
copy(currentState.lines, b.lines)
b.undoStack = append(b.undoStack, currentState)
state := b.redoStack[len(b.redoStack)-1]
b.redoStack = b.redoStack[:len(b.redoStack)-1]
b.lines = make([]string, len(state.lines))
copy(b.lines, state.lines)
b.modified = true
return state.cursorRow, state.cursorCol, true
}
File: ollama.go
package main
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strings"
"time"
)
type OllamaClient struct {
baseURL string
model string
client *http.Client
}
type GenerateRequest struct {
Model string `json:"model"`
Prompt string `json:"prompt"`
Stream bool `json:"stream"`
}
type GenerateResponse struct {
Model string `json:"model"`
Response string `json:"response"`
Done bool `json:"done"`
Context []int `json:"context,omitempty"`
TotalDuration int64 `json:"total_duration,omitempty"`
Error string `json:"error,omitempty"`
}
func NewOllamaClient(baseURL, model string) *OllamaClient {
if baseURL == "" {
baseURL = "http://localhost:11434"
}
if model == "" {
model = "llama2"
}
return &OllamaClient{
baseURL: baseURL,
model: model,
client: &http.Client{
Timeout: 300 * time.Second,
},
}
}
func (c *OllamaClient) Generate(prompt string) (string, error) {
return c.GenerateWithCancel(prompt, nil)
}
func (c *OllamaClient) GenerateWithCancel(prompt string, cancel <-chan bool) (string, error) {
if prompt == "" {
return "", fmt.Errorf("empty prompt")
}
reqBody := GenerateRequest{
Model: c.model,
Prompt: prompt,
Stream: false,
}
jsonData, err := json.Marshal(reqBody)
if err != nil {
return "", fmt.Errorf("failed to marshal request: %w", err)
}
url := fmt.Sprintf("%s/api/generate", c.baseURL)
ctx, ctxCancel := context.WithCancel(context.Background())
defer ctxCancel()
if cancel != nil {
go func() {
select {
case <-cancel:
ctxCancel()
case <-ctx.Done():
}
}()
}
req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonData))
if err != nil {
return "", fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := c.client.Do(req)
if err != nil {
if ctx.Err() == context.Canceled {
return "", fmt.Errorf("cancelled")
}
if strings.Contains(err.Error(), "connection refused") {
return "", fmt.Errorf("cannot connect to Ollama (is it running?)")
}
return "", fmt.Errorf("request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
bodyStr := string(body)
var errResp GenerateResponse
if json.Unmarshal(body, &errResp) == nil && errResp.Error != "" {
return "", fmt.Errorf("ollama error: %s", errResp.Error)
}
if len(bodyStr) > 200 {
bodyStr = bodyStr[:200] + "..."
}
return "", fmt.Errorf("ollama returned status %d: %s", resp.StatusCode, bodyStr)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
if ctx.Err() == context.Canceled {
return "", fmt.Errorf("cancelled")
}
return "", fmt.Errorf("failed to read response: %w", err)
}
if len(body) == 0 {
return "", fmt.Errorf("empty response from Ollama")
}
var genResp GenerateResponse
if err := json.Unmarshal(body, &genResp); err != nil {
return "", fmt.Errorf("failed to parse response: %w", err)
}
if genResp.Error != "" {
return "", fmt.Errorf("ollama error: %s", genResp.Error)
}
response := strings.TrimSpace(genResp.Response)
if response == "" {
return "", fmt.Errorf("received empty response from model")
}
return response, nil
}
func (c *OllamaClient) IsAvailable() bool {
url := fmt.Sprintf("%s/api/tags", c.baseURL)
client := &http.Client{Timeout: 2 * time.Second}
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return false
}
resp, err := client.Do(req)
if err != nil {
return false
}
defer resp.Body.Close()
return resp.StatusCode == http.StatusOK
}
func (c *OllamaClient) CheckModel() error {
url := fmt.Sprintf("%s/api/tags", c.baseURL)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return fmt.Errorf("failed to create request: %w", err)
}
resp, err := c.client.Do(req)
if err != nil {
return fmt.Errorf("cannot connect to Ollama: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("ollama returned status %d", resp.StatusCode)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("failed to read response: %w", err)
}
var result struct {
Models []struct {
Name string `json:"name"`
} `json:"models"`
}
if err := json.Unmarshal(body, &result); err != nil {
return fmt.Errorf("failed to parse response: %w", err)
}
modelFound := false
for _, model := range result.Models {
// prefix match so a bare model name also matches tagged names such as "name:latest"
if strings.HasPrefix(model.Name, c.model) {
modelFound = true
break
}
}
if !modelFound {
availableModels := make([]string, 0, len(result.Models))
for _, model := range result.Models {
availableModels = append(availableModels, model.Name)
}
return fmt.Errorf("model '%s' not found. Available: %s",
c.model, strings.Join(availableModels, ", "))
}
return nil
}
File: main.go
package main
import (
"flag"
"fmt"
"log"
"os"
"path/filepath"
"strings"
"sync"
"time"
"github.com/gdamore/tcell/v2"
)
const version = "1.0.0"
type Editor struct {
screen tcell.Screen
buffer *Buffer
cursor *Cursor
offsetRow int
offsetCol int
width int
height int
statusMsg string
mode EditorMode
findQuery string
clipboard string
llmClient *OllamaClient
llmPrompt string
llmResponse string
inputBuffer string
quitAttempts int
aiMutex sync.Mutex
aiInProgress bool
aiCancel chan bool
}
type EditorMode int
const (
ModeNormal EditorMode = iota
ModeFind
ModeGoto
ModeLLM
ModeFilename
)
func NewEditor(filename, ollamaURL, model string) (*Editor, error) {
screen, err := tcell.NewScreen()
if err != nil {
return nil, fmt.Errorf("failed to create screen: %w", err)
}
if err := screen.Init(); err != nil {
return nil, fmt.Errorf("failed to initialize screen: %w", err)
}
screen.EnableMouse()
screen.EnablePaste()
screen.Clear()
buffer, err := NewBuffer(filename)
if err != nil {
screen.Fini()
return nil, fmt.Errorf("failed to create buffer: %w", err)
}
width, height := screen.Size()
if height < 3 {
screen.Fini()
return nil, fmt.Errorf("terminal too small (need at least 3 lines)")
}
return &Editor{
screen: screen,
buffer: buffer,
cursor: NewCursor(),
width: width,
height: height - 2,
statusMsg: "Ctrl+Q: Quit | Ctrl+S: Save | Ctrl+L: AI | Ctrl+F: Find",
mode: ModeNormal,
llmClient: NewOllamaClient(ollamaURL, model),
aiInProgress: false,
aiCancel: make(chan bool, 1),
}, nil
}
func (e *Editor) checkOllamaSetup() error {
if !e.llmClient.IsAvailable() {
return fmt.Errorf("Ollama not running. Start with: ollama serve")
}
if err := e.llmClient.CheckModel(); err != nil {
return err
}
return nil
}
func (e *Editor) Run() error {
if e.screen == nil {
return fmt.Errorf("screen not initialized")
}
defer e.screen.Fini()
e.render()
for {
ev := e.screen.PollEvent()
if ev == nil {
continue
}
if !e.handleEvent(ev) {
return nil
}
e.render()
}
}
func (e *Editor) handleEvent(ev tcell.Event) bool {
switch ev := ev.(type) {
case *tcell.EventResize:
e.width, e.height = ev.Size()
if e.height < 3 {
e.height = 3
}
e.height -= 2
e.screen.Sync()
return true
case *tcell.EventKey:
if ev.Key() == tcell.KeyEscape {
e.aiMutex.Lock()
if e.aiInProgress {
select {
case e.aiCancel <- true:
default:
}
e.aiInProgress = false
e.statusMsg = "AI request cancelled"
e.aiMutex.Unlock()
if e.mode == ModeLLM {
e.mode = ModeNormal
}
return true
}
e.aiMutex.Unlock()
}
return e.handleKey(ev)
}
return true
}
func (e *Editor) handleKey(ev *tcell.EventKey) bool {
if ev == nil {
return true
}
switch e.mode {
case ModeFind:
return e.handleFindMode(ev)
case ModeGoto:
return e.handleGotoMode(ev)
case ModeLLM:
return e.handleLLMMode(ev)
case ModeFilename:
return e.handleFilenameMode(ev)
default:
return e.handleNormalMode(ev)
}
}
func (e *Editor) handleNormalMode(ev *tcell.EventKey) bool {
mod := ev.Modifiers()
switch ev.Key() {
case tcell.KeyCtrlQ:
e.aiMutex.Lock()
inProgress := e.aiInProgress
e.aiMutex.Unlock()
if inProgress {
e.statusMsg = "AI in progress. Press Esc to cancel, then Ctrl+Q to quit"
return true
}
if e.buffer.modified && e.quitAttempts == 0 {
e.statusMsg = "File modified! Press Ctrl+Q again to quit, Ctrl+S to save"
e.quitAttempts++
return true
}
return false
case tcell.KeyCtrlS:
e.quitAttempts = 0
e.saveFile()
case tcell.KeyCtrlF:
e.mode = ModeFind
e.inputBuffer = ""
e.statusMsg = "Find: "
case tcell.KeyCtrlG:
e.mode = ModeGoto
e.inputBuffer = ""
e.statusMsg = "Go to line: "
case tcell.KeyCtrlL:
if err := e.checkOllamaSetup(); err != nil {
e.statusMsg = fmt.Sprintf("AI unavailable: %v", err)
return true
}
e.mode = ModeLLM
e.inputBuffer = ""
e.statusMsg = "Ask AI: "
case tcell.KeyCtrlK:
if e.llmResponse != "" {
oldRow := e.cursor.Row
oldCol := e.cursor.Col
e.buffer.InsertText(e.cursor.Row, e.cursor.Col, e.llmResponse)
lines := strings.Split(e.llmResponse, "\n")
if len(lines) > 1 {
e.cursor.Row = oldRow + len(lines) - 1
if e.cursor.Row < 0 {
e.cursor.Row = 0
}
lastLine := lines[len(lines)-1]
e.cursor.Col = len(lastLine)
} else {
e.cursor.Col = oldCol + len(e.llmResponse)
}
e.ensureCursorValid()
e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
e.statusMsg = "AI response inserted at cursor"
} else {
e.statusMsg = "No AI response available. Use Ctrl+L to ask AI first"
}
case tcell.KeyCtrlA:
e.clipboard = e.buffer.GetText()
e.statusMsg = "All text copied to clipboard"
case tcell.KeyCtrlC:
if e.cursor.Row >= 0 && e.cursor.Row < e.buffer.LineCount() {
line := e.buffer.GetLine(e.cursor.Row)
e.clipboard = line
e.statusMsg = "Current line copied to clipboard"
}
case tcell.KeyCtrlX:
if e.cursor.Row >= 0 && e.cursor.Row < e.buffer.LineCount() {
line := e.buffer.GetLine(e.cursor.Row)
e.clipboard = line
e.buffer.DeleteLine(e.cursor.Row)
if e.cursor.Row >= e.buffer.LineCount() && e.cursor.Row > 0 {
e.cursor.Row--
}
e.cursor.Col = 0
e.ensureCursorValid()
e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
e.statusMsg = "Current line cut to clipboard"
}
case tcell.KeyCtrlV:
if e.clipboard != "" {
oldRow := e.cursor.Row
oldCol := e.cursor.Col
e.buffer.InsertText(e.cursor.Row, e.cursor.Col, e.clipboard)
lines := strings.Split(e.clipboard, "\n")
if len(lines) > 1 {
e.cursor.Row = oldRow + len(lines) - 1
if e.cursor.Row < 0 {
e.cursor.Row = 0
}
lastLine := lines[len(lines)-1]
e.cursor.Col = len(lastLine)
} else {
e.cursor.Col = oldCol + len(e.clipboard)
}
e.ensureCursorValid()
e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
e.statusMsg = "Clipboard content pasted"
} else {
e.statusMsg = "Clipboard is empty"
}
case tcell.KeyCtrlZ:
if row, col, ok := e.buffer.Undo(); ok {
e.cursor.Row = row
e.cursor.Col = col
e.ensureCursorValid()
e.statusMsg = "Undo successful"
} else {
e.statusMsg = "Nothing to undo"
}
case tcell.KeyCtrlY:
if row, col, ok := e.buffer.Redo(); ok {
e.cursor.Row = row
e.cursor.Col = col
e.ensureCursorValid()
e.statusMsg = "Redo successful"
} else {
e.statusMsg = "Nothing to redo"
}
case tcell.KeyUp:
if e.cursor.Row > 0 {
e.cursor.Row--
e.ensureCursorValid()
}
case tcell.KeyDown:
if e.cursor.Row < e.buffer.LineCount()-1 {
e.cursor.Row++
e.ensureCursorValid()
}
case tcell.KeyLeft:
if e.cursor.Col > 0 {
e.cursor.Col--
} else if e.cursor.Row > 0 {
e.cursor.Row--
e.cursor.Col = len(e.buffer.GetLine(e.cursor.Row))
}
case tcell.KeyRight:
lineLen := len(e.buffer.GetLine(e.cursor.Row))
if e.cursor.Col < lineLen {
e.cursor.Col++
} else if e.cursor.Row < e.buffer.LineCount()-1 {
e.cursor.Row++
e.cursor.Col = 0
}
case tcell.KeyHome:
if mod&tcell.ModCtrl != 0 {
e.cursor.Row = 0
e.cursor.Col = 0
e.ensureCursorValid()
e.statusMsg = "Moved to start of file"
} else {
e.cursor.Col = 0
}
case tcell.KeyEnd:
if mod&tcell.ModCtrl != 0 {
e.cursor.Row = e.buffer.LineCount() - 1
if e.cursor.Row < 0 {
e.cursor.Row = 0
}
e.cursor.Col = len(e.buffer.GetLine(e.cursor.Row))
e.ensureCursorValid()
e.statusMsg = "Moved to end of file"
} else {
e.cursor.Col = len(e.buffer.GetLine(e.cursor.Row))
}
case tcell.KeyPgUp:
e.cursor.Row -= e.height
if e.cursor.Row < 0 {
e.cursor.Row = 0
}
e.ensureCursorValid()
case tcell.KeyPgDn:
e.cursor.Row += e.height
if e.cursor.Row >= e.buffer.LineCount() {
e.cursor.Row = e.buffer.LineCount() - 1
}
if e.cursor.Row < 0 {
e.cursor.Row = 0
}
e.ensureCursorValid()
case tcell.KeyEnter:
e.buffer.InsertNewline(e.cursor.Row, e.cursor.Col)
e.cursor.Row++
if e.cursor.Row < 0 {
e.cursor.Row = 0
}
e.cursor.Col = 0
e.ensureCursorValid()
e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
e.quitAttempts = 0
case tcell.KeyBackspace, tcell.KeyBackspace2:
if e.cursor.Col > 0 {
e.buffer.DeleteChar(e.cursor.Row, e.cursor.Col)
e.cursor.Col--
} else if e.cursor.Row > 0 {
prevLineLen := len(e.buffer.GetLine(e.cursor.Row - 1))
e.buffer.DeleteChar(e.cursor.Row, e.cursor.Col)
e.cursor.Row--
e.cursor.Col = prevLineLen
}
e.ensureCursorValid()
e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
e.quitAttempts = 0
case tcell.KeyDelete:
lineLen := len(e.buffer.GetLine(e.cursor.Row))
if e.cursor.Col < lineLen {
e.buffer.DeleteCharForward(e.cursor.Row, e.cursor.Col)
} else if e.cursor.Row < e.buffer.LineCount()-1 {
nextLine := e.buffer.GetLine(e.cursor.Row + 1)
e.buffer.AppendToLine(e.cursor.Row, nextLine)
e.buffer.DeleteLine(e.cursor.Row + 1)
}
e.ensureCursorValid()
e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
e.quitAttempts = 0
case tcell.KeyTab:
for i := 0; i < 4; i++ {
e.buffer.InsertChar(e.cursor.Row, e.cursor.Col, ' ')
e.cursor.Col++
}
e.ensureCursorValid()
e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
e.quitAttempts = 0
case tcell.KeyRune:
e.buffer.InsertChar(e.cursor.Row, e.cursor.Col, ev.Rune())
e.cursor.Col++
e.ensureCursorValid()
e.buffer.SaveState(e.cursor.Row, e.cursor.Col)
e.quitAttempts = 0
}
return true
}
func (e *Editor) handleFindMode(ev *tcell.EventKey) bool {
switch ev.Key() {
case tcell.KeyEscape:
e.mode = ModeNormal
e.statusMsg = "Search cancelled"
case tcell.KeyEnter:
e.findQuery = e.inputBuffer
e.findText()
e.mode = ModeNormal
case tcell.KeyBackspace, tcell.KeyBackspace2:
if len(e.inputBuffer) > 0 {
e.inputBuffer = e.inputBuffer[:len(e.inputBuffer)-1]
}
e.statusMsg = "Find: " + e.inputBuffer
case tcell.KeyRune:
e.inputBuffer += string(ev.Rune())
e.statusMsg = "Find: " + e.inputBuffer
}
return true
}
func (e *Editor) handleGotoMode(ev *tcell.EventKey) bool {
switch ev.Key() {
case tcell.KeyEscape:
e.mode = ModeNormal
e.statusMsg = "Go to line cancelled"
case tcell.KeyEnter:
var lineNum int
_, err := fmt.Sscanf(e.inputBuffer, "%d", &lineNum)
if err == nil && lineNum > 0 && lineNum <= e.buffer.LineCount() {
e.cursor.Row = lineNum - 1
e.cursor.Col = 0
e.ensureCursorValid()
e.statusMsg = fmt.Sprintf("Jumped to line %d", lineNum)
} else {
e.statusMsg = "Invalid line number"
}
e.mode = ModeNormal
case tcell.KeyBackspace, tcell.KeyBackspace2:
if len(e.inputBuffer) > 0 {
e.inputBuffer = e.inputBuffer[:len(e.inputBuffer)-1]
}
e.statusMsg = "Go to line: " + e.inputBuffer
case tcell.KeyRune:
if ev.Rune() >= '0' && ev.Rune() <= '9' {
e.inputBuffer += string(ev.Rune())
e.statusMsg = "Go to line: " + e.inputBuffer
}
}
return true
}
func (e *Editor) handleLLMMode(ev *tcell.EventKey) bool {
switch ev.Key() {
case tcell.KeyEscape:
e.mode = ModeNormal
e.statusMsg = "AI prompt cancelled"
case tcell.KeyEnter:
e.llmPrompt = e.inputBuffer
e.askLLMAsync()
e.mode = ModeNormal
case tcell.KeyBackspace, tcell.KeyBackspace2:
if len(e.inputBuffer) > 0 {
e.inputBuffer = e.inputBuffer[:len(e.inputBuffer)-1]
}
e.statusMsg = "Ask AI: " + e.inputBuffer
case tcell.KeyRune:
e.inputBuffer += string(ev.Rune())
e.statusMsg = "Ask AI: " + e.inputBuffer
}
return true
}
func (e *Editor) handleFilenameMode(ev *tcell.EventKey) bool {
switch ev.Key() {
case tcell.KeyEscape:
e.mode = ModeNormal
e.statusMsg = "Save cancelled"
case tcell.KeyEnter:
e.buffer.filename = e.inputBuffer
e.saveFile()
e.mode = ModeNormal
case tcell.KeyBackspace, tcell.KeyBackspace2:
if len(e.inputBuffer) > 0 {
e.inputBuffer = e.inputBuffer[:len(e.inputBuffer)-1]
}
e.statusMsg = "Filename: " + e.inputBuffer
case tcell.KeyRune:
e.inputBuffer += string(ev.Rune())
e.statusMsg = "Filename: " + e.inputBuffer
}
return true
}
func (e *Editor) saveFile() {
if e.buffer.filename == "" {
e.mode = ModeFilename
e.inputBuffer = ""
e.statusMsg = "Enter filename: "
return
}
if err := e.buffer.Save(); err != nil {
e.statusMsg = fmt.Sprintf("Save failed: %v", err)
} else {
basename := filepath.Base(e.buffer.filename)
e.statusMsg = fmt.Sprintf("Saved '%s' (%d lines)", basename, e.buffer.LineCount())
}
}
func (e *Editor) findText() {
if e.findQuery == "" {
e.statusMsg = "No search query entered"
return
}
totalLines := e.buffer.LineCount()
if totalLines == 0 {
e.statusMsg = "Buffer is empty"
return
}
startRow := e.cursor.Row
startCol := e.cursor.Col + 1
if startRow < 0 {
startRow = 0
}
if startRow >= totalLines {
startRow = totalLines - 1
}
for i := 0; i < totalLines; i++ {
row := (startRow + i) % totalLines
line := e.buffer.GetLine(row)
searchFrom := 0
if row == startRow && i == 0 {
searchFrom = startCol
}
if searchFrom >= len(line) {
continue
}
lowerLine := strings.ToLower(line[searchFrom:])
lowerQuery := strings.ToLower(e.findQuery)
idx := strings.Index(lowerLine, lowerQuery)
if idx != -1 {
e.cursor.Row = row
e.cursor.Col = searchFrom + idx
e.ensureCursorValid()
e.statusMsg = fmt.Sprintf("Found '%s' at line %d, column %d", e.findQuery, row+1, e.cursor.Col+1)
return
}
}
e.statusMsg = fmt.Sprintf("'%s' not found in document", e.findQuery)
}
func (e *Editor) askLLMAsync() {
if e.llmPrompt == "" {
e.statusMsg = "No prompt entered"
return
}
e.aiMutex.Lock()
if e.aiInProgress {
e.aiMutex.Unlock()
e.statusMsg = "AI request already in progress. Press Esc to cancel current request"
return
}
e.aiInProgress = true
e.aiMutex.Unlock()
if !e.llmClient.IsAvailable() {
e.aiMutex.Lock()
e.aiInProgress = false
e.statusMsg = "Cannot connect to Ollama. Is it running? Try: ollama serve"
e.aiMutex.Unlock()
return
}
if err := e.llmClient.CheckModel(); err != nil {
e.aiMutex.Lock()
e.aiInProgress = false
errMsg := err.Error()
if len(errMsg) > 80 {
errMsg = errMsg[:77] + "..."
}
e.statusMsg = fmt.Sprintf("Model error: %s", errMsg)
e.aiMutex.Unlock()
return
}
e.statusMsg = "Processing AI request... (Press Esc to cancel)"
go func() {
prompt := e.llmPrompt
done := make(chan bool, 1)
var response string
var err error
go func() {
response, err = e.llmClient.GenerateWithCancel(prompt, e.aiCancel)
done <- true
}()
select {
case <-done:
case <-time.After(90 * time.Second):
select {
case e.aiCancel <- true:
default:
}
err = fmt.Errorf("request timeout (90s)")
}
e.aiMutex.Lock()
defer e.aiMutex.Unlock()
e.aiInProgress = false
if err != nil {
errMsg := err.Error()
if errMsg == "cancelled" {
e.statusMsg = "AI request cancelled by user"
} else if strings.Contains(errMsg, "cannot connect") {
e.statusMsg = "Cannot connect to Ollama. Run: ollama serve"
} else if strings.Contains(errMsg, "model") && strings.Contains(errMsg, "not found") {
modelName := e.llmClient.model
e.statusMsg = fmt.Sprintf("Model '%s' not found. Run: ollama pull %s", modelName, modelName)
} else if strings.Contains(errMsg, "timeout") {
e.statusMsg = "AI request timeout. Try a simpler prompt or check Ollama"
} else {
if len(errMsg) > 70 {
errMsg = errMsg[:67] + "..."
}
e.statusMsg = fmt.Sprintf("AI error: %s", errMsg)
}
e.llmResponse = ""
return
}
if response == "" {
e.statusMsg = "AI returned empty response. Try rephrasing your prompt"
e.llmResponse = ""
return
}
e.llmResponse = response
preview := response
preview = strings.ReplaceAll(preview, "\n", " ")
preview = strings.ReplaceAll(preview, "\r", "")
preview = strings.TrimSpace(preview)
if len(preview) > 60 {
preview = preview[:57] + "..."
}
if preview == "" {
preview = "[Response ready]"
}
responseLines := strings.Count(response, "\n") + 1
e.statusMsg = fmt.Sprintf("AI response ready (%d lines). Preview: %s | Press Ctrl+K to insert", responseLines, preview)
}()
}
func (e *Editor) ensureCursorValid() {
if e.cursor.Row < 0 {
e.cursor.Row = 0
}
maxRow := e.buffer.LineCount() - 1
if maxRow < 0 {
maxRow = 0
}
if e.cursor.Row > maxRow {
e.cursor.Row = maxRow
}
lineLen := len(e.buffer.GetLine(e.cursor.Row))
if e.cursor.Col > lineLen {
e.cursor.Col = lineLen
}
if e.cursor.Col < 0 {
e.cursor.Col = 0
}
}
func (e *Editor) render() {
if e.screen == nil {
return
}
e.screen.Clear()
e.ensureCursorValid()
if e.cursor.Row < e.offsetRow {
e.offsetRow = e.cursor.Row
}
if e.cursor.Row >= e.offsetRow+e.height && e.height > 0 {
e.offsetRow = e.cursor.Row - e.height + 1
}
if e.offsetRow < 0 {
e.offsetRow = 0
}
for y := 0; y < e.height; y++ {
row := y + e.offsetRow
if row >= e.buffer.LineCount() {
e.drawString(0, y, "~", tcell.StyleDefault.Foreground(tcell.ColorBlue))
continue
}
line := e.buffer.GetLine(row)
e.drawString(0, y, line, tcell.StyleDefault)
}
e.renderStatusBar()
screenY := e.cursor.Row - e.offsetRow
screenX := e.cursor.Col
if screenX >= e.width {
screenX = e.width - 1
}
if screenX < 0 {
screenX = 0
}
if screenY >= e.height {
screenY = e.height - 1
}
if screenY < 0 {
screenY = 0
}
e.screen.ShowCursor(screenX, screenY)
e.screen.Show()
}
func (e *Editor) renderStatusBar() {
y := e.height
if y < 0 {
return
}
style := tcell.StyleDefault.
Background(tcell.ColorWhite).
Foreground(tcell.ColorBlack)
e.aiMutex.Lock()
inProgress := e.aiInProgress
e.aiMutex.Unlock()
if inProgress {
style = tcell.StyleDefault.
Background(tcell.ColorYellow).
Foreground(tcell.ColorBlack)
}
for x := 0; x < e.width; x++ {
if y >= 0 && y < e.height+2 {
e.screen.SetContent(x, y, ' ', nil, style)
}
if y+1 >= 0 && y+1 < e.height+2 {
e.screen.SetContent(x, y+1, ' ', nil, style)
}
}
statusMsg := e.statusMsg
if len(statusMsg) > e.width && e.width > 3 {
statusMsg = statusMsg[:e.width-3] + "..."
}
e.drawString(0, y, statusMsg, style)
modMark := ""
if e.buffer.modified {
modMark = " [+]"
}
filename := e.buffer.filename
if filename == "" {
filename = "[No Name]"
} else {
filename = filepath.Base(filename)
if len(filename) > 20 {
filename = filename[:17] + "..."
}
}
info := fmt.Sprintf("%s%s | Ln %d/%d | Col %d",
filename, modMark, e.cursor.Row+1, e.buffer.LineCount(), e.cursor.Col+1)
if len(info) > e.width && e.width > 0 {
info = info[:e.width]
}
if y+1 >= 0 && y+1 < e.height+2 {
e.drawString(0, y+1, info, style)
}
}
func (e *Editor) drawString(x, y int, s string, style tcell.Style) {
if e.screen == nil || y < 0 || y >= e.height+2 || x < 0 {
return
}
// advance one screen column per rune, not per byte, so multibyte
// characters land in the right position
col := x
for _, r := range s {
if col >= e.width {
break
}
e.screen.SetContent(col, y, r, nil, style)
col++
}
}
func main() {
ollamaURL := flag.String("ollama", "http://localhost:11434", "Ollama API URL")
model := flag.String("model", "llama2", "LLM model to use")
showVersion := flag.Bool("version", false, "Show version")
showHelp := flag.Bool("help", false, "Show help")
flag.Parse()
if *showVersion {
fmt.Printf("GoEdit v%s\n", version)
os.Exit(0)
}
if *showHelp {
printHelp()
os.Exit(0)
}
var filename string
if flag.NArg() > 0 {
filename = flag.Arg(0)
absPath, err := filepath.Abs(filename)
if err == nil {
filename = absPath
}
}
ed, err := NewEditor(filename, *ollamaURL, *model)
if err != nil {
log.Fatalf("Failed to create editor: %v", err)
}
if err := ed.Run(); err != nil {
log.Fatalf("Editor error: %v", err)
}
}
func printHelp() {
fmt.Println("GoEdit - A minimal yet powerful terminal text editor")
fmt.Println("\nUsage:")
fmt.Println(" goedit [options] [filename]")
fmt.Println("\nOptions:")
fmt.Println(" -ollama string Ollama API URL (default: http://localhost:11434)")
fmt.Println(" -model string LLM model to use (default: llama2)")
fmt.Println(" -version Show version")
fmt.Println(" -help Show this help")
fmt.Println("\nKeyboard Shortcuts:")
fmt.Println(" Ctrl+S Save file")
fmt.Println(" Ctrl+Q Quit")
fmt.Println(" Ctrl+F Find text")
fmt.Println(" Ctrl+G Go to line")
fmt.Println(" Ctrl+A Select all")
fmt.Println(" Ctrl+C Copy line")
fmt.Println(" Ctrl+X Cut line")
fmt.Println(" Ctrl+V Paste")
fmt.Println(" Ctrl+Z Undo")
fmt.Println(" Ctrl+Y Redo")
fmt.Println(" Ctrl+L Ask LLM (AI Assistant)")
fmt.Println(" Ctrl+K Insert LLM response at cursor")
fmt.Println(" Esc Cancel AI request")
fmt.Println(" Tab Insert 4 spaces")
fmt.Println(" Home/End Line start/end")
fmt.Println(" Page Up/Down Scroll page")
}
Build Instructions
Create a new directory and save these four files:
cursor.go
buffer.go
ollama.go
main.go
Then run:
go mod init goedit
go get github.com/gdamore/tcell/v2
go build -o goedit
To run:
./goedit myfile.txt
Or with a specific model:
./goedit -model codellama mycode.py
That's the complete, working source code for GoEdit! 🎉