
6 posts tagged with "LSP"


· 3 min read
Jeffrey Chupp

If you're just getting started with the LSP, you might wonder which language to build your Language Server (LS) in. This article will help you pick the right language. You can choose anything (seriously, I built a toy Language Server in Bash). There's no universally correct answer, but there is a correct one for you.


Consideration 1: Audience

Your audience is the most important consideration. If you're writing a language server for a new Python web framework (Language Servers aren't just for languages, people), then implementing the language server in Java might raise a few eyebrows.

The audience for a Python framework is less likely to contribute to a language server written in a language they're less familiar with. There's nothing wrong with Java (IMHO), but the biases associated with languages could hurt adoption.

If your language server is for a specific language or tooling tied to a specific language, you should probably use the same language to build the server.

Consideration 2: Personal Preference

If you're building your language server as a hobby, the first user is yourself. Optimize for your own enjoyment.

You’re less likely to have fun building if you pick a popular language with unfamiliar (or poor) ergonomics. If you're not having fun, you're less likely to get very far with the project, and your language server won't ever matter to anyone else anyway.

This doesn't mean you should limit yourself to languages you're already an expert in -- building a language server is a great way to learn how a new language handles

  • stdin/stdout and other communication channels
  • concurrency and parallelism
  • error handling
  • testing, debugging, profiling
  • etc.

Consider picking a language you'll enjoy using.

Non-consideration: Performance

Unless you're building a language server to replace one that is demonstrably slow, you should probably avoid optimizing your decision for performance. Measure first before you start hand-coding assembly code.

You're a developer; I get it. You want to think performance matters. But if computationally intensive behaviors are required to calculate diagnostics/code actions/etc., you can always shell out to something tuned for performance and still keep the Language Server itself implemented at a higher level.
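As a sketch of what that shelling-out might look like in a TypeScript server (the fast-diagnostics binary and its --json flag are hypothetical stand-ins, not a real tool):

import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Delegate the heavy lifting to a native binary ("fast-diagnostics" is a
// hypothetical placeholder) while the server itself stays high-level.
async function computeDiagnostics(filePath: string): Promise<unknown> {
  const { stdout } = await run("fast-diagnostics", ["--json", filePath]);
  return JSON.parse(stdout);
}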

Don't worry about performance. It isn't important at first, and you have options later.

Non-consideration: Ecosystem and Libraries

Many languages already have libraries that provide abstractions to help you write language servers. These can jump-start your development but aren't going to make or break your project.

You have all the building blocks you need if you can read and write over stdin/stdout and encode and decode JSON.

Learn more and build alongside me in my LSP From Scratch series.

You can build a language server without third-party libraries.
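To make that concrete, here's a minimal sketch (TypeScript, Node built-ins only) of the message framing a language server needs: a Content-Length header, a blank line, and the JSON body.

// Frame and send a JSON-RPC message over stdout -- the only plumbing the
// LSP requires beyond JSON itself.
function send(message: object): void {
  const body = JSON.stringify(message);
  const length = Buffer.byteLength(body, "utf8");
  process.stdout.write(`Content-Length: ${length}\r\n\r\n${body}`);
}

// e.g., replying to an initialize request with id 1:
send({ jsonrpc: "2.0", id: 1, result: { capabilities: {} } });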

What If There's No Clear Winner?

If the considerations above haven't helped you pick a clear winner, choose TypeScript (or, if you must, JavaScript).

The first-party libraries (e.g., vscode-languageserver-node) are written in TypeScript, and the community and ecosystem are excellent. A discussion on the vscode-languageserver-node project often leads to an update to the spec itself.

As a bonus, servers written in TypeScript (and JavaScript) can be bundled inside a VS Code extension and be available in the VS Code Marketplace as a single download. I've put up a Minimum Viable VS Code Language Server Extension repo where you can see how this all fits together.

All things being equal, choose TypeScript.

· 16 min read
Jeff Dwyer

Lograge is one of the all-time great Rails gems and is one of those 3-4 gems like devise, timecop or rails_admin that I typically install on every new Rails project.

So, lograge is great, but are there better alternatives in 2024? I think so. We'll examine two options: an excellent free choice called Rails Semantic Logger and one I've built for the Prefab platform.

What Problem Do These Gems Solve?

First, let's take a quick look at what problem we're trying to solve. Here's the logging you get from Rails & Lograge out of the box at info level.

Started GET "/posts" for 127.0.0.1 at 2023-09-06 10:08:20 -0400
Processing by PostsController#index as HTML
Rendered posts/index.html.erb within layouts/application (Duration: 0.8ms | Allocations: 813)
Rendered layout layouts/application.html.erb (Duration: 19.3ms | Allocations: 34468)
Completed 200 OK in 30ms (Views: 21.1ms | ActiveRecord: 0.5ms | Allocations: 45960)

There are a number of annoyances with the Rails logging defaults:

  1. 5 lines of logging for a single request.
  2. "Completed 200 OK" doesn't tell us which request it was that completed.
  3. No timestamps on the log lines after the first one.

In contrast, lograge is clear, concise & grep-able.
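Here's a representative lograge line for the same request (exact numbers will vary), something like:

method=GET path=/posts format=html controller=PostsController action=index status=200 duration=30.58 view=21.10 db=0.52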

(You can, of course, output in JSON as well; I'm not showing that because it's not as pleasant to look at.)

So this is the problem that lograge has been solving all these years, and it's a great fix.

More Than Just a Single Line

But, a single-line request summary is not the only interesting aspect of logging. What about debug logging? What about tagged and struct logging? Can any of these libraries help us fix our problems faster?

To exercise the other aspects of logging, here's the sample code we will run. It logs at all four levels and uses tagged and structlog to show how to add context to your logs.

class PostsController < ApplicationController
  def index
    @posts = Post.all
    if can_use_struct_log?
      logger.debug "🍏🍏🍏detailed information", posts: @posts.size # struct logging
      logger.info "🍋🍋🍋informational logging", posts: @posts.size
    else
      logger.debug "🍏🍏🍏detailed information #{@posts.size}" # old-school
      logger.info "🍋🍋🍋informational logging #{@posts.size}"
    end
    @posts.first.process_post
  end
end

class Post < ApplicationRecord
  def process_post
    logger.tagged "process post" do
      logger.tagged "post-#{id}" do # nested tagged logging
        logger.debug "🍏🍏🍏details of the post"
        logger.info "🍋🍋🍋 info about the post"
      end
    end
  end
end

Comparing At INFO Level

Let's compare the output at the info level for each of the gems. It's a bit funky to view on the web rather than in a wide terminal, but we'll do our best. Each gem provides a way to supply a custom formatter to tweak things like dates and times, but this is what you get out of the box. Also, you'll probably want to use JSON output in production; all the gems support that (see the JSON output comparison in the appendix), and it's pretty straightforward.

Started GET "/posts" for 127.0.0.1 at 2023-09-06 10:08:20 -0400
Processing by PostsController#index as HTML
🍋🍋🍋informational logging 1
[process post] [post-2] 🍋🍋🍋 info about the post
Rendered posts/index.html.erb within layouts/application (Duration: 0.8ms | Allocations: 813)
Rendered layout layouts/application.html.erb (Duration: 19.3ms | Allocations: 34468)
Completed 200 OK in 30ms (Views: 21.1ms | ActiveRecord: 0.5ms | Allocations: 45960)

We see that Rails, by default, has no support for structured logging, though it does support tagged logging. We're outputting logging 1 instead of logging posts=1 and in JSON we won't have a nice {"posts": 1} to index.

It's also not clear what class or file informational logging 1 came from. This is annoying because it forces us to put extra detail into the log message itself just to make it easy to locate and grep.

We do see helpful logging about what templates were rendered and how long they took.

Comparing At DEBUG Level

For completeness, let's compare the output at the debug level for each of the gems.

Started GET "/posts" for 127.0.0.1 at 2023-09-06 10:01:55 -0400
Processing by PostsController#index as HTML
Post Count (0.0ms) SELECT COUNT(*) FROM "posts"
↳ app/controllers/posts_controller.rb:8:in `index'
🍏🍏🍏detailed information 1
CACHE Post Count (0.0ms) SELECT COUNT(*) FROM "posts"
↳ app/controllers/posts_controller.rb:9:in `index'
🍋🍋🍋informational logging 1
CACHE Post Count (0.0ms) SELECT COUNT(*) FROM "posts"
↳ app/controllers/posts_controller.rb:10:in `index'
CACHE Post Count (0.0ms) SELECT COUNT(*) FROM "posts"
↳ app/controllers/posts_controller.rb:11:in `index'
Post Exists? (0.1ms) SELECT 1 AS one FROM "posts" LIMIT ? [["LIMIT", 1]]
↳ app/controllers/posts_controller.rb:12:in `index'
Post Load (0.0ms) SELECT "posts".* FROM "posts" ORDER BY "posts"."id" ASC LIMIT ? [["LIMIT", 1]]
↳ app/controllers/posts_controller.rb:13:in `index'
[process post] [post-2] 🍏🍏🍏details of the post
[process post] [post-2] 🍋🍋🍋 info about the post
Rendering layout layouts/application.html.erb
Rendering posts/index.html.erb within layouts/application
Post Load (0.1ms) SELECT "posts".* FROM "posts"
↳ app/views/posts/index.html.erb:4
Rendered posts/index.html.erb within layouts/application (Duration: 0.6ms | Allocations: 736)
Rendered layout layouts/application.html.erb (Duration: 7.9ms | Allocations: 14135)
Completed 200 OK in 15ms (Views: 8.4ms | ActiveRecord: 0.6ms | Allocations: 20962)

Rails debug logging for local dev gets pretty chatty. In addition to the templates and layouts we saw at the info level, we can now see the executed SQL.

We also now get class and line number output for some of the logging, though not all of it. Our custom logging does not include any information about where it came from.

It's also a lot of output, and it's not particularly easy to grep for the important bits. As your app grows, this logging can start to get a bit overwhelming.

It may go without saying, but turning this on in production would be a very bad idea.

JSON Formatting

The JSON formatting of each of these libraries is pretty much what you would expect, but for completeness:

{
  "method": "GET",
  "path": "/posts",
  "format": "html",
  "controller": "PostsController",
  "action": "index",
  "status": 200,
  "allocations": 45961,
  "duration": 37.91,
  "view": 29.63,
  "db": 0.42
}

Summary

The state of Rails logging gems in 2024 is good, and you've got great options. If you've been defaulting to just standard Rails logging or lograge for years, it may be time to take a look at what else is out there. Being able to quickly trace each log line back to its source is valuable, and structured logging feels much better than string interpolation and can be very powerful in conjunction with your log aggregation tool.

Rails Default Logger

Good:

  • Rails gives us a lot of great information out of the box.

Bad:

  • 5 lines of logging for a single request is too much in deployed environments.
  • "Completed 200 OK" doesn't tell us what request completed.
  • No timestamps on the log lines after the first one.
  • Difficult to fine-tune debug logging. Very "all-or-nothing".

Lograge

Lograge has been the default choice for years. If you don't use much logging and just want the basic request log, it's a good choice.

For more information, check out the lograge repo on GitHub.

Good

  • Better 1-line Req Log
  • Supports tagged logging

Bad

  • No support for struct log
  • Unclear display of level by default formatter
  • Difficult to fine-tune debug logging
  • Doesn’t format your other logs, like Rails.logger.info "foo"
  • No file names/origin of logging

rails_semantic_logger

Rails Semantic Logger is an impressive piece of software that has been around for a long time. It's got a lot of features and is very configurable. It has everything you could want in a logger, save the ability to update it dynamically.

For more information, check out the docs for rails_semantic_logger

Good:

  • Adds Class Name
  • Optionally add file and line number
  • Better 1-line Req Log
  • Very configurable
  • Structlog & Tagged Logging
  • logger.measure_trace("Low level trace information such as data sent over a socket") do ... end is cool
  • logger.error("Oops external call failed", exception) is cool

Bad:

  • Can't change log levels on the fly

prefab-cloud-ruby

Prefab is a SaaS that provides dynamic configuration and uses that to support Feature Flags and Dynamic Log Levels. Prefab provides the same log cleanup and core features as rails_semantic_logger, with the additional benefit of letting you quickly debug production by temporarily enabling debug logging for a specific user or job.

For more information, check out: Dynamic Log Levels

Good

  • Consistent display of class and method
  • Change log level instantly
  • Turn debugging logging on for just a single user or team
  • Better 1-line Req Log
  • Structlog & Tagged Logging

Bad

  • Not as comprehensive as rails_semantic_logger
  • Fewer integrations than rails_semantic_logger or lograge
  • Part of a paid product

Thanks for checking this out. If you have any questions, please reach out.

· 3 min read
Jeff Dwyer

We're super excited about our new Editor Tools! When Jeffrey first started hacking around with the LSP, we each had one of those whoa moments where you feel like you're seeing things in a whole new light.

I love being in my editor, but everything about Feature Flags has always required me to leave. We spent the past month asking "what would it be like to do it all from the editor?" and I love where we ended up.

Feature Flag Autocompletion

First up, autocomplete for feature flags. A mistyped feature flag is a terrible thing and pretty annoying to debug, since missing flags default to false. Let your editor help with that: autocomplete for flag and config names, and auto-creation of simple flags if the flag doesn't exist yet.

Feature Flag autocomplete

Evaluations on Hover

Writing the feature flag is often the easy part. The real question comes later. Is this thing on? Can I delete it? What value is set in production?

We wondered how excellent it would be to bring our evaluation data right into the editor and our answer is... very excellent! No more leaving your editor to answer a simple question. A simple hover, and you’ve got all the info you need.

Feature Flag in Editor Evaluation Summaries

In this picture, you can see that the flag is true in staging and test, false in dev, and at a 66% rollout to true in production. And it looks like it's working, too: data over the past 24 hours shows that 67% of users are seeing the feature.

Personal Overrides

Grimace if you've ever committed if true || flags.enabled? "myflag" to version control. It's easy to do. You want to see what happens when a flag is enabled, but setting the flag is annoying or will change it for everyone, so you cheat and put a raw true in front of your flag and then forget to take it out.

What would be a better way? Well, could I just click on the flag and set it to true? Of course, it should only be true for me on my personal machine so I don't screw anyone else up. That sounds nice, right?

Feature Flag in Editor Evaluation Summaries

This personal overrides feature is tied to your developer account, so no more global toggles causing chaos. Set your value, test your feature, all without leaving your coding groove.

Learning More About LSPs

We've been learning a ton about Language Server Protocols (LSPs) lately. Jeffrey's been on a roll, sharing his insights on creating LSPs with tutorials like writing an lsp in bash and lsp zero to completion.

We're not just building these tools for you; we're building them for us, too. We're genuinely jazzed about these new features and the difference they're making in our coding lives.

Give these new tools a spin and let us know what you think. We'd love to hear about what else you think would make the in-editor experience for feature flags brilliant. Don't use VSCode? Don't worry, we're working on the other editors too. Go here to get notified about new editor releases.

· 7 min read
Jeffrey Chupp

Implementing a language server is so easy that we're going to do it in Bash.

You really shouldn't write your language server in Bash, but we'll show that you could.

The minimum viable language server needs to

  1. receive JSON-RPC requests from the client
  2. respond with JSON-RPC to client requests

Like a good bash program, we'll talk over stdin and stdout.

Mad scientist holding a bash language server

JSON-RPC

JSON-RPC is simply a protocol for communication over JSON.

  • A request message includes an id, params, and the method to be invoked.
  • A response message includes the id from the request and either a result or error payload.
  • The LSP adds the additional requirement of a header specifying the Content-Length of the message.

An example request might look like

Content-Length: 226\r\n
\r\n
{"jsonrpc":"2.0","method":"initialize","id":1,"params":{"trace":"off","processId":2729,"capabilities":[],"workspaceFolders":null,"rootUri":null,"rootPath":null,"clientInfo":{"version":"0.10.0-dev+Homebrew","name":"Neovim"}}}

An example response might look like

Content-Length: 114\r\n
\r\n
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "capabilities": {
      "completionProvider": {}
    }
  }
}

For our language server in bash, we'll write the following function:

respond() {
  local body="$1"
  local length=${#body}
  local response="Content-Length: $length\r\n\r\n$body"

  echo -e "$response"
}

This will take a JSON string as an argument and echo it out. The -e ensures our line breaks work as intended.

Listening for messages and parsing them

Our language server will listen for messages on stdin and write messages on stdout.

Let's name the bash script /tmp/bash-ls and chmod +x it.

I'll connect it to my editor, Neovim, using

vim.lsp.start {
  name = "Bash LS",
  cmd = { "/tmp/bash-ls" },
  capabilities = vim.lsp.protocol.make_client_capabilities(),
}

Now, we'll work on our read/print loop.

We'll start with the Bash classic

while IFS= read -r line; do

This gives us a value for $line that looks like Content-Length: 3386

The content length will vary based on the capabilities of your editor, but the gist here is that we need to read 3386 characters to get the entire JSON payload.

Let's extract the content length number

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

We need to add 2 to that number to account for the \r\n that separates the headers from the message content. So we'll length=$((length + 2))

Now we're ready to read the JSON payload:

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

  # account for the \r\n separating the headers from the content
  length=$((length + 2))

  # Read the message based on the Content-Length value
  json_payload=$(head -c "$length")

Remember that JSON-RPC requires us to include the id of the request message in our response. We could write some convoluted JSON parsing in bash to extract the id, but we'll lean on jq instead.

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

  # account for the \r\n separating the headers from the content
  length=$((length + 2))

  # Read the message based on the Content-Length value
  json_payload=$(head -c "$length")

  # We need -E here because jq fails on newline chars -- https://github.com/jqlang/jq/issues/1049
  id=$(echo -E "$json_payload" | jq -r '.id')

Now, we have everything we need to read and reply to our first message.

The initialize method

The first message sent by the client is the initialize method. It describes the client's capabilities to the server.

You can think of this message as saying, "Hey, language server, here are all the features I support!"

The server replies with, "Oh, hi, client. Given the things you support, here are the things I know how to handle."

Well, that's how it should work, anyway. For our MVP here, we'll provide a canned response with an empty capabilities section.

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

  # account for the \r\n separating the headers from the content
  length=$((length + 2))

  # Read the message based on the Content-Length value
  json_payload=$(head -c "$length")

  # We need -E here because jq fails on newline chars -- https://github.com/jqlang/jq/issues/1049
  id=$(echo -E "$json_payload" | jq -r '.id')
  method=$(echo -E "$json_payload" | jq -r '.method')

  case "$method" in
    'initialize')
      respond '{
        "jsonrpc": "2.0",
        "id": '"$id"',
        "result": {
          "capabilities": {}
        }
      }'
      ;;

    *) ;;
  esac
done

We pluck out the request's method and use a case statement to reply to the correct method. If we don't support the method, we don't respond to the client.

If we didn't use a case statement here and always replied with our canned message, we'd make it past initialization, but then the client would get confused when we respond to (e.g.) its request for text completions with an initialize result.

That's all you need for a minimum viable language server built in bash. It doesn't do anything besides the initialization handshake, but it works.

Adding completion

A language server that doesn't do anything is no fun, so let's teach it how to respond to textDocument/completion to offer text completions.

First, we'll need to modify our capabilities in our initialize response to indicate that we support completion:

          "result": {
"capabilities": {
"completionProvider": {}
}
}

We'll start with hardcoded results to verify things work. This is as easy as adding a new condition to our case statement.

  'textDocument/completion')
    respond '{
      "jsonrpc": "2.0",
      "id": '"$id"',
      "result": {
        "isIncomplete": false,
        "items": [
          { "label": "a sample completion item" },
          { "label": "another sample completion item" }
        ]
      }
    }'
    ;;
Hardcoded completions

That works as we hoped. Let's jazz it up a little by completing the first 1000 words from the dict file on macOS (your path may differ).

Here's the final version of the script:

#!/bin/bash

respond() {
  local body="$1"
  local length=${#body}
  local response="Content-Length: $length\r\n\r\n$body"

  echo "$response" >>/tmp/out.log

  echo -e "$response"
}

completions=$(head </usr/share/dict/words -n 1000 | jq --raw-input --slurp 'split("\n")[:-1] | map({ label: . })')

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

  # account for the \r\n separating the headers from the content
  length=$((length + 2))

  # Read the message based on the Content-Length value
  json_payload=$(head -c "$length")

  # We need -E here because jq fails on newline chars -- https://github.com/jqlang/jq/issues/1049
  id=$(echo -E "$json_payload" | jq -r '.id')
  method=$(echo -E "$json_payload" | jq -r '.method')

  case "$method" in
    'initialize')
      respond '{
        "jsonrpc": "2.0",
        "id": '"$id"',
        "result": {
          "capabilities": {
            "completionProvider": {}
          }
        }
      }'
      ;;

    'textDocument/completion')
      respond '{
        "jsonrpc": "2.0",
        "id": '"$id"',
        "result": {
          "isIncomplete": false,
          "items": '"$completions"'
        }
      }'
      ;;

    *) ;;
  esac
done
Dictionary completions

Lovely.

Closing

In 56 lines of bash, we've implemented a usable (if boring) language server.

I wouldn't advocate writing a serious language server in bash. Hopefully, this has illustrated how easy it is to get started with language servers and has made the LSP and JSON-RPC a little less magical.

What language would I recommend for writing a language server? That's probably a whole article in itself, but the short answer is

  1. All things being equal, choose TypeScript. The first-party libraries (e.g., vscode-languageserver-node) are written in TypeScript, and the community and ecosystem are excellent.
  2. If you don't want to use TypeScript, use whatever language you're most productive in. There's probably already a library for writing a language server in your preferred language, but if there isn't, you now know how easy it is to write it yourself.

If you'd like to be notified when I publish more LSP content, sign up for my newsletter.

· 7 min read
Jeffrey Chupp

In Hands-On LSP Tutorial: Building a Custom Auto-Complete, we built a language server to provide our text editors with GitHub Issue and PR completion.

Final code from that post
import fetch from "node-fetch";
import {
  createConnection,
  InitializeResult,
  TextDocumentSyncKind,
  TextDocuments,
} from "vscode-languageserver/node";
import { TextDocument } from "vscode-languageserver-textdocument";

const connection = createConnection();

const documents: TextDocuments<TextDocument> = new TextDocuments(TextDocument);

type Edge = {
  node: {
    title: string;
    number: number;
    url: string;
    body: string;
  };
};

type Response = {
  data: { search: { edges: Edge[] } };
};

// Replace with your real GitHub access token
const accessToken = "YOUR_GITHUB_ACCESS_TOKEN";

const searchScope = "repo:facebook/docusaurus";

const search = async (input: string): Promise<Response> => {
  const query = `
  {
    search(query: "${searchScope} ${input}", type: ISSUE, first: 10) {
      edges {
        node {
          ... on PullRequest {
            title
            number
            body
            url
          }
          ... on Issue {
            title
            number
            body
            url
          }
        }
      }
    }
  }
  `;

  const response = await fetch("https://api.github.com/graphql", {
    method: "POST",
    body: JSON.stringify({ query }),
    headers: {
      Authorization: `Bearer ${accessToken}`,
    },
  });

  if (!response.ok) {
    const error = {
      status: response.status,
      statusText: response.statusText,
      body: await response.json(),
    };

    connection.console.error(
      `Error fetching from GitHub: ${JSON.stringify(error)}`,
    );
  }

  return await response.json();
};

connection.onInitialize(() => {
  const result: InitializeResult = {
    capabilities: {
      textDocumentSync: TextDocumentSyncKind.Incremental,
      // Tell the client that the server supports code completion
      completionProvider: {
        // We want to listen for the "#" character to trigger our completions
        triggerCharacters: ["#"],
      },
    },
  };

  return result;
});

connection.onCompletion(async (params) => {
  const doc = documents.get(params.textDocument.uri);

  if (!doc) {
    return null;
  }

  const line = doc.getText().split("\n")[params.position.line];

  // return early if we don't see our trigger character
  if (!line.includes("#")) {
    return null;
  }

  const input = line.slice(line.lastIndexOf("#") + 1);

  const json = await search(input);

  const items = json.data.search.edges.map((edge) => {
    return {
      label: `#${edge.node.number}: ${edge.node.title}`,
      detail: `${edge.node.title}\n\n${edge.node.body}`,
      textEdit: {
        range: {
          start: {
            line: params.position.line,
            character: line.lastIndexOf("#"),
          },
          end: {
            line: params.position.line,
            character: params.position.character,
          },
        },
        newText: `${edge.node.url}`,
      },
    };
  });

  return {
    isIncomplete: true,
    items,
  };
});

// Make the text document manager listen on the connection
// for open, change and close text document events
documents.listen(connection);

// Listen on the connection
connection.listen();

It works, but searchScope (e.g., repo:facebook/docusaurus) and our GitHub access token are hard-coded. That's fine for local development, but to get this language server ready to share, we should make those configurable.

Approach

  1. We'll update server.ts to use editor-provided config instead of hard-coded values.
  2. We'll update our editor setup to provide config to the server.

Updating our server to use config

First, we remove previous assignments of accessToken and searchScope.

- // Replace with your real GitHub access token
- const accessToken = "YOUR_GITHUB_ACCESS_TOKEN";
-
- const searchScope = "repo:facebook/docusaurus";

Instead, we specify the shape of our configuration, a default number of max results, and default our config to an empty object.

interface Config {
  githubAccessToken?: string;
  searchScope?: string;
  maxResults?: number;
}

const DEFAULT_MAX_RESULTS = 10;

let config: Config = {};

Next, we change our search query to use the searchScope and maxResults from our settings.

- const search = async (input: string): Promise<Response> => {
-   const query = `
-   {
-     search(query: "${searchScope} ${input}", type: ISSUE, first: 10) {
const search = async (input: string): Promise<Response> => {
  const query = `
  {
    search(query: "${config.searchScope} ${input}", type: ISSUE, first: ${
      config.maxResults ?? DEFAULT_MAX_RESULTS
    }) {

We use the token from our config.

- Authorization: `Bearer ${accessToken}`,
Authorization: `Bearer ${config.githubAccessToken}`,

We'll add a getConfig function to get the current config from the editor. We look up our configuration by a prefix we specify (here, myLSP).

const getConfig = async () => {
  config = await connection.workspace.getConfiguration("myLSP");
};

Finally, we call getConfig in our onCompletion function and we return early with an error if we're missing any required fields.

connection.onCompletion(async (params) => {
  await getConfig();

  if (!config.githubAccessToken || !config.searchScope) {
    connection.console.error("Both githubAccessToken and searchScope required");
    return;
  }
  // ...

That's all we needed to change for our server.

Updated full server.ts code
import fetch from "node-fetch";
import {
createConnection,
InitializeResult,
TextDocumentSyncKind,
TextDocuments,
} from "vscode-languageserver/node";
import { TextDocument } from "vscode-languageserver-textdocument";

const connection = createConnection();

const documents: TextDocuments<TextDocument> = new TextDocuments(TextDocument);

type Edge = {
node: {
title: string;
number: number;
url: string;
body: string;
};
};

type Response = {
data: { search: { edges: Edge[] } };
};

interface Config {
githubAccessToken?: string;
searchScope?: string;
maxResults?: number;
}

const DEFAULT_MAX_RESULTS = 10;

let config: Config = {};

const search = async (input: string): Promise<Response> => {
const query = `
{
search(query: "${config.searchScope} ${input}", type: ISSUE, first: ${
config.maxResults ?? DEFAULT_MAX_RESULTS
}) {
edges {
node {
... on PullRequest {
title
number
body
url
}
... on Issue {
title
number
body
url
}
}
}
}
}
`;

const response = await fetch("https://api.github.com/graphql", {
method: "POST",
body: JSON.stringify({ query }),
headers: {
Authorization: `Bearer ${config.githubAccessToken}`,
},
});

if (!response.ok) {
const error = {
status: response.status,
statusText: response.statusText,
body: await response.json(),
};

connection.console.error(
`Error fetching from GitHub: ${JSON.stringify(error)}`,
);
}

return await response.json();
};

const getConfig = async () => {
config = await connection.workspace.getConfiguration("myLSP");
};

connection.onInitialize(() => {
const result: InitializeResult = {
capabilities: {
textDocumentSync: TextDocumentSyncKind.Incremental,
// Tell the client that the server supports code completion
completionProvider: {
// We want to listen for the "#" character to trigger our completions
triggerCharacters: ["#"],
},
},
};

return result;
});

connection.onCompletion(async (params) => {
await getConfig();

if (!config.githubAccessToken || !config.searchScope) {
connection.console.error("Both githubAccessToken and searchScope required");
return;
}

const doc = documents.get(params.textDocument.uri);

if (!doc) {
return null;
}

const line = doc.getText().split("\n")[params.position.line];

// return early if we don't see our trigger character
if (!line.includes("#")) {
return null;
}

const input = line.slice(line.lastIndexOf("#") + 1);

const json = await search(input);

const items = json.data.search.edges.map((edge) => {
return {
label: `#${edge.node.number}: ${edge.node.title}`,
detail: `${edge.node.title}\n\n${edge.node.body}`,
textEdit: {
range: {
start: {
line: params.position.line,
character: line.lastIndexOf("#"),
},
end: {
line: params.position.line,
character: params.position.character,
},
},
newText: `${edge.node.url}`,
},
};
});

return {
isIncomplete: true,
items,
};
});

// Make the text document manager listen on the connection
// for open, change and close text document events
documents.listen(connection);

// Listen on the connection
connection.listen();

Providing config from the editor

As in the previous post, I will cover VS Code and Neovim here.

How to configure your server with VS Code

VS Code extensions specify configuration via the root package.json. Replace your contributes block with the following:

"contributes": {
"configuration": {
"title": "My Extension Name",
"properties": {
"myLSP.githubAccessToken": {
"type": "string",
"default": "",
"description": "Your GitHub access token"
},
"myLSP.searchScope": {
"type": "string",
"default": "repo:facebook/docusaurus",
"description": "Search scope for GitHub PRs/Issues"
},
"myLSP.maxResults": {
"type": "number",
"default": 10,
"description": "Max number of completion suggestions"
}
}
}
},

Note that the myLSP prefix on the properties matches the string we use in our getConfig function.

Once you've repackaged and reinstalled your extension, you can modify the settings by pressing ctrl+shift+p (cmd+shift+p on Mac) and typing "Open Settings UI"

Once in the Settings UI, search for myLSP to modify your settings.

vs-code example

Changes take effect immediately because we pull the settings in our server as needed.
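If you'd rather not fetch the config on every completion request, one alternative is to cache it and refresh the cache when the client pushes a settings change. A sketch:

// Alternative sketch: refresh the cached config when the client reports a
// settings change, instead of calling getConfig on every completion request.
connection.onDidChangeConfiguration(async () => {
  config = await connection.workspace.getConfiguration("myLSP");
});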

How to configure your server with Neovim

For Neovim, we modify our vim.lsp.start invocation to add a settings table to our configuration.

vim.lsp.start {
  name = "test-ls",
  --- ...
  settings = {
    myLSP = {
      githubAccessToken = os.getenv("GITHUB_ACCESS_TOKEN"),
      searchScope = "owner:github",
      maxResults = 15,
    }
  },
}

Since we only provide these settings in our vim.lsp.start, you'll need to restart your editor for changes to take effect.

What's next?

Next, we will look at distributing our language server to make it trivial for our coworkers and friends to install and use in their editors.

If you'd like to be notified when I publish more LSP content, sign up for my newsletter.

· 10 min read
Jeffrey Chupp

What we'll build in this tutorial

Today, we'll build out completion for linking to GitHub issues and PRs.

Developers can use this completion to reference issues and PRs from their code comments and commit messages.

What is a Language Server?

Language servers let you meet developers where they are: their editor. They give you the power of in-editor tooling without rebuilding your tools for each editor you want to support. They can implement features like completion, go-to-definition, code actions, and more.

The Language Server Protocol (LSP) specifies how a language server should interface with an editor.

Approach

  1. We'll build a language server prototype with hard-coded completion choices.
  2. We'll wire this to our editor and test it out.
  3. We'll replace the hard-coded choices with dynamic content from GitHub's API.

Prototyping our Language Server

We'll build this demo in TypeScript.

We install a few dependencies to handle the language server implementation details. We'll also install node-fetch to help us make API requests.

npm install vscode-languageserver vscode-languageserver-textdocument vscode-languageclient node-fetch@2
npm install --save-dev @types/vscode @types/node-fetch

(Don't let the vscode bits confuse you. While Microsoft publishes these packages, your language server will be editor-agnostic.)

Now, we can start writing our code. We'll start with distilled boilerplate code borrowed from the Visual Studio Code Language Server Extension Guide.

import {
  createConnection,
  InitializeResult,
  TextDocumentSyncKind,
  TextDocuments,
} from "vscode-languageserver/node";
import { TextDocument } from "vscode-languageserver-textdocument";

const connection = createConnection();

const documents: TextDocuments<TextDocument> = new TextDocuments(TextDocument);

connection.onInitialize(() => {
  const result: InitializeResult = {
    capabilities: {
      textDocumentSync: TextDocumentSyncKind.Incremental,
      // Tell the client that the server supports code completion
      completionProvider: {
        // We want to listen for the "#" character to trigger our completions
        triggerCharacters: ["#"],
      },
    },
  };

  return result;
});

// Make the text document manager listen on the connection
// for open, change and close text document events
documents.listen(connection);

// Listen on the connection
connection.listen();

After our imports, we set up a connection and some documents. The connection handles the communication between the editor (the client) and our code (the server). documents is a store of open files in our editor. documents stays in sync when files open, close, and change.

When the editor starts our server, onInitialize is fired. The editor sends some params describing what it can do (and what it wants our server to do), but we're not using those here, so we'll omit them. We reply with a list of things we can do (primarily, we offer completions triggered by #).
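If you did want to use those params, they describe the client's capabilities. For example, here's a sketch of checking whether the client supports snippet-style completion items (our server doesn't need this):

connection.onInitialize((params) => {
  // Sketch: inspect what the client supports before deciding what to offer.
  const supportsSnippets =
    params.capabilities.textDocument?.completion?.completionItem
      ?.snippetSupport ?? false;
  connection.console.log(`Client supports snippets: ${supportsSnippets}`);
  // ... then return our InitializeResult as above
});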

We tell documents to listen on the connection and the connection to listen for communication from the editor.

Our server could now run hosted by the editor (more on that later), but it doesn't do anything useful yet.

Let's add the initial hard-coded completion after our onInitialize.

connection.onCompletion(() => {
  return [
    {
      label: "Cannot set properties of undefined (setting 'minItems')",
      data: "https://github.com/facebook/docusaurus/issues/9271",
    },
    {
      label: "docs: fix typo in docs-introduction",
      data: "https://github.com/facebook/docusaurus/pull/9267",
    },
    {
      label: "Upgrade notification command does not copy properly",
      data: "https://github.com/facebook/docusaurus/issues/9239",
    },
  ];
});

Now, we're ready to test this out.

Wiring this up to our Editor/IDE

While the LSP standardizes how to implement Language Servers and how they talk to editors, there's no good standard for making your editor aware of a language server so that it is ready to be used. For this post, I'll cover VS Code and Neovim.

How to connect your Language Server to VS Code

You connect a language server to VS Code by creating a thin extension layer. To make things as easy as possible for you, I've published a Minimum Viable VS Code Language Server Extension. Follow the directions in that repo and replace the entire contents of server/src/server.ts with the code we're writing in this post.

After following those directions, we'll also need to install node-fetch for the server since the node version packaged with VS Code is too old to include native fetch support.

cd server && npm install node-fetch@2 && npm install --save-dev @types/node-fetch && cd -

Now, you should be able to test your extension with the steps mentioned here. You should be able to open a new file, type #, and then see something like this:

vs-code example

How to connect your Language Server to Neovim

For Neovim, you pass some configuration to vim.lsp.start. For testing, I've put the following in ~/.config/nvim/after/ftplugin/ruby.lua. Note that this location means it only runs in ruby files. Feel free to start the server elsewhere.

local capabilities = vim.lsp.protocol.make_client_capabilities()

vim.lsp.start {
  name = "test-ls",
  cmd = {
    "npx", "ts-node",
    "/Users/ship/src/lsp-example-completion/server.ts",
    "--stdio"
  },
  capabilities = capabilities
}

I'm using npx ts-node to run my server.ts without an intermediate build step, but you can use whatever approach you prefer.

You should be able to open a new file, type #, and then see something like this:

neovim example

If you encounter any trouble, check the output of :LspLog

Dynamic Completions

GitHub provides a GraphQL API. Get a GitHub access token and have it handy. It appears a classic token with "repo" permissions is required to get the search query to work correctly (at the time of writing).

Here's a query to search the Docusaurus repo for issues and PRs that include the word "bug".

{
  search(query: "repo:facebook/docusaurus bug", type: ISSUE, first: 10) {
    edges {
      node {
        ... on PullRequest {
          title
          number
          body
          url
        }
        ... on Issue {
          title
          number
          body
          url
        }
      }
    }
  }
}

We could replace repo:facebook/docusaurus with owner:facebook to scope the search to every repo owned by Facebook (that we have access to).

You can replace the searchScope with your organization or repo name when applying the changes below. Be sure to swap out the placeholder YOUR_GITHUB_ACCESS_TOKEN with your token. In a future post, we'll cover language server configuration to pass in our search scope and access token.

import fetch from "node-fetch";

type Edge = {
  node: {
    title: string;
    number: number;
    url: string;
    body: string;
  };
};

type Response = {
  data: { search: { edges: Edge[] } };
};

// Replace with your real GitHub access token
const accessToken = "YOUR_GITHUB_ACCESS_TOKEN";

const searchScope = "repo:facebook/docusaurus";

const search = async (input: string): Promise<Response> => {
  const query = `
  {
    search(query: "${searchScope} ${input}", type: ISSUE, first: 10) {
      edges {
        node {
          ... on PullRequest {
            title
            number
            body
            url
          }
          ... on Issue {
            title
            number
            body
            url
          }
        }
      }
    }
  }
  `;

  const response = await fetch("https://api.github.com/graphql", {
    method: "POST",
    body: JSON.stringify({ query }),
    headers: {
      Authorization: `Bearer ${accessToken}`,
    },
  });

  if (!response.ok) {
    const error = {
      status: response.status,
      statusText: response.statusText,
      body: await response.json(),
    };

    connection.console.error(
      `Error fetching from GitHub: ${JSON.stringify(error)}`,
    );
  }

  return await response.json();
};

// ...

connection.onCompletion(async () => {
  const input = "bug";

  const json = await search(input);

  const items = json.data.search.edges.map((edge) => {
    return {
      label: `#${edge.node.number}: ${edge.node.title}`,
      detail: `${edge.node.title}\n\n${edge.node.body}`,
      insertText: edge.node.url,
    };
  });

  return {
    isIncomplete: true,
    items,
  };
});

Note the isIncomplete: true here. This tells the editor that the list we're returning isn't exhaustive and might change over time.

Our language server works!

vs-code example 2

But it uses our hard-coded search string "bug" instead of completing based on our input. Let's fix that.

connection.onCompletion takes a CompletionParams argument that looks something like this:

{
  "textDocument": { "uri": "file:///private/tmp/example_file.rb" },
  "context": { "triggerKind": 3 },
  "position": { "line": 2, "character": 11 }
}

With this information and the documents we initialized earlier, we have everything we need to understand what the user was typing.

connection.onCompletion(async (params) => {
  const doc = documents.get(params.textDocument.uri);

  if (!doc) {
    return null;
  }

  const line = doc.getText().split("\n")[params.position.line];

  // return early if we don't see our trigger character
  if (!line.includes("#")) {
    return null;
  }

  const input = line.slice(line.lastIndexOf("#") + 1);

  // ...
vs-code example 3
vs-code example 4

After selecting our completion choice, you’ll notice that the # character hangs around. How do we prevent this? And why do we need a trigger character anyway?

If we didn't have a trigger character, this completion would trigger constantly as we type any code. That's wasteful for API requests and means we're doing unnecessary work -- we only want to show issues and PRs in very specific scenarios. So, the behavior should be opt-in, and a trigger character is beneficial.

To prevent the # from hanging around, we can replace our naive insertText with a TextEdit. TextEdits allow you to specify a range in the document to replace with new text. We'll modify our completion item formatting function:

const items = json.data.search.edges.map((edge) => {
  return {
    label: `#${edge.node.number}: ${edge.node.title}`,
    detail: `${edge.node.title}\n\n${edge.node.body}`,
    textEdit: {
      range: {
        start: {
          line: params.position.line,
          character: line.lastIndexOf("#"),
        },
        end: {
          line: params.position.line,
          character: params.position.character,
        },
      },
      newText: `${edge.node.url}`,
    },
  };
});
// ...

With this change, we overwrite the # so that only the new content remains.

Final code

import fetch from "node-fetch";
import {
  createConnection,
  InitializeResult,
  TextDocumentSyncKind,
  TextDocuments,
} from "vscode-languageserver/node";
import { TextDocument } from "vscode-languageserver-textdocument";

const connection = createConnection();

const documents: TextDocuments<TextDocument> = new TextDocuments(TextDocument);

type Edge = {
  node: {
    title: string;
    number: number;
    url: string;
    body: string;
  };
};

type Response = {
  data: { search: { edges: Edge[] } };
};

// Replace with your real GitHub access token
const accessToken = "YOUR_GITHUB_ACCESS_TOKEN";

const searchScope = "repo:facebook/docusaurus";

const search = async (input: string): Promise<Response> => {
  const query = `
  {
    search(query: "${searchScope} ${input}", type: ISSUE, first: 10) {
      edges {
        node {
          ... on PullRequest {
            title
            number
            body
            url
          }
          ... on Issue {
            title
            number
            body
            url
          }
        }
      }
    }
  }
  `;

  const response = await fetch("https://api.github.com/graphql", {
    method: "POST",
    body: JSON.stringify({ query }),
    headers: {
      Authorization: `Bearer ${accessToken}`,
    },
  });

  if (!response.ok) {
    const error = {
      status: response.status,
      statusText: response.statusText,
      body: await response.json(),
    };

    connection.console.error(
      `Error fetching from GitHub: ${JSON.stringify(error)}`,
    );
  }

  return await response.json();
};

connection.onInitialize(() => {
  const result: InitializeResult = {
    capabilities: {
      textDocumentSync: TextDocumentSyncKind.Incremental,
      // Tell the client that the server supports code completion
      completionProvider: {
        // We want to listen for the "#" character to trigger our completions
        triggerCharacters: ["#"],
      },
    },
  };

  return result;
});

connection.onCompletion(async (params) => {
  const doc = documents.get(params.textDocument.uri);

  if (!doc) {
    return null;
  }

  const line = doc.getText().split("\n")[params.position.line];

  // return early if we don't see our trigger character
  if (!line.includes("#")) {
    return null;
  }

  const input = line.slice(line.lastIndexOf("#") + 1);

  const json = await search(input);

  const items = json.data.search.edges.map((edge) => {
    return {
      label: `#${edge.node.number}: ${edge.node.title}`,
      detail: `${edge.node.title}\n\n${edge.node.body}`,
      textEdit: {
        range: {
          start: {
            line: params.position.line,
            character: line.lastIndexOf("#"),
          },
          end: {
            line: params.position.line,
            character: params.position.character,
          },
        },
        newText: `${edge.node.url}`,
      },
    };
  });

  return {
    isIncomplete: true,
    items,
  };
});

// Make the text document manager listen on the connection
// for open, change and close text document events
documents.listen(connection);

// Listen on the connection
connection.listen();

We've only scratched the surface of what language servers can do. There's still plenty of the LSP left to explore.

Read the next article Hands-On LSP Tutorial: Configuration.

If you'd like to be notified when I publish more LSP content, sign up for my newsletter.