JVM Advent

The JVM Programming Advent Calendar

Live Programming with the GraalVM, the LSP, and VS Code

Last year, we showed you a Smalltalk live programming system running on top of GraalVM. This year, we bring some of the live programming experience from Squeak/Smalltalk to Visual Studio Code with the GraalVM and the Language Server Protocol (LSP).

Introduction

Before we show you some demos and take a look at some implementation details, let’s take a brief tour of “example-based live programming” (ELP).

Live programming is a form of interactive, exploratory programming. Its goal is to help developers better understand the dynamic behavior of the programs they build. To achieve this, live programming provides short feedback loops during programming: instead of recompiling and re-running an application, live programming systems let developers evolve applications at run-time, so they can immediately observe the effects of their modifications.

One style of live programming is built around the idea of examples. Systems that support this allow developers to exemplify their code with annotations. Such examples usually define a receiver, arguments, and optional code for setup and teardown. Here’s what that looks like in Babylonian/JS, an ELP system we’ve built in Lively Kernel:

Babylonian Programming: Examples

These examples are then used by the programming environment to re-run annotated functions on every code change. Developers can attach so-called probes to individual expressions to see the intermediate results of all evaluations of those expressions while an example runs. In the following screenshot, there are two probes – one on the y variable and one on this.ctx:

Babylonian Programming: Probes

A variety of probe kinds exist in example-based live programming systems. Assertion probes, for example, check an expression’s result against an assertion, while expression probes allow additional code to be evaluated inline. If you’d like to learn more about ELP systems, have a look at our research paper on Babylonian-style Programming.
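To illustrate the probe idea outside any particular ELP system, here is a minimal toy sketch in plain Java (our own made-up code, not Babylonian/JS and not our GraalVM implementation): a probe simply records every value an instrumented expression produces while an example runs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

public class ProbeSketch {
    // The probe's log: one entry per evaluation of the instrumented expression.
    static final List<Integer> probe = new ArrayList<>();

    // Stand-in for instrumenting an expression: record the value, pass it through.
    static int probed(int value) {
        probe.add(value);
        return value;
    }

    public static void main(String[] args) {
        // The function under observation, with a probe placed on `x * x`.
        IntUnaryOperator square = x -> probed(x * x);
        // The "example": run the function with concrete inputs.
        for (int x = 1; x <= 3; x++) {
            square.applyAsInt(x);
        }
        // All intermediate evaluations seen by the probe.
        System.out.println(probe);  // prints [1, 4, 9]
    }
}
```

A real ELP system does this via instrumentation rather than edits to the source, but the data it gathers per probe is essentially this list of evaluation results.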

While this makes for a rich, interactive programming experience, building ELP systems requires a lot of effort. Creating such a system can entail modifications throughout the entire stack of a programming ecosystem, including the code editor, the reflection protocol, the parser and compiler, and the virtual machine. As a consequence, implementations are usually tightly coupled to a particular programming environment and virtual machine, so until now it was necessary to start from scratch to build such a system for another language or editor.
As it turns out, GraalVM and the Language Server Protocol can dramatically reduce this effort. More on that later; first, time to show you some demos of our latest ELP system!

Demos

A video is worth a thousand words, so let’s see our new ELP system in action before we look at its implementation. We start with a simple demo showing how the system can be used to interactively implement a Fibonacci function:
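For reference, a plain (non-live) version of such a function might look as follows; the live feedback in the demo comes entirely from the tooling around the code, not from the code itself:

```java
public class Fib {
    // Ordinary recursive Fibonacci: fib(0) = 0, fib(1) = 1.
    static int fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(10));  // prints 55
    }
}
```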

As you’ve seen, our ELP system provides immediate feedback and a consistent programming experience across the languages supported by GraalVM. Furthermore, it integrates other tools, such as a simple object inspector and a debugger. Even more interesting, we think, is that the system can also be used for building polyglot applications – that is, for building software in multiple languages at the same time:

Want to give it a try? Install our Polyglot Live Programming extension for VS Code. Read on to learn how it works…

GraalVM and Language-agnostic Tools

As you may already know, GraalVM is far more than just a JVM on steroids. Its ecosystem provides powerful tools for debugging and monitoring, the Truffle framework for building fast language interpreters, and a lot more. A key feature of most GraalVM tools is that they work across multiple programming languages. GraalVM supports the Debug Adapter Protocol, for example, which is what we’ve used to debug across languages in the second demo. This is only possible because the tool implementation does not rely on any language specifics. Instead, the implementation is entirely based on Truffle’s Instrument API, just like all other GraalVM polyglot tools.

Last year, we worked with the GraalVM team on a language-agnostic implementation of a language server for the LSP. With this language server, GraalVM can provide developers with common programming features (e.g., auto-completion or go to definition) for all its languages and within different programming environments. Here’s how this works:

As the developer modifies the program through the programming environment, the GraalVM language server analyzes the program’s code. When the developer triggers auto-completion, for example, the server provides appropriate suggestions to the programming environment on request. The two communicate through the LSP.
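Concretely, the two sides exchange JSON-RPC messages as defined by the LSP. A completion request from the editor, for example, looks roughly like this (file URI and position are made up for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/completion",
  "params": {
    "textDocument": { "uri": "file:///demo/main.js" },
    "position": { "line": 12, "character": 7 }
  }
}
```

The server answers with a list of completion items for that position, and the editor renders them as suggestions.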

All you need to try this out, apart from a GraalVM installation of course, is a code editor or IDE with LSP support. For Visual Studio Code, GraalVM ships an extension that helps you set everything up.

Building a Live Feedback Loop on Top of the LSP and GraalVM

Together with some of our graduate students, we wanted to find out whether we could build a live programming system that is independent of both the language and the programming environment. Since it is possible to build a language-agnostic debugger, we could probably use GraalVM and Truffle to make the live system language-independent, and use the LSP to decouple the tool’s backend from the programming environment. However, a key requirement for a live system is a fast feedback loop: it should provide feedback in under a second, which is considered the threshold after which users start to wonder whether an interactive system is still responding [1].

The LSP already communicates common file operations (e.g., open, modify, and close events) to a language server. It also supports commands that can be executed by a code editor, as well as notification messages that can trigger events in the editor. Therefore, all we had to do was implement a command in the language server that analyzes exemplified code. To inform the user about the progress of the analysis, the server sends appropriate notification messages. Conceptually, this works as follows:

Every time the developer modifies the program, the GraalVM language server runs our analysis. It reports results through the LSP to the programming environment, which visualizes them. Now, we only need to make sure this loop runs fast enough.

Implementing new LSP Commands

Let’s look at some code and implement a command for the GraalVM language server. New LSP commands can be added to the server with a class that extends TruffleInstrument and implements the LSPExtension interface. Such a class can then provide multiple LSPCommands. Here’s a simple example:

class RudolphCommand implements LSPCommand {

  @Override
  public String getName() {
    return "rudolph";  // the name used to invoke the command from the editor
  }

  @Override
  public Object execute(LSPServerAccessor server, Env env, List<Object> arguments) {
    return "Happy Holidays!";
  }
}

When we now run the GraalVM language server with our code on the classpath, the server automatically picks up our new command. In VS Code, we can then trigger the command as follows:

vscode.commands.executeCommand('rudolph').then((result) => {
  vscode.window.showInformationMessage(result as string);
});

When an extension executes this snippet, the user sees “Happy Holidays!” as an information message in VS Code.

An ELP system needs to do a bit more work than that, of course. To keep things simple, here’s some pseudo code that shows how the analysis works conceptually:

Object execute(LSPServerAccessor server, Env env, List<Object> arguments) {
  AnalysisResult result = new AnalysisResult();
  // Scan all files opened in the editor for examples and probes.
  findExamplesAndEvaluateFilesOpenedInEditor(server, env, result);
  // Periodically push intermediate results to the programming environment.
  ScheduledFuture<?> updates = sendUpdatesPeriodically(server, result);
  try {
    for (Example example : result.getExamples()) {
      runExampleInstrumented(env, result, example);
    }
  } finally {
    updates.cancel(true);  // stop notifying once all examples have run
  }
  return result;  // signals that the analysis is complete
}

First, all files opened in the editor are scanned for examples and probes, which are collected in a result object. The files are then evaluated using the instrument’s Env object. After that, all examples are executed and instrumented using ExecutionEventNodes from the Instrument API. While the examples are running, we periodically send the result object with intermediate results to the programming environment via notifications. This way, the system can provide feedback in under a second even when examples take longer to run. Finally, we return the result object, signaling that the analysis is complete.
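The periodic-update part of this loop can be sketched with plain JDK facilities. The following is a hypothetical stand-alone sketch, not the actual implementation: while a long-running "analysis" fills a result list, a scheduled task repeatedly notifies an observer with whatever intermediate results are available so far.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class PeriodicUpdates {
    // Snapshots sent to the "editor"; in the real system these are LSP notifications.
    static final List<Integer> notifications = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws Exception {
        List<Integer> result = new CopyOnWriteArrayList<>();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Push a snapshot of the current result size every 50 ms, starting immediately.
        ScheduledFuture<?> updates = scheduler.scheduleAtFixedRate(
                () -> notifications.add(result.size()), 0, 50, TimeUnit.MILLISECONDS);
        try {
            for (int i = 0; i < 5; i++) {  // stand-in for running five examples
                Thread.sleep(60);          // each example takes some time...
                result.add(i);             // ...and produces a probe result
            }
        } finally {
            updates.cancel(true);          // stop notifying once the run is done
            scheduler.shutdown();
        }
        System.out.println("results=" + result.size()
                + " gotIntermediateUpdates=" + !notifications.isEmpty());
    }
}
```

The key property is the same as in the pseudo code above: the consumer sees partial results on a fixed schedule instead of waiting for the whole analysis to finish.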

If you want to look at the full implementation, our project is up on GitHub. You can find our BabylonianAnalysisExtension here and the counterparts in our VS Code extension here and here. Feel free to reach out if you have any questions. And if you find this work interesting, have a look at our Onward! 2020 research paper. The full paper presentation is embedded below.

And that’s it for today! We hope you enjoyed this deep dive into live programming with the GraalVM, the LSP, and VS Code!

Have a wonderful festive season and a happy new year! 🎄🎅🎉


Acknowledgments
Many thanks to my co-author Patrick Rein, to Mani Sarkar for inviting us to write this guest post, and the GraalVM team for making it possible to build tools like this on top of their language server.

Onward! 2020 Research Paper Presentation


Author: Fabio Niephaus

Fabio Niephaus is a Ph.D. student from the Software Architecture Group, headed by Robert Hirschfeld, at the Hasso Plattner Institute at the University of Potsdam, Germany. He has strong interests in dynamic programming languages, virtual execution environments, and software development tools. As part of his Ph.D. thesis research, he works toward a better polyglot programming experience.


© 2021 JVM Advent
