Architecture
This section provides an overview of Pallas's architecture and introduces core system concepts, including:
Cogs: a persistent process / database
PLAN: an ultra minimal, purely functional combinator interpreter (equivalent to "bytecode")
Sire: the default system language
Machines
A Pallas VM is colloquially referred to as a "machine". On the host filesystem, a machine consists of a directory containing a data.mdb, a lock.mdb, and a pins directory:
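An illustrative layout (the machine's directory name here is made up; only the three entries named above are assumed):

```
my-machine/
├── data.mdb
├── lock.mdb
└── pins/
```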
Cogs
A "cog" is a persistent process running on a machine. Cogs interact with the world by making system calls - which are included as part of their state (thus a cog's set of system calls also resume after a restart). Much more on cogs.
PLAN
Nearly every core innovation of Pallas emerges from the design of PLAN. It's not necessary to understand PLAN to write applications, but if you understand PLAN, you'll understand the system.
PLAN is an evaluation model (you could think of this as the "machine code" of a Pallas VM) that implements a self-contained, purely-functional database with no external dependencies. A deeper discussion of the PLAN data structure will bring these two concepts together, but before we inspect PLAN itself we have to take a brief detour into how the Pallas VM achieves persistence.
Persistence; Event Log; Database Engine
Let's say you wanted to write a small program that manages a list of numbers, starting with an empty list.
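As a minimal sketch of those steps, assuming an append operation is the only kind of input (Haskell, with our own names, purely for illustration):

```haskell
-- A sketch of the walkthrough, assuming "append n" is the only kind of input.
step :: [Int] -> Int -> [Int]
step state n = state ++ [n]

-- Starting from an empty list and replaying the inputs 1, 2, 3 in order:
--   foldl step [] [1, 2, 3]  ==  [1, 2, 3]
main :: IO ()
main = print (foldl step [] [1, 2, 3])
```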
In these steps, we started with an empty list, but if we had started with [1, 2] and done append 3, the result would have been [1, 2, 3]. Likewise, if we had started with [1] and done append 2, the result would have been [1, 2], etc.
The pattern to notice here is: given a current state and an input, we can reliably compute the next state. Taking that a step further: if you have a starting state, the transition function that modifies the state for a given input, and a log of all inputs, you have a strategy for recovering the current state.
Persistence and Event Sourcing
Let's start by explaining the core concept of event sourcing using a simple representation:
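One way to sketch that representation as a type (illustrative names only, not actual Pallas definitions):

```haskell
-- Illustrative only: T takes an input and the current state, and produces
-- outputs along with a new state.
type Transition input output state = input -> state -> ([output], state)
```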
In this model, our transition function T takes an input and the current state, and produces outputs along with a new state. This representation is intuitive for understanding basic event sourcing:
We have a current state
We receive an input
We apply the transition function T, which gives us:
Outputs
A new state
By logging all inputs and starting from an initial state, we can always reconstruct the current state by replaying these inputs through our transition function.
A Self-Upgrading System
Pallas supports self-upgrading code, where the system can modify its own behavior over time. To represent this capability, we need a slightly different model:
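A rough sketch of that model as a type (again, illustrative names only, not actual Pallas definitions):

```haskell
-- Illustrative only: each step now produces a new state *and* a new
-- transition function T', so behaviour can change over time.
newtype SelfUpgrading input state =
  SelfUpgrading (input -> state -> (state, SelfUpgrading input state))
```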
In this representation:
We still have a state and an input.
We produce a new state, but instead of static outputs, we now produce a new transition function T'.
This new T' can have modified behavior compared to the original T. It represents the system's ability to upgrade itself based on inputs and current state.
Why This Representation?
Keep in mind that this is not a formal definition of Pallas, but a representation to help illustrate some concepts. This representation was chosen because:
It shows state management, which is necessary for understanding persistence.
It shows upgradeable code.
It strikes a balance between simplicity and accuracy, making it accessible to newcomers while still representing key advanced features.
Persistence "for free"
You may have gotten the wrong idea: that the programmer has to include some kind of event log library or manually cache the current state. No, the persistence strategy outlined above is handled by the runtime automatically for every application in Pallas. It only needs to be implemented once, and it is then trivially available to all applications. Because of this, optimizations also happen in the runtime and benefit every application.
One such optimization is snapshotting the current state to avoid recomputing from the event log on restarts.
But how can you take a "current state snapshot" if there are partially-applied functions like T? How do you store a partially applied function?
With that question on the table, we're finally ready to explain PLAN by way of closures.
Closures and Supercombinators
The Lambda Calculus provides a formalism we could use to serialize and then persist a function. But there is a problem with using the lambda calculus directly: if you don't use an environment that tracks free variables, it is inefficient; but if you do use an environment, you've introduced implicit state.
We want to be able to easily write to and read from disk without any risk of free variables or an assumed environment. To resolve that apparent contradiction, we must store closures: functions together with their environment.
The name for a function with zero free variables and no environment is a supercombinator. PLAN is a data structure for supercombinators. Every function always carries all the context it could possibly need, because every function is a closure.
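As a rough illustration of the difference (our own Haskell example, not taken from the Pallas sources), compare a lambda that captures a free variable with a lifted version that takes everything it needs as arguments:

```haskell
-- 'rate' is a free variable of the inner lambda: to store the returned
-- function, we would also have to store the environment that holds 'rate'.
scaleWith :: Double -> (Double -> Double)
scaleWith rate = \amount -> amount * rate

-- The supercombinator version: zero free variables, no environment.
-- A closure of it is just this top-level function applied to some arguments,
-- which is plain data that can be written to disk.
scale :: Double -> Double -> Double
scale rate amount = amount * rate
```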
With PLAN at the bottom of the system, the same data structure is used on-disk and in-memory during execution. As a data structure, PLAN strikes a balance between:
Human readability
Candidacy as a functional compile target
Good memory representation
Good on-disk representation
Other systems offer approaches that optimize for one (or maybe two) of the above, solving each in isolation and thereby necessitating complicated transitions between specialized formats. We believe PLAN is the best solution for accomplishing all of them well: it obviates those transitions, giving the user more direct control over the system and "proximity to the metal" without loss of expressivity or performance.
PLAN
PLAN is concrete, concise, and relatively readable, considering it's essentially a compiler binary (try reading the compiler binary of other systems).
It's also fast to compile to and easy to map back and forth between memory and disk - which is how you get a single-level store that essentially makes no distinction between in-memory and on-disk. Unplug it while it's running, move it to another physical machine, turn it back on and it picks up right where it left off.
Formally, it looks like this:
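As a rough, Haskell-flavoured sketch of the four value forms (based on the description just below, not the formal specification itself):

```haskell
import Numeric.Natural (Natural)

-- A rough sketch of the PLAN value forms, not the formal specification.
data Plan
  = Nat Natural     -- a natural number
  | App Plan Plan   -- ( ... )  function application
  | Law [Plan]      -- { ... }  a user-defined function (a list of values)
  | Pin Plan        -- < ... >  a runtime hint about memory layout
```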
Where a Nat is a natural number and a Law is a user-defined function. () / App denotes function application, {} / Law is a list of values, and <> / Pin is a sort of runtime hint that has to do with optimizing memory layout.
We'll talk about what this means in a while, but for now let's just make clear that this is the entire data model of the system and of anything users write on top of it.
Bootstrapping
You've seen the terms "compile" and "binary" thrown around a few times. We've also shown you this strange-looking PLAN data structure and made the case that if you just use this enhanced-lambda-calculus data structure you can have persistence, memory/disk ambiguity and readable compiler binaries for free. So at this point you might be asking yourself: "Do you expect me to write entire programs using that weird data structure?"
No. You've (hopefully) gotten used to thinking about this system as a database engine, but now we're going to show you that it's also a virtual machine and a language platform.
Sire
Sire is a sort of Haskelly-Lisp whose purpose is to provide an ergonomic experience that sits between a programmer's goals and the resulting PLAN that achieves those goals (we'll get into programming with Sire itself a little later). Sire compiles itself to the PLAN data model we saw above.
Below is the entire PLAN specification. Remember, PLAN is basically just the lambda calculus, but without any need for an implicit environment. Don't get scared off or try to understand it just yet (or ever, if you so choose); we're just showing off that it can fit on one page:
A plucky computer science student could translate this to C, Rust, Python, or whatever language they prefer. A minimal but performant Haskell implementation is 180 lines.
The Sire compiler is just 2000 lines of Sire. Pallas has a compiled version of the Sire compiler (that's Sire-in-PLAN) that we feed to the runtime system, thereby bootstrapping a complete, extensible development environment.
We aren't asking you to trust our Sire-in-PLAN file. Since PLAN code is readable, a programmer familiar with the system can verify it directly.
This is the PLAN code for the foldr function. It's going to look only slightly less scary than the spec above, but read on so we can un-scare you:
These are all the dependencies that the foldr function relies on.
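Two of the dependency lines discussed below look roughly like this (reconstructed from the walkthrough; the exact printed form may differ):

```
(id a)=a
(_Not a)=(_If a 0 1)
```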
Take a look at the (id a)=a line. It's a function named id that takes a single value a and simply returns it.
Now look at _Not above. It appears to be a function that takes an argument a and applies _If to it: if a is true, it returns 0 (or false); otherwise it returns 1 (or true). Not so bad.
Other bits are a little less clear to us right now, but the point remains: A programmer familiar with this system could verify the "compiler binaries" without trusting anyone. There is nowhere for malicious code to hide.
Next, we'll learn a bit about the runtime: