
In a previous post I wrote about iterated function systems and how to generate fractals with them. That approach used matrices to transform lines represented as sets of points. An alternative is to use a grammar, defined by a set of production rules, to iteratively rewrite a string. Systems that use rewrite rules of this sort to simulate plant growth are known as Lindenmayer systems (L-systems for short). It turns out that there is a deep connection with grammars defined for human language, as well as with RNA processes. With L-systems, lines can be drawn directly, since each token in the grammar can represent a drawing command. Here is an archetypal L-system [1].

[Figure: plant generated by an archetypal L-system]

L-systems are essentially composed of two things: an initial state (the axiom) and a set of production rules (the grammar). With this it is possible to create arbitrarily long sequences of characters with remarkable complexity. The iterative nature of the process means that the productions are self-similar and hence possess a fractal nature. Here is one of the simplest L-systems which models the growth of algae:

\mbox{axiom}: a
\mbox{rules}: a \Rightarrow ab, b \Rightarrow a

Successive iterations result in this pattern:

a, ab, aba, abaab, abaababa, \ldots

One of the interesting things about this sequence is that the lengths of successive strings follow the Fibonacci sequence.
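This property is easy to check with a few lines of string rewriting (a minimal sketch of the algae rules; the general implementation follows below):

```javascript
// Apply the algae rules a -> ab, b -> a to every character of a string.
function algae_step(s) {
  return s.split('').map(function(c) { return c === 'a' ? 'ab' : 'a' }).join('')
}

// Collect the first few derivations and their lengths.
var derivations = ['a']
for (var i = 0; i < 4; i++) {
  derivations.push(algae_step(derivations[derivations.length - 1]))
}
// derivations: ['a', 'ab', 'aba', 'abaab', 'abaababa']
// lengths:     [1, 2, 3, 5, 8] -- the Fibonacci sequence
```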

There are numerous implementations of L-systems, but as is typically the case they follow an imperative algorithmic style. While not inherently bad, these algorithms are unnecessarily complicated. Since the derivation itself is iterative, it makes sense to explore an algorithm in a functional programming style.

function dol_step(init, rules) {
  var iterate = function(acc, x) {
    var p = rules[x]
    if (p == undefined) { p = x }
    p.split('').map(function(a) { acc.push(a) })
    return acc
  }
  return init.reduce(iterate, [])
}

function lsystem(n, init, rules, step_fn) {
  if (typeof step_fn === 'undefined') step_fn = dol_step
  return seq(n).reduce(function(acc) { return step_fn(acc, rules) }, init)
}

What’s nice about this algorithm is that there are no array indices to track nor loops to worry about. It’s also easy to understand since it only uses the canonical higher-order functions. This code is executed like so: lsystem(4, ['a'], { a:'ab', b:'a' }).
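The seq helper comes from the author's arbitrage.js library; a minimal stand-in (my assumption about its behavior, producing the integers 0 through n-1) makes the example runnable end to end. The step and driver functions are repeated here so the sketch is self-contained:

```javascript
// Assumed behavior of arbitrage.js's seq: the integers 0..n-1.
function seq(n) {
  var out = []
  for (var i = 0; i < n; i++) out.push(i)
  return out
}

// The step and driver functions from above, repeated so this runs standalone.
function dol_step(init, rules) {
  var iterate = function(acc, x) {
    var p = rules[x]
    if (p == undefined) { p = x }
    p.split('').map(function(a) { acc.push(a) })
    return acc
  }
  return init.reduce(iterate, [])
}

function lsystem(n, init, rules, step_fn) {
  if (typeof step_fn === 'undefined') step_fn = dol_step
  return seq(n).reduce(function(acc) { return step_fn(acc, rules) }, init)
}

// Four derivation steps of the algae system:
lsystem(4, ['a'], { a:'ab', b:'a' }).join('')  // -> 'abaababa'
```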


Productions are nice, but they are difficult to appreciate on their own, particularly when additional tokens are added to the grammar. Visualizing an L-system requires a function that knows how to translate the production into drawing commands. These are interpreted as “turtle” commands in the sense of the Logo programming language [2]: the turtle can move forward in a straight line and also change direction, or heading. Many simulated plants use a fairly standard grammar, which includes

F: Move forward (and draw a line)
+: Rotate left
-: Rotate right
[: Push current state onto a stack
]: Pop stack and replace current state

The stack is used essentially to pick up the turtle and put it down somewhere else on the board. As an aside, notice that the commands are reminiscent of a Turing machine!
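The bracket tokens are easiest to see with a toy interpreter that only names each operation and tracks the bracket nesting depth (a throwaway sketch, separate from the real drawing handler later in the post):

```javascript
// Map each token to an operation name, annotated with bracket nesting depth.
function trace(tokens) {
  var depth = 0
  var names = { F:'draw', '+':'left', '-':'right', '[':'push', ']':'pop' }
  return tokens.split('').map(function(t) {
    if (t === '[') depth++
    if (t === ']') depth--
    return (names[t] || 'noop') + '@' + depth
  })
}

trace('F[+F]F')
// -> ['draw@0', 'push@1', 'left@1', 'draw@1', 'pop@0', 'draw@0']
```

The push/pop pair brackets the branch: everything at depth 1 happens on the branch, and the pop restores the pre-branch state.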

By visualizing the structure, it is easier to see how iterative applications of the grammar produce self-similar results. Here are steps 2, 3, and 4 of the system defined by a single production rule

\mbox{axiom}: F
\mbox{rules}: F \Rightarrow FF-[-F+F+F]+[+F-F-F]

[Figure: successive iterations of an L-system]

Like the derivation of the system, this process is also iterative: drawing begins with the left-most character in the sequence and then proceeds over the rest. Recognizing this behavior points to a fold/reduce process that takes a sequence and yields a single result. In our case the single result is the accumulated turtle state necessary for drawing. The output is an SVG path that we’ll draw using d3.js. SVG is perfectly suited here, since its path commands behave just like a turtle.

function plant(length, theta) {
  if (typeof theta === 'undefined') theta = -25 * Math.PI / 180
  return function(acc, code) {
    switch (code) {
    case 'X': break
    case 'F':
      acc.state.x -= length * Math.cos(acc.state.theta)
      acc.state.y -= length * Math.sin(acc.state.theta)
      acc.svg.push("L" + acc.state.x + "," + acc.state.y)
      break
    case '+':
      acc.state.theta += theta
      break
    case '-':
      acc.state.theta -= theta
      break
    case '[':
      acc.stack.push($.extend({}, acc.state))
      break
    case ']':
      acc.state = acc.stack.pop()
      acc.svg.push("M" + acc.state.x + "," + acc.state.y)
      break
    }
    return acc
  }
}

In this function each token of the grammar is associated with a turtle command. Only some result in actual SVG path instructions. Manipulating the state via a stack enables branching patterns to form. The mechanics of this process are really no different from the moveto command in SVG paths.
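As a quick sanity check, here is the handler run over the short string F[+F]F, with a plain-object copy standing in for jQuery's $.extend and a hand-rolled initial state, so the sketch runs without any dependencies:

```javascript
// The handler from above, with a plain-object clone replacing $.extend.
function plant(length, theta) {
  if (typeof theta === 'undefined') theta = -25 * Math.PI / 180
  return function(acc, code) {
    switch (code) {
    case 'F':
      acc.state.x -= length * Math.cos(acc.state.theta)
      acc.state.y -= length * Math.sin(acc.state.theta)
      acc.svg.push("L" + acc.state.x + "," + acc.state.y)
      break
    case '+':
      acc.state.theta += theta
      break
    case '-':
      acc.state.theta -= theta
      break
    case '[':
      acc.stack.push({ x:acc.state.x, y:acc.state.y, theta:acc.state.theta })
      break
    case ']':
      acc.state = acc.stack.pop()
      acc.svg.push("M" + acc.state.x + "," + acc.state.y)
      break
    }
    return acc
  }
}

var init = { state:{ x:100, y:100, theta:Math.PI/2 }, svg:["M100,100"], stack:[] }
var out = 'F[+F]F'.split('').reduce(plant(10), init)
// out.svg holds the initial moveto, three lineto commands, and one moveto
// emitted when the branch state is popped: M, L, L, M, L.
```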

There’s another class of examples that uses an additional token in the grammar. These are node-replacement models, where nodes are denoted by the character X. In the drawings above, each line-drawing command is effectively an edge in a graph, so an F represents an edge. When the grammar defines how to replace an edge with a new string, this is known as edge-replacement. With node-replacement, the grammar instead defines rules for replacing the nodes X. The replacement strings still contain instructions for drawing edges (an F), but edges are only replicated to provide additional length, not structure. When rendering the system, the nodes are ignored.
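A standard node-replacement grammar (of the kind catalogued in The Algorithmic Beauty of Plants) makes this concrete: X carries the branching structure, while F only doubles in length. Using the step function defined earlier, repeated here so the example runs standalone:

```javascript
// The deterministic step function from earlier, repeated for a runnable example.
function dol_step(init, rules) {
  var iterate = function(acc, x) {
    var p = rules[x]
    if (p == undefined) { p = x }
    p.split('').map(function(a) { acc.push(a) })
    return acc
  }
  return init.reduce(iterate, [])
}

// Node-replacement grammar: X carries the structure, F only adds length.
var rules = { X:'F[+X]F[-X]+X', F:'FF' }
dol_step(['X'], rules).join('')           // -> 'F[+X]F[-X]+X'
dol_step('FX'.split(''), rules).join('')  // -> 'FFF[+X]F[-X]+X'
```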

[Figure: node replacement]

For those following along in code, the rendering code is below. Note that there isn’t much work here: the d3.js part simply draws the path we constructed via the handler.

function linit(x, y) {
  var state = { x:x, y:y, theta:Math.PI/2 }
  var svg = [ "M" + x + "," + y ]
  return { state:state, svg:svg, stack:[] }
}

function ldraw(state, target) {
  if (typeof target === 'undefined') target = "#svg"
  // select the target element, add a path, and set its path data
  d3.select(target).append("path")
    .attr("d", state.svg.join(" "))
}

function lrender(origin, sequence, handler) {
  var state = linit(origin.x, origin.y)
  state = sequence.reduce(handler, state)
  return state
}

Stochastic L-systems

The power of Lindenmayer systems is readily apparent, but in a large group, the deterministic regularity puts these artificial plants squarely in the uncanny valley. One way to address this is by using a stochastic L-system. Here each production rule is replaced with a set of productions with a corresponding probability for each.

[Figure: stochastic L-system plant]

In the code below I provide two representations of probabilities. If you want uniform probabilities, then use an array for the production rules. Otherwise, use a hash where the rule is the key and the explicit probability is the value. Hence, the following two grammars are (essentially) equivalent:

{F: ['F[+F]F[-F]F', 'F[+F]F', 'F[-F]F'] }
{F: {'F[+F]F[-F]F':.33, 'F[+F]F':.33, 'F[-F]F':.34} }

The implementation of the stochastic L-system is a bit more involved. Only the single-step function needs a new implementation; the remainder is handled by the existing machinery. Another way of saying this is that with first-class functions it is trivial to swap one step implementation for another, without modifying any of the other code.

function sol_step(init, rules) {
  var get_production = function(x) {
    var rule = rules[x]
    if (rule == undefined) { return x }
    var vs, ps
    if (rule instanceof Array) {
      vs = rule
      ps = cumsum(rep(1/rule.length, rule.length))
    } else {
      vs = Object.keys(rule)
      ps = cumsum(vs.map(function(v) { return rule[v] }))
    }
    return sample(vs, 1, ps)[0]
  }
  var iterate = function(acc, x) {
    var p = get_production(x)
    p.split('').map(function(a) { acc.push(a) })
    return acc
  }
  return init.reduce(iterate, [])
}

The main difference is that there is a distinct function for extracting the probabilities. For those familiar with R, I ported some functions for working with vectors and sampling, which provide vectorization semantics in Javascript. These functions live in a separate library called arbitrage.js; a separate post on that is forthcoming.
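The code above relies on three of those helpers: rep, cumsum, and sample. For readers without arbitrage.js, here are minimal stand-ins; these are my sketches of the intended behavior, not the library's actual implementations:

```javascript
// Stand-in for arbitrage.js rep: repeat the value x, n times.
function rep(x, n) {
  var out = []
  for (var i = 0; i < n; i++) out.push(x)
  return out
}

// Stand-in for arbitrage.js cumsum: running total of an array.
function cumsum(xs) {
  var total = 0
  return xs.map(function(x) { total += x; return total })
}

// Stand-in for arbitrage.js sample: draw n values from vs using the
// cumulative weights ps (as produced by cumsum above).
function sample(vs, n, ps) {
  var out = []
  for (var i = 0; i < n; i++) {
    var r = Math.random() * ps[ps.length - 1]
    var j = 0
    while (ps[j] < r) j++
    out.push(vs[j])
  }
  return out
}
```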

Stochastic L-systems open the door to a whole other avenue of exploration. The RNA connection I mentioned at the beginning of the article is really the inverse problem: given the derivation that represents an end state, find the axiom that produced it. This problem is characterized by a stochastic context-free grammar, for which numerous algorithms exist. I have some ideas on applying Church numerals to this problem and will write more later.


[1] P. Prusinkiewicz and Aristid Lindenmayer. 1990. The Algorithmic Beauty of Plants. Springer-Verlag New York, Inc., New York, NY, USA.
[2] Wikipedia, Logo (programming language)