Introduction

Motivation

Since a blockchain is a sequence of transactions, getting fast, read-only access to its data is far from trivial. Scrolls is a tool for building and maintaining read-optimized collections of Cardano's on-chain entities. It crawls the history of the chain and aggregates all data to reflect the current state of affairs. Once the whole history has been processed, Scrolls watches the tip of the chain to keep the collections up-to-date.

Reducers

Scrolls Reducers are Map/Reduce algorithms that turn block data into relevant information to be queried by end-users.

There are several ways to implement reducers in Scrolls:

  • Typescript: using custom Typescript code to implement the map/reduce logic
  • Rust: using custom Rust code to implement the map/reduce logic
  • Golang: using custom Golang code to implement the map/reduce logic
  • Python: using custom Python code to implement the map/reduce logic

Using Typescript to build custom reducers

The Typescript reducer makes it possible to implement custom logic that the built-in reducers don't support. This reducer is only enabled when Scrolls is built with the deno feature.

To build a reducer using Typescript, we code our custom logic as any other Typescript module and transpile it into JS code using Deno. The resulting file (.js) can then be referenced by the Scrolls configuration so that it's loaded dynamically at the moment of execution.
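As a reference, a minimal reducer module could look like the sketch below. The exported function name (reduce) and the shape of the block payload are assumptions for illustration only; the reducer.ts file under examples/deno in the Scrolls repository is the authoritative reference for the exact contract expected by your version.

// reducer.ts
// Illustrative sketch only: the entry point name (reduce) and the
// payload shape are assumptions; check examples/deno/reducer.ts in
// the Scrolls repository for the exact contract.

type BlockJson = {
  // the txs field is assumed here for illustration
  txs: { fee?: number }[];
};

export function reduce(block: BlockJson) {
  // map each transaction into the data we care about
  // (transaction fees, in this sketch)
  return block.txs.map((tx) => ({ fee: tx.fee ?? 0 }));
}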

To transpile your Typescript code using Deno, run the following command:

deno bundle reducer.ts reducer.js

Configuration

Example of a configuration

[reducer]
type = "Deno"
main_module = "./examples/deno/reducer.js"
use_async = true

Section: reducer

  • type: the literal value Deno.
  • main_module: the js file with the reducer logic
  • use_async: run the js in async mode

Run code

To run the code with the Deno reducer, it is necessary to build Scrolls with the deno feature enabled:

cargo run --features=deno -- daemon --config ./examples/deno/daemon.toml

Using Rust to build custom reducers

To build a reducer using Rust, we code our custom logic as any other Rust-based program and compile it with a WASM target. The resulting file (.wasm) can then be referenced by the Scrolls configuration so that it's loaded dynamically at the moment of execution.
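For reference, a typical way to produce the .wasm artifact is to add a WASM target via rustup and build in release mode (depending on the runtime embedded by your Scrolls version, a WASI target may be required instead):

rustup target add wasm32-unknown-unknown
cargo build --release --target wasm32-unknown-unknown

The resulting file can then be found under target/wasm32-unknown-unknown/release/.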

Configuration

Example of a configuration

[reducer]
type = "Wasm"
main_module = "./examples/wasm/enrich.wasm"

Section: reducer

  • type: the literal value Wasm.
  • main_module: the wasm file containing the reducer logic

Run code

To run a custom WASM reducer, it is necessary to run Scrolls with the wasm feature enabled:

cargo run --features=wasm -- daemon --config ./examples/wasm/daemon.toml

Building a Golang Reducer

This guide explains how to use Golang to build a custom reducer for Scrolls.

How it works

To build our custom reducer we'll leverage an Oura pipeline that syncs data from a Cardano node, filters the data through a WASM plugin and finally persists the records in a Sqlite db.

Requirements

Before starting, make sure the tools used throughout this procedure are available: the scrolls-sdk CLI, a Golang toolchain capable of compiling to WASM, Sqlite, Oura and make.

Procedure

1. Create the project scaffold

There's some boilerplate code required to set up our reducer. The Scrolls SDK cli provides a command to automatically generate the basic file structure, which can later be customized to specific needs.

Run the following command in your shell to scaffold a new Golang reducer.

scrolls-sdk scaffold --template reducer-oura-golang {NAME}

where:

  • {NAME} is the name of your reducer

2. Customize your DB schema

Your custom reducer will require a custom DB schema that reflects your particular requirements. In this scenario we're using Sqlite as our relational data persistence mechanism.

Edit the init.sql file inside your reducer code to define your schema using SQL:

CREATE TABLE my_reducer (
    slot INTEGER NOT NULL,
    {{custom fields}}
);

CREATE INDEX idx_my_reducer_slot ON my_reducer(slot);

where:

  • {{custom fields}} is the definition of the custom fields required by your reducer
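
For example, a hypothetical reducer that tracks transaction fees could fill in the placeholder with a single fee column:

CREATE TABLE my_reducer (
    slot INTEGER NOT NULL,
    fee INTEGER NOT NULL
);

CREATE INDEX idx_my_reducer_slot ON my_reducer(slot);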

3. Edit your Golang reducer logic

The core of your reducer is the business logic you use to map blocks & transactions from the chain into relevant data for your use case. In this scenario we're using Golang code that compiles to WASM.

Edit the main.go file inside your reducer code to define your business logic:

package main

import (
	"github.com/extism/go-pdk"
)

//export map_u5c_tx
func map_u5c_tx() int32 {
	// unmarshal the U5C Tx data provided by the host
	var param map[string]interface{}
	err := pdk.InputJSON(&param)

	if err != nil {
		pdk.SetError(err)
		return 1
	}

	// you can log info to see it in the debug output of
	// the pipeline:
	// pdk.Log(pdk.LogInfo, fmt.Sprintf("%v", param))

	// Here is where you get to do something interesting
	// with the data. In this example, we just extract the
	// fee data from the Tx:
	// fee := param["fee"]

	// By default, we just resend the exact same payload
	output := param

	// Use this method to return the mapped value back to
	// the Oura pipeline.
	err = pdk.OutputJSON(output)

	if err != nil {
		pdk.SetError(err)
		return 1
	}

	// return 0 for a successful operation and 1 for failure
	return 0
}

// we need to keep the main entry point even if it's not used
func main() {}
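
The Makefile in the scaffold takes care of compiling this code, but for reference, Extism Go plugins like this one are typically built with TinyGo (assuming a TinyGo toolchain with WASI support is installed):

tinygo build -o plugin.wasm -target wasi main.go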

4. Edit the Oura config file

The Cardano node, the WASM plugin and the Sqlite DB are connected together via an Oura pipeline. This pipeline ensures that data goes through each of the required steps (stages) in a performant and resilient way.

Edit the oura.toml in your custom reducer folder to configure your pipeline:

[source]
type = "N2N"
peers = ["relays-new.cardano-mainnet.iohk.io:3001"]

[[filters]]
type = "SplitBlock"

[[filters]]
type = "ParseCbor"

[[filters]]
type = "WasmPlugin"
path = "plugin.wasm"

[sink]
type = "SqlDb"
connection = "sqlite:./scrolls.db"
apply_template = "INSERT INTO {{reducer}} (slot, {{custom fields}}) VALUES ('{{point.slot}}', '{{custom values}}');"
undo_template = "DELETE FROM {{reducer}} WHERE slot = {{point.slot}}"
reset_template = "DELETE FROM {{reducer}} WHERE slot > {{point.slot}}"

where:

  • {{custom fields}} is the list of custom fields in your schema
  • {{custom values}} are the expressions used to access the custom values in the object returned by your custom logic

5. Run your pipeline

Now that everything has been configured, you can start your indexing pipeline. This requires that you compile your Golang code into wasm and that the sqlite db is created with the corresponding schema. With all requirements in place, the Oura process can be started.

The Makefile included in your custom reducer files provides a shortcut that triggers the pipeline, making sure that all requirements are in place.

Run the following command:

make run

Building a Python Reducer

This guide explains how to use Python to build a custom reducer for Scrolls.

How it works

To build our custom reducer we'll leverage an Oura pipeline that syncs data from a Cardano node, filters the data through a Rust-based Python interpreter plugin and finally persists the records in a Sqlite db.

Requirements

Before starting, make sure the tools used throughout this procedure are available: the scrolls-sdk CLI, Sqlite, Oura and make. The Python code itself is interpreted by RustPython inside the pipeline, so no separate Python toolchain is required.

Procedure

1. Create the project scaffold

There's some boilerplate code required to set up our reducer. The Scrolls SDK cli provides a command to automatically generate the basic file structure, which can later be customized to specific needs.

Run the following command in your shell to scaffold a new Python reducer.

scrolls-sdk scaffold --template reducer-oura-python {NAME}

where:

  • {NAME} is the name of your reducer

2. Customize your DB schema

Your custom reducer will require a custom DB schema that reflects your particular requirements. In this scenario we're using Sqlite as our relational data persistence mechanism.

Edit the init.sql file inside your reducer code to define your schema using SQL:

CREATE TABLE my_reducer (
    slot INTEGER NOT NULL,
    {{custom fields}}
);

CREATE INDEX idx_my_reducer_slot ON my_reducer(slot);

where:

  • {{custom fields}} is the definition of the custom fields required by your reducer
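
As in the Golang guide, a hypothetical fee-tracking reducer could fill in the placeholder with a single fee column:

CREATE TABLE my_reducer (
    slot INTEGER NOT NULL,
    fee INTEGER NOT NULL
);

CREATE INDEX idx_my_reducer_slot ON my_reducer(slot);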

3. Edit your Python reducer logic

The core of your reducer is the business logic you use to map blocks & transactions from the chain into relevant data for your use case. In this scenario we're using Python code that will be interpreted using RustPython.

Edit the main.py file inside your reducer code to define your business logic:

def map_u5c_tx(tx):
    # the tx param holds the data of the tx to map

    # Here is where you get to do something interesting
    # with the data. In this example, we just extract the
    # fee data from the tx:
    # fee = tx["fee"]

    # By default, we just resend the exact same payload
    output = tx

    # Return a new dict that holds the data that will
    # continue down through the Oura pipeline
    return output

4. Edit the Oura config file

The Cardano node, the Python plugin and the Sqlite DB are connected together via an Oura pipeline. This pipeline ensures that data goes through each of the required steps (stages) in a performant and resilient way.

Edit the oura.toml in your custom reducer folder to configure your pipeline:

[source]
type = "N2N"
peers = ["relays-new.cardano-mainnet.iohk.io:3001"]

[[filters]]
type = "SplitBlock"

[[filters]]
type = "ParseCbor"

[[filters]]
type = "PythonPlugin"
path = "main.py"

[sink]
type = "SqlDb"
connection = "sqlite:./scrolls.db"
apply_template = "INSERT INTO {{reducer}} (slot, {{custom fields}}) VALUES ('{{point.slot}}', '{{custom values}}');"
undo_template = "DELETE FROM {{reducer}} WHERE slot = {{point.slot}}"
reset_template = "DELETE FROM {{reducer}} WHERE slot > {{point.slot}}"

where:

  • {{custom fields}} is the list of custom fields in your schema
  • {{custom values}} are the expressions used to access the custom values in the object returned by your custom logic

5. Run your pipeline

Now that everything has been configured, you can start your indexing pipeline. This requires that the sqlite db is created with the corresponding schema. With all requirements in place, the Oura process can be started.

The Makefile included in your custom reducer files provides a shortcut that triggers the pipeline, making sure that all requirements are in place.

Run the following command:

make run