Chainlink “Decentralized Data Model”: Truffle, Kubernetes

Amine El
24 min read · Aug 4, 2021


Step-by-step demonstration of a decentralized ETH/USD price feed

Decentralized oracle network — picture taken from chainlink blog

Introduction

This blog is a continuation of my previous blog Run Chainlink “Basic Request Model” locally: Truffle, Ganache, Kubernetes where I focused on the following:

  • What the Oracle problem is and why it matters, with references to official chainlink documentation and interviews/podcasts
  • Running the basic request model locally. This helped me (and hopefully others) understand how all the pieces fit together, and it laid out the foundations (local kubernetes, local node, nodejs event watcher, local postgresql, etc.) that can be reused in future demonstrations

This blog focuses on another oracle pattern: the Decentralized Data Model. There are 3 sections in this blog:

  • Explanation of the decentralized oracle network pattern
  • Explanation of the tutorial: high-level design and a dive into the smart contracts
  • Hands-on demonstration. Source code can be found here. Remark: I’m not using Ganache as the local blockchain, as I hit a blocking limitation (I posted an issue on stackoverflow for more details). Hence I’ve deployed the smart contracts on the Rinkeby testnet. The reader can use Rinkeby or another testnet

Important Disclaimer: I’m not part of the chainlink engineering team. This blog is the result of browsing the chainlink source code, reading the chainlink docs, and much trial and error. Please feel free to comment and report any mistakes, and I will do my best to correct them.

What is a decentralized Oracle Network?

In my previous blog, I explained the Oracle problem and, through the basic chainlink request model, demonstrated how a smart contract can request the ETH/USD price from an external API.

Although this pattern helps solve the oracle problem by connecting smart contracts with off-chain data, it suffers from two major drawbacks:

  • Every time a consumer smart contract requests the ETH/USD price, it has to call an Oracle contract, which emits an event. An off-chain Oracle node then queries an API and replies by sending a transaction to the network. This transaction costs gas, and the oracle node has to reply every time, even if the price hasn’t changed. Economically, this is not optimal for consumers, as Oracles are paid for every reply (even when the price didn’t change)
  • More importantly, one of the benefits of blockchain is decentralization. With the basic request model, this benefit is somewhat inhibited, as the data is not decentralized. In fact, the consumer contract is at the mercy of a single oracle node, which can decide whether to reply at all, and whether to use trustworthy, high-quality data sources or low-quality, unreliable ones.

Chainlink therefore introduced the decentralized data model, which is the basis of Chainlink price feeds.

Chainlink Price Feeds source high-quality price data from reputable data aggregators by pulling data from their APIs. The data is then delivered on-chain via a decentralized oracle network composed of multiple independent security-reviewed node operators. Decentralization at both the data aggregation and oracle network levels protects against data manipulation and ensures availability for smart contracts that depend on highly reliable price oracles. Because of their security and reliability guarantees, Chainlink Price Feeds have become the industry standard oracle solution for DeFi, now securing billions of dollars in value and supporting a wide variety of use cases and markets, from derivatives to stablecoins.

The official chainlink documentation will also help the reader understand the problem in more depth.

What this tutorial contains

We are going to request the ETH/USD price from a smart contract. The main difference from the previous blog is that we apply several layers of data aggregation so as not to rely on a single oracle. This is schematized below:

The 3 levels of aggregation
  • Price data aggregators: we will rely on coinapi and coingecko to obtain price data. It is important to pull data from premium data providers, as their core business is to pull raw data from several exchanges, clean it (remove outliers, apply volume-price adjustments...) and aggregate it in order to provide refined price points
  • Chainlink node operators: we will deploy 2 independent oracle nodes. Each of them is responsible for querying the ETH/USD price from coinapi and coingecko, aggregating the responses and posting the price point on-chain. Aggregation is done by calculating the median value. Please note that in reality, critical data feeds depend on many independent nodes: for instance, the ETH/USD reference data feed uses more than 30 independent oracles
  • Oracle network aggregation: a reference ETH/USD smart contract is responsible for regularly receiving price points from the independent oracles and aggregating them, again by calculating the median value (a sketch of the median computation follows this list)
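Both aggregation layers rest on the same primitive: taking the median of a set of submitted prices. Here is a minimal JavaScript sketch of that computation (illustrative only; on-chain, FluxAggregator uses its own Median library):

function median(values) {
  // sort a copy, then take the middle element
  // (or the mean of the two middle elements when the count is even)
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

console.log(median([2489.73, 2491.28])); // 2490.505 (with 2 values, the median equals the average)
console.log(median([2489.73, 2491.28, 2490.1])); // 2490.1

The median is preferred over the mean because a single outlier (or malicious) submission cannot drag the aggregate far from the honest values.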

High level view

A brief explanation of all the building blocks:

  • Smart contracts run on the Rinkeby testnet, but any ethereum testnet will work fine. Remark: I tried to run everything locally but hit a limitation in ganache; I posted an issue on stackoverflow for more details
  • The developer communicates with the blockchain using truffle scripts (cfr. the Run the tutorial section for more details)
  • 2 independent oracle nodes run on kubernetes. They demonstrate Oracle Network Aggregation, which combines all the individual nodes’ responses into a single reference data point on-chain
  • Postgres (running on kubernetes) is used to persist the configuration and state of every oracle node
  • External adapters run on kubernetes as well. They are used to communicate with Coinapi and Coingecko to fetch the ETH/USD price. Every oracle node queries several data providers (2 in this example) and aggregates the responses. This is called Node Operator Aggregation
  • An Nginx ingress controller runs on kubernetes. It allows direct communication with the oracle nodes without using “kubectl port-forward”
  • A simple nodejs server is deployed. Its only purpose is to listen to events, format them and log them to the console

Focus on the blockchain layer

Remark: in the following diagrams, emitted events are displayed as dashed arrows

Contracts overview

The flow of requesting the ETH/USD price is the same as the one shown in the official documentation.

There are 3 main contracts:

  • PriceConsumer: responsible for querying the latest price data. An example of PriceConsumer can be found here
  • AggregatorProxy: using the proxy pattern allows deploying new versions of the logic layer (FluxAggregator here) without interrupting the consuming contracts. The source code of AggregatorProxy can be found here
  • FluxAggregator: the ETH/USD reference contract, responsible for aggregating the responses from the different oracles and storing the aggregate on-chain. The source code of FluxAggregator can be found here

It is important to note that the process of receiving ETH/USD prices from oracles and aggregating them is completely decoupled from consumers requesting ETH/USD prices.

Next, we are going to detail every important step needed for this architecture to work.

Funding FluxAggregator

Oracles must be paid for their work as they submit new ETH/USD price values. Hence, FluxAggregator must possess some Link tokens.

In order to credit FluxAggregator, we will use LinkToken, which implements the ERC677 standard. The short summary of ERC677 is the following:

Allow tokens to be transferred to contracts and have the contract trigger logic for how to respond to receiving the tokens within a single transaction.

In fact, calling the transferAndCall method of LinkToken transfers link tokens to FluxAggregator and then triggers the onTokenTransfer method of FluxAggregator. This can be seen in the source code:

  • LinkToken validates the receiver (validRecipient) then calls the ERC677 method transferAndCall (super.transferAndCall)
cfr. LinkToken
  • ERC677 implements transferAndCall. It calls the transfer method in order to credit the receiver (FluxAggregator in this case) with Link tokens. If the receiver is a contract account (which is the case for FluxAggregator), then its onTokenTransfer method is called
cfr. ERC677
  • FluxAggregator implements onTokenTransfer, which in turn calls updateAvailableFunds in order to update the amount available to pay out the oracles
cfr. FluxAggregator
cfr. FluxAggregator
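To make the funding step concrete, here is a minimal truffle exec sketch (the 100 LINK amount is illustrative; the fund-aggregator.js script used later in this tutorial does the equivalent):

// fund-sketch.js (run with: truffle exec fund-sketch.js --network rinkeby)
const LinkToken = artifacts.require('LinkToken');
const FluxAggregator = artifacts.require('FluxAggregator');

module.exports = async (callback) => {
  try {
    const link = await LinkToken.deployed();
    const aggregator = await FluxAggregator.deployed();
    // 100 LINK (LINK has 18 decimals, like ETH); the amount is illustrative
    const amount = web3.utils.toWei('100');
    // single transaction: transfer LINK, then FluxAggregator.onTokenTransfer
    // calls updateAvailableFunds so the oracles can be paid
    await link.transferAndCall(aggregator.address, amount, '0x');
    console.log('Available funds:', (await aggregator.availableFunds()).toString());
    callback();
  } catch (err) {
    callback(err);
  }
};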

Add Oracles

Now that the FluxAggregator has some Link tokens to pay out the Oracles, the contract owner has to set the allowed Oracles so that only approved Oracles can submit new data points.

This is done by calling the changeOracles method. Via this method, the contract owner can remove Oracles or add new ones. We will focus on the adding part.

cfr. FluxAggregator

Note that adding oracles requires providing an oracle address and also an admin address.

cfr. FluxAggregator

The administrator address is required in order to withdraw the Oracle’s funds. This is shown in withdrawPayment: the method checks that the caller is the oracle’s administrator. The administrator can choose which address to credit the link tokens to.

cfr. FluxAggregator
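Putting this together, a hedged truffle exec sketch of whitelisting two oracle nodes (the addresses are placeholders; the set-oracles.js script used later in this tutorial does the real work):

const FluxAggregator = artifacts.require('FluxAggregator');

module.exports = async (callback) => {
  try {
    const aggregator = await FluxAggregator.deployed();
    const [owner] = await web3.eth.getAccounts();
    const oracles = ['<ORACLE_NODE_1_ADDRESS>', '<ORACLE_NODE_2_ADDRESS>']; // placeholders
    const admins = [owner, owner]; // one admin address per oracle, used to withdraw payouts
    // changeOracles(removed, added, addedAdmins, minSubmissions, maxSubmissions, restartDelay)
    await aggregator.changeOracles([], oracles, admins, 1, 2, 0, { from: owner });
    console.log('Oracle count:', (await aggregator.oracleCount()).toString());
    callback();
  } catch (err) {
    callback(err);
  }
};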

Submit answer

Now that the Oracles are approved by the FluxAggregator contract owner, they can start submitting ETH/USD prices.

As depicted above, the oracles call two methods:

  • oracleRoundState: provides an oracle with all the info it needs. It is called, for instance, to check the latest on-chain price so that the oracle submits a new price only if certain conditions are met (for instance: if the deviation between the off-chain and on-chain prices is more than 0.5%, then the oracle submits the new off-chain price)
cfr. FluxAggregator
  • submit: called by the Oracles when they think an update is needed. The oracle either submits for an existing open round, or a new round is initialized
cfr. FluxAggregator

After data validation and the initialization of a new round (if required), several things happen:

  • Every oracle’s submission is recorded
cfr. FluxAggregator
  • Once the minimum number of submissions is reached, the round’s price is updated by calculating the median of all submissions. This internal method returns the new value along with a flag indicating that a new value has been calculated
cfr. FluxAggregator
  • The oracle is paid for its work. As one can notice, the pull-over-push pattern is used here: the Oracle’s funds are increased, and these funds can be retrieved later on by the Oracle’s admin
cfr. FluxAggregator
  • The round details are deleted (the round is closed) once the maximum number of submissions has been reached
cfr. FluxAggregator
  • validateAnswer is then called if an external validator is set up (which is not mandatory)
cfr. FluxAggregator
cfr. FluxAggregator — note that the validator is set up when deploying the contract; it is optional
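To summarize the off-chain side of this flow: each node keeps polling its data sources, compares its freshly aggregated off-chain price with the latest on-chain answer, and only calls submit when the deviation exceeds the configured threshold. A simplified illustration of that decision (not the actual chainlink node code):

// simplified FluxMonitor-style deviation check (illustrative only)
function shouldSubmit(offchainPrice, onchainPrice, thresholdPercent) {
  if (onchainPrice === 0) return true; // no on-chain answer yet: always submit
  const deviation = (Math.abs(offchainPrice - onchainPrice) / onchainPrice) * 100;
  return deviation >= thresholdPercent;
}

console.log(shouldSubmit(2621.82, 2620.0, 0.5)); // false: ~0.07% deviation, below threshold
console.log(shouldSubmit(2635.5, 2620.0, 0.5)); // true: ~0.59% deviation, submit a new price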

Run the tutorial

Dependencies

  • Tests have been performed with node version 12.22.1 and npm version 7.15.0
  • Run in your terminal:
git clone https://github.com/aelmanaa/chainlink-local-kubernetes && cd chainlink-local-kubernetes
npm install
  • I advise creating a test profile in your browser and installing Metamask in that profile. This is discussed here. It is good practice, as it separates the Metamask you use in production from the one used for testing. Please note down the seed phrase (mnemonic) as you will use it later on
  • I’m assuming that you know how to receive ETH on public testnets. You can use the rinkeby faucet if you are testing on Rinkeby. Please send some ETH to your first account; this will be required later on, as we are going to use scripts to credit our oracle nodes. You can confirm in metamask that your 1st account is funded (see below: some ETH on Rinkeby)
  • Get a free API key from coinapi
  • Register with Infura in order to connect to Rinkeby. Note that you can also use your own nodes or another service, but you will then have to slightly modify the connection settings in the project
  • Please follow the “Run the tests” section of the previous blog (if you haven’t done so yet) in order to get everything running (Nginx ingress controller, postgresql, etc.). I’m assuming here that we are starting from a local kubernetes with Nginx and Postgresql deployed
  • Install pgAdmin, a rich client for postgresql. It offers a user-friendly graphical interface to run queries, which is always nicer than the command line
  • In the root directory of your project, create an .env file containing your Infura API key and Metamask mnemonic:
touch .env
echo INFURA_API_KEY="<YOUR_INFURA_KEY>" >> .env
echo MNEMONIC="<YOUR_METAMASK_MNEMONIC>" >> .env

Project structure

  • contracts/: This folder holds the solidity smart contracts. PriceConsumer can be found here. Once built, compiled contracts are put in the build/contracts folder
  • kubernetes/: This folder holds the kubernetes manifests. We will mainly be working with the manifests in the kubernetes/decentralized-data-model folder
  • migrations/: This folder holds the migration scripts. The first script, 1_deploy.js, deploys the “LinkToken”, “Oracle” and “Consumer” contracts, while the second script, 2_deploy_flux_aggregator.js, deploys “FluxAggregator”, “AggregatorProxy” and “PriceConsumer”. (Remark: Oracle & Consumer are actually not needed here; they are used in Run Chainlink “Basic Request Model” locally: Truffle, Ganache, Kubernetes)
  • scripts/decentralized-data-model: Set of scripts that will be used to communicate with the smart contracts
  • server/decentralized-data-model: contains the nodejs code for the “event watcher”
  • config/: when migrating the contracts to the blockchain, a file “addr.json” containing all the contract addresses is written here. Those addresses are used by the “event watcher” in order to monitor events emitted by the deployed contracts
  • package.json & package-lock.json: used to manage the project dependencies
  • truffle-config.js: configuration file of our truffle project. As one can notice, the mnemonic and infura keys are expected; these are provided in the .env file (a sketch of the network configuration is shown below)
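For reference, the rinkeby entry of truffle-config.js typically looks like the sketch below (assuming @truffle/hdwallet-provider; check the file in the repo for the exact settings):

// truffle-config.js (excerpt, sketch)
require('dotenv').config();
const HDWalletProvider = require('@truffle/hdwallet-provider');

module.exports = {
  networks: {
    rinkeby: {
      provider: () =>
        new HDWalletProvider(
          process.env.MNEMONIC,
          `https://rinkeby.infura.io/v3/${process.env.INFURA_API_KEY}`
        ),
      network_id: 4, // Rinkeby's network id
    },
  },
};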

Migrate contracts

Run this truffle command in order to migrate the contracts to the blockchain:

truffle migrate --network rinkeby

Once migrated, config/addr.json will be updated. We are interested in the addresses under the rinkeby property.

You can verify that the contracts were deployed on https://rinkeby.etherscan.io/. Let’s check the FluxAggregator (0x56a1CB8672C6Deb270C764B0Dd5142B7726751F1 in my case).

Update kubernetes namespaces and deploy ingress controller

Please ensure you have all the required namespaces and a running nginx ingress controller (in case you haven’t worked through my previous blog):

kubectl apply -f kubernetes/namespaces.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml

You can check available namespaces:

kubectl get namespaces

The following namespaces will be used in this tutorial:

  • adapters: used for the external adapters, which communicate with coinapi & coingecko
  • chainlink-rinkeby: used for the oracle nodes connected to Rinkeby. The chainlink namespace is used by the previous blog, hence not relevant here
  • ingress-nginx: used for the nginx ingress controller
  • storage: used for the postgresql database

You can check that the Nginx ingress controller has been installed using kubectl get all -n ingress-nginx. The pod is running and exposed through a service available on localhost:

kubectl get all -n ingress-nginx

Postgresql setup

Ensure postgresql is running

If you haven’t worked through my previous blog, please check the “deploy postgres” paragraph (“Run the tests” section) in Run Chainlink “Basic Request Model” locally: Truffle, Ganache, Kubernetes.

At this point, you should have a running postgresql installation. You can check:

kubectl get pv
kubectl get pvc -n storage
kubectl get secret -n storage
kubectl get configmap -n storage
kubectl get pod -n storage
kubectl get svc -n storage

Setup databases

Now let’s set up our postgresql databases. There will be 2 independent oracle nodes, and each of them needs its own database. I’m going to keep the next steps pretty short, as the purpose of this section is not to learn how to use pgAdmin; moreover, there is a lot of documentation online.

  • The postgresql service is not exposed outside of kubernetes, so let’s temporarily expose it using kubectl port-forward just for the configuration. When running the command below, you should get a prompt confirming that any request to your loopback IP (127.0.0.1) on port 5432 will be routed to port 5432 of the postgres service
kubectl port-forward -n storage svc/postgres 5432:5432
  • Now that the postgresql service is reachable from your machine, open pgadmin and create a new server. The host is 127.0.0.1, the port is 5432, and the username/password are the administration settings you set up when deploying the postgresql instance
  • Now we will create 2 databases, chainlink-rinkeby and chainlink-rinkeby2, used respectively by the 1st and 2nd oracle nodes. Please follow the official tutorial to create them
  • Once both databases are created, you should grant access to both oracle nodes. In the previous blog, we had one Oracle node whose user is clnode; for simplicity, we are going to keep the same user for both oracle nodes
  • At this stage, you should see the databases present in the interface
  • If you check the Properties > Security tab of each database, you will see that the user clnode has admin privileges on the database

Deploy external adapters

Instead of writing my own proxies to communicate with coingecko and coinapi, I’m going to use the adapters provided by chainlink. Please clone the following repo:

git clone https://github.com/smartcontractkit/external-adapters-js
cd external-adapters-js

If you browse the packages/sources folder, you will find many adapters. We are interested only in coingecko and coinapi, and here is what we will do in this section:

  • Build the coinapi and coingecko adapter docker images
  • Deploy these docker images on kubernetes
  • Expose them through the Nginx ingress controller, in order to confirm they are working as expected

Build adapters’ docker images

We are going to follow the github documentation.

  • In the root folder, run:
yarn generate:docker-compose
  • A new file, docker-compose.generated.yaml, will be generated
  • Build the coinapi adapter:
docker-compose -f docker-compose.generated.yaml build coinapi-adapter
  • Build the coingecko adapter:
docker-compose -f docker-compose.generated.yaml build coingecko-adapter
  • Check that the adapters’ docker images are present on your machine:
docker images | grep -E 'coinapi|coingecko'
  • Please note down the image versions in case they change in the future

Deploy adapters on kubernetes

  • Go back now to the root folder of this tutorial
  • In the prerequisites section, you had to register for a free account on coinapi and note down the API key. Remark: coingecko doesn’t require an API key for free accounts
  • In a terminal, replace <API_KEY> with your account key:
COINAPI_API_KEY='<API_KEY>'
  • Earlier I asked you to note down the docker image versions. If you open kubernetes/decentralized-data-model/adapters/coingecko.yaml or kubernetes/decentralized-data-model/adapters/coinapi.yaml, you will notice the image name and version in the deployment part. Change it if required. For instance, in the deployment manifest for coinapi, the image version is 0.0.6
  • Deploy the coingecko-adapter deployment, service and ingress:
kubectl apply -f kubernetes/decentralized-data-model/adapters/coingecko.yaml
  • Deploy the coinapi-adapter deployment, service and ingress:
cat kubernetes/decentralized-data-model/adapters/coinapi.yaml | sed -e "s|__API_KEY__|${COINAPI_API_KEY}|g" | kubectl apply -f -
  • Check the pods and services:
kubectl get all -n adapters
  • Check the coinapi ingress. The screenshot shows that the service is available at localhost on path /coinapi-adapter:
kubectl describe ingress -n adapters coinapi-ingress
  • Check the coingecko ingress. The screenshot shows that the service is available at localhost on path /coingecko-adapter:
kubectl describe ingress -n adapters coingecko-ingress

Test adapters

We can check that the adapters reply correctly. To do so, you can perform curl or wget requests from your command line, or use a user-friendly client such as postman.

Remark: external adapters follow strict specifications so that they can be interfaced later on with the chainlink oracle nodes. The specifications can be found here.

{ "id": 1,"data": {"base": "ETH","quote": "USD"}}

Hit send and you should get the ETH/USD price (1 ETH = 2489.73 USD in this case)

{ "id": 1,"data": {"base": "ETH","quote": "USD"}}

Hit send and you should get the ETH/USD price (1 ETH = 2491.28 USD in this case)
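If you prefer the command line over postman, a small node sketch can POST the same payload (the host and path come from the ingress checks above; the result field is part of the external adapter response specification):

const http = require('http');

const payload = JSON.stringify({ id: 1, data: { base: 'ETH', quote: 'USD' } });
const request = http.request(
  {
    host: 'localhost',
    path: '/coingecko-adapter', // or /coinapi-adapter
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(payload) },
  },
  (response) => {
    let body = '';
    response.on('data', (chunk) => (body += chunk));
    // adapters reply with { jobRunID, data, result, statusCode }
    response.on('end', () => console.log(JSON.parse(body).result));
  }
);
request.write(payload);
request.end();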

You can now see that every data aggregator has its own price. This is why chainlink uses several oracle nodes, and why every oracle node uses several data aggregators for its reference price feed.

Now that the adapters are running, we can work on the oracle nodes.

Deploy 2 independent oracle nodes

In this section, we will do the following:

  • Deploy the 2 oracle nodes on kubernetes. We will also expose them via the nginx ingress controller so they are accessible from our machine
  • Configure both oracle nodes

Remark: please feel free to open and read all the manifests before executing the commands.

Deploy oracle nodes

In your terminal, execute the following (please ensure you replace the placeholders <..>):

CHAINLINK_DB_USER=clnode
CHAINLINK_DB_PASSWORD='<CHAINLINK_DB_USER_PASSWORD>'
CHAINLINK_DB=chainlink-rinkeby
CHAINLINK_USER_EMAIL=dummy@gmail.com
CHAINLINK_USER_PASSWORD='<CHAINLINK_USER_PASS>'
CHAINLINK_WALLET_PASSWORD='<CHAINLINK_WALLET_PASS>'
CHAINLINK_LINK_ADDRESS='<LINK_CONTRACT_ADDRESS>'
CHAINLINK_ETH_URL='wss://rinkeby.infura.io/ws/v3/<INFURA_KEY>'
CHAINLINK_ETH_CHAIN_ID='\"4\"'

where:

  • <CHAINLINK_DB_USER_PASSWORD> is the database password of the user clnode
  • <CHAINLINK_USER_PASS> is the password of dummy@gmail.com. It will be used to log in to the chainlink node interface
  • <CHAINLINK_WALLET_PASS> is used to unlock the oracle node’s private key. When deploying an oracle node for the first time, it will create the required chainlink tables and also generate a private key which will be used to sign transactions
  • <LINK_CONTRACT_ADDRESS> can be found in config/addr.json under rinkeby > linkTokenAddress (this was created when you migrated the contracts to the blockchain)
  • <INFURA_KEY> can be fetched from infura. This URL is used to communicate with the Rinkeby testnet through Infura

Let’s now deploy the configuration of each node:

  • In your terminal, create the 1st node’s configmap node-local-config:
kubectl create -f kubernetes/decentralized-data-model/node/config.yaml --dry-run=client -o yaml | sed -e "s|__CHAINLINK_LINK_ADDRESS__|${CHAINLINK_LINK_ADDRESS}|g" | sed -e "s|__CHAINLINK_ETH_CHAIN_ID__|${CHAINLINK_ETH_CHAIN_ID}|g" | sed -e "s|__CHAINLINK_CONFIG_NAME__|node-local-config|g" | kubectl apply -f -
  • In your terminal, create the 2nd node’s configmap node-local-config-2:
kubectl create -f kubernetes/decentralized-data-model/node/config.yaml  --dry-run=client -o yaml | sed -e "s|__CHAINLINK_LINK_ADDRESS__|${CHAINLINK_LINK_ADDRESS}|g" | sed -e "s|__CHAINLINK_ETH_CHAIN_ID__|${CHAINLINK_ETH_CHAIN_ID}|g" | sed -e "s|__CHAINLINK_CONFIG_NAME__|node-local-config-2|g" | kubectl apply -f -
  • In your terminal, create the 1st node’s secret node-local-secret:
kubectl create -f  kubernetes/decentralized-data-model/node/secret.yaml --namespace chainlink-rinkeby --dry-run=client -o yaml | sed -e "s|__DATABASE_URL__|postgresql://${CHAINLINK_DB_USER}:${CHAINLINK_DB_PASSWORD}@postgres.storage:5432/${CHAINLINK_DB}?sslmode=disable|g" | sed -e "s|__USER_EMAIL__|${CHAINLINK_USER_EMAIL}|g" | sed -e "s|__USER_PASSWORD__|${CHAINLINK_USER_PASSWORD}|g" | sed -e "s|__WALLET_PASS__|${CHAINLINK_WALLET_PASSWORD}|g" | sed -e "s|__CHAINLINK_ETH_URL__|${CHAINLINK_ETH_URL}|g" | sed -e "s|__CHAINLINK_SECRET_NAME__|node-local-secret|g"  | kubectl apply -f -
  • In your terminal, create the 2nd node’s secret node-local-secret-2 (note that we set CHAINLINK_DB to a new value, which is the 2nd database’s name):
CHAINLINK_DB=chainlink-rinkeby2
kubectl create -f  kubernetes/decentralized-data-model/node/secret.yaml --namespace chainlink-rinkeby --dry-run=client -o yaml | sed -e "s|__DATABASE_URL__|postgresql://${CHAINLINK_DB_USER}:${CHAINLINK_DB_PASSWORD}@postgres.storage:5432/${CHAINLINK_DB}?sslmode=disable|g" | sed -e "s|__USER_EMAIL__|${CHAINLINK_USER_EMAIL}|g" | sed -e "s|__USER_PASSWORD__|${CHAINLINK_USER_PASSWORD}|g" | sed -e "s|__WALLET_PASS__|${CHAINLINK_WALLET_PASSWORD}|g" | sed -e "s|__CHAINLINK_ETH_URL__|${CHAINLINK_ETH_URL}|g" | sed -e "s|__CHAINLINK_SECRET_NAME__|node-local-secret-2|g"  | kubectl apply -f -
  • In your terminal, deploy the kubernetes service:
kubectl apply -f kubernetes/decentralized-data-model/node/service.yaml --namespace chainlink-rinkeby
  • In your terminal, deploy the oracle nodes:
kubectl apply -f kubernetes/decentralized-data-model/node/deployment.yaml --namespace chainlink-rinkeby
  • In your terminal, expose the oracle nodes via the Nginx ingress controller:
kubectl apply -f kubernetes/decentralized-data-model/node/ingress.yaml --namespace chainlink-rinkeby

One last thing remains. If you open kubernetes/decentralized-data-model/node/ingress.yaml, you will notice that the nodes are reachable on different hosts: rinkeby1.local for the 1st node and rinkeby2.local for the 2nd one (see below). Hence, we need to set up our local hosts file.

  • If you are running on mac or linux, you have to update the /etc/hosts file. If you are running on windows, the file is C:\Windows\System32\drivers\etc\hosts. In either case, you will need admin privileges to change it. Please add the following lines to your hosts file; they act as a local DNS on your computer and resolve these hostnames to your loopback IP:
127.0.0.1 rinkeby1.local
127.0.0.1 rinkeby2.local

Configure nodes

  • Log in to both of your nodes. Use dummy@gmail.com as the email, with the password chosen during setup
  • The next step is to credit your nodes with some ETH. This is required so that they can pay for gas when submitting transactions. Your node accounts can be found under Keys > Account addresses. Don’t worry if you see 0 ETH; next we are going to get some ETH for your nodes
  • Please note down the account address of each node; you will need them
  • Now let’s credit both accounts. In your terminal, create an environment variable which contains the oracle node accounts:
export ORACLE_NODE_ADDRESS=0xBc0904408F0bCf6F654B2A68fF51D447C05373a9,0x163253b9a8aB15dcC9D44BDaB147EFaBc0b3c4f4
  • Now run the truffle script scripts/decentralized-data-model/fund-oracle-nodes.js to credit both accounts. You can read the code: it takes the 1st account and sends 0.5 ETH from it to every node (see the sketch after this list). Remark: the sending account must have some ETH (please check the prerequisites section)
truffle exec scripts/decentralized-data-model/fund-oracle-nodes.js --network rinkeby
  • If the command is successful, you will notice that both of your nodes were credited with some ETH
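The core of fund-oracle-nodes.js is just a plain ETH transfer per node address. Roughly (a sketch, not the exact script):

module.exports = async (callback) => {
  try {
    const [funder] = await web3.eth.getAccounts(); // your 1st Metamask account
    const nodes = process.env.ORACLE_NODE_ADDRESS.split(',');
    for (const node of nodes) {
      // 0.5 ETH per node, enough to pay for gas on Rinkeby
      await web3.eth.sendTransaction({ from: funder, to: node, value: web3.utils.toWei('0.5') });
    }
    callback();
  } catch (err) {
    callback(err);
  }
};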

Run the event watcher

Now that everything is set up, we are going to run the event watcher in order to confirm that the smart contracts emit events as expected (see above).

Open a new terminal and run:

node server/decentralized-data-model/server.js --network rinkeby
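Under the hood, the watcher is essentially a set of web3 event subscriptions on the deployed contracts. A minimal sketch for the FluxAggregator alone (assuming the sketch file sits at the project root; a WebSocket provider is required for subscriptions):

require('dotenv').config();
const Web3 = require('web3');
const addresses = require('./config/addr.json'); // written during truffle migrate
const fluxAggregatorArtifact = require('./build/contracts/FluxAggregator.json');

const web3 = new Web3(`wss://rinkeby.infura.io/ws/v3/${process.env.INFURA_API_KEY}`);
const fluxAggregator = new web3.eth.Contract(
  fluxAggregatorArtifact.abi,
  addresses.rinkeby.fluxAggregatorAddress
);

// log every event the contract emits (NewRound, SubmissionReceived, AnswerUpdated, ...)
fluxAggregator.events
  .allEvents({ fromBlock: 'latest' })
  .on('data', (event) => console.log(event.event, event.returnValues));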

Setup FluxAggregator

There are 2 important parts:

  • As seen in the 1st part of this blog, the flux aggregator is responsible for aggregating the responses and paying out the oracles for their work. Hence it needs some Link tokens for payouts
  • We must whitelist the oracle nodes so that they are allowed to submit new price points

Fund FluxAggregator

In the root folder of your project, run the truffle script scripts/decentralized-data-model/fund-aggregator.js:

truffle exec scripts/decentralized-data-model/fund-aggregator.js --network rinkeby

If successful, you should see the following log in your terminal:

You can also check the logs written by the event watcher, which confirm:

  • AvailableFundsUpdated event triggered by FluxAggregator
  • Transfer event triggered by LinkToken

Whitelist oracle nodes

We are going to use the truffle script scripts/decentralized-data-model/set-oracles.js.

The script takes both node addresses and calls the changeOracles method of FluxAggregator. For simplicity, we consider that the 1st account (which is our Metamask 1st account and also the one used to deploy the contracts) is the admin of both oracles. We also set the minimum required submissions (1) and the maximum required submissions (2 in this case). restartDelay is the number of rounds an Oracle has to wait before it can initiate a new round; for simplicity, restartDelay is set to 0.

In the root folder of your project, run the truffle script scripts/decentralized-data-model/set-oracles.js:

truffle exec scripts/decentralized-data-model/set-oracles.js --network rinkeby

Once the script has finished running, you should see new events logged in the event watcher:

  • OraclePermissionsUpdated: informs of the new oracle and the linked administrator account
  • RoundDetailsUpdated: informs of the payment amount, the minSubmissionCount, maxSubmissionCount, restartDelay and timeout.

Configure bridges and job

Now that FluxAggregator has enough Link tokens and the oracle nodes are whitelisted, let’s configure the final part of our oracle nodes.

Define Bridges

Now, in the console of each of your nodes, create 2 bridges, which define the connection to the external adapters:

  • coingecko_cl_ea
  • coinapi_cl_ea

Create job

Now go to the Jobs tab and create a new job. Here is the json configuration (please replace <FLUXAGGREGATOR_ADDRESS>; the address can be found in config/addr.json under rinkeby > fluxAggregatorAddress).

Remark: the FluxMonitor initiator is explained here

{"initiators": [{"type": "fluxmonitor","params": {"address": "<FLUXAGGREGATOR_ADDRESS>","requestData": {"data": {"from": "ETH","to": "USD"}},"feeds": [{"bridge": "coinapi_cl_ea"},{"bridge": "coingecko_cl_ea"}],"threshold": 1,"absoluteThreshold": 1,"precision": 8,"pollTimer": {"period": "5m0s"},"idleTimer": {"duration": "1h0m0s"}}}],"tasks": [{"type": "NoOp"},{"type": "multiply","params": {"times": "100000000"}},{"type": "NoOp"},{"type": "ethuint256"},{"type": "NoOp"},{"type": "ethtx"}]}

Check runs

The jobs are configured to run every 5 minutes but only submit a value if the deviation between the on-chain and off-chain prices is over 1%. Otherwise, another condition (idleTimer) forces the oracle to submit a price value 1 hour after the last change. After a few minutes, you can check the runs under Jobs > Runs, for both of your oracle nodes.

Open the last one and you will see the details. In my case:

  • The 1st node got 2621.82381356047565 USD for 1 ETH
  • The 2nd node got 2620.5791504592687 USD for 1 ETH

Let’s now look at the events logged by the event watcher.

Many interesting things can be seen here:

  • The NewRound event tells us that there is a new round (6 in my case, as I’ve been running this example for a while, but it will probably be 1 in your case). It was initiated by 0xBc0904408F0bCf6F654B2A68fF51D447C05373a9, which is the 1st oracle node
  • The 1st SubmissionReceived event tells us that 0xBc0904408F0bCf6F654B2A68fF51D447C05373a9 submitted 262182381356 (the precision is 8, so 2621.82381356 USD per ETH), which is consistent with what we saw in the oracle node interface
  • The 1st AnswerUpdated event emits the current ETH price. There was one submission, so it is equal to the 1st node’s submission: 262182381356
  • The 1st AvailableFundsUpdated event tells us that the available funds have been decreased by 1 link token: the 1st oracle node has been paid
  • The 2nd SubmissionReceived event tells us that 0x163253b9a8aB15dcC9D44BDaB147EFaBc0b3c4f4 (the 2nd oracle node address) submitted 262057915046 (the precision is 8, so 2620.57915046 USD per ETH), which is consistent with what we saw in the oracle node interface
  • The 2nd AnswerUpdated event emits the current ETH price. There have been 2 answers so far, so the calculated price is 2621.20148201 USD for 1 ETH. This is the average of 2621.82381356 and 2620.57915046 (with only 2 values, the median equals the average). This is exactly what we expected: FluxAggregator has aggregated the submissions!
  • The 2nd AvailableFundsUpdated event tells us that the available funds have been decreased by 1 link token: the 2nd oracle node has been paid

Retrieve data from blockchain

Let’s now perform some interesting queries.

Request latest round data

In the root folder of your project, run the truffle script scripts/decentralized-data-model/req-latest-round-data.js:

truffle exec scripts/decentralized-data-model/req-latest-round-data.js --network rinkeby

This tells us the latest round id and the ETH/USD price for that round (a sketch of the underlying call is shown below).
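In essence, the script reads latestRoundData, the standard AggregatorV3 read method (reading straight from FluxAggregator here; the proxy exposes the same interface). A hedged sketch:

const FluxAggregator = artifacts.require('FluxAggregator');

module.exports = async (callback) => {
  try {
    const aggregator = await FluxAggregator.deployed();
    // latestRoundData returns (roundId, answer, startedAt, updatedAt, answeredInRound)
    const { roundId, answer } = await aggregator.latestRoundData();
    // answers are stored with a precision of 8 decimals
    console.log(`Round ${roundId}: 1 ETH = ${Number(answer.toString()) / 1e8} USD`);
    callback();
  } catch (err) {
    callback(err);
  }
};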

Request round state

As discussed in the 1st section, oracles call the oracleRoundState method of FluxAggregator to retrieve all the data they need to decide whether to submit a new price point or not. Let’s run the truffle script scripts/decentralized-data-model/req-round-state.js:

truffle exec scripts/decentralized-data-model/req-round-state.js --network rinkeby

This is the output of the script. For each oracle, it tells the next open roundID, whether the oracle is eligible to submit for the open round, the available funds, the number of oracles and the payment amount for every submission (1 link token here).

Request latest price from a consumer

Now let’s request the ETH/USD price from our PriceConsumer contract. Remember, PriceConsumer does not communicate directly with the FluxAggregator; it communicates with a proxy. This can be seen in the migration script migrations/2_deploy_flux_aggregator.js, where we passed the AggregatorProxy address (and not the FluxAggregator address) to the constructor of PriceConsumer.

PriceConsumer is very straightforward: it has a method getThePrice which returns the latest price. You can open contracts/PriceConsumer.sol to verify.

Let’s now run the truffle script scripts/decentralized-data-model/req-latest-price.js, which calls this method. This can be seen here.

Now run in your terminal:

truffle exec scripts/decentralized-data-model/req-latest-price.js --network rinkeby

The output shows that the PriceConsumer got the expected price: 2621.20148201 USD for 1 ETH.

Check Oracles’ funds

As discussed in the 1st part, a pull-over-push pattern is used. Hence, oracle payouts are stored in the FluxAggregator’s storage, and the Oracles’ Administrators have to pull the funds themselves.

One can check the withdrawable payment for every oracle. You can open the script scripts/decentralized-data-model/req-withdrawable-payment.js: it calls the withdrawablePayment method of FluxAggregator to check the withdrawable amount for every oracle node (see the sketch below).
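The essence of the script, as a sketch (it reuses the ORACLE_NODE_ADDRESS variable defined earlier):

const FluxAggregator = artifacts.require('FluxAggregator');

module.exports = async (callback) => {
  try {
    const aggregator = await FluxAggregator.deployed();
    const nodes = process.env.ORACLE_NODE_ADDRESS.split(',');
    for (const node of nodes) {
      const owed = await aggregator.withdrawablePayment(node);
      // LINK has 18 decimals, like ETH, so fromWei gives a human-readable amount
      console.log(`${node} can withdraw ${web3.utils.fromWei(owed)} LINK`);
    }
    callback();
  } catch (err) {
    callback(err);
  }
};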

Now run in your terminal:

truffle exec scripts/decentralized-data-model/req-withdrawable-payment.js --network rinkeby

In my case, the script output is the following:

And that’s it! Our PriceConsumer contract can retrieve decentralized price points, and the oracle nodes are paid for their valuable work! 😀


Written by Amine El

Cloud architect, new to Blockchain. Passionate about Defi, Oracles & NFTs. Views are my own.
