Ijonas Kisselbach


Fetching Strings through a Chainlink Oracle



Apr 7, 2022

3 min read

The standard “Hello, World!” of testing a Chainlink Node is to execute the “Get > Uint256” job referenced in the documentation. It’s simple: the smart contract sends a request to the oracle contract, which in turn executes the “Get > Uint256” job, fetching a single number from a JSON web API.

Fetching a string value from that same JSON web API has turned out to be a bit more difficult, limiting and frustrating.

TL;DR: the solution can be found here

Get > Bytes32

What an innocuous little title for a job that has caused me so much grief today 😑. Let’s just share it in all of its glory:

type = "directrequest"
schemaVersion = 1
name = "Get > Bytes32 v14"
maxTaskDuration = "0s"
contractAddress = "0x17A8182DBF79427801573A3D75d42eB9B1215DEF"
minIncomingConfirmations = 0
observationSource = """
    decode_log   [type="ethabidecodelog"
                  abi="OracleRequest(bytes32 indexed specId, address requester, bytes32 requestId, uint256 payment, address callbackAddr, bytes4 callbackFunctionId, uint256 cancelExpiration, uint256 dataVersion, bytes data)"
                  data="$(jobRun.logData)"
                  topics="$(jobRun.logTopics)"]

    decode_cbor  [type="cborparse" data="$(decode_log.data)"]
    fetch        [type="http" method=GET url="$(decode_cbor.get)"]
    parse        [type="jsonparse" path="$(decode_cbor.path)" data="$(fetch)"]
    encode_data  [type="ethabiencode" abi="(bytes8 value)" data="{ \\"value\\": $(parse) }"]
    encode_tx    [type="ethabiencode"
                  abi="fulfillOracleRequest(bytes32 requestId, uint256 payment, address callbackAddress, bytes4 callbackFunctionId, uint256 expiration, bytes32 data)"
                  data="{\\"requestId\\": $(decode_log.requestId), \\"payment\\": $(decode_log.payment), \\"callbackAddress\\": $(decode_log.callbackAddr), \\"callbackFunctionId\\": $(decode_log.callbackFunctionId), \\"expiration\\": $(decode_log.cancelExpiration), \\"data\\": $(encode_data)}"
    submit_tx    [type="ethtx" to="0x17A8182DBF79427801573A3D75d42eB9B1215DEF" data="$(encode_tx)"]

    decode_log -> decode_cbor -> fetch -> parse -> encode_data -> encode_tx -> submit_tx
"""

The important individual steps of the job are in the observationSource section. After two steps to decode the incoming request, the node performs a fetch, retrieving the URL specified in the job request payload. The response data is then pushed through parse, which extracts a specific value from the JSON.
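The fetch and parse pair can be sketched in plain Python. This is only an illustration of the behavior, not Chainlink's implementation; the JSON body and the `data,base` path below are hypothetical stand-ins for a real web API response and request payload:

```python
import json

def jsonparse(payload: str, path: str) -> object:
    """Walk a comma-separated path into a parsed JSON document,
    mirroring what the jsonparse task does with its path parameter."""
    value = json.loads(payload)
    for key in path.split(","):
        value = value[key]
    return value

# Pretend this is the body returned by the http task's GET request.
body = '{"data": {"base": "ETH", "currency": "USD", "amount": "1987.65"}}'

print(jsonparse(body, "data,base"))    # extracts the string "ETH"
print(jsonparse(body, "data,amount"))  # extracts the string "1987.65"
```

It is that extracted string, `"ETH"` here, that the later encoding steps have to squeeze into a fixed-size ABI type.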

The interesting, and ultimately disappointing, steps happen during encode_data and encode_tx. In the version above, the encode_data step attempts to encode the string value returned from parse as a bytes8, which is great if your string value fits into that data structure.

"Coinbase"   # is returned from parse as bytes8
"ETH"        # is returned from parse as bytes3

So when encode_data tries to pass a bytes3 into ethabiencode against a bytes8 ABI, you get an error:

expected 8, got 3: bad input for task: bad input for task

It’s frustrating that ethabiencode can’t upscale the bytes3 into a bytes8.
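A small Python sketch of why the length matters, assuming (as the ABI spec describes) that a bytesN value must be exactly N bytes and is then right-padded to a 32-byte word; the function name is mine, not Chainlink's:

```python
def encode_fixed_bytes(value: bytes, n: int) -> bytes:
    """Sketch of ABI encoding for a bytesN value: the input must be
    exactly n bytes long, then it is right-padded to a 32-byte word."""
    if len(value) != n:
        raise ValueError(f"expected {n}, got {len(value)}")
    return value.ljust(32, b"\x00")

word = encode_fixed_bytes(b"Coinbase", 8)   # fine: "Coinbase" is 8 bytes
print(len(word))                            # a single 32-byte word

try:
    encode_fixed_bytes(b"ETH", 8)           # "ETH" is only 3 bytes
except ValueError as err:
    print(err)                              # expected 8, got 3
```

Nothing in the encoding itself prevents right-padding `b"ETH"` to 8 bytes first; the task just refuses to do that widening for you.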

My next approach was to see if I could make those string values convert to dynamic rather than fixed-length data structures, so I rewrote encode_data:

# changing the abi to a dynamic bytes array - "(bytes value)"
encode_data  [type="ethabiencode" abi="(bytes value)" data="{ \\"value\\": $(parse) }"]

This got me one step further by upscaling any string value into a bytes96. However, this is where encode_tx ground to an abrupt halt, as the fulfillOracleRequest(....., bytes32 data) method requires the data to be passed back as a bytes32.
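The 96 bytes aren't arbitrary. Under the ABI spec, a dynamic `bytes` value is laid out as a 32-byte offset, a 32-byte length, and the data itself right-padded to a whole 32-byte word. A sketch of that layout (my own illustration, not Chainlink code):

```python
def encode_dynamic_bytes(value: bytes) -> bytes:
    """Sketch of ABI encoding for a dynamic bytes value:
    offset word + length word + right-padded data."""
    offset = (32).to_bytes(32, "big")          # offset to the data section
    length = len(value).to_bytes(32, "big")    # actual byte length
    padded_len = -(-len(value) // 32) * 32     # round up to a 32-byte word
    data = value.ljust(padded_len, b"\x00")
    return offset + length + data

print(len(encode_dynamic_bytes(b"ETH")))   # 96: three words, not a bytes32
```

So even a 3-byte string comes back as three 32-byte words, which can never be handed to a parameter declared as `bytes32`.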

In summary

Right now I can’t see any way around having to specify exact fixed-size data structures, i.e. bytes3 when parse returns a small string, and the slightly larger bytes8 when it returns a bigger one.

This doesn’t feel like a workable solution. I’ve got a couple of further options to pursue, most notably the concept of larger responses.

Follow Up

I finally managed to fetch a large string using the large responses technique mentioned above. Instructions can be found here.
