
Ecological State

Ecological Index is an on-chain variable that represents the value of the ecological asset stored in the GeoNFT. Depending on the application's requirements and data collected, it could be a measurement of area, soil health, biodiversity, carbon sequestration, or other calculation.

Area Calculation

Area is a simple way of calculating ecological value using the topological data represented by the GeoJSON. For instance, if a steward of an asset has agreed to follow regenerative practices, the size of that natural space plays a key role in its value. When using fractionalization, token supply would be controlled by this value, so it's important to accurately calculate area using on-chain functionality.

Below is a description of how we calculated the ellipsoidal area of Polygon and MultiPolygon geometries on-chain.

Implementation

The area calculation of a surface can be approached in different ways. A very simple and straightforward one is to display the coordinates on a flat plane; the area calculation is then the same as for any irregular geometric figure. Planar representations of maps (called "projected" coordinate systems) are very common because they are easier to view and work with. One example is Google Maps, which uses the Pseudo-Mercator (or Web Mercator) projection. It was initially conceived as a variant of the Mercator projection (the one used on typical physical maps) and has become the standard for mapping applications.

In our implementation, however, we calculate the ellipsoidal area based on the spherical representation of the Earth instead of on a flat plane. Calculating the area over a sphere is more difficult, but the result is more accurate.
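For intuition, the spherical approach can be sketched off-chain with the Chamberlain–Duquette spherical-excess formula (the approach used by libraries such as Turf.js). This is an illustrative sketch, not the on-chain Solidity code:

```python
import math

EARTH_RADIUS_M = 6378137  # WGS84 equatorial radius

def ring_area(coords):
    """Approximate geodesic area (m^2) of a ring of (lon, lat) pairs
    using the Chamberlain-Duquette spherical excess formula."""
    total = 0.0
    n = len(coords)
    for i in range(n):
        lon1, lat1 = coords[i]
        lon2, lat2 = coords[(i + 1) % n]
        total += math.radians(lon2 - lon1) * (
            2 + math.sin(math.radians(lat1)) + math.sin(math.radians(lat2))
        )
    return abs(total) * EARTH_RADIUS_M ** 2 / 2

# a roughly 111 m x 111 m square near the equator, ~12,400 m^2
square = [(0, 0), (0.001, 0), (0.001, 0.001), (0, 0.001)]
```

The floats and `math.sin` calls here are exactly what Solidity lacks, which leads to the challenges discussed next.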

Challenges

Calculating the area on-chain in Solidity poses some challenges that are not present in most standard programming languages. Two aspects make it difficult:

  1. decimal handling and irrational constants (such as pi), and

  2. trigonometric functions (the sine function is needed in the area calculation algorithm).

Decimal Handling

Handling decimals is a well-known topic in the Solidity community: you multiply the value by some factor, do the calculations and, finally, divide by that same factor. The result should then be valid (within some error deviation). This is applied, for example, to the input coordinates. The range of valid values is [-90, 90] for latitude and [-180, 180] for longitude. Operating without decimals in this context is useless because of the lack of precision, so we apply a factor of 10^9 to operate with integer values. The same factor is applied to constants such as the Earth radius, pi, etc.
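The scaling trick can be sketched in a few lines. The 10^9 factor matches the one described above; the helper names are illustrative:

```python
SCALE = 10 ** 9  # fixed-point factor applied to coordinates and constants

def to_fixed(x: float) -> int:
    # scale a decimal value into an integer representation
    return round(x * SCALE)

def mul_fixed(a: int, b: int) -> int:
    # multiply two scaled values, dividing once to stay at SCALE
    return a * b // SCALE

lat = to_fixed(12.147418397582491)  # latitude as a scaled integer
pi = to_fixed(3.141592653589793)    # constants are scaled the same way
```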

Trigonometric Functions

Using trigonometric functions in Solidity is not straightforward. Trigonometric functions (sine, cosine, tangent) typically take radians, and their results are bounded to a narrow range: sine and cosine return values in [-1, 1]. It is therefore impossible to operate without decimals, and Solidity has no built-in sine function.

The solution to this was found by Lefteris Karapetsas for his Sikorka app, which was based on the C library trigint, in turn an implementation of Scott Dattalo's sine wave routine for the PIC microcontroller. In all cases the problem was the same: calculate the sine of an angle using only integer values. In summary, the solution is to pre-calculate all the values and store them in a hash table. This can be done in a programming language that supports standard sine calculation (e.g. Python), exporting the hash table in a format compatible with Solidity: a string of hexadecimal values.

But how is that table built? Excluding decimal values, a circle can be split into 360 parts, each representing an angle of 1 degree. We could calculate the real sine value for each degree and store it in the hash table, but this doesn't give us much precision, especially in a geospatial coordinate system (68.1° and 68.9° would resolve to the same value).

A way of improving this is to split the circle into more units, increasing precision. Splitting the circle into 16,384 units allows storing the angle parameter as an unsigned 16-bit integer (as used in the trigint library). In our case, we needed even more precision to accurately calculate areas, so we used a 256-bit data type to store the values of up to 1,073,741,824 angle units per circle. With this method we achieved an error deviation of < 0.1% even on very small areas.
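Generating such a table is a short Python exercise. Here is a minimal sketch at the 16,384-unit granularity used by trigint (the on-chain version uses far more angle units and the hex-string encoding shown below):

```python
import math

ANGLE_UNITS = 16384  # units per full circle, as in trigint
AMPLITUDE = 32767    # scale sine into the signed 16-bit range

def build_sine_table():
    # pre-compute real sine values and store them as integers
    return [
        round(math.sin(2 * math.pi * i / ANGLE_UNITS) * AMPLITUDE)
        for i in range(ANGLE_UNITS)
    ]

TABLE = build_sine_table()

def isin(angle_units: int) -> int:
    # integer sine lookup: no floating point needed at call time
    return TABLE[angle_units % ANGLE_UNITS]
```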

Future Plans

Ecological state calculations will be expanded to include other indexes based on data collected for an ecological asset. These values could be determined by machine learning models or other algorithms.

As such, we have split the area calculation into a library separate from the GeoNFT contract, making it easy to replace when adding other ecological calculations.

Spatial Data Registry

A Spatial Data Registry is a way of organizing and storing spatial data to run geospatial queries. This registry could be used across a variety of Web3 use cases: spatial governance systems, sustainability-linked bonds, parametric insurance policies, and location-based Web3 games. For MRV (measurement, reporting, and verification) applications, it's an essential tool for querying and identifying all data relevant to an asset, making it difficult, for example, to overstate carbon offsetting by concealing greenhouse gas emissions.

For a truly decentralized geospatial search, we have created the On-chain Spatial Data Registry where the geospatial index is stored on-chain.

We've also created a more performant version, the Anchored Verifiable Spatial Data Registry, which makes use of MongoDB's spatial capabilities and validates the data by anchoring hashes on-chain.

[Figure: Graph of the sine function. By Geek3, own work, Public Domain]
[Figure: 30 degree reference angles. By Adrignola, own work, CC0]
bytes constant SIN_TABLE = "\x00\x00\x00\x00\x00\xc9\x0f\x88\x01"; //truncated for readability

Astral Litepaper

Astral

For the past few years we’ve been investigating what we see as a new galaxy in the Web3 universe, at the intersection of spatial data and consensus technologies. We’ve explored the boundaries of the design space researching, prototyping and building community. This is culminating in our work to build what we now see as the key primitives to underpin a new category of Web3 applications using spatial and location data. Our work at Astral is focused on promoting the evolution of an open, human-centered and composable location-based decentralized web.

The opportunity space is pretty vast. We believe that the vision for the user-controlled internet is incomplete without peer-to-peer alternatives to services like Uber, Google Maps, Airbnb, Tinder, Craigslist, Amazon and others. Even more exciting is the opportunity to replicate the functionality required for systems of local taxation, voting, and physical security. Perhaps the most exciting of all is the notion that our systems of value exchange can be configured to promote the preservation of life and ecological health — the nascent “regenerative finance”, or ReFi, movement, leveraging natural capital currencies and other tokenized natural assets. We’ve been finding that all of this is possible if we can reliably tie information about physical reality — where someone or something is, or measures of environmental conditions — to smart contracts, especially contracts capable of computational geometry.

Rather than working at the application layer, at Astral we are designing open source tools, public goods intended to underpin this category, to (a) make building location-based dapps easier and (b) create spatial data storage systems fit for Web3 (i.e. verifiable, uncensorable, permissionless). We believe that if the ecosystem converges on these tools and design patterns, the location-based decentralized web will exhibit the same emergent composability we're witnessing in DeFi.

So what does this specifically mean?

  • We’re developing verifiable spatial data registries, smart-contract based registries of vector or raster spatial data assets. Initially this was to store polygons representing geographic jurisdictions on chain, though we see use cases for points, lines, polyhedra and raster datasets as well.

  • We’ve also been designing a verifiable location claim, or a “universal location check-in”, a verifiable claim that serves as an attestation that someone / something was at a particular place at a particular time. This VC can be held privately by its creator, and submitted to any compatible location-based dapp or service. We are uncovering dimensions of trust, security and privacy, and that different applications will demand different procedures for creating these VCs.

  • We’ve also built a proof-of-concept for a

What Astral enables

As we see it, these primitives enable two key pieces of functionality:

Responding to ecological state changes

One is tying smart contract behavior to ecological state changes.

This is powerful technology, which makes it very dangerous. We are approaching the design and development of natural capital currencies with great humility. Initial thinking suggests looking at a natural ecosystem from three perspectives. First, and most importantly by a large margin, from the people who live in, near and with the ecosystem, especially indigenous people. Supporting these human assessments of ecosystem state and health, we believe that data collected from proximate connected sensors deployed in the environment (the second perspective), along with remote sensing data captured from drones or satellites (the third), can be analyzed to reach a reasonably accurate understanding of ecological conditions.

This is exciting because it means on-chain systems (DeFi, digital governance etc) can adapt to changes in ecosystem health, for example by rebasing a currency or tapping a community treasury to deploy capital when conditions approach threshold parameters. We are especially excited at the idea of detecting leading indicators of ecosystem degradation so we can support more targeted and less extreme interventions to preserve health.

Local contracts

The second: local contracts, smart contracts that can use location as a condition in their control flow, i.e. require(pointInZone(coordinates, zoneGeometry)), with the coordinates coming from a verifiable location claim and the zoneGeometry polygon stored in a verifiable spatial data registry. This binds smart contract behavior to spatial extents.
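Off-chain, a pointInZone check like the one above is a standard point-in-polygon test. A minimal ray-casting sketch (the function and zone here are illustrative, not the contract's API):

```python
def point_in_zone(point, polygon):
    """Ray-casting point-in-polygon test on (lon, lat) pairs."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does a ray cast to the right of the point cross this edge?
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# hypothetical zone geometry: a simple square
zone = [(0, 0), (4, 0), (4, 4), (0, 4)]
```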

We believe with this functionality we can recreate tools of territorial governance that are the standard in the legacy system. We are working on Spatial.sol, a suite of geometric and topological functions in Solidity. We've also designed and prototyped a spatial governance protocol for connected devices, which could serve as a global (or, really, universal, within any spatial reference system) territorial governance system for self-sovereign jurisdiction administrators. Applications include e-mobility and intelligent transport, supply chain and logistics, autonomous vehicle governance and so on.

Serving the universe and the metaverse

Astral is intended to underpin any dapps using spatial or location data. This includes the metaverse. In fact, we feel quite strongly that doing early experimentation and prototyping in the metaverse is the best way to understand and develop the technology.

This summary has focused on the technology, but we are placing a greater emphasis on people: on building the community of people who will be building with the public goods Astral is creating.

Where we are heading

We've received funding as members of the Climate Collective, and we're engaging with thought leaders and prospective users to triangulate as versatile a design for these primitives as possible.

So — what do you think? What are we missing? Do you know of anyone working on related things? What should we be thinking about? Who might be interested — can you connect us?

Reach out on Twitter or join our Discord!

Celo/EVM Anchored Spatial Data Registry

Introduction

The purpose of this MVP was to demonstrate a workflow from an MRV data provider, supplying raw measurement data based on both stationary and mobile sources, to a backend with the following features:

  • an ingestion process including:

    • timeseries and geospatial location (both point- and polygon-based) indexing

    • a flexible data schema to accommodate multiple data types and units of measure

    • hashing of raw measurement data on ingestion

  • an anchoring process including:

    • summary hashing of data hashes

    • anchoring of summaries to the database and Celo blockchain

  • a validation process verifying the hashing and anchoring processes

  • account-based authentication and authorization, separating the above functionality by role

  • a means to create a public HTTP endpoint to list the results of queries on the Ocean marketplace

  • a test layer covering the above functionality

This MVP builds on the CosmWasm Anchored Geospatial Data Registry.


    GeoNFT

    Geospatial Non-fungible Token

    A GeoNFT represents geospatial assets by extending the non-fungible token (ERC-721) contract with location information.

    Geospatial data is defined as a GeoJSON string defining a FeatureCollection of one or more Features (Polygon or Point):

    {
      "type": "FeatureCollection",
      "features": [
        {
          "type": "Feature",
          "geometry": {
            "type": "Polygon",
            "coordinates": [
              [
                [-68.8906744122505, 12.147418397582491],
                [-68.8907468318939, 12.147347599447487],
                [-68.8907213509083, 12.14723615790054],
                [-68.8905939459801, 12.147198136656193],
                [-68.89051884412766, 12.147280734524921],
                [-68.89055103063583, 12.147379065287602],
                [-68.8906744122505, 12.147418397582491]
              ]
            ]
          }
        }
      ]
    }

    Additionally, GeoNFTs contain an Ecological Index as a measure of ecological state:

    struct EcologicalIndex {
        string indexType;
        int256 indexValue;
    }

    The Ecological Index is an on-chain variable representing the value of the asset defined by the GeoNFT. A common design pattern is to fractionalize an NFT into fungible ERC-20 tokens for usage within community reserves and currencies. The Ecological Index could be a parameter to determine the amount of ERC-20 tokens that are created. As the Ecological Index changes, the supply of tokens may be responsive to this value.
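As a toy illustration of that fractionalization pattern, assuming a hypothetical tokens-per-index-unit rate (none of these names come from the actual contracts):

```python
TOKENS_PER_INDEX_UNIT = 10  # hypothetical fractionalization rate

def target_supply(index_value: int) -> int:
    # ERC-20 supply derived from the GeoNFT's Ecological Index
    return index_value * TOKENS_PER_INDEX_UNIT

def rebase(current_supply: int, index_value: int) -> int:
    # tokens to mint (positive) or burn (negative) after an index update
    return target_supply(index_value) - current_supply
```

Under this sketch, an index rising from 500 to 600 would mint 1,000 new tokens into the community reserve.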

    Validation

    [Figure: Validation process highlighted]

    The validation process, running under the 'validator' role, retrieves a requested number of unvalidated anchors and, for each one, retrieves the associated data points and recalculates the summary hash.

    The current counts and hash results are then compared to the counts and hash results established by the anchor process on both the Celo contract and the database, and any discrepancies are reported.

    Relevant code: Validation endpoint, Validation service, Contract service, Celo Contract, Contract anchor and validation integration test.

    Technical Design

    [Figure: Celo Verifiable Spatial Data Registry MVP technical design]

    Overall design diagram shows all moving parts; data flow is from left to right. Detail pages for each process follow.

    Functionality is managed and secured with authentication and authorization via HTTP interceptors. Account creation and management runs under the 'super' role:

    Relevant code: User role test, Super role test, Super endpoint.

    Ingestion

    [Figure: Ingestion layer highlighted]

    A client MRV Provider authenticates and receives a token containing the 'provider' role. With this token applied to an HTTP client header 'x-access-token', they will send (post) data in the form described by this flexible schema example:

    [Figure: Details of ingested data point]

    Location can be an array of points or polygons. See the measurements interface definition below for currently supported data types and units of measure.

    Relevant code: Endpoint test, Endpoint implementation, Service implementation, and the top-level data schema, metadata, location, and measurements interface definitions.

    On-chain Spatial Data Registry

    The fully decentralized spatial data registry stores all data on-chain. Due to technology limitations, we built a lightweight solution to organize and query the geospatial data using the geohash geocode system to build a data structure called a GeoTree.

    Geohash

    Geohashing is a method that encodes coordinates as a string of characters. In the geohash system, the Earth's surface is divided into 32 rectangles, each one corresponding to a specific character. The characters include all numbers and letters except a, i, l and o. Each top-level rectangle is subsequently divided into 32 rectangles, representing a second level of detail. This pattern can be continued indefinitely, with each level appending one identifier character, though the convention defines a maximum of 12 levels. The more characters we specify, the smaller the rectangle. For example, the five-character geohash de2f7 is five nested rectangles deep.
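Geohash encoding interleaves longitude and latitude bits, mapping each 5-bit group to one base-32 character; a compact off-chain sketch:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet: no a, i, l, o

def geohash_encode(lat: float, lon: float, precision: int = 8) -> str:
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bits, ch, even = [], 0, 0, True  # even-numbered bits refine longitude
    while len(chars) < precision:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                ch = ch * 2 + 1
                lon_lo = mid
            else:
                ch = ch * 2
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                ch = ch * 2 + 1
                lat_lo = mid
            else:
                ch = ch * 2
                lat_hi = mid
        even = not even
        bits += 1
        if bits == 5:  # five bits make one base-32 character
            chars.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(chars)
```

Note the prefix property: truncating a geohash simply widens the rectangle, which is what makes prefix queries (and the GeoTree below) possible.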

    In the Solidity contract, latitudes and longitudes are stored as signed 64-bit integers. At this level of precision, the maximum level of geohashing is 12, providing a geolocation precision of 3.7 cm × 1.9 cm.

    The short alphanumeric format of a geohash lends itself well to Solidity, which doesn't support floating-point numbers. It's also convenient that removing characters from the end of a hash yields progressively less precise locations for querying.

    Point Locations

    Our current implementation of the spatial data registry only indexes a single latitude/longitude point. For polygons, we calculate the centroid off-chain and use that value to register a location.

    The registry stores data at level 8, which corresponds to a square of 20x20 meters. Any point inside that square will be resolved to the same geohash.

    GeoTree

    A GeoTree is a type of data structure that permits indexing of data that exist at different levels. Below is an example of GeoTree indexing two-dimensional data using geohashes. The top nodes of the tree correspond to geohash level 1 — we call them parent nodes. Child nodes are the bottom nodes of this diagram, and they represent level 2. In the GeoTree, nodes only contain one character, as the child nodes automatically inherit their parent’s value. By traversing from the root node to the end node, we can access each geohash character and build the complete geohash.

    The tree allows us to query data assets at any resolution by picking any node and traversing through all of its children to find all enclosed assets. For example, the level 5 geohash gbsuv represents a rectangle of approximately 5x5 km. To find all assets inside this area, we would select all child nodes with geohashes beginning with gbsuv. Let’s assume the end nodes of our system are the following:

    • gbsuv7dq

    • gbsuv7zw

    • gbsuv7zy

    The GeoTree data structure would look like this:

    Note that we’ve created all the intermediate nodes of the tree, including d and z, even though they didn’t contain data. This way, we can query any subtree at any level and get all of its children.

    Optimization

    The above data structure allows us to query the GeoTree in O(n) time. We decided to improve it by caching all the enclosed assets at each intermediate node. This means that on every insertion, the asset is stored on the end node and on every parent of that node. The following illustration shows this:

    Even though this variant has some drawbacks, we chose it because it pays off in query efficiency. Time complexity is as follows:

    • Insertion: the write is executed at every level (k). In a system with a depth of 8, k is 8, a constant, so time complexity is O(1).

    • Query: given that every node contains all of its children's assets, a query needs only one step, making time complexity O(1).

    Regarding storage, this modification increases the size from n to (k+1)*n. However, the depth of the tree (k) is usually a small number, keeping the memory requirement at O(n), the same as the non-cached version.
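The cached GeoTree can be sketched off-chain as a plain map from geohash prefixes to asset sets, using the example end nodes above (class and method names here are illustrative, not the contract's):

```python
from collections import defaultdict

class GeoTree:
    def __init__(self):
        self.index = defaultdict(set)  # geohash prefix -> enclosed asset ids

    def insert(self, geohash: str, asset_id: int) -> None:
        # O(k): write the asset at the end node and every parent prefix
        for level in range(1, len(geohash) + 1):
            self.index[geohash[:level]].add(asset_id)

    def query(self, prefix: str) -> set:
        # O(1): every node already caches all of its children's assets
        return self.index.get(prefix, set())

tree = GeoTree()
tree.insert("gbsuv7dq", 1)
tree.insert("gbsuv7zw", 2)
tree.insert("gbsuv7zy", 3)
```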

    Solidity Implementation

    Nested data structures such as trees are hard to implement in Solidity due to its limited functionality. However, the cached implementation of the GeoTree was achieved with a simple hash map. The key is the full geohash mapped to an array of asset values. Our previous example would be represented as follows:

    Anchoring

    [Figure: Anchor process highlighted]

    The anchor process, running under the 'super' role, retrieves a requested number of unanchored raw data points which have been hashed at ingestion. It then creates a "summary hash" by hashing the individual data point hashes.

    The created anchor instance id and summary hash are then stored in both the database and on the Celo contract, with the Celo transaction hash also applied back to the anchor instance on the database. The anchor instance id is then applied to each of the raw data points that comprise the transaction.
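The summary-hash step can be sketched with SHA-256; the actual hashing scheme and field layout are the MVP's own, so treat this as illustrative:

```python
import hashlib

def hash_data_point(raw: bytes) -> str:
    # hash applied to each raw measurement at ingestion
    return hashlib.sha256(raw).hexdigest()

def summary_hash(point_hashes: list[str]) -> str:
    # anchor hash: a single hash over the concatenated data point hashes
    h = hashlib.sha256()
    for ph in point_hashes:
        h.update(ph.encode())
    return h.hexdigest()
```

Because the summary depends on every constituent hash, a validator recomputing it from the stored data points will detect any altered or missing measurement.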

    Relevant code: Anchor model test, Anchor endpoint, Anchor service, Contract service, Celo Contract, Contract anchor and validation integration test.

    Welcome

    Astral - Building tools to enable the location-based decentralized web.

    Welcome to Astral!

    Our vision is to build a protocol and community dedicated to creating an ecosystem of location-based Web3 applications and spatial contracts. We believe that the Web3 vision is incomplete without location services, and are working to develop technologies that will make it easy for devs to build user-controlled dApps leveraging location data.

    We're taking a modular approach, architecting our tools so each component is useful on its own, but all are designed to work together.

    We've been exploring the intersection of spatial data and consensus technologies for the past few years. We're dedicated to building in the open and sharing our findings — so we are leaving everything we've worked on available to read.

    We are embarking on the next phase of development of Astral, which is independent from past work. The docs here are NOT accurate to what we're building now. Instead, they show the meandering path we've taken to this point.

    For an up-to-date summary of what we are working on now, read our Astral Litepaper.

    List on Ocean Marketplace

    The sample queries take a polygon, date range, and provider id and return the average temperature, and the raw data points, respectively. The query endpoints are public so no access token is required to run:

    The data query is more appropriate for the Ocean marketplace than the previous average temperature query, which is more an example of a composable intermediate endpoint for an inference toolchain.

    Data from the second query was listed on the Ocean Görli test network:

    Summary

    This MVP demonstrates the power of geospatial queries and the analytics potential of raw timeseries data which is anchored and validated on-chain.

    Possible next steps include a full analytics layer with composable endpoints and integration with active inference workflows.

    Thank you to the MRV Collective for funding this MVP.

    Progress

    Weekly updates from the Astral team

    15 Jun 2022

    • Finalized Spatial Data Registry Data Structure (Kolektivo)

    • MongoDB v6 evaluation of timeseries, indexing, and analytics (MRV Collective)

    CosmWasm Anchored Spatial Data Registry

    With a Junø/CosmWasm contract and an HTTP REST server, we address the feasibility of providing performant and scalable access to geospatial data in GeoJSON format, leveraging MongoDB's spatial capabilities. When a data item is ingested into the database, a hash of the relevant data is generated and stored on a CosmWasm contract that implements an indexed datastore. Validation results are stored on both the database and the contract. Validators would each run a replica instance, with an independent compute instance performing the same hashing algorithm.

    Current PoC implementation:

    # sample post with curl
    export PROVIDER_PASS=<provider password>
    export HOST=https://<host>/api
    
    eval "$(jq -M -r '@sh "ACCESS_TOKEN=\(.tokenData.token)"' <<< "$(curl -H 'Content-Type: application/json' -X POST -d '{"email":"provider@iwahi.com","password":"'"$PROVIDER_PASS"'"}' $HOST/53f889/2cfae1)")"
    echo $ACCESS_TOKEN
    
    curl -s \
         -w '\n' \
         -H "Content-Type: application/json" \
         -H "x-access-token: $ACCESS_TOKEN" \
         $HOST/94cbae/4d46c9 \
         -d '{"metadata": 
                {"location": {"geometries": [
                    {
                    "coordinates": [
                    -73.9132,
                    40.68405
                    ],
                    "type": "Point"
                    }
                ],
                "type": "GeometryCollection"
                },
                "model": "mri-esm2-ssp126",
                "project_id": "proj_29lo8RFQiVowh4u5WHdbFSLKExL",
                "provider": "tSuqRPkLVfDqQG3mgr0x4",
                "source": "station xxxxx"
      },
      "ts": "'"`date +"%Y-%m-%dT%H:%M:%S%z"`"'",
      "measurements": [
        {
          "type": "T",
          "unit": "C",
          "value": 20
        },
        {
          "type": "H",
          "unit": "P",
          "value": 30
        }
      ]
    }'
    # return value, with id/data hash:
    {"geots":{"_id":"tcXoR-E50jQg-j3iUapcS","metadata":{"source":"station xxxxx","model":"mri-esm2-ssp126","project_id":"proj_29lo8RFQiVowh4u5WHdbFSLKExL","anchor":null,"ip":"127.0.0.1","provider":{"_id":"tSuqRPkLVfDqQG3mgr0x4","name":"Sample Provider","path":"4818b0","id":"tSuqRPkLVfDqQG3mgr0x4"},"location":{"type":"GeometryCollection","geometries":[{"type":"Point","coordinates":[-73.9132,40.68405]}]}},"ts":"2022-09-07T20:21:10.000Z","measurements":[{"type":"T","unit":"C","value":20},{"type":"H","unit":"P","value":30}],"hash":"3a968d77d2864f6c5e85d287fa8c7c9b12d04dfef5956f03b7ee4218bf8b4076","__v":0}}
    [Figure: geohash level 1 with "d" divided to show level 2. Base map by Strebe, own work, CC BY-SA 3.0]
    [Figure: level 1 node with two child nodes at level 2, full geohash value in parentheses]
    [Figure: intermediate node creation in a GeoTree]
    [Figure: caching asset information at every node]
    [Figure: Data NFT on OpenSea Görli Testnet]

    Note that you need Görli ETH and Görli OCEAN tokens to purchase. Faucet for Görli OCEAN

    Relevant code:

    • Sample client query

    • Public endpoint

    • Sample query test listing

    [Figure: Listing query data on Ocean marketplace highlighted]
    {
        "gbsuv"    =>  {1, 2, 3}
        "gbsuv7"   =>  {1, 2, 3}
        "gbsuv7d"  =>  {1}
        "gbsuv7dq" =>  {1}
        "gbsuv7z"  =>  {2, 3}
        "gbsuv7zw" =>  {2}
        "gbsuv7zy" =>  {3}
    }
    # sample analytics query:
    # query one month's 3 hourly data for a polygon and provider, return average temperature.
    export HOST=https://<host>.com/api
    curl -s \
         -w '\n' \
         -G \
         -H "Content-Type: application/json" \
         -d 'polygon={"type":"Polygon","coordinates":[[[-3.7025,40.4165],[3,60],[6,90],[-3.7025,40.4165]]]}' \
         -d 'startdate=2019-01-01' \
         -d 'enddate=2019-01-02' \
         -d 'providerId=tSuqRPkLVfDqQG3mgr0x4' \
         $HOST/596090/00833a
    # raw data on which above query is based:
    curl -s \
         -w '\n' \
         -G \
         -H "Content-Type: application/json" \
         -d 'polygon={"type":"Polygon","coordinates":[[[-3.7025,40.4165],[3,60],[6,90],[-3.7025,40.4165]]]}' \
         -d 'startdate=2019-01-01' \
         -d 'enddate=2019-01-02' \
         -d 'providerId=tSuqRPkLVfDqQG3mgr0x4' \
         $HOST/596090/7afb79

    8 Jun 2022

    • Mocked up GeoNFT and Spatial Data Registry Contracts with hardhat testing (Kolektivo)

    • openmrv-server (on GitHub): currently building out provider/role/user security infrastructure (MRV Collective)

    1 Jun 2022

    • Ingestion layer first look (MRV Collective)

    • Added backend service for circuit and contract generation (zkMaps)

    23 May 2022

    • Began development of MVP for decentralized MRV registry (MRV Collective)

    20 May 2022

    • Completed Astral Geospatial Data Registry PoC (dClimate)

    18 May 2022

    • Met about use cases for GeoNFT minting, approval, metadata, and ecological data (Kolektivo)

    • Met about ArcGIS and weather station data (Kolektivo)

    27 Apr 2022

    • Supported the Oika project at the Planet Positive NFT Hackathon

    • Met with Regen Network on Data Module v4

    • Met with dMeter

    20 Apr 2022

    • Presented Astral GeoNFT specs (Kolektivo)

    13 Apr 2022

    • Completed data registry requirements document (Kolektivo)

    6 Apr 2022

    • Reviewed ArcGIS food forest survey app (Kolektivo)

    • Completed Ocean Protocol v4 test dataset and paper (Kolektivo)

    • Completed lo-fi map of Curacao and food forests (Kolektivo)

    23 Mar 2022

    • First meeting of the "Surveillance Squad", now known as "dMeter"

    • 3rd and final meeting of KERNEL Regen Guild, discussed locus of control

    16 Mar 2022

    • Reviewed use cases related to badges, weather stations, and monetization with Ocean Protocol (Kolektivo)

    • Second KERNEL Regen Guild meeting, discussed Donella Meadows leverage points and ran an ecosystem mapping exercise with Ale Borda

    9 Feb 2022

    • Hardhat/React dApp for GeoNFT (Kolektivo)

    2 Feb 2022

    • Draft GeoNFT Contract (Kolektivo)

    12 Jan 2022

    • PoC decentralized data storage using Ceramic on Alfajores (Kolektivo)

    5 Jan 2022

    • Defined initial bounds for computation and tree data storage limits (Kolektivo)

    27 April 2021

    • Call with Ryan John King, CEO of FOAM space, a pioneer in the geospatial decentralized web. FOAM recently launched their FOAM Lite Ethereum transaction relay device, and is working on trustless "presence claims".

    • Reviewed the draft whitepaper for the Kolektivo Framework, a crypto-institutional framework that seeks to promote socio-ecological sustainability through cryptoeconomic mechanisms. Watch this space ...

    • Exploring ways to keep pushing forward with the Web3-native geospatial vision, which we first explored with a Filecoin development grant. More soon ...

    13 April 2021

    • Met with jabyl from Distributed Town, where they're building a DAO for DAOs and a universal "skills wallet" enabling you to port your reputation with you across the mutual credit network.

    • Spencer and the Commit Pool team joined the Astral Discord - really exciting work designing an app to help people hold themselves accountable. Set goals and stake money - by exercising, you claim your crypto back.

    • On Discord we're discussing verifiable impact claims - a common problem that many teams are solving. One take: Impact NFTs.

    6 April 2021

    • Submitted the GeoDID Explorer client application to Filecoin, completing our Development Grant. You can explore the Astral Studio - live on Ropsten - here. Working on docs.

    • Lots of community building this week:

      • Max from Earthify is building a data DAO that is crowdsourcing, standardizing, and publishing public / government records, with a focus on real estate data. We are advising Max and got to listen in to his design sprint with @RaidGuild.

      • Connected with David from KERNEL0x, started a conversation about privacy and data markets.

      • Another check-in with Grant and Pylyp from Copernic Space, building a Web3 marketplace for space assets including satellite imagery.

      • One-to-one with community member @BlairVee talking through verifiable impact claims and community inclusion currencies.

    • Drafted a concept note: Impact NFTs, which represent impact claims made by people doing some environmental or social project. Reviewers would analyze evidence referenced by the NFT - threshold approvals could trigger bounty transfers to Impact NFT minters.

    • Community

      • 207 followers on Twitter

      • 58 Discord members

    31 March 2021

    • Completed initial implementation of IPLD-encoded GeoTIFFs in Typescript for our Filecoin Development Grant - docs here.

    • Finishing the GeoDID Explorer client interface.

    • Spoke on Web3 Spatial at the Open Geospatial Consortium Blockchain Domain Working Group - recording forthcoming.

    • Met with the innovation team at a national land registry to talk about their research and development efforts to digitize the real estate conveyancing process.

    • Gave a talk with Grassroots Economics at the MetaFEST - find it here.

    • Community:

      • 205 followers on Twitter

      • 24 new Gitcoin grant supporters

      • 50 Discord members

    23 March 2021

    • Work week.

    16 March 2021

    • Coming into the final weeks of our Filecoin development grant - developing IPLD-encoded GeoTIFFs, which will complement the GeoDID spec we designed. The idea is to use DIDs to create permanent, resolvable identities for satellite images. More soon.

    • We started outlining v0.2 of the GeoDID spec - a brief summary below.

      We want to build DIDs that support spatial querying and raster clipping, so a DID can represent a subset of a larger spatial dataset.

    • We met with @KERNEL0x Block 2 Fellow @hollygrimm, who is building @0xDynamiculture - a DAO for Indigenous tribes to track their environmental projects using satellite imagery and sensor inputs. We'll be looking at using GeoDIDs in their technical architecture.

    • Advisory calls with @MaxGlass - we'll be helping him design a spatial data crowdsourcing protocol with @RaidGuild.

    • The community put together a response to a post on ethresear.ch on local + community tokens. Astral member @johnx25bd is working with @grassEcon on this, and prototyped a location-aware smart contract wallet at ETHParis 2019.

    • Community:

      • 200 followers on Twitter

      • 7 new Gitcoin grant supporters

      • 31 Discord members

    11 March 2021

    • Still building for our Filecoin Development Grant - working on IPLD-encoded GeoTIFFs and a front end GeoDID browser for Web3-native satellite imagery ...

    • Working with a new community member building tools to collect land parcel data - watch this space

    • Initial security reviews of Spatial.sol - much more to come.

    • Architecting verifiable spatial data registries built on GeoDIDs and IPFS. Who do we know who would find a smart contract registry of geographic zones useful? Congestion zones - insurance protocols - local currencies - voting - what else??

    4 March 2021

    • Initial tests for Spatial.sol. Point in Polygon is *mostly* working.

    • Submitted initial implementation of the @astralprotocol Typescript modules for the Filecoin Development Grant. GeoDID client libraries and solidity contracts are functional.

    • Release of the vision and roadmap for community review.

    • Community

      • 176 Twitter followers

      • 5 new Gitcoin grant supporters

      • 23 Discord members

    23 February 2021

    • Our Discord is gathering steam. We've seen new members this week from startup projects, the GIS world, economics graduate schools and more. Join if you want to learn more: https://discord.gg/9Kv8tRvWVG

    • Discord member Econometrie raised some great points about how systems built on Astral could solve an interesting problem: how do we know where we can park rental e-bikes and e-scooters? We are working out how to adapt Hyperaware to help solve this.

    • A new member - Kiran - joined and shared a draft post he's writing about applying principles of token engineering to manage road network usage.

    • We have written a master document with overviews of the different tools and protocols we are building. This is open - feedback very welcome:

    • Community

      • 160 Twitter followers

      • 18 Discord members

      • 5 new Gitcoin grant supporters - plus a 246.98 DAI CLR match

    16 February 2021

    • Commits on modules for working with GeoDIDs, plus a check-in call with @filecoin on development grant progress.

    • Call with @blairvee and Jonny, talking about GIS, Grassroots Economics and truly local currencies.

    • Core community call. We talked through the road up to this point, and where it should lead.

    • We created a Discord! If you are doing *anything* at the intersection of spatial / location data and Web3, join and say hi.

    • Community:

      • 146 Twitter followers

      • 6 new Gitcoin grant supporters

    9 February 2021

    • Broke ground on a research report on the intersection of spatial data and Web3.

    • Progress on the GeoDID Browser interface development and Astral modules for working with GeoDIDs - part of our work on a Filecoin Development Grant.

    • Spoke with an American entrepreneur working to onboard land parcel data onto Ethereum.

    • Fairlaunch session in @KERNEL0x with @0xMaki - hard-earned words of wisdom on early community growth.

    • Advisor calls.

    • Community:

      • 132 Twitter followers

      • 5 new Gitcoin grant supporters

    2 February 2021

    • Submitted the GeoDID Method Specification for feedback - if you're interested, read it here, and give feedback by creating an issue on the Github repo.

    • Fixed a bug in our prototype implementation of the Hyperaware Protocol - a spatial governance protocol for connected devices.

    • KERNEL Fairlaunch seminar with Brian Flynn from Rabbit Hole - got us thinking about the value of community.

    • Community

      • 113 Twitter followers

      • 13 new Gitcoin grant contributors

    26 January 2021

    • Completed the draft GeoDID Method Specification - now we're architecting software modules for working with GeoDIDs

    • Spoke with @KERNEL0x Fellow @mattgcondon - the grandfather of NFTs - about trusted location proofs and using location in smart contracts.

    • Kicking ideas around with @naz about NFT markets and the NFT community.

    • First talk from @fairlaunch Capital in KERNEL Block 2 - learning about newfound paths to independent sustainability.

    • Early tests on Spatial.sol, a Solidity library of geometric and topological functions - watch this space ...

    • Community

      • 89 Twitter followers

    19 January 2021

    • Refinements of the draft GeoDID Method Specification.

    • Chat with a VC about the past, present and future of the spatial Web3.

    • Chatting about trusted location proofs with a security-minded developer in KERNEL.

    • Community

      • 80 Twitter followers

      • 2 new Gitcoin grant contributors

    12 January 2021

    • Joined KERNEL 2 - Fairlaunch track!

    • Published our Gitbook with Astral documentation.

    • Drafting GeoDID Method Specification, working on scaling problems with IPLD-encoded GeoTIFFs for the Filecoin Development Grant.

    • Advisory call with Andrew Hill, CEO at Textile - great advice on how to approach building Web3-native geospatial technologies.

    • Community

      • 61 Twitter followers

      • 29 Gitcoin grant contributors

    5 January 2021

    • Astral received a Filecoin Development Grant to develop a few key pieces of the Astral Protocol: a GeoDID Method Specification (more soon!) and IPLD-encoded GeoTIFFs, for Web3-native geospatial imagery. Our grant details.

    • The Astral team won the Ceramic Bounty for the SkyDB hackathon for building a version of Geolocker, a verifiable spatial data registry.

    • Launched our new website, https://astral.global. (Thanks for the awesome template, @marrrguerite!)

    Fully written in Rust, using axum and cosmos-rust.

  • Role-based access to REST endpoints: the admin role creates data items, the validator role validates, and the user role can query.

  • Polymorphic (polygon, point, line, etc.) geospatial data geometries within a single 2dsphere index.

  • Validation compute currently occurs via a REST endpoint based on userid/role and is not yet tied to a validator’s specific replica instance.

  • Integration tests for both geodata-anchor and geodata-rest run against a local instance of Junø via Docker; we have not yet deployed to a testnet.

  • Verifiable Spatial Data Registry PoC

    Possibilities:

    • Evolve hashing strategies via multi-hash.

    • Larger datasets could leverage MongoDB sharding features.

    • Reverse anchor hash from contract to database.

    • Move most data onchain, queried directly via secondary indexes, with links to IPFS, Arweave or MongoDB Atlas for large objects.

    • 3D geospatial index design and implementation with cw-storage-plus secondary indexes.

    Tokenomics:

    • Mostly beyond the scope of this PoC, but with interchain accounts and the Gravity Bridge on Cosmos, many possibilities exist for application-specific blockchains.

    Thank you to dClimate for funding this PoC and providing valuable technical direction.


    Spatial Data Primer

    What exactly *is* spatial data?

    A spatial data asset is any data asset that contains spatial or location information. This term is intentionally broad, and GeoDIDs are deliberately flexible enough to identify current and legacy spatial data types, as well as ones that haven't been developed yet.

    Generally, spatial data assets fall into two categories: raster and vector.

    Raster data are composed of grid cells identified by row and column. The whole geographic area is divided into groups of individual cells, which represent an image. Satellite images, photographs, scanned images, etc., are examples of raster data (Janipella et al. 2019).

    A (very) simplified representation of a 3x3 pixel raster image in Python:
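As a sketch, a 3x3 single-band raster can be written as a nested list of cell values, addressed by row and column:

```python
# A 3x3 binary raster: each cell holds one pixel value, addressed as img[row][col].
img = [[1, 0, 1],
       [0, 1, 0],
       [1, 0, 1]]

# Cell lookup mirrors the row/column grid structure of raster data.
print(img[0][2])  # value of the cell in row 0, column 2
```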

    To start, GeoDID modules will natively support GeoTIFF raster datasets.

    Introduction

    Framing our work at Astral

    At Astral we are creating the tools developers will need to build an ecosystem of location-based dApps and composable spatial contracts. We see a vast opportunity space here, with exciting early initiatives from FOAM, Regen Network, Grassroots Economics, IoTeX, IBISA Network, and others.

    Our vision at Astral is to build tools and help establish standards that will enable a composable location-based decentralized web. We're quite early in our journey, and have an orienting question prompting these investigations - what if we could use an entity's physical location as a condition in a smart contract? And relatedly, how can we connect verifiable insights about physical reality with smart contract logic?

    Our investigations into this question have sharpened our understanding of the tools and components needed. Our goal here is to share these ideas openly - gather some feedback, and see if you or anyone you know would be interested in supporting one or more of these initiatives with expertise and / or capital. Our strategy is to design a modular architecture of Web3 spatial primitives - each useful in their own right, but designed to function best together.

    We have built a few Astral dApp prototypes, and have architected a few more. This work has helped us identify versatile tools that would have accelerated development, regardless of the application. Based on these insights we are conceptualizing the design space as a three layer stack.

    Motivation

    Spatial data contains information relevant to locations in the physical world. Different locations have different rules - depending on where you are, you have to abide by a different regulatory framework.

    To create decentralized applications that leverage spatial data and location information, we need to be able to store and access spatial data in ways that ensure it is simple and reliable for Web3 developers to work with.

    All in all, geospatial data matters, but there are issues with the way it is being used:

    • Lack of transparency between spatial data providers and end users. How do we know if spatial data can be trusted?

    Value

    Astral is developing the standards and tools developers need to create location-based dApps and spatial contracts. We're thinking big - we aim to enable an ecosystem of applications by creating flexible and developer-friendly technologies that are native to Web3.

    The past 15 years have seen the smartphone revolution - in 2021, it's hard to imagine life without location-based applications. Maps, dating, social networking, mobility, peer-to-peer goods markets - the usefulness of so many of the apps we use every day relies on some location element.

    Concurrently, the past decades have seen a revolution in Earth observation technologies - a greater number of more sophisticated remote sensing satellites are orbiting our planet every year. These sensors - combined with advancements in techniques for analyzing spatial data - are enabling us to glean new insights about our world at a profound level of detail. Insights like wildlife movements and illegal fishing, greenhouse gas emissions, deforestation, consumer behavior and so on are becoming an increasingly important factor in the decision-making of businesses and governments.

    Background

    Introduction

    Location is one of the fundamental properties of any physical object. Where something is, and where it is in relation to other objects, is an intrinsic attribute of its identity and determines many of its capabilities and responsibilities. What’s more - information about physical objects is carried on signals in the form of light, sounds, electrical currents, and so on. If some entity - organic or synthetic - receives these messages, they can perceive this information and learn about its origin.

    Based on these phenomena, humanity has created an incredible network of connected sensors for communication and observation. Air and water quality monitors, thermometers, seismographic instruments, microphones, and remote sensors - on, within, or orbiting the planet, measuring electromagnetic radiation and more - all capture empirical observations of the Earth every instant, forming a global surveillance system witnessing our shared, physical reality. This is happening constantly, in real time, and in a format that can be analyzed and interpreted by machines.

    As these monitoring networks are developing, so is a parallel phenomenon. Consensus networks are creating a durable shared reality in the informational domain, controlled by no one and governed by a strict, transparent set of rules. Over a decade since Satoshi’s seed first sprouted, the Web3 universe is beginning to blossom. We believe that the fruit that will ripen will complement - and eventually, in some ways outcompete - our legacy systems and come to underpin the functioning of our global society.

    So naturally, we ask:

    What opportunities exist at the intersection of these technologies?

  • How could spatial data and Web3 technologies fit together?

  • How might we make use of this convergence, to serve us in our task of improving measures of human dignity, and our ability to act as stewards of the planet?

  As a response to our enquiry into these questions, we are creating the Astral Protocol.


    Oracles

    A vital part of the Astral protocol is the integration of an Oracle system that can trustlessly fetch spatial data from multiple sources into smart contracts.

    Spatial data from a range of sources - satellites, IoT devices, vehicles, mobile phones, and more - can be very useful, but is prone to failure and subject to cheating. Due to the potential spatial economies that can be built with our protocol, it is paramount to have trust in the spatial data submitted to the blockchain. We're researching ways to ensure this trust at the Data and Oracle layers of the Astral stack.

    Taking the example of GPS data, it is widely known how spoofable the system is. Using a protocol such as FOAM in combination with GPS data can provide a more reliable and trustless measure of the position of an object or a person in the real world. Combined with other methods of validating one's position, such as biometrics, this can greatly reduce the possibility of cheating the system.

    In addition to the different data sources, using multiple oracle networks, such as Chainlink or API3, as the gateway to the Astral Protocol will further reduce the risk of data tampering and of failure of any one of the aforementioned networks.

    Data

    Spatial data comes in many different formats, from myriad sources, containing different information. As Astral wants to make as few assumptions as possible about the use cases the protocol will serve, we are leaving room for devs to work with any spatial data formats - including ones that haven't been developed yet.

    To achieve this, and to make sure that spatial data used in Astral is reliable and controlled by the user, we are designing a DID Method specifically for identifying spatial data assets.

    The GeoDID Method Specification will act as the default Web3 specification for working with geospatial data sets. Each DID Document will reference one or many spatial data asset endpoints and their respective metadata. The core spec is very lightweight - support for different formats is built in as Extensions.

    GeoDIDs are designed to work with any spatial data assets, leaving the user to decide if they trust the data identified. We are designing best practices and advanced extensions that will help data consumers trust that satellite imagery is not tampered with, that locations are trustworthy and so on.

    In a vector dataset, features are individual units in the dataset, and each feature typically represents a point, line or polygon. These features are represented mathematically, usually by numbers that signify either the coordinates of the point, or the vertices (corners) of the geometry - read more here.

    Example of vector features from Saylor Academy, https://saylordotorg.github.io/text_essentials-of-geographic-information-systems/s11-geospatial-analysis-i-vector-o.html.

    The vector data formats most commonly used on the web are SVGs and GeoJSON files. SVGs - scalable vector graphics - do not have a geographic referencing system - but GeoJSON datasets do.

    For the alpha implementation of the GeoDID specification, we chose GeoJSON files as our natively-supported vector filetype. For reference, here's a simple GeoJSON Polygon Feature - notice the array of vertices in the geometry attribute, similar to the polygon variable shown above:
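A minimal example (the coordinates are illustrative):

```json
{
  "type": "Feature",
  "properties": {},
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [-68.935, 12.12],
        [-68.92, 12.12],
        [-68.92, 12.135],
        [-68.935, 12.135],
        [-68.935, 12.12]
      ]
    ]
  }
}
```

Note that the first and last vertices are the same - GeoJSON polygon rings are closed explicitly.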

    Spatial data assets are data assets - binary files, or directories of files - that contain spatially-referenced information. For v0.0, GeoDIDs natively support GeoTIFF and GeoJSON files, which are commonly-used raster and vector data formats respectively. In the future, spatial data of any format can be identified using a GeoDID, and these format extensions can be built for the @astral-geodid software modules.

    More on GeoJSON and geojson.io, a tool for creating and exploring GeoJSON files.

    More information about GeoTIFFs, plus a sample.

    Learn more background theory on spatial data here.

    Janipella et al 2019
    img = [[ 1, 0, 1 ],
           [ 0, 1, 0 ],
           [ 1, 0, 1 ]]

  • Spatial Contracts - smart contracts that use location or spatial data in some way

  • Data - capture, storage and perhaps processing

  • Oracles - to connect the two

  Our goal is to produce a suite of tools, libraries and SDKs that would let developers efficiently build location-based dapps without having to deal with the complexities of managing spatial data in the Web3 paradigm. A second-order effect is that if the community adopts standards, applications become composable.

    To achieve this goal, we are currently working on a few initiatives:

    • Spatial.sol, a Solidity library of topological and geometric functions

    • Verifiable spatial data registries

    • GeoDIDs

    • IPLD-encoded raster and vector spatial datasets

  • No Web3 app has truly been able to connect smart contracts to the “real world”.
  • Supported by spatial data, new capabilities are possible in diverse fields like finance, mobility and identity - as long as there is a proven way to ensure the validity of geospatial data.

  • In sum, we need a better way to access and archive satellite and sensor data - fit for a resilient, user-controlled web.

    We believe that bringing advanced spatial data technologies into the Web3 fold will enable an ecosystem of spatial dApps to support our transition to a just, sustainable and resilient world.

    Earthrise - a blue dot.
    Astral

    Astral is creating tools and standards to work with geospatial and location data in the Web3 universe. To be truly Web3 native, we need to create technologies that are trustless, independently verifiable, next to impossible to take down and that empower the user. To do this, we are delving into the bleeding edges of some of the most exciting technologies on the web - blockchains, smart contracts, decentralized identifiers, verifiable claims, cryptography, token engineering and more.

    Enabling an ecosystem

    The applications enabled by the spatial data layer of the decentralized web are wide-ranging and revolutionary - and we have barely scratched the surface. We are learning by building - some examples:

    • We prototyped Spatial.sol - a library of geometric and topological functions in Solidity.

    • We prototyped Geolocker, a verifiable spatial data registry on 3Box and Ethereum, with a team at ETHLondon 2020.

    • We designed the Hyperaware Protocol - a spatial governance protocol for connected devices - and built a prototype implementation - a congestion zone system running on IoT + smart contracts.

    • We participated in KERNEL's Genesis Block, where a team formed and we built a prototype on Ethereum and IPFS, which aligns financial and ecological incentives by adjusting the amount a borrower needs to pay each year based on a measurement of environmental health.

    • We started work on "geographic decentralized identifiers", or GeoDIDs, during the APOLLO Fellowship, and won prizes at the ETHOnline and the SkynetDB hackathons for our work prototyping tools to work with satellite imagery stored using GeoDIDs.

    All of these projects point towards our goal, which is to develop the capability to work with spatial data on the decentralized web. We've realized that we need to develop the specifications and tools to work with this type of data in the Web3 space, without having to opt for a Web2 alternative.

    Once we develop this, we open a brand new world of possibilities within the Web3 space: developers can leverage geospatial data within their applications; data providers can store and distribute their data efficiently and effectively; and data scientists can manipulate, analyze, and share their findings in a more user-friendly way.

    Spatial data for the 21st century and beyond.
    Impact NFTs
    https://hackmd.io/@astral/B1Cl4YUBd
    https://hackmd.io/0p7uwOijSMuFXcNE_anDWA
    https://discord.gg/W2nFZF75

    The Stack

    In order to realize our vision of an ecosystem of location-based and spatial decentralized applications providing a more just and resilient means for human and machine coordination on Earth, we are designing the Astral Protocol and building a corresponding stack of software tools. Our aim is to create a simple and delightful experience for the location-based dapp developer, enabling the community to drive innovation and build this ecosystem.

    The Astral Protocol provides the bindings between the spatial data domain and the Web3 universe. We intend to make no assumptions about the needs or use cases of developers building on Astral; instead, our effort is directed at creating a simple, versatile way of connecting both raster data and vector geometries, and relevant metadata, to smart contracts and dApp front-end interfaces. We believe that these spatial data primitives, along with the means to integrate them into smart contracts, will provide the soil from which the Web3 spatial ecosystem will sprout.

    At a high level, there are three layers to the Astral stack, stitched together to enable this new spatial dApp ecosystem:

    • Data

      • Capture - from a range of edge devices like satellites, IoT devices and mobile phones.

      • Storage - ideally on distributed and verifiable systems like IPFS / Filecoin. Geographic decentralized identifiers wrap spatial data assets in DIDs so they can be controlled by the user in a standardized way.

    • Oracle - to reliably bring the spatial data into smart contracts

      • Analysis may occur in the oracle, or upstream - but often spatial data needs to be processed before it can be used in expensive smart contracts.

    • Spatial contracts - smart contracts developed to use location and spatial data in contract logic.

    Astral is working on connecting these components using open, versatile tools. We are actively working on each of the layers of the stack, so if you're interested in contributing, get in touch - we're on Twitter or available via email at contact@astral.global.

    GeoTIFFs and IPLD

    Testing whether we could use IPLD's DAG-CBOR encoding and IPFS to complement the Image File Directory (IFD) of the TIFF file.

    Introduction

    One of Astral's main goals is to bring cloud-native geospatial capabilities to the Web3 space. While working with Protocol Labs tech for the past few months, we gained some insight into how IPLD data structures, libp2p, IPFS, and FFS can enable us to make this a reality.

    We knew we had to experiment with different pieces of tech in order to understand what's possible, and what we could do to improve the existing tools. With raster imagery being so important to the geospatial community, we challenged ourselves to figure out how we could bring Cloud Optimized GeoTIFF-style functionality to IPFS/FFS. To understand what we're trying to accomplish, we first need to understand what TIFFs and GeoTIFFs are.

    TIFFs, GeoTIFFs, and COGs

    What is a TIFF?

    A TIFF or TIF, the Tagged Image File Format, represents raster images meant for use on a variety of devices that comply with the file format standard. The specification defines a framework for an Image File Header (IFH), Image File Directories (IFDs), and associated bitmaps. Each IFD and its associated bitmap are sometimes called a TIFF subfile. TIFF is capable of describing bilevel, grayscale, palette-color and full-color image data in several color spaces. It supports both lossy and lossless compression schemes, letting applications trade space against time. The format is not machine dependent and is free from constraints such as processor, operating system, or file system.

    What is a GeoTIFF?

    A GeoTIFF is a public domain metadata standard which allows georeferencing information to be embedded within a TIFF file. The potential additional information includes map projection, coordinate systems, ellipsoids, datums, and everything else necessary to establish the exact spatial reference for the file.

    What is a Cloud Optimized GeoTIFF?

    A Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF file, aimed at being hosted on an HTTP file server, with an internal organization that enables more efficient workflows on the cloud. It does this by leveraging HTTP GET range requests to ask for just the parts of a file the client needs. “COG is the ideal pair for a STAC Item” - the two standards were designed to complement one another.

    By pre-processing the GeoTIFF and breaking it into several pieces, a number of internal ‘tiles’ are created inside the actual image, instead of using simple ‘stripes’ of data. With tiles, much quicker access to a certain area is possible, so that just the portion of the file that needs to be read is accessed.
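As an illustrative sketch (not the actual COG internals), computing which fixed-size tiles cover a requested pixel window might look like this:

```python
def tiles_for_window(x0, y0, x1, y1, tile_size=256):
    """Return (col, row) indices of the tiles covering a pixel window.

    The window spans pixels (x0, y0) to (x1, y1); only these tiles need
    to be read, rather than the whole image.
    """
    cols = range(x0 // tile_size, x1 // tile_size + 1)
    rows = range(y0 // tile_size, y1 // tile_size + 1)
    return [(col, row) for row in rows for col in cols]

# A window from (100, 100) to (400, 400) touches four 256-pixel tiles.
print(tiles_for_window(100, 100, 400, 400))  # [(0, 0), (1, 0), (0, 1), (1, 1)]
```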

    In addition, during the pre-processing, multiple ‘overviews’ will be computed and incorporated into the image file - basically several downsampled versions of the same image - so that the client can fetch the version that matches the desired level of resolution.

    In order to achieve this, the client uses HTTP GET Range requests to request the range of bytes that are within the zoom scope, or map viewport, from the server. This method is also called byte serving, where the client can request just the bytes that it needs from the server. On the broader web it is very useful for serving things like video, so clients don’t have to download the entire file to begin playing it.
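From the client side, byte serving boils down to setting a `Range` header on the request. A minimal sketch using Python's standard library (the URL is hypothetical):

```python
import urllib.request

def range_request(url: str, start: int, end: int) -> urllib.request.Request:
    """Build a GET request for only bytes [start, end] of a remote file.

    A server that supports byte serving replies with 206 Partial Content
    and just the requested slice, so the client never downloads the rest.
    """
    return urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})

req = range_request("https://example.com/imagery/scene.tif", 0, 1023)
print(req.get_header("Range"))  # bytes=0-1023
```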

    TIFF Internal Overview

    The TIFF format allows you to store more than one image, the same way a PDF can store more than one page. This can be used to create the overviews or pyramids. Each overview divides the image area of the previous level by four, so a smaller amount of data can be read. In our case, a thumbnail of a huge GeoTIFF could easily be shown without reading all the pixels.

    This is independent of the tiling part, but combining both allows us to make files that are efficient both for reading a small part of the image and for zooming out.

    • Overviews create downsampled versions of the same image.

    • "Zoomed out" versions of the original image.

    • Lesser detail & smaller size.

    • Multiple overviews, to match different zoom levels.

    Image Pyramids

    The use of image tiling and image pyramids supports the display of high-resolution images with a high level of performance. An image pyramid consists of a base image and a series of successively smaller sub-images, each at half the resolution of the previous image. The following figure shows the tiled base image (Level 0) and successively smaller sub-images. The sub-images correspond to lower resolution levels.
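The halving scheme above can be sketched as follows; `min_size` is an illustrative parameter of ours bounding the smallest overview generated:

```python
def pyramid_sizes(width, height, min_size=64):
    """List (level, width, height) for an image pyramid.

    Level 0 is the full-resolution base image; each subsequent overview
    halves both dimensions until either would drop below min_size.
    """
    levels, level = [], 0
    while width >= min_size and height >= min_size:
        levels.append((level, width, height))
        width, height, level = width // 2, height // 2, level + 1
    return levels

print(pyramid_sizes(512, 512))
# [(0, 512, 512), (1, 256, 256), (2, 128, 128), (3, 64, 64)]
```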

    IPLD

    IPLD is an ecosystem of formats and data structures for building applications that can be fully decentralized. This ecosystem provides a set of tools that allow us to serialize pieces of data into blocks - each a CID and its respective binary - then encode them with a codec that creates a "linked data model" between the various CIDs. If you would like to know more about the specifics of IPLD, please refer to the official documentation.

    Replacing the GeoTIFF's IFD with IPLD

    Essentially our goal is to take a GeoTIFF (at this stage a strip-based image), pre-process it by tiling the strip-based image, and then create the respective overviews for each tile. Instead of storing the tiles and overviews in the TIFF's IFD (Image File Directory), we think we can use IPLD to store them. With each tile/overview having its own CID, we can then use these CIDs to query the proper tiles/overviews.

    In theory it sounds like it would work, and we know there will be some downsides to this approach (speed, efficiency, and lack of adoption for right now). But we would still like to see where this could go and if IPLD could be used to enable CID GET Range requests for geospatial raster data.

    Mental Model

    Below are some visualizations of the mental model of the IPLD TIFFs. Each block, regardless of size or color, has an IPLD Block associated with it, meaning that it contains a cid and a binary data field.

    Tile Overview

    Figure 3 is a generic example of a GeoTIFF image and what it looks like when tiled. If you're zoomed out, the client will most likely pull the whole image (yellow). But say you only want the raster imagery corresponding to a small area within the GeoTIFF. The client should then use the bounding box data to fetch the appropriate tiles at the proper overview level (e.g. [A2, B1, A4, B3]).
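    The lookup Figure 3 describes can be sketched as a pure function: given a fixed tile size and a pixel window, return the [column, row] index of every tile the window touches. The function name and grid layout are our assumptions:

```typescript
// Return [col, row] indices of all tiles that intersect a pixel window
// [x0, y0, x1, y1], for a grid of tileSize x tileSize tiles.
function tilesForWindow(
  window: [number, number, number, number],
  tileSize: number
): [number, number][] {
  const [x0, y0, x1, y1] = window;
  const tiles: [number, number][] = [];
  for (let row = Math.floor(y0 / tileSize); row <= Math.floor((y1 - 1) / tileSize); row++) {
    for (let col = Math.floor(x0 / tileSize); col <= Math.floor((x1 - 1) / tileSize); col++) {
      tiles.push([col, row]);
    }
  }
  return tiles;
}
```

    A window straddling a tile boundary simply returns every tile it overlaps, which is why a request always fetches at least as much data as the window itself covers.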

    Relationship Overview

    Figure 4 shows the relationships we tried to emulate, though we ended up creating a variant of this model. It is meant to help you visualize how sub-tiles can be wrapped within larger ones. Just as the IFD creates tags at the offsets for the data overviews and TIFF metadata, the DAG-CBOR encoded Blocks create tags to identify the nested CIDs within them. By leveraging the tag feature, we can query and quickly access the information we need.

    Example of the implementation in the package:

    Conclusion

    By using IPLD and pre-processing the GeoTIFF, we were able to successfully replicate the IFD functionality. With IPLD tagging the nested CIDs of the serialized tiles, we can resolve to the tile we need, provided the path is configured ahead of time. This is how we approached enabling "byte-serving" capabilities on IPFS, but we acknowledge that there are improvements to be made in future iterations that could drastically improve the tool's performance and user experience.

    What could be improved?

    As of right now, the pre-processing of the image isn't as effective as it could be. We tile the image ahead of time, but there are effectively infinite possible query windows, so we cannot select only the bytes we need: we have to request the tiles that encompass the bbox/window, which includes some excess data. Another disadvantage of our current solution is that we pin duplicates of the image at different overview/tile sizes, because we completely discard the IFD within the GeoTIFF. The problem with this approach is that we have to process the image multiple times. Instead, we hope to process the image less than before and to combine IPLD and the IFD so that they complement one another.

    How we plan to further our research

    In order to further our research, we would like to develop another iteration of the IPLD-encoded GeoTIFF, which can hopefully be extended to provide byte-serving capabilities to other file types as well. We want to experiment with a custom IPLD codec that is specifically meant for the TIFF file type - perhaps the codec's structure could complement the TIFF's structure.

    We also need to incorporate the function/package that will auto-resolve to the proper piece of data, so that the UX is easier and the end user doesn't have to know paths and CIDs beforehand. Instead we want to enable CID GET Range requests, where the selected/targeted bytes are encoded within the CID, for ease of access from the client.

    We'd also like to apply the technique to vector tiles, which use a similar tiling system to create downsampled PBF files of vector geometries.

    These will each enable a more effective way to query spatial data from IPFS by significantly reducing downloading times, costs, and resource use - and serve as a step towards Web3-native geospatial technology.

    We plan to focus on the following:

    • Chunking and Distribution of Large Files (LANDSAT)

    • Real Time & Dynamic Processing

    • Minimize Data Duplication

    • See if a custom codec is necessary, or if DAG-CBOR still suffices

    point =   [ 45.841616, 6.212074 ]
    
    line =    [[ -0.131838, 51.52241 ],
               [ -3.142085, 51.50190 ],
               [ -3.175046, 55.96150 ]]
    
    polygon = [[[ -43.06640, 17.47643 ],
                [ -46.40625, 10.83330 ],
                [ -37.26562, 11.52308 ],
                [ -43.06640, 17.47643 ]]]
                # ^^ The first and last coordinate are the same
        {
          "type": "Feature",
          "properties": {},
          "geometry": {
            "type": "Polygon",
            "coordinates": [
              [
                [
                  -0.0986060,
                  51.5326047
                ],
                [
                  -78.639101,
                  35.7803929
                ],
                [
                  -8.6094188,
                  41.1398493
                ],
                [
                  -0.0986060,
                  51.5326047
                ]
              ]
            ]
          }
        }

    Spatial Contracts

    With trustworthy spatial data assets stored on distributed, fault-tolerant systems controlled by the user, we are positioned to pull that information into smart contracts and use them in contract logic. We are intending to apply principles of composability to these contracts, enabling an ecosystem of interoperable Web3 location services much as the "money legos" are accelerating the pace of innovation in DeFi.

    Working with spatial data in smart contracts has its social, technical and economic challenges. We are actively researching and experimenting to understand how these spatial contracts could be used to replace brittle and inefficient Web 2.0 systems, and what innovative new capabilities they might create.

    Cloud-optimized GeoTIFF

    GeoJSON

    Under construction - stay tuned! @AstralProtocol

    GeoTIFF

    Under construction - stay tuned! @AstralProtocol

    STAC

    Under construction - stay tuned! @AstralProtocol


    IPLD-encoded GeoTIFF


  • Adds to overall file size, but is served much faster.

  • Leverage IPLD Selectors to query data efficiently.

  • Explore the IPLD's Advanced Data Layouts

  • Compression

  • IPLD Byte Serving Spec

  • TIFF specification
    Figure 1: Overview of Image Pyramids
    Figure 2: IPLD Overview
    Figure 3: The same image with different sized tiles.
    Figure 4: Tree showing the relationships between the nested Tiles and their parents.
    Astral Protocol: Web3 spatial data standard

    Decoding the GeoTIFF

    Process to Decode GeoTIFF and Retrieve Tile

    // bbox that is sent from the client
    const request = [
        -28493.166784412522,
        4224973.143255847,
        2358.211624949061,
        4255884.5438021915
    ];
    
    // convert the bbox to a pixel window, rounded to the nearest tile size
    // (max_window and max_bbox come from the metadata returned when the
    // GeoTIFF was tiled and pinned)
    const targetWindow: ImageMetadata = await GeoUtils.bboxtoWindow(max_window, max_bbox, request);
    
    // use getGeoTile to obtain the tile covering the target window
    const tiff_of_tile = await getGeoTile(ipfs, cid, targetWindow);

    Using IPLD to resolve tile paths

    We used the interface-ipld-format to determine the utility functions available for the IPLD Blocks. The following utility function in the package lists the tree of possible paths:

    const iter = await dagCBOR.resolver.tree(block.data);

    The iterator iter contains all possible paths in the object, outlining the tree. In order to access the serialized binary of a tile, we just have to use the corresponding path with its respective /data tag.

    Using resolver.resolve(binaryBlob, path), another utility function, we can take one of the paths in the array and use it to query the data we need.

    const path = '0,240,240,480/data';

    const result = await dagCBOR.resolver.resolve(binary, path);

    If we pass in a path with a /data tag, the resolved value is the serialized binary of the tile that we encoded earlier.

    We need to deserialize the binary data in order to get back the source binary.

    const raw_binary = await dagCBOR.util.deserialize(tile_binary.value);

    After we get the data back to the source binary, we can use it together with some metadata to write the data back into a TIFF.
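    The whole resolution step can be pictured with a toy in-memory block: tags point at nested objects, and a /data leaf holds the serialized tile bytes. This is an illustration of the mental model only, not the dag-cbor implementation:

```typescript
// A toy nested block: string tags point at child nodes, and a leaf
// holds raw bytes - mimicking how resolver.resolve walks tags in a
// DAG-CBOR block.
type Node = { [tag: string]: Node | undefined } | Uint8Array;

function resolvePath(root: Node, path: string): Node {
  let node: Node = root;
  for (const segment of path.split("/")) {
    if (node instanceof Uint8Array) throw new Error("path goes past a leaf");
    const next = node[segment];
    if (next === undefined) throw new Error(`no tag: ${segment}`);
    node = next;
  }
  return node;
}

// One tile keyed by its window, with a /data leaf for its bytes.
const block: Node = {
  "0,240,240,480": { data: new Uint8Array([1, 2, 3]) },
};
```

    Resolving "0,240,240,480/data" against this block walks the window tag, then the data tag, and returns the bytes - the same shape as the resolver.resolve call above.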

    A rationale for GeoDIDs

    Astral did research on geographic decentralized identifiers — GeoDIDs — in early 2021. This is research from that period. We're not actively working on the GeoDID project, but have taken many learnings about how to store and fetch spatial data on Web3 data storage systems forward.

    GeoDIDs

    Geographic decentralized identifiers, or GeoDIDs, are DIDs designed to identify spatial data assets and to be compatible with any distributed ledger or network. In creating a GeoDID, data controllers permissionlessly create irrevocable, cryptographically-verifiable identities for spatial data assets that can be useful in decentralized applications - a Web3-native way to identify spatial data.

    The work on geographic decentralized identifiers is being progressed by the team at , supported by a development grant from the . The team recently released an alpha version of a GeoDID Method Specification, which defines an approach to creating, reading, updating and deleting identifiers for these assets using DIDs. However, through this work the team has learned that it may not be appropriate to define its own DID Method Specification for identifying these data assets, but instead to publish a metaspecification: a standard data format that any DID could incorporate. This would mean that any decentralized identifier could identify a spatial data asset, unlocking a large amount of flexibility in how GeoDID technology could be developed.

    A GeoDID effectively wraps a reference to an existing spatial data asset in a DID Document. The endpoint or content identifier (CID) where the spatial data asset can be retrieved from is included in the DID Document's service array. The core specification is intentionally very lightweight, designed to support legacy and future spatial data formats. These data formats will be supported by a list of GeoDID Extensions, which standardize how GeoDIDs identifying assets of that format are structured - meaning code can be written to work with those standard formats by uncoordinated developers or, more likely, software packages can be written to work with data compliant with the standard. GeoDIDs can be public or peerwise.

    A Rationale for GeoDIDs

    The GeoDID is intended to imbue identifiers for spatial data assets with the standards that Web3 applications demand. "Don't trust - verify". To enable location-based decentralized applications and smart contracts that are trustless and immutable for self-sovereign users, the data used in the system must be reliably there.

    This might sound like an overengineered solution. Why store a reference to a spatial dataset on a blockchain? Why store the spatial dataset itself on a decentralized, permanent substrate like Arweave? Aren't HTTPS URLs and Amazon S3 buckets much easier?

    We anticipate that in the coming century, spatial / sensor data technologies will come to underpin significant portions of the world economy. The emerging field of spatial finance - "the application of geospatial data technologies to financial theory and financial practice" (Caldecott) - shows great promise to address a timeless problem and realign financial incentives with our moral imperative to preserve the health of our Earth. If spatial finance continues to grow and traditional finance is consumed by decentralized finance - and we expect that it will be to a great extent, given the profound efficiency improvements DeFi enables - then a wide range of spatial DeFi applications will emerge in the ecosystem. Imagine a £100M sovereign bond linked to sustainability metrics derived from satellite imagery: the importance of Web3-native geospatial technologies for trustworthy verification of the bond becomes clearer.

    Furthermore, GeoDIDs are intended to be useful in smart contracts. Blockchains are expensive places to store data - it is unlikely that most spatial datasets will be stored on-chain in full. However, if a GeoDID is written to a smart contract, the contract has an immutable reference, a persistent identifier to a spatial data asset that is controlled by the registrant. This technology is nascent, and its designers do not yet fully comprehend what its implications may be - but the opportunity space is vast.

    GeoDIDs under the hood

    The GeoDID is inspired by the SpatioTemporal Asset Catalog (STAC) specification and utilizes a similar linked data structure. The structure alleviates a handful of problems associated with traversing large datasets, and allows for ease of use for the end user. Spatial data assets are identified in the service endpoints of the GeoDID document. These service endpoints can be classed as either Collections or Items. Each "Collection" contains a number of child Collections or Items; and each "Item" will contain several service endpoints that resolve to geospatial data assets. This hierarchy of encapsulating linked data within the GeoDIDs will allow users to find or create the data/datasets they need easily.

    The alpha implementation of the GeoDID specification is under development at . For the initial version, only public GeoDIDs are supported. A mapping of GeoDID => GeoDID Document URLs is stored in a smart contract, granting DID Controllers sovereignty over their DIDs and ensuring that permissionless, permanent resolution is possible. Testing is ongoing on Ethereum's Ropsten testnet, but the team expects to deploy the first version of production GeoDID contracts to an Ethereum sidechain, likely Polygon (formerly Matic). This will ensure that the cost of registering a GeoDID is low.

    It is important to note: GeoDIDs are blockchain agnostic. A blockchain is not required. This is an intentional design decision, intended to accommodate users who, for whatever reason, may not be able to rely on public blockchains in their data architectures. We believe that public blockchains offer significant advantages that should not be overlooked - but we also recognize that organizations must operate within technical and regulatory constraints that may require them to avoid their usage until the technology is more mature. GeoDIDs and GeoDID Documents can be easily configured to resolve on private distributed ledger instances or centralized databases. It should be stressed that this might undermine some important qualities of the DID. See Decentralized Identifiers for further explanation.

    Future Work

    Plans for an IPLD encoding of vector datasets are being developed, but many technical challenges are foreseen. Content identifiers present interesting opportunities for identifying geographic extents - hierarchical topologies naturally adhere to a parent-child tree structure. A national border could be represented as a CID; child nodes would identify subsidiary jurisdictions like states or regions, counties, cities, land parcels, and individual buildings. Early ideas around developing capabilities to compare the topologies of CID-encoded vector geometries are being discussed by the team.

    Wales, Welsh Electoral Divisions and Welsh Parish Regions, from .

    Additionally, the next phase of research and development will be for GeoDIDs that support spatial querying and clipping. DIDs support selectors, paths, query parameters and fragments. These additional details that can be included in a GeoDID offer a powerful way to efficiently represent and store large spatial datasets in a much more resource-constrained manner that is still persistent, cryptographically verifiable and optionally private.

    For example, consider a GeoDID representing a collection of satellite imagery. We should be able to specify a spatial and temporal subquery in the GeoDID itself. That way, a user could store a single GeoDID that specifies a single image, clipped to a particular area, extracted from the GeoDID Collection. The user would not need to store that clipped image, but only the GeoDID with query parameters - they would still have the confidence that the GeoDID would resolve to the same clipped image permanently. The same principles apply to vector datasets. A Web Feature Service request contains selection parameters in the URL query string's xml filter, and resolves to a user-defined subset of the dataset served by the WFS. A GeoDID containing a query would likewise resolve to such a subset, but with the persistence, user control and cryptographic verifiability DIDs afford.

    This functionality seems to be crucial for an efficient geospatial Web3. One use case: auditing spatial finance applications. A satellite image might prove that a particular green infrastructure project was completed by a certain date, or that some insured natural capital warranted a payout. A GeoDID selecting a vector polygon - the project site - could be stored on chain, as could a series of GeoDIDs representing a sequence of satellite images clipped to that area, before, during and after construction. Investors could audit their green bonds with such a system, confident that audit records will be available for decades to come.

    The Web3-native geospatial web is nascent, but ripe with potential to serve as a core component of a more resilient Internet. Much work within the Web3 ecosystem is focusing on how programmable money can be applied to solve intractable problems that have always plagued humanity. Integrating geospatial insights into this mechanism design could be a limited - but potent - tool in the toolkit.

    GeoDID Core

    We are early in developing the GeoDID spec. For now, we are focused on storing geojson vector spatial data structures, and geotiff raster data. The GeoDID specification is designed to be flexible and identify any spatial dataset in any format - even ones that haven't been developed yet.

    Abstract

    Geographic decentralized identifiers, or GeoDIDs, are DIDs designed to identify spatial data assets and to be compatible with any distributed ledger or network. Spatial data has unique properties that require special treatment - the GeoDID Method Specification defines an approach to creating, reading, updating and deleting identifiers for these assets using DIDs. In creating a GeoDID, data controllers permissionlessly create irrevocable, cryptographically-verifiable identities for spatial data assets that can be useful in decentralized applications.

    The objective of the GeoDID is to encourage contribution to the DID specification and Linked Data Signatures to identify and ensure trustable spatial data. This will allow rapid development of extensions to these without requiring the usage of trustless infrastructures such as blockchains or other distributed systems.

    The GeoDID is inspired by the SpatioTemporal Asset Catalog (STAC) specification and utilizes a similar linked data structure. The structure alleviates a handful of problems associated with traversing large datasets, and allows for ease of use for the end user. Spatial data assets are identified in the service endpoints of the GeoDID document. These service endpoints can be classed as either Collections or Items. Each "Collection" contains a number of child Collections or Items; and each "Item" will contain several service endpoints that dereference to geospatial data assets. This hierarchy of encapsulating linked data within the GeoDIDs will allow users to find or create the data/datasets they need easily.

    This data model is based on the STAC specification, which was designed for cataloging spatiotemporal data assets including satellite images, UAV imagery, LIDAR scans etc.

    For the alpha version of the specification we did not consider the OGC API - Features specification, which is better optimized for representing vector spatial data. Future versions of the GeoDID Method Specification should evolve so that vector and raster data assets are identified according to the most appropriate specification - work to be done.

    1. GeoDID Method

    The namestring that shall identify this DID method is: geo.

    A DID that uses this method MUST begin with the following prefix: did:geo. Per the DID specification, this string MUST be in lowercase. The remainder of the DID after the prefix is specified below.

    2. Namespace Specific Identifier:

    did:geo:<geo-specific-identifier>

    All GeoDID identifiers are base58 encoded using the Bitcoin / IPFS alphabet.

    geo-did                 = "did:geo:" + geo-specific-identifier
    geo-specific-identifier = new CID(0, 'dag-pb', hash, 'base58btc')
    hash                    = multihash(byte, sha2-256)
    byte                    = new TextEncoder().encode(ethereum address + UNIX Time)

    Namestring Generation Method

    For the draft version of this specification, <geo-specific-identifier> referenced above is created by computing a sha2-256 multihash on the byte representation of the DID controller's ethereum address + Unix time: multihash(ethAddress + time, sha2-256). Then we create a new CID Block by encoding the multihash with a base58 encoding. This will return a cid that will act as the identifier for the GeoDID.
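    As a sketch of the digest step using Node's built-in crypto module (the base58 CID encoding is omitted here; the function name is ours). A multihash prefixes the sha2-256 digest with a code byte (0x12 = sha2-256) and a length byte (0x20 = 32 bytes):

```typescript
import { createHash } from "crypto";

// sha2-256 multihash of (ethereum address + UNIX time), per the
// generation method described above.
function geoIdentifierMultihash(ethAddress: string, unixTime: number): Uint8Array {
  const bytes = new TextEncoder().encode(ethAddress + unixTime);
  const digest = createHash("sha256").update(bytes).digest();
  return Uint8Array.from([0x12, 0x20, ...digest]);
}
```

    The resulting 34-byte multihash is what gets wrapped in a CID and base58-encoded to form the identifier; the same address generates a different identifier at each timestamp, which is how one Ethereum address can control multiple GeoDIDs.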

    This <geo-specific-identifier> generation procedure achieves our design goals of enabling one Ethereum address to control multiple GeoDIDs. However, in future versions of the specification we intend to investigate the potential of encoding more information into the identifier, including a hash or checksum of the spatial data assets identified, similar to the .

    We also could encode some segment of the identifier to indicate which blockchain the GeoDID is registered on, a possible approach to achieve our design goal of platform agnosticism.

    Fundamentally, the GeoDID identifier should not change even if GeoDID Document contents are subsequently updated by the GeoDID controller.

    Identifying the correct GeoDID

    The service array in the GeoDID will contain several references to other GeoDIDs and/or assets. The idea is that if the GeoDID is the root DID in the hierarchy, regardless of its type, then it has the base DID identifier. If the GeoDID is a sub-collection or sub-item then it is referenced via path, and if it is an asset within the sub-item's service array, then it is referenced via fragment.

    Standalone or Root GeoDIDs using the Base DID Identifier:

    did:geo:9H8WRbfd4K3kQ2NTxT6L2wTNyMj1ARCaVVsT5GJ87Jw2

    Paths reference other GeoDID sub-Collections or sub-Items:

    did:geo:9H8WRbfd4K3kQ2NTxT6L2wTNyMj1ARCaVVsT5GJ87Jw2/sub-collection-A/sub-item-1

    Fragments reference assets within the GeoDID sub-Items:

    did:geo:9H8WRbfd4K3kQ2NTxT6L2wTNyMj1ARCaVVsT5GJ87Jw2/sub-collection-A/sub-item-1#raster-image-1

    did:geo:9H8WRbfd4K3kQ2NTxT6L2wTNyMj1ARCaVVsT5GJ87Jw2/sub-collection-A/sub-item-1#thumbnail
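    These referencing rules can be captured in a small parser; this is our own sketch for illustration, not part of the specification:

```typescript
interface ParsedGeoDid {
  method: string;
  id: string;
  path: string[];    // sub-collection / sub-item segments
  fragment?: string; // asset reference within a sub-item
}

// Split a GeoDID string into its method, base identifier, path and fragment.
function parseGeoDid(did: string): ParsedGeoDid {
  const [beforeFragment, fragment] = did.split("#");
  const [scheme, method, rest] = beforeFragment.split(":");
  if (scheme !== "did") throw new Error("not a DID");
  const [id, ...path] = rest.split("/");
  return { method, id, path, fragment };
}
```

    For the examples above, the root GeoDID parses with an empty path and no fragment, while the asset references parse with both path segments and a fragment.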

    In future versions of the specification we hope to incorporate spatial, temporal and other query parameter capabilities in the GeoDID fragment, so we could retrieve matching features. By incorporating this querying, filtering and masking capability many advanced spatial decentralized applications would be enabled - though technical feasibility has not been assessed.

    3. CRUD Operation Definitions

    Create (Register)

    In order to create a GeoDID, a method specific identifier must be created, which will be used to build the document. After the method specific identifier is created, the user will need to select the type of Document they would like to create, Collection or Item.

    Required Assets

    • ECDSA keypair. (for the alpha version of the specification - future versions may be agnostic to which digital signature algorithm is used).

    • Spatial data asset(s), or URI(s) resolving to spatial data asset(s), along with relevant metadata / attribution.

    • If GeoDID Collection, some information about the relationships between the spatial data assets being identified.

    Proof of Concept Process

    1. Create Method Specific Identifier described in (2), above.

    2. User chooses which type of GeoDID they want to create (Collection or standalone Item).

    3. If the user decides to create a standalone Item then they just upload the assets, did-metadata information, and item-metadata they want in the DID. The GeoDID will be built, pinned on IPFS, and anchored on the Ropsten Testnet.

    4. If the user decides to create a Collection then the client will build a collection GeoDID and return the GeoDID ID. The GeoDID will be built, pinned on IPFS, and anchored on the Ropsten Testnet.

    In the near future, we will also create automation features to create trees by uploading folders with files in them. We hope this will kill two birds with one stone: the user will only need to prepare the data once and upload it in bulk.

    Read (Resolve)

    In the alpha implementation of the specification a GeoDID document can be resolved by invoking the resolve(<GeoDID ID>) method at contract address <0x___TBD___> on Ethereum's Ropsten testnet. This contract method will first verify that the user has access to this GeoDID by checking that their address registered the GeoDID via the create method. The contract will store a mapping from the user's address to GeoDID IDs.

    Once the user has been authenticated, the contract will trigger an event that the astral-protocol-core package will be listening for. From there the geo-did-resolver will handle the rest, and dereference to the proper GeoDID Document.

    The GeoDID Document can then be parsed and analyzed by the client, or spatial data assets can be fetched from their respective service endpoints. Do note that sometimes data assets will be identified by CIDs and stored on the IPFS network, while other service endpoints may be HTTP URLs - appropriate resolution methods will be required.

    Controller Address

    Each identity always has a controller address. To check it, call the read-only contract function identityOwner(address identity) on the deployed version of the ERC1056 contract.

    The identity controller will always have a publicKey with the id set as the DID with the fragment #key appended.

    An entry is also added to the authentication array of the DID document with type Secp256k1SignatureAuthentication2018.

    Service Endpoints

    Service Endpoints are relevant in both GeoDID Collections and Items. The service array exists to list relevant relationships to and from the GeoDID. Each object in the service array will contain a required link field and several fields that contain the GeoDID ID, its relationship to the DID ID, and a reference link if the controller needs to dereference it. The purpose of the link field is to enable browsers and crawlers to access the sets of Items in an organized and straightforward way. These service endpoints can also contain references to assets that are related to a specific Item.

    The GeoDID Document identified by the CID can then be resolved using a browser with native IPFS support (ipfs://<CID>), or by resolving via a gateway, like ipfs.io/ipfs/<GeoDID Document CID>

    Update

    The DID Document may be updated by invoking the update(<GeoDID ID>) method at contract address <0x_____> on the Ropsten testnet.

    Once the address has been verified as the DID controller, an oracle function will be invoked and will trigger an off-chain event to open the GeoDID Document for the user to update. When the user is done updating, they can submit the update, which will compute the CID of the GeoDID Document and compare it to the previous CID version.

    If the CIDs differ, the client will append the timestamp of the update within the GeoDID Document, recalculate the finalized CID, and append a new Record in the astral-core-package. The updated CID will be returned via the oracle and appended to the end of the array of GeoDID Document CIDs, meaning users can call fetchVersionHistory(<GeoDID fragment>) to retrieve all the CIDs of historical GeoDID Documents.

    Deactivate (Revoke)

    A GeoDID Controller can revoke access to a GeoDID by invoking the deactivate(<GeoDID fragment>) method. This simply sets that GeoDID's GeoDIDActive record to false - it does not remove information from the smart contract about the historical versions of the GeoDID. It does, however, mean that future attempts to resolve that GeoDID will not succeed.
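    The update and deactivate semantics can be modeled with a minimal in-memory registry. This is a sketch of the behavior described above, not the contract code; all names are ours:

```typescript
// Minimal model of a GeoDID record: an append-only list of document
// CIDs plus an active flag, mirroring the contract behavior described
// in the Update and Deactivate sections.
class GeoDidRegistry {
  private records = new Map<string, { cids: string[]; active: boolean }>();

  create(geoDid: string, documentCid: string): void {
    this.records.set(geoDid, { cids: [documentCid], active: true });
  }

  // Append a new document CID only if it differs from the latest one.
  update(geoDid: string, newCid: string): void {
    const rec = this.records.get(geoDid);
    if (!rec || !rec.active) throw new Error("cannot update");
    if (rec.cids[rec.cids.length - 1] !== newCid) rec.cids.push(newCid);
  }

  // Resolution fails once a GeoDID is deactivated.
  resolve(geoDid: string): string {
    const rec = this.records.get(geoDid);
    if (!rec || !rec.active) throw new Error("cannot resolve");
    return rec.cids[rec.cids.length - 1];
  }

  // History survives deactivation - nothing is deleted.
  fetchVersionHistory(geoDid: string): string[] {
    return this.records.get(geoDid)?.cids ?? [];
  }

  deactivate(geoDid: string): void {
    const rec = this.records.get(geoDid);
    if (rec) rec.active = false;
  }
}
```

    Note how deactivation only flips a flag: historical CIDs remain queryable, but resolution is refused, matching the revocation semantics above.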

    Reference Implementations

    Once we develop it, we will store code at as a reference implementation of this DID method.

    Encoding the GeoTIFF

    Process of Tiling and Encoding GeoTIFF

    Tile Object

    Beforehand, the GeoTIFF is tiled at different resolutions and sizes, and the binary of each tile is then serialized into an IPLD Block. This Block contains the serialized binary of the tile and its respective CID (Content Identifier). This data is then stored in an Object that also contains the tile's respective window and size.

    Wrapper Object wrapping Tile Objects

    The Wrapper Object wraps the Tile Objects by row, so as to act as a "key" when we need the path to pull these tiles. The Wrapper Object groups them by row * 2(tileSize), in order to encapsulate the scaled-up version of the tile.
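    Our reading of that grouping rule, sketched with hypothetical names: each wrapper key spans 2 × tileSize pixels of rows, so a wrapper also covers the footprint of the scaled-up tile at the next overview level:

```typescript
interface TileRef { y: number; cid: string } // y = tile's row origin in pixels

// Group tiles under a row key spanning 2 x tileSize pixels, so each
// wrapper encapsulates the area of the scaled-up (next-overview) tile.
function groupByRow(tiles: TileRef[], tileSize: number): Map<number, TileRef[]> {
  const groups = new Map<number, TileRef[]>();
  for (const t of tiles) {
    const key = Math.floor(t.y / (2 * tileSize)) * 2 * tileSize;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key)!.push(t);
  }
  return groups;
}
```

    With 240-pixel tiles, rows starting at y = 0 and y = 240 fall under key 0, while y = 480 and y = 720 fall under key 480 - each key matching one 480-pixel overview tile.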

    MasterDocument Output

    The Master Document is the document that contains all the Rows, Tiles, and their respective Overviews. This is essentially the "IFD", except it is done with IPLD, and is stored as a JSON object.

    Response after the GeoTIFF is successfully Tiled and pinned to IPFS

    The response after the GeoTIFF is successfully Tiled returns an Object that contains metadata related to the tiling job.

    Field
    Type
    Description

    Note that any future request must be within the bbox or window returned in this object; if it isn't, the request will be rejected.

    GeoDID Item Example

    The Item specification of the GeoDID, including the Item's fields.

    The GeoDID Item

    The Item extends the default GeoDID Specification. It can function as a standalone DID and does not rely on a GeoDID Collection to be referenced.

    Example of GeoDID Item

    Verifiable Spatial Data Registries

    Smart contracts for raster and vector spatial data assets

    Leveraging Spatial.sol, we are early in the process of developing a standard contract for verifiable spatial data registries. There is a wide breadth of opportunities in this design space.

    Specifically, we are building on work we've done to let self-sovereign users register geographic zones on smart contract platforms, starting with EVM-compatible chains. This capability is required for a number of use cases - a spatial governance system like Hyperaware, sustainability-linked bonds and other spatial DeFi applications, parametric insurance policies like those IBISA provides, location-based Web3 games including AR and Pebble Go, and so on.

    We believe that a smart contract standard would promote the composability of these spatial data registries. We see value in building on existing work and designing these spatial data registries to interoperate with existing protocols. To this end, our plan is to represent these spatial data assets - vector or raster data objects - as tokens.

We will likely extend the ERC721 contract standard and represent geographic zones as NFTs, meaning they can interoperate with ERC721-compatible dapps, be owned and transferred, etc. We'll extend the contract, though, to include geospatial operations - for example, we plan to include a method that checks if a supplied point is contained within the boundaries of a specific zone: `zoneContains(zoneID, coordinates) returns (bool)`.
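The on-chain version would be written in Solidity, but the core logic of such a zoneContains check can be sketched with classic even-odd ray casting. The registry Map and function names below are illustrative assumptions, not the contract's actual interface.

```typescript
type Point = [number, number]; // [lon, lat]

// Classic even-odd ray casting: cast a ray to the right of the point and
// count how many polygon edges it crosses; an odd count means inside.
function pointInPolygon(point: Point, polygon: Point[]): boolean {
  const [x, y] = point;
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    const crosses =
      (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Hypothetical registry mirroring zoneContains(zoneID, coordinates).
const zones = new Map<number, Point[]>();

function zoneContains(zoneId: number, coords: Point): boolean {
  const boundary = zones.get(zoneId);
  return boundary ? pointInPolygon(coords, boundary) : false;
}
```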

    Before making this decision, however, we want to look into ERC998 and the potential of a composable spatial data registry. The hierarchical ownership model of ERC998 maps very nicely to the hierarchical nature of administrative boundaries - nations are composed of states or provinces, which are composed of counties etc. Research to be done.

    Justification

Land registry contracts exist, for both virtual and real-world land parcels - Decentraland, Etherland, Superworld, Geo Web, Cryptovoxels, and others. However, these almost always restrict users to owning grid cells, due to the complexity of representing irregular polygons in a smart contract. A land registry where two users could own the same piece of land would not be realistic - detecting these topological intersections is difficult.

It is our belief that for spatial data registries to be useful in real-world contexts, they must support irregular polygons. This means that we need to be able to detect zone intersections, which could otherwise result in a piece of land being double-insured, or a vehicle being in two congestion zones at once. This was the inspiration for our initial experiments writing Spatial.sol.

    Vision

The spatial data registries standard we are designing aims to represent this kind of physical scarcity, applied to physical territory within a spatial reference system. These are "non-intersecting" boundaries, defined as Type 1 jurisdictions by Hooghe and Marks (2003).

    We intend to achieve this by performing zone intersection checks using Spatial.sol, which will have functions that allow us to check if zones intersect. Our goal is to create a verifiable, trustless way of detecting these boundary disputes, if necessary. This will likely require some kind of on-chain spatial indexing system, helping us minimize the number of intersects calls to make by only testing polygons known to be in the vicinity of a newly-registered polygon.

We're researching how to efficiently design a system that would enable this kind of spatial data registry, and working out exactly what the requirements might be; we anticipate that this will emerge as a contract standard. We can imagine many variations of this type of contract deployed, each storing its own registry of polygons - for mobility apps, administrative jurisdictions, maritime governance zones, restricted airspace, watersheds and so on. The initial design will be tailored to registering geographic zones (i.e. polygons). Use cases will likely emerge that require the registration of other spatial data assets like points and lines, or raster datasets such as satellite images and LIDAR scans; we will consider these, but focus on our core use case of a zone registry.

    An Early Use Case

    One early adopter of the verifiable spatial data registry standard is the Kolektivo Framework, which is designing a Decentralized Exchange Trading System that relies on natural capital currencies. The Kolektivo implementation of these currencies, which are backed by ecosystem assets and ecosystem services, will rely on Astral's verifiable spatial data registries.

Another early adopter is Geo Web, "a set of open protocols with a system of property rights for anchoring digital content to physical land" (Geo Web Gitcoin grant). At the moment Geo Web parcels are grid cells, a regular raster. By using Astral verifiable spatial data registries, version 2 of the Geo Web protocol could support irregular geometries as NFTs - an application much better suited to mirror the real, physical world.

    Ultimately we aim to publish an Ethereum Improvement Proposal describing this new standard and develop an open source reference implementation in Solidity.

    Questions

    • Who has the right to create or register parcels / zones? Under what conditions?

    • Can zones overlap / intersect?

    • How can we build tools to gracefully recover from lost or abandoned parcels?

    Design Considerations

Work in progress - comments very welcome! For any comments or feedback, please reach out to @astralprotocol on Twitter or via Discord.

    Representing geographic vector features

    How to represent these features on chain: struct, array or otherwise? We want to balance efficiency, simplicity and developer experience.

    Our initial impulse is to mirror GeoJSON as it is the de facto standard for web developers, well documented and a familiar concept. So a struct is a sensible approach, with s.coordinates being the coordinate array [[lon0, lat0], [lon1, lat1], [lon2, lat2], ... ]:

Then we would call length(s), which would compute the length of the linestring represented by s.coordinates. This also would enable efficient type checking and verification when data is onboarded or used in a method:
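A minimal TypeScript sketch of this struct-mirroring idea (field names are assumptions; on chain the coordinates would be scaled integers rather than floats):

```typescript
interface LineString {
  kind: "LineString";
  coordinates: [number, number][]; // [[lon0, lat0], [lon1, lat1], ...]
}

// Planar length of the linestring in s.coordinates; a geodesic version
// would use Haversine or similar instead of Math.hypot.
function length(s: LineString): number {
  let total = 0;
  for (let i = 1; i < s.coordinates.length; i++) {
    const [x0, y0] = s.coordinates[i - 1];
    const [x1, y1] = s.coordinates[i];
    total += Math.hypot(x1 - x0, y1 - y0);
  }
  return total;
}

// Type check on onboarding: a LineString needs at least two positions.
function isLineString(s: LineString): boolean {
  return s.kind === "LineString" && s.coordinates.length >= 2;
}
```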

    Coordinate Reference Systems

How do we want to handle different coordinate reference systems? Do we want to plan on supporting various systems? We think that is an aspirational goal, but realistically not a requirement we need to worry about for a long time.

For now we think we want to just focus on mirroring the standards adopted by GeoJSON, especially for v0. (Interestingly, GeoJSON removed support for alternative coordinate reference systems in 2016 "because of interoperability issues".)

    The coordinate reference system for all GeoJSON coordinates is a geographic coordinate reference system, using the World Geodetic System 1984 (WGS 84) [WGS84] datum, with longitude and latitude units of decimal degrees. This is equivalent to the coordinate reference system identified by the Open Geospatial Consortium (OGC) URN urn:ogc:def:crs:OGC::CRS84.

    An OPTIONAL third-position element SHALL be the height in meters above or below the WGS 84 reference ellipsoid. In the absence of elevation values, applications sensitive to height or depth SHOULD interpret positions as being at local ground or sea level.

    This decision would be based on a few observations:

    • We want these contracts to be composable - dapps will be built that permissionlessly call other spatial data registries deployed on a blockchain. Even if we had a contract- or feature-level CRS flag, handling a range of coordinate reference systems would cause complexity to balloon

• Especially early on, we don't anticipate precision location-based applications running on any smart contract platforms. This is partly because positioning systems are neither hyper-precise nor trustless. This disclaimer should be included in the code - that precision may vary. It will be a long time before these technologies mature to the point that they can support mission-critical location-based applications - for now our bias towards action and the pragmatic, progress-oriented critical path leads us to lean towards adopting the coordinate reference system most commonly used on the web.

    Consistency

• We need to establish a consistent way of working with these features - i.e. always passing around the entire struct, and not sometimes passing s.coordinates into a function.

    Features as contract instances?

• Is there any benefit to having these geographic features be contract instances in their own right? Then we could have methods - s.length() returns the length of the linestring - but it seems super inefficient and probably unnecessary.

    eth-spatial client libraries

Solidity has certain quirks that make it tricky to work with. Our focus is on developer experience - we want to make it as easy as possible for dapp developers to build spatial and location-based decentralized applications.

So we expect we will need to develop client libraries that effectively serve as transpilers. We're not exactly certain how this will work, but a few ideas:

    • These client libraries - imagine eth-spatial.js and eth-spatial.py to start - are designed to provide a developer friendly interface between vector spatial data represented in JavaScript / Python environments and those represented in the EVM.

    • This could mean moving from JS / Python -> EVM and preparing GeoJSON for submission to a contract that uses Spatial.sol. This would happen by, for example, converting decimal degrees coordinates to Solidity-friendly integers.

• A well-designed client transpiler library would also improve the efficiency of the system by performing checks off chain to make sure that no errors will arise, helping devs be confident that the data they're submitting in an Ethereum transaction is valid and won't be rejected by the contract.

    (or something like that - to give the instance of web3 access to all of the required methods etc to work with spatial contracts)
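The JS -> EVM preparation step described above can be sketched as follows, assuming a nanodegree (1e9) fixed-point scale; the actual scale factor is an open design question, and the function names are illustrative.

```typescript
// Fixed-point scale: 1e9 units per degree ("nanodegrees"); an assumption.
const SCALE = 1e9;

function degreesToFixedPoint(deg: number): bigint {
  return BigInt(Math.round(deg * SCALE));
}

// Off-chain validity check, so an invalid transaction is never sent.
function isValidPosition(lon: number, lat: number): boolean {
  return lon >= -180 && lon <= 180 && lat >= -90 && lat <= 90;
}

// Prepare a GeoJSON-style coordinate array for submission to a contract.
function prepareCoordinates(coords: [number, number][]): bigint[][] {
  return coords.map(([lon, lat]) => {
    if (!isValidPosition(lon, lat)) {
      throw new Error(`coordinate out of range: ${lon}, ${lat}`);
    }
    return [degreesToFixedPoint(lon), degreesToFixedPoint(lat)];
  });
}
```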

    DID Primer

    A primer on DIDs before we go into the Core Specification of GeoDIDs

    What is a DID?

    Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identities. A DID identifies any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) that the controller of the DID decides that it identifies. DIDs are URIs that associate a DID subject with a DID document allowing trustable interactions associated with that subject.

    Each DID document can express cryptographic material, verification methods, or service endpoints, which provide a set of mechanisms enabling a DID controller to prove control of the DID.

    To learn more about DIDs and why they're useful:

    Spatial.sol

    A Solidity library of topological and geometric functions

    Many of the applications we envision will require us to perform spatial operations in smart contracts. To this end, a few years ago (at ETHParis) we outlined and experimented with a Solidity library for performing these operations - measuring the length of a linestring, testing to see if lines intersect, checking to see if a point is inside a polygon, etc.

We prototyped a location-aware wallet (early experiments that may lead to truly local currencies). At the time we were discouraged by the computational complexity leading to high gas costs, but Layer 2 solutions like Polygon and xDai, alternative lower-cost EVM-compatible blockchains like IoTeX or Avalanche and, of course, Eth2 mean that these kinds of operations will be increasingly viable in smart contracts.

    We are developing the first version of the Spatial.sol library and plan to release it under an open source license. Developing the library entails tackling several technical challenges, and to build a production-quality library will be quite involved.

    Spatial.sol

Spatial.sol is required for many of the location-based dapps and spatial contracts we envision, including the verifiable spatial data registries described here. We are researching which functions the library should include, and will look to Turf.js for inspiration. Some functions we expect to include are:

• Boolean verification algorithms like isPolygon, isLine and so on

• Various geometric functions like distance, area, centroid, length, perimeter

• Angular functions like bearing, rhumbBearing

• Geometric helper functions like boundingBox, convexHull

• Boolean topological tests like pointInBbox, pointInPolygon, intersects

• Functions for manipulating geometries such as difference, dissolve, intersect, union, etc.

• sqrt

• Conversion helper functions like degreesToNanoradians.
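To illustrate two of the helpers named in the list, here is a hedged sketch of degreesToNanoradians and an integer sqrt (Newton's method), written with bigints standing in for the EVM's integer-only arithmetic; the exact signatures in Spatial.sol may differ.

```typescript
// 1 degree = pi/180 radians = pi/180 * 1e9 nanoradians.
function degreesToNanoradians(deg: number): bigint {
  return BigInt(Math.round((deg * Math.PI * 1e9) / 180));
}

// Integer square root via Newton's method, as a bigint sqrt might be
// written for an EVM-style integer-only environment.
function sqrt(n: bigint): bigint {
  if (n < 0n) throw new Error("sqrt of negative number");
  if (n < 2n) return n;
  let x = n;
  let y = (x + 1n) / 2n;
  while (y < x) {
    x = y;
    y = (x + n / x) / 2n;
  }
  return x;
}
```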

These functions will serve as a basis for the first cohort of applications built on Astral. Note that these are primarily vector operations - we anticipate that raster analysis techniques may be useful (recreating some of the functionality of RasterIO, for example), but will leave that for a future version of Spatial.sol. We still need to research the feasibility of supporting multifeature data structures like multipoints, multilines and multipolygons, as well as more complex shapes like polygons with holes. Version 1 will not support 3-dimensional volumes (polyhedra), but we anticipate that many use cases will require this functionality in future - for example, to test if an aircraft has entered some airspace.

    Spatial Reference Systems

We anticipate that one of the most difficult aspects of building smart contracts designed to work with spatial data is accommodating the diversity of spatial reference systems. The Earth is not a perfect sphere - it is a geoid - and many different systems are used to represent positions relative to the planet. Geodesy matters, and implementing systems that accommodate different spatial reference systems, converting between them etc will be a complex technical challenge.

For v1 of Spatial.sol we will sacrifice accuracy and minimize complexity in favor of developing simplified on-chain geospatial computing capabilities. For example, we will likely implement a Haversine function, but not a Vincenty function.

    Trigonometry.sol

Spatial.sol has a dependency: Trigonometry.sol, developed and open sourced by Sikorka (docs, code). This library appears to have been abandoned a few years ago, and lacks a tangent method, which is useful in some operations like computing the bearing between two points. Spatial.sol will require an audited Trigonometry.sol to be secure.
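As an illustration of why a tangent (via atan2) is needed, the initial bearing between two points can be sketched as follows - in floating-point TypeScript here, whereas Trigonometry.sol would operate on fixed-point integers.

```typescript
function toRad(deg: number): number {
  return (deg * Math.PI) / 180;
}

// Initial great-circle bearing from point 1 to point 2, in degrees
// clockwise from north. The atan2 call is where a tangent is required.
function initialBearing(
  [lon1, lat1]: [number, number],
  [lon2, lat2]: [number, number]
): number {
  const phi1 = toRad(lat1);
  const phi2 = toRad(lat2);
  const dLon = toRad(lon2 - lon1);
  const y = Math.sin(dLon) * Math.cos(phi2);
  const x =
    Math.cos(phi1) * Math.sin(phi2) -
    Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
  const deg = (Math.atan2(y, x) * 180) / Math.PI;
  return (deg + 360) % 360; // normalize to [0, 360)
}
```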

    eth-spatial.js

    We anticipate that we'll want to develop a corresponding client library eth-spatial.js, to abstract some of the complexities of supplying spatial data to contracts using Spatial.sol. Our goal is to ease the developer experience - we expect that many developers will be working with GeoJSON in the client.

    eth-spatial.js will be designed to seamlessly accept common data formats used on the web (starting with GeoJSON) and prepare the spatial data for use in contracts that use the Spatial.sol library. One example process here will be converting decimal degrees to integers to be used in the contract, possibly nanoradians.

    Likewise, our aim is for eth-spatial.js to easily convert from on-chain spatial data to web-ready GeoJSON. We may look into developing a subgraph to index on-chain geospatial data - the final step to fetching this data will be presenting it in a format that interoperates with common spatial data libraries like Leaflet, Mapbox GL JS etc. An eth-spatial.py library would also probably be helpful, as would implementations in other languages - future work to do.

    Future

    We anticipate this library will need a thorough audit. We would like to explore how we can ensure ongoing support and development of the library once it is developed and open sourced.

    GeoDID Collection Example

    The Collection Specification of the GeoDID; includes the collection's fields for specification.

    The GeoDID Collection

As of right now, the Collection extends the default GeoDID Specification without any added fields. It is up to the user to add what they deem necessary. However, the collection must function as a DID that will reference other DID documents or URL links, and the metadata added to it must reflect that.

    Example of GeoDID Collection

    GeoDIDs

When sensor measurements are captured digitally, the information is formatted to be processed by some software. Spatial data comes in many different formats - GeoJSON, KML, shapefiles, GeoTIFFs, cryptospatial coordinates, etc.

Astral endeavors to design an open and agnostic protocol that will gracefully develop and evolve with the state of spatial computing. We also expect full trustlessness, self-sovereignty and independent verifiability of all protocols we design. This requires a versatile, Web3-native spatial data identification, encoding and storage scheme.

    To achieve this, with support from the Filecoin Foundation and London Blockchain Labs, we have drafted a GeoDID Method Specification and prototype modules and smart contracts to create, read, update and deactivate these geographic decentralized identifiers and the data assets they reference.

    GeoDIDs will identify these files by wrapping the spatial data assets in DID documents, thereby providing a standard schema to grant the user verifiable control and enable trustless interoperability. Different spatial data formats will be supported through GeoDID Extensions - meaning GeoDIDs can support any current or future digital spatial data format. Required member variables will include relevant metadata to enable comprehensive indexing and use of the spatial data a GeoDID identifies.

    We have designed the initial draft of the spec and would be curious for feedback from you or anyone interested when we release the spec to the community for review.

    GeoDIDs are agnostic to the spatial data assets being identified. However, we are designing the alpha implementation of the GeoDID modules to rely on IPFS by default, so we can cryptographically verify the integrity of the data assets referenced.

The GeoDID Method Specification can be found here, along with writing on the extensions. A repository containing our prototype implementation of a GeoDID system is at https://github.com/AstralProtocol.

    A Web3-Native Geospatial Vision

    Context: Cloud-Native Geospatial

Satellite images are big. One 8-band Landsat 7 scene we examined (LE71660522010289ASN00) is nearly 850MB. These datasets are rich with information about the state of the Earth's surface at the moment of capture - immensely valuable snapshots of our world that can be studied for decades, even centuries, to come. However, traditional client-server data architectures make it infeasible for non-specialists to work with these large raster datasets. Download times slow workflows, and consumer-grade hardware is incapable of visualizing and analyzing these datasets.

    To solve this problem, the satellite imagery community has designed the Cloud-Optimized GeoTIFF (COG) standard. COGs are GeoTIFFs, but they are organized so that they can be hosted in the cloud and users can send HTTP range requests to only access the parts of the file they need, when they need them. By tiling the image, users only need to request the geographic extent relevant to their workflow; overviews allow users to load lower resolution versions of the image, which is often all that is required in many applications. COGs are a major step forward to unlock the potential of earth observation data, and a key component of the cloud-native geospatial vision the community is moving toward.
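The range-request idea can be sketched as follows: given the tile byte offsets parsed from a COG's header, a client requests only the bytes of the tile it needs. The TileIndex shape below is illustrative, not the actual TIFF header layout.

```typescript
// Byte layout of a (hypothetical, simplified) tiled image: where each
// tile starts in the file and how many bytes it occupies.
interface TileIndex {
  offsets: number[];    // byte offset of each tile
  byteCounts: number[]; // compressed size of each tile
}

// HTTP Range header (inclusive byte range) selecting just one tile.
function rangeHeaderForTile(idx: TileIndex, tile: number): string {
  const start = idx.offsets[tile];
  const end = start + idx.byteCounts[tile] - 1;
  return `bytes=${start}-${end}`;
}

// A client would then fetch only that slice, e.g.:
//   fetch(url, { headers: { Range: rangeHeaderForTile(idx, 42) } })
```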

STAC Catalogs

The Cloud-Native Geospatial community has designed another specification, intended to complement COGs: the SpatioTemporal Asset Catalog (STAC). "The SpatioTemporal Asset Catalog (STAC) specification provides a common language to describe a range of geospatial information, so it can more easily be indexed and discovered."

The problem was that various satellite imagery providers were using their own custom-designed systems for indexing and cataloging satellite images and other spatio-temporal assets. Each of these indexing systems was roughly the same, but they were not interoperable. STAC standardizes the way these collections of data are organized and referenced.

STAC Items are single spatio-temporal data assets - for example, a single Landsat scene with associated metadata. STAC Catalogs are JSON files that include links to Items and sub-Catalogs. This can be thought of as a directory structure, with Catalogs acting as folders and Items acting as files - Catalogs can contain Catalogs (sub-folders) or Items. This simple, flexible system allows users to organize large volumes of satellite imagery and - crucially - quickly and easily search through the assets available.
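The Catalog/Item tree described above can be sketched as a simple recursive walk; the shapes here are simplified illustrations, not the full STAC schema.

```typescript
interface StacItem {
  type: "Item";
  id: string;
  bbox: [number, number, number, number]; // [west, south, east, north]
}

interface StacCatalog {
  type: "Catalog";
  id: string;
  children: (StacCatalog | StacItem)[];
}

// Searching the catalog is a walk over the folder-like tree, collecting
// every Item that satisfies the predicate.
function findItems(
  node: StacCatalog | StacItem,
  predicate: (item: StacItem) => boolean
): StacItem[] {
  if (node.type === "Item") return predicate(node) ? [node] : [];
  return node.children.flatMap((child) => findItems(child, predicate));
}
```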


    Web3-Native Geospatial

    Cloud-native geospatial technologies are key to scaling the geospatial data sector, and for the efficient discovery and computation of relevant data. However, they are built in a centralized computing paradigm, with roots still in the client-server model. Data networks designed in this way are brittle - links can break, file content can be changed unbeknownst to users, and data must be held by trusted custodians who may abuse their role as maintainers of server systems holding geospatial data.

    A more secure, resilient and user-centric vision for the Internet is evolving with Web3. Decentralized identifiers, peer-to-peer storage and transfer, and content identifiers resolve the most critical fragilities that exist in the incumbent web.

    The geospatial data sector is not immune to these problems - in fact, the ecosystem is rife with inefficiencies. Many organizations hold multiple versions of the same reference spatial datasets on various internally-operated servers. Each of these servers requires computing resources and skilled workers to maintain. It seems unlikely that each of these datasets is kept up to date based on update release cycles, meaning the reference data upon which the organizational data is overlaid is not current. Deep architectural wastefulness pervades geospatial data storage systems in the public, private and third sectors worldwide.

    The Web3-native geospatial vision seeks to integrate the learnings of the incumbent geospatial data practitioners with the principles underpinning the design of the decentralized web.

    • Data must persist — we cannot try to resolve a mission-critical dataset a few years after its creation only to receive a 404 error. This is especially true for spatial finance applications where decisions dealing with large monetary values are made based on insights derived from geospatial data.

    • Datasets must be verifiable — all parties can have cryptographic confidence when accessing a dataset around the world and into the future that they are looking at the same information as others.

    • Data networks must be resilient — they must be self-healing and resilient to the failure of any actor.

    These are challenging problems to solve, and the community is early in the process of developing the standards, tools and protocols that will unlock the potential of the decentralized, location-based web.

    In this talk, the core team from the Astral Protocol will outline their vision for a Web3-native geospatial data architecture, including data storage and computation systems that are decentralized and fault tolerant, and in which all participants are cryptoeconomically-aligned. The Astral team will touch on data storage systems they are designing based on IPFS, Filecoin, Arweave and Ceramic, and will share insights into decentralized geospatial data processing and discovery systems including dClimate, Ocean Protocol and Algovera AI. They will also touch on their early research into privacy-preserving geospatial technologies.

    These technologies are designed to interoperate with distributed ledgers, smart contracts and digital currencies, with an eye towards underpinning the emerging regenerative finance ecosystem being built on public blockchains.

    GeoDID Core Specification

    The Core Specification of the GeoDID; includes default fields for specification.

    Figure 1: GeoDID Document Hierarchy

    Types of GeoDIDs

There are two "types" of GeoDID Specifications under the Astral Protocol that work together to enable structure between resources. At their core, both are extensions of the DID and default GeoDID specifications. However, they differ in functionality and purpose, in order to enable a better experience for all users.

The GeoDID Collection - A GeoDID Collection is a simple, flexible JSON file of service endpoints that provides a structure to organize and browse GeoDID Items. The collection is responsible for bundling a set of items or sub-collections, utilizing links to reference other child Collections or Items. The division of sub-collections is up to the implementor, but is generally done to make the end user's UX easier.

The GeoDID Item - A GeoDID Item is an extension of the Default GeoDID Structure. Unlike its counterpart, the GeoDID Item is responsible for identifying a particular resource and referencing relative assets through its service endpoints. GeoDID Items can only act as the leaves of the tree and cannot link to other items or collections. An Item can only reference assets like raster imagery, videos and GeoJSON, and link to parent DID Documents.
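A sketch of this structural rule - Collections may contain Collections or Items, while Items are leaves referencing only assets. The shapes are simplified illustrations, not the GeoDID document schema.

```typescript
interface GeoDidItem {
  kind: "item";
  did: string;
  assets: string[]; // CIDs or URLs of raster imagery, video, GeoJSON, ...
}

interface GeoDidCollection {
  kind: "collection";
  did: string;
  children: (GeoDidCollection | GeoDidItem)[];
}

// Items are leaves, so counting them is a recursive walk over the tree.
function countItems(node: GeoDidCollection | GeoDidItem): number {
  if (node.kind === "item") return 1;
  return node.children.reduce((n, child) => n + countItems(child), 0);
}
```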

    Default GeoDID Structure

    GeoDID Default Fields

    Field
    Type
    Description

    Public Key Object Fields

The publicKey is used to specify how the DID subject is expected to be authenticated, for purposes such as performing CRUD operations on the DID Document. However, the PublicKey Object is OPTIONAL by default, as stated by the DID specification.

    Field
    Description

    Note: Do not worry about this field as it will automatically be populated with the user's Ethereum address.

    DID_Metadata Object Fields

    Field
    Type
    Description

    Service Endpoint Object Fields

The Metadata Object array will contain an array of Metadata related to the assets or links within the GeoDID (e.g. a list of the Spatial Data Providers who provided the data, or a GeoJSON Feature). The Assets Object array will contain a list of references to all the assets the GeoDID Item will need to reference.

    Field
    Type
    Description

    Link Object Field

    This object list describes the one-to-many relationships with other GeoDIDs. These entities can be sub-collection or sub-item GeoDIDs. This object field will come in handy when a user needs to traverse or scrape a data collection.

As of right now, The Graph is persisting the relationships between the different DID Documents and their respective CIDs. We still need to figure out how to properly update the links within the GeoDID in a scalable way. For large trees of GeoDID Collections, updates to the relations might take too long, so as a workaround we are maintaining all the relationships on The Graph, and the GeoDID will only contain a reference to itself, the root Document, and its parent.

    Field
    Type
    Description
import { getImageFromUrl, startTile, getGeoTile, IResponse } from "ipld-geotiff";
import { IPFS, create } from "ipfs";

async function example() {
    const url = 'http://download.osgeo.org/geotiff/samples/gdal_eg/cea.tif';

    // bbox that is sent from the client
    const request = [
        -28493.166784412522,
        4224973.143255847,
        2358.211624949061,
        4255884.5438021915
    ];

    // First create an instance of IPFS
    const ipfs: IPFS = await create();

    // Request the TIFF from the endpoint
    const image = await getImageFromUrl(url);
    // Start the tiling and encoding process
    const ires: IResponse = await startTile(ipfs, image);

    // Use getGeoTile to obtain the tile that you would like
    const tiff_of_tile = await getGeoTile(ipfs, ires.cid, ires.max_Dimensions);
}
  • The user will save this GeoDID ID in order to append sub-collections or sub-items as children.

  • If the user decides to add children to the sub-collection, they repeat step 4, and use the returned GeoDID ID + Collection path to append more leaf nodes.

  • If the user decides to add items to the collection, they repeat step 3, until they finish adding all items.





    GeoDID Extensions

    Spatial data assets identified by GeoDIDs will come in a particular format, likely with information about the spatial reference system, attribution and other metadata. GeoDID extensions will enable the GeoDID Core Spec to expand to support any type of spatial data asset - legacy, current, or future.

    Under construction - stay tuned! @AstralProtocol

    GeoDIDs v0.2

    Planned upgrades to the GeoDID Method Specification

GeoDIDs identify spatial data assets. DIDs support selectors, paths, query parameters and fragments. These additional details that can be included in a GeoDID offer a powerful way to represent and store large spatial datasets in a far more resource-efficient manner that is still persistent, cryptographically verifiable and optionally private.

    The next phase of research and development will be for GeoDIDs that support spatial querying and clipping.

For example, consider a GeoDID representing a collection of satellite imagery. We should be able to specify a sub-collection, or even an item, that defines a spatial and temporal query in the GeoDID itself. That way, a user could store a single GeoDID that specifies a single image, clipped to a particular area, extracted from the GeoDID Collection. The user would not need to store that clipped image, only the GeoDID with query parameters - and they would still have confidence that the GeoDID would resolve to the same clipped image permanently.
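A sketch of how a resolver might read such query parameters out of a GeoDID URL; the bbox and datetime parameter names are assumptions, not part of the spec.

```typescript
// Split a GeoDID URL into the base DID plus the query parameters that
// describe the clip; "bbox" and "datetime" are assumed names.
function parseGeoDidQuery(didUrl: string): {
  did: string;
  bbox?: number[];
  datetime?: string;
} {
  const [did, query = ""] = didUrl.split("?");
  const params = new URLSearchParams(query);
  const bboxParam = params.get("bbox");
  return {
    did,
    bbox: bboxParam ? bboxParam.split(",").map(Number) : undefined,
    datetime: params.get("datetime") ?? undefined,
  };
}
```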

This would be crucial for the auditability of spatial finance applications. A satellite image might prove that a particular green infrastructure project was completed by a certain date, or that some insured natural capital warranted a payout. For both traditional and decentralized spatial finance, this verifiability will likely bring a lot of value.

    See Decentralized Identifiers by Dr Phil Windley for more details on selectors, paths, query parameters and fragments.

    Universal Location Proofs

    Technically it is extremely difficult (if not impossible) to create a definitive proof that some information was created at a specific physical location.

There are technical ways to improve trust in the position - signing a position captured by the sensor, triangulated with GPS or the FOAM network, in a secure enclave, for example. It would require a special app or - eventually - a plugin for mobile crypto wallets that allows users to generate these “universal location check-ins” (credit @jabyl from Distributed Town for his help thinking this through).

Additional layers of trust could be built on a location proof by incorporating other sensor readings (like those from a camera or microphone), by requiring users to scan a cycling QR code only available at the location, by forming social check-ins in which users verify that the others were present, etc.

    We’ve been doing some early thinking about zero-knowledge location proofs as well, which prove that a point is inside a polygon without revealing the user’s position. This could then be verified on chain, enabling location-based smart contracts that preserve the user's privacy. Applications include local currencies, intelligent mobility systems, dynamic game preserves, detecting illegal and unregulated fishing in Maritime Protected Areas, location-anchored games and a lot more.

    <Buffer 81 59 e1 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... 57554 more bytes>
    js-ipld-dag-cbor
    writeArrayBuffer

    Name
    Type
    Description

    cid

    string

    CID of the pinned MasterDoc

    max_Dimensions

    Array<number>

    Array of numbers used to understand the size of each tile

    window

    Array<number>

    The max window of the image.

    bbox

    Array<number>

    The max bbox of the image.

    [
      {
        '@context': 'https://w3id.org/did/v1',
        id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
        publicKey: [
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#controller',
            type: 'Secp256k1VerificationKey2018',
            controller: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
            ethereumAddress: '0x4B11B9A1582E455c2C5368BEe0FF5d2F1dd4d28e'
          }
        ],
        didmetadata: { type: 'item', created: '2021-03-12T15:56:10.937Z' },
        links: [
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
            type: 'item',
            rel: 'root'
          },
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
            type: 'item',
            rel: 'self'
          }
        ],
        service: [
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#item-metadata-1',
            type: 'item-metadata',
            serviceEndpoint: <CID or URL>
          },
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#geojson-1',
            type: 'geojson',
            serviceEndpoint: <CID or URL>
          },
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#geojson-2',
            type: 'geojson',
            serviceEndpoint: <CID or URL>
          },
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#geotiff-1',
            type: 'geotiff',
            serviceEndpoint: <CID or URL>
          },
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#json-1',
            type: 'json',
            serviceEndpoint: <CID or URL>
          },
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#misc-1',
            type: 'misc',
            serviceEndpoint: <CID or URL>
          }
        ]
      }
    ]
    WGS84 supports a third element in the coordinate array: altitude. This is powerful because it means Spatial.sol / other Astral standards would be built to support working with 3-dimensional volumes from the outset. That said, this adds significant complexity.
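For illustration, a position with the optional altitude element (per RFC 7946) can be modeled as below; the Position type name is our own sketch, not part of Spatial.sol:

```typescript
// RFC 7946: a position is [longitude, latitude] with an optional third
// element for altitude (meters above the WGS84 ellipsoid).
type Position = [number, number] | [number, number, number];

const summit: { type: 'Point'; coordinates: Position } = {
  type: 'Point',
  coordinates: [86.925, 27.9881, 8849], // lon, lat, altitude in meters
};
```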
  • These client libraries should be designed to interoperate with web3.js or ethers.js, extending those when the developer is interacting with contracts that use Spatial.sol.

  • removed support
    https://datatracker.ietf.org/doc/html/rfc7946#section-4
    Platin approach
    [
       {
          '@context': 'https://w3id.org/did/v1',
          id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
          publicKey: [
             {
                id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#controller',
                type: 'Secp256k1VerificationKey2018',
                controller: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
                ethereumAddress: '0x4B11B9A1582E455c2C5368BEe0FF5d2F1dd4d28e'
             }
          ],
          did_metadata: {
             type: 'collection',
             created: '2019-03-23T06:35:22Z',
             updated: '2019-03-23T06:37:45Z'
          },
          links: [
             {
                id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
                type: 'collection',
                rel: 'root'
             },
             {
                id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
                type: 'collection',
                rel: 'self'
             }
          ],
          service: [
             {
                id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#collection-metadata',
                type: 'collection-metadata',
                serviceEndpoint: <CID or URL>
             }
          ]
       }
    ]

    Types

    Common Types used throughout the Project

    IDocumentInfo

    interface IDocumentInfo {
        geodidid: string;
        documentVal: any;
        parentid?: string;
    }

    ILoadInfo

    interface ILoadInfo {
        documentInfo: IDocumentInfo;
        powergateInstance: Powergate;
    }

    IPinInfo

    interface IPinInfo {
        geodidid: string;
        cid: string;
        pinDate: Date;
        token: string;
    }
    [
      '0,0,240,240',
      '0,0,240,240/cid',
      '0,0,240,240/data',
      '0,0,240,240/window',
      '0,0,240,240/window/0',
      '0,0,240,240/window/1',
      '0,0,240,240/window/2',
      '0,0,240,240/window/3',
      '0,0,240,240/tileSize',
      '0,0,240,240/tileSize/width',
      '0,0,240,240/tileSize/height',
      ...
      ...
      '240,240,480,480',
      '240,240,480,480/cid',
      '240,240,480,480/data',
      '240,240,480,480/window',
      '240,240,480,480/window/0',
      '240,240,480,480/window/1',
      '240,240,480,480/window/2',
      '240,240,480,480/window/3',
      '240,240,480,480/tileSize',
      '240,240,480,480/tileSize/width',
      '240,240,480,480/tileSize/height',
      '480,240,514,480',
      '480,240,514,480/cid',
      '480,240,514,480/data',
      '480,240,514,480/window',
      '480,240,514,480/window/0',
      '480,240,514,480/window/1',
      '480,240,514,480/window/2',
      '480,240,514,480/window/3',
      '480,240,514,480/tileSize',
      '480,240,514,480/tileSize/width',
      '480,240,514,480/tileSize/height'
    ]
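The window keys above follow a regular pattern: tiles of a fixed size, clipped at the image edges. A sketch of how such keys could be generated (our own helper, not part of the package):

```typescript
// Generate '<minX>,<minY>,<maxX>,<maxY>' window keys for a raster of the
// given pixel dimensions, clipping edge tiles to the image bounds.
function tileWindows(width: number, height: number, tile: number): string[] {
  const keys: string[] = [];
  for (let y = 0; y < height; y += tile) {
    for (let x = 0; x < width; x += tile) {
      keys.push(
        [x, y, Math.min(x + tile, width), Math.min(y + tile, height)].join(',')
      );
    }
  }
  return keys;
}
```

For the 514x515 image above with 240-pixel tiles, this yields nine windows, including the edge-clipped '480,240,514,480' and the corner '480,480,514,515'.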
    [
      <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... 57550 more bytes>
    ]
    ArrayBuffer {
      [Uint8Contents]: <4d 4d 00 2a 00 00 00 08 00 18 01 00 00 03 00 00 00 01 00 f0 00 00 01 01 00 03 00 00 00 01 00 f0 00 00 01 02 00 03 00 00 00 01 00 08 00 00 01 03 00 03 00 00 00 01 00 01 00 00 01 06 00 03 00 00 00 01 00 01 00 00 01 11 00 04 00 00 00 01 00 00 03 e8 01 15 00 03 00 00 00 01 00 01 00 00 01 16 00 04 00 00 ... 58500 more bytes>,
      byteLength: 58600
    }
    {
      '0,240,120,360': {
        window: [ 0, 240, 120, 360 ],
        cid: CID(bafyreihdsnkivmftr64zgv23aso4eseikqlqkznrcgtmkjd4hybno2nk7i),
        data: Uint8Array(14404) [
          129, 89,  56,  64,   0,  41, 33, 58,  82, 115,  41, 74,
           99, 90,  90,  58,  25,  82, 66, 82,  74,  16,  58, 74,
           74, 66,  66,  66,   0,  66, 82, 66, 115, 197, 107, 82,
          107, 90,  90, 107, 115,  82, 74, 58,  25,   0,  25,  8,
           16, 33,  25, 107,  82,  74, 99, 90, 107,  82,  49, 66,
          123, 90,  82,  58,  82, 115, 16, 25,  16,  25,   8, 33,
           41, 58, 107,  82, 123, 107, 82, 90,  25,  33,  82, 66,
           41,  0,  33,  25,  16,  25, 49, 99,  74,  58,  25, 49,
           99, 74,  74,  90,
          ... 14304 more items
        ],
        tileSize: { width: 120, height: 120 }
      },
      '120,240,240,360': {
        window: [ 120, 240, 240, 360 ],
        cid: CID(bafyreiczb6y3ogyyvxuw5jpcfuumrn7mnxwavhrvxyfn4yqkji3tw5b3ea),
        data: Uint8Array(14404) [
          129,  89,  56,  64,  66,  58,  58,   0,  25,   8,   0,  16,
            8,   8,  82,  90,  82,  25,   0,   8,  16,  58,  33,  25,
           25,  25,  41,  33,   0,   8,  16,   8,  33,  99, 156, 165,
          123, 181, 156, 140, 140, 107, 115, 123, 107, 123, 107, 107,
           90,  66, 132, 123,  99, 165, 165, 165, 132,  99,  49, 123,
          115,  90,  66,   0,   0,  25,  25,  25,  25,  33,  25,   8,
           33,  25, 107, 123,  90,  99,  90, 115, 107,  16,  25,  82,
           90, 173,  99,  58,  82,  49,  66,  99,  99,  90,  82,  99,
           41, 115,  33,  41,
          ... 14304 more items
        ],
        tileSize: { width: 120, height: 120 }
      },
      ...
    }
    {
      '30': {
        '480,0,514,60': CID(bafyreidslhcfyghycmpqtsvboms4623cinhtkqfctz64dvqyxw3ervo6t4),
        '480,60,514,120': CID(bafyreihmweufxt2nq3axnujiurnsgkzkejzakwvkr56fgp6tsureujrzcm),
        '480,120,514,180': CID(bafyreid7zfogsdrkl3ng22yqeny5aj7hhioph75kik7xjfxuyhvivaeave),
        '480,180,514,240': CID(bafyreidjxno5kq7riwykklict4zu7kfk5b4wycvgxlhtkwgpkgoqsgwxhi),
        '480,240,514,300': CID(bafyreigs3udscqv2mqvkmas3b6g2fy42dh3uy7orr4tvmhrfs6xneesfny),
        '480,300,514,360': CID(bafyreighxzwr3sfyzjmkdppdhengkgdf2qxqx5nbefaqkm6kp7ghqajesi),
        '480,360,514,420': CID(bafyreife3lpyz4vkvsmhlp6r46r72wkr3kxb5t4a3e3luhlzfw3rz3fbu4),
        '480,420,514,480': CID(bafyreihyexbwogtgjhk6lwt5p577ls6hcoi53ggvjb24xnh7kfmsobnphe),
        '480,480,514,515': CID(bafyreie6dflpabv3ywckcigfpkljlvvlruexpfjhzaiwv7lxz6gjqwk54q)
      },
      '60': {
        '480,0,514,120': CID(bafyreibjdiqovqvopi4owtk36iyenqe7reykkfsromy3l4fsmc6iuydd7y),
        '480,120,514,240': CID(bafyreih2jnnsgsvhjouhy5h3fpdmsahm5v545taljdgmbdruprzo2txbs4),
        '480,240,514,360': CID(bafyreicfd6cmsjc6taxx7kgw3qaol7ourfbsh57qajs6bwt333b3q5pfoi),
        '480,360,514,480': CID(bafyreiffelgsnbcl352drv7hdmlisjh3owlfut2cdmtvahcxlygdwxnvjy),
        '480,480,514,515': CID(bafyreieupxskp4jnhrva6t3wohsqn4uzgfngti74sysu3jypdsgpkkzyma)
      },
      '120': {
        '480,0,514,240': CID(bafyreiggzc74xwce3wn6je2bx5evpuun3gi7evjcbzwtkp3ck2hkzq4lmq),
        '480,240,514,480': CID(bafyreiax5o26tyun3miuvv7zsyf47cmtihinplkxjrccuiemuwmztisy74),
        '480,480,514,515': CID(bafyreib6xrt6daxw45t342miu7mk6yugso4f4sq25wtimdb4iv4omutf7e)
      },
      '240': {
        '480,0,514,480': CID(bafyreigkdgkrgpnpnqcion3vaiq72dyedq2qdkvb2sulpln4tkhefkkfb4),
        '480,480,514,515': CID(bafyreiahb7az6lnj63bfefogbp6w5vq5aczz3ouoh2aeen7zgbb2j5wzge)
      },
      '515': {
        '0,0,514,515': CID(bafyreibwt4pakge4urf63bwn2ot2juolaaayval6wlwnokt4ztc2uyisbe)
      }
    }
    {
      cid: 'bafyreigdmqpykrgxyaxtlafqpqhzrb7qy2rh75nldvfd4kok6gl47quzvy',
      max_Dimensions: [
        30,  60, 120,
        240, 515
      ],
      window: [ 0, 0, 514, 515 ],
      bbox: [
        -28493.166784412522,
        4224973.143255847,
        2358.211624949061,
        4255884.5438021915
      ]
    }
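The MasterDoc metadata printed above could be typed as follows - a hypothetical interface inferred from the printout, with field names copied from it:

```typescript
// Shape of the MasterDoc metadata shown above (inferred, not official).
interface IMasterDocInfo {
  cid: string;              // CID of the pinned MasterDoc
  max_Dimensions: number[]; // available tile sizes per pyramid level
  window: number[];         // [minX, minY, maxX, maxY] pixel window of the full image
  bbox: number[];           // bounding box in the image's coordinate reference system
}

const info: IMasterDocInfo = {
  cid: 'bafyreigdmqpykrgxyaxtlafqpqhzrb7qy2rh75nldvfd4kok6gl47quzvy',
  max_Dimensions: [30, 60, 120, 240, 515],
  window: [0, 0, 514, 515],
  bbox: [
    -28493.166784412522, 4224973.143255847, 2358.211624949061,
    4255884.5438021915,
  ],
};
```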
    enum FeatureType { Point, LineString, Polygon } // MultiPoint? MultiLineString? MultiPolygon?
    
    struct Point {
        FeatureType featureType; // set to FeatureType.Point ("type" is reserved in Solidity)
        int[2] coordinates; // [lon, lat]
    }
    
    struct LineString {
        FeatureType featureType; // set to FeatureType.LineString
        int[2][] coordinates; // [[lon0, lat0], [lon1, lat1], ... , [lonN, latN]]
    }
    
    struct Polygon {
        FeatureType featureType; // set to FeatureType.Polygon
        int[2][][] coordinates; // an array of linear rings, each an array of [lon, lat] pairs
    }
    
    // e.g. a LineString built from a dynamic array of [lon, lat] pairs:
    // LineString memory s = LineString(FeatureType.LineString, coords);
    
    function length(LineString memory inputFeature) public pure returns (uint total) {
        require(inputFeature.featureType == FeatureType.LineString, "can only calculate the length of a LineString");
        // sum the distance between each pair of consecutive coordinates
        return total;
    }
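Off chain, the length computation the sketch above hints at could look like the following TypeScript, using the haversine formula (spherical-Earth approximation; an on-chain Solidity version would need fixed-point arithmetic):

```typescript
// Approximate LineString length in meters using the haversine formula
// on [lon, lat] pairs, assuming a spherical Earth of radius 6371 km.
function haversine(a: [number, number], b: [number, number]): number {
  const R = 6371000; // meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b[1] - a[1]);
  const dLon = toRad(b[0] - a[0]);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a[1])) * Math.cos(toRad(b[1])) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function lineStringLength(coords: [number, number][]): number {
  let total = 0;
  for (let i = 1; i < coords.length; i++) {
    total += haversine(coords[i - 1], coords[i]);
  }
  return total;
}
```

One degree of latitude is roughly 111.2 km, which the sketch reproduces for the segment [[0, 0], [0, 1]].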
    const Web3 = require('web3')
    const ethSpatial = require('eth-spatial')
    
    let web3 = new Web3(...);
    let web3spatial = new ethSpatial(web3);

    The right to privacy must be built in on the technical layer, not added on top of system design at the policy layer.

    SpatioTemporal Asset Catalogs
    https://stacspec.org/
    https://eos.com/landsat-7/

    Service Object

    REQUIRED The service object contains several sub fields used to reference metadata, other GeoDIDs, and/or assets.

    string

    The Ethereum address of the controller.

    string

    REQUIRED UPON UPDATE The GeoDID package will automatically timestamp the GeoDID upon an update. If the GeoDID Document never updates then there will not be an updated timestamp.

    description

    string

    OPTIONAL A description of the GeoDID Document. It can be anything, but most likely the description will address the DID subject.

    id

    string

    REQUIRED The identifier for the DID Document. It can be the root DID ID or it can be a DID URL with a specific path or fragment. The id must be of the following format: did:<method>:<specific identifier>. The path (.../path), query (...?query), and fragment (...#fragment) are optional but will be used later as identifiers for the children collections and items.

    authentication

    [Authentication Object]

    OPTIONAL BY DEFAULT Authentication is a process (typically some type of protocol) by which an entity can prove it has a specific attribute or controls a specific secret using one or more verification methods. With DIDs, a common example would be proving control of the private key associated with a public key published in a DID document.

    did_metadata

    did_metadata Object

    REQUIRED The did_metadata object contains relative metadata pertaining to the particular GeoDID. For example, timestamps for the CRUD operations, the type of GeoDID, descriptions, etc.

    id

    string

    The GeoDID ID + key fragment which will be used to reference the controller's public key.

    type

    string

    The type of Verification method being used. (ex. Ed25519VerificationKey2018, Secp256k1VerificationKey2018)

    controller

    string

    The GeoDID ID which will be used to reference the controllers.

    type

    string

    REQUIRED The type can either be a Collection or Item.

    subtype

    string

    REQUIRED The subtype can either be a GeoJSON or Raster.

    created

    string

    REQUIRED UPON CREATION The GeoDID package will automatically timestamp the GeoDID upon creation.

    id

    string

    REQUIRED The DID URL that dereferences to the entity's metadata.

    type

    string

    REQUIRED The type of metadata (ex. collection-metadata, item-metadata), or the type of the asset. (ex. GeoTIFF, geoJSON, JSON, CSV)

    serviceEndpoint

    string

    REQUIRED The actual link in the format of an URL or CID. Relative and absolute links are both allowed.

    id

    string

    REQUIRED The DID URL that dereferences to the entity's GeoDID Document. This field is required if you want to create a hierarchy of GeoDID Documents (ex. GeoDID collection is parent to GeoDID Items or Collections).

    type

    string

    REQUIRED See chapter "Endpoint types" for more information.

    rel

    string

    REQUIRED Relationship between the current document and the linked document. See chapter "Relation types" for more information.
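The DID URL format described for the id field above - did:<method>:<specific identifier> with optional path, query, and fragment - can be illustrated with a small parser (our own sketch, not the package's resolver):

```typescript
// Split a DID URL into its method, method-specific id, and the optional
// path / query / fragment selectors described in the table above.
function parseDidUrl(didUrl: string) {
  const m = didUrl.match(/^did:([a-z0-9]+):([^/?#]+)(\/[^?#]*)?(\?[^#]*)?(#.*)?$/);
  if (!m) throw new Error('not a valid DID URL');
  const [, method, id, path, query, fragment] = m;
  return { method, id, path, query, fragment };
}
```

For example, a fragment such as #geojson-1 selects a specific service entry within the GeoDID Document.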

    verification relationship
    DID subject
    W3C working groups' DID specification
    The Graph

    service

    ethereumAddress

    updated

    Astral
    Filecoin Foundation
    spatial finance
    https://github.com/AstralProtocol/astralprotocol
    Ordnance Survey's Boundary-Line dataset
    W3C DID Core
    Ceramic Network
    Decentralized identifiers fundamentals and deep dive
    Figure 1: The basic architecture of the DID

    Build with GeoDIDs

    As part of our exploration process we built a system for storing geospatial data on IPFS for a Filecoin Development Grant in Q1 2021. This work has informed our thinking about building tools for Web3-native satellite imagery, which we're carrying forward.

    This project was experimental, and code is not stable. If you'd like to build with Astral tools, reach out on Discord: https://discord.gg/4WPyYvRtzQ.

    Data

    We're developing Geographic Decentralized Identifiers (GeoDIDs) to provide a Web3-native format for identifying spatial data assets.

    Oracles

    To date our oracle systems are quite simple, and we're looking for developers who are interested in implementing those to pull spatial data from GeoDIDs into smart contracts.

    Spatial Contracts

    We have been developing patterns and libraries to work with spatial data in smart contracts for a few years now, and are looking for additional support. Specifically, we are working on:

    • A Solidity library of geometric and topological functions, much like existing geospatial analysis libraries.

    • A verifiable spatial data registry for GeoDIDs:

      • A zone registry, where users can control polygons representing areas of space on, beneath or above the Earth's surface.

    API

    API for the @astralprotocol/core package

    Constructor

    Creates a new AstralClient Instance to utilize the following functions.

    new AstralClient(_ethAddress, _endpoint?);

    Name
    Type
    Attributes
    Description

    Methods

    CreateGenesisGeoDID

    Creates a GenesisGeoDID Document. This creates a new root node for the linked data structure.

    async createGenesisGeoDID(_typeOfGeoDID: string): Promise<IDocumentInfo>{}

    Type
    Description

    CreateChildGeoDID

    Creates a Child GeoDID Document. This creates a child node for an existing linked data structure.

    async createChildGeoDID(_typeOfGeoDID: string, _parentID: string, _path: string): Promise<IDocumentInfo>{}

    Name
    Type
    Attributes
    Description

    PinDocument

    Pins the Document to IPFS or FFS via Powergate.

    async pinDocument(_documentInfo: IDocumentInfo, _token?: string): Promise<IPinInfo>{}

    Name
    Type
    Attributes
    Description

    LoadDocument

    Loads the Document by the DocID and the Powergate Auth token associated with it.

    async loadDocument(_docId: string, _token: string): Promise<ILoadInfo>{}

    Name
    Type
    Attribute
    Description

    Getting Started

    Follow these simple steps to register GeoDIDs quickly

    Setting up local Powergate Client

    In order to store the GeoDIDs created by the core package, you will need to start up a local Powergate client or connect to an existing hosted client. Below is a brief overview of how to set up a local Powergate client on your system. Further information is available at: https://github.com/textileio/powergate.

    In order to set up the Powergate Client locally on your system you must have Docker, Docker-Compose, and Go 1.16 installed.

    In your terminal, create a new directory and clone the Powergate repo into it:

    git clone https://github.com/textileio/powergate.git

    After you clone the repo, enter the following commands:

    cd powergate/docker

    make localnet

    For more information regarding Powergate's Localnet mode, please refer to their documentation:

    Install the packages

    Configure truffle-config.js

    Create a script for interacting with the Astral Client and Contracts

    And execute with

    The steps executed in this page have been reproduced in a public GitHub repository that you can consult: https://github.com/AstralProtocol/wrapperTest

    @astralprotocol/contracts

    Documentation about the Astral Protocol Contracts Package.

    Description

    These contracts serve as the Registry for the Astral Protocol GeoDIDs. They allow binding a GeoDID to an Ethereum address and resolving GeoDID names to CIDs.

    By registering a spatial asset, smart contract events are triggered, which are picked up by the subgraph indexer to build the tree of relationships for easy querying.

    To add Astral Protocol Contracts to your application

    To develop or try the Astral Protocol Contracts locally

    • Clone the astralprotocol repository and go to packages/contracts:

    • Run ganache with yarn ganache

    • In a new terminal, deploy contracts with yarn truffle

    • Run tests with yarn truffle-test

    To deploy your own contracts in the Ropsten testnet

    • Create a .env file in /packages/contracts with a MNEMONIC and ROPSTEN_API_KEY

    API

    API for the @astralprotocol/contracts package

    State modifying methods

    constructor

    Initializes the smart contract with a hardcoded URI type representing the DID method (did:geo). Also sets msg.sender as the default admin and grants it the data supplier role.

    registerRole

    Registers a new user with the ability to register a spatial asset. The contract creator is hardcoded with the default admin and data supplier roles.

    enableStorage

    Registers a new storage that can accept GeoDID document creation.

    disableStorage

    Disables an existing storage.

    registerSpatialAsset

    Registers on-chain one Spatial Asset.

    addChildrenGeoDIDs

    Adds children GeoDIDs to an existing GeoDID. GeoDIDId must correspond to a GeoDID type that can be a parent (Collection or type 0).

    addParentGeoDID

    Adds a GeoDID as a parent to an already existing GeoDID.

    removeChildrenGeoDIDs

    Removes children GeoDIDs from a specified GeoDID.

    removeParentGeoDID

    Removes a specified parent GeoDID from a GeoDID.

    deactivateSpatialAsset

    De-registers a spatial asset.

    @astralprotocol/core

    Documentation about the Astral Protocol Core Package.

    Description

    The @astralprotocol/core package is a TypeScript NPM package that is responsible for any CRUD operations performed on the DID Documents. This includes the creation of DID Documents, loading the DID Documents, as well as updating them. The package also has utilities that enable the creation of the collision-resistant GeoDID IDs, a custom did-resolver that enables DID Resolution, as well as pinning features for storing the Documents on IPFS or FFS. This package is meant to be used in conjunction with the @astralprotocol/contracts and @astralprotocol/subgraph packages. However, the package can also be used independently if the user does not want to rely on the Ethereum network.

    This package is not responsible for persistence of the documents (mappings, etc.); the created DID Documents are persisted through IPFS/FFS, and the metadata regarding the DID Documents is persisted through the subgraph and smart contracts.

    To add Astral Protocol Core to your application

    To develop or try the Astral Protocol Core locally

    Set up a local Powergate Client

    In order to store the GeoDIDs created by the core package, you will need to start up a local Powergate client or connect to an existing hosted client. Below is a brief overview of how to set up a local Powergate client on your system. Further information is available at: https://github.com/textileio/powergate.

    In order to set up the Powergate Client locally on your system you must have Docker, Docker-Compose, and Go 1.16 installed.

    • In your terminal, create a new directory and clone the Powergate repo into it:

    git clone https://github.com/textileio/powergate.git

    • After you clone the repo, enter the following commands:

    cd powergate/docker

    make localnet

    For more information regarding Powergate's Localnet mode, please refer to their documentation:

    Check an implementation of core package:

    Run the script

    @astralprotocol/subgraph

    Documentation about the Astral Protocol Subgraph Package.

    Description

    The @astralprotocol/subgraph serves as the indexing engine of the protocol, capturing the registration and modification events of GeoDIDs in the @astralprotocol/contracts. It acts like a decentralized querying database, making it substantially easier to run complex queries against the Spatial Assets registry. It is used to create the tree of GeoDID nodes that represents their relationships and groupings.

    The current version of the subgraph (spatialassetsfinalv1) is indexing the Ethereum Ropsten network at the following GraphQL endpoints:

    You can connect to these with your GraphQL client of choice or try them in a GraphQL playground.

    To add Astral Protocol Subgraph to your application

    To develop or try the Astral Protocol Subgraph locally

    Prerequisites

    • Clone the astralprotocol repository and go to packages/subgraph

    • Run sudo apt-get install libsecret-1-dev

    Deployment

    1. Ensure you have ganache running with the contracts deployed from packages/contracts

    2. Update the SpatialAssets contract address that you got from the previous step in the subgraph.yaml (if needed, and ensure the correct file is named according to the network of deployment - for ganache it should read as mainnet: back up the current subgraph.yaml file and rename it to subgraphRopsten.yaml).

    Testing

    The following query can be provided to the graphql endpoint to view the GeoDIDs tree (after doing the deployment steps above):

    Spatial Oracles

    We have done little work on spatial oracles, instead focusing to date on the data storage and spatial contracts layers of the protocol stack. Our early thinking suggests that making a full suite of spatial analytics algorithms (raster and vector) available at the oracle layer would be useful for on-demand processing of geospatial data.

    For example, one concept protocol we have designed is a parametric insurance system. With this, we trustlessly insure physical assets in space - initially conceived of as static areas or volumes like land parcels or administrative jurisdictions (maritime, terrestrial, airspace etc). Upon purchasing a policy, agents would register their land parcel in an Astral verifiable spatial data registry, possibly represented using a GeoDID identifying a polygon or polyhedron. Additional information like the policy duration, indemnity process and, crucially, insured parameter and data source, would be specified upon policy creation. See this relatively simple example deployed by traditional insurers.

    Asset monitoring could be configured in a few ways. In the example above, periodic checks to the parameterized data feed could be made, and a payout could be triggered automatically if the parameter threshold is exceeded. Alternatively, the insurance contract could be reactive, requiring a policy holder to submit a claim transaction. In this event, the contract would trigger the oracle to fetch both the land parcel information and the relevant parameterized external information. To enable a scalable, fully decentralized system, we suspect the most efficient architecture will require an oracle or some Layer 2 consensus network to apply a spatial analysis algorithm to these inputs to determine if the claim is valid. (This differs from many existing DeFi insurance protocols - these often rely on some entity - a trusted individual or DAO committee - to assess the evidence off chain and submit an attestation to settle a claim or trigger automatic indemnity - see IBISA and certain review strategies employed by Protekt's Claims Manager.)

    This functionality was also required to detect the amount of time devices spent in policy zones in Hyperaware, and to supply NOx levels to the sustainability-linked bond dApp we prototyped during the KERNEL Genesis Block.

    What is unique about this compared to other oracle systems is that our focus is narrowly on spatial data - that is, information that contains some spatial, or location, dimension. We could argue that all data is spatial data, but here specifically we are looking at data representing physical space: geospatial data, and data positioned within other spatial reference systems.

    Needless to say, much research into these oracle capabilities - including privacy-preserving techniques - for bringing spatial insights on chain in an efficient way is warranted, as it seems this is an unavoidable layer of the Astral stack.

    [
      {
        '@context': 'https://w3id.org/did/v1',
        id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
        publicKey: [
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#controller',
            type: 'Secp256k1VerificationKey2018',
            controller: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
            ethereumAddress: '0x4B11B9A1582E455c2C5368BEe0FF5d2F1dd4d28e'
          }
        ],
        didmetadata: { type: 'collection', created: '2021-03-12T15:56:10.937Z' },
        links: [
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
            type: 'collection',
            rel: 'root'
          },
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre',
            type: 'collection',
            rel: 'self'
          }
        ],
        service: [
          {
            id: 'did:geo:QmdDEcQbiFEY5YWvKgk2exd6XLetgfVmswZvXRgNkpehre#' + `${<ServiceType>}-${index}`,
            type: <ServiceType>,
            serviceEndpoint: <CID or URL>
          }
        ]
      }
    ]
    constructor(string memory uri) public
    function registerRole() public
    https://api.thegraph.com/subgraphs/name/astralprotocol/spatialassetsfinalv1
    wss://api.thegraph.com/subgraphs/name/astralprotocol/spatialassetsfinalv1
    sustainability-linked bond dApp

    IDocumentInfo

    Returns info regarding the Document, like the GeoDID ID and the Document itself.

    string

    REQUIRED

    The path that will be appended to the Parent GeoDID ID

    Type
    Description

    DocumentInfo

    Returns information regarding the Document, like the GeoDID ID and the contents of the Document.

    interface IDocumentInfo {
        geodidid: string;
        documentVal: any;
        parentid?: string;
    }
    Type
    Description

    IPinInfo

    Returns information regarding the Pin, like the GeoDID ID, cid, Powergate Auth token, and the pinDate.

    Type
    Description

    ILoadInfo

    Returns information regarding the Load, like the DocumentInfo as well as the Powergate Instance that the Document was pinned on.

    _ethAddress

    string

    REQUIRED

    The Ethereum Address of the user.

    _endpoint

    string

    OPTIONAL

    The Graph Endpoint. It already has a default value, which can be overridden with another endpoint.

    Name

    Type

    Attributes

    Description

    _typeOfGeoDID

    GeoDidType

    REQUIRED

    The type of Genesis GeoDID you want to create. OfType GeoDidType.

    _typeOfGeoDID

    GeoDidType

    REQUIRED

    The type of Child GeoDID you want to create. Of type GeoDidType.

    _parentID

    string

    REQUIRED

    The Parent GeoDID ID of this new Child GeoDID

    _documentInfo

    IDocumentInfo

    REQUIRED

    The Info related to the Document that is required for pinning.

    _token

    string

    OPTIONAL

    The Auth Token of the Powergate Instance that you want to pin the document on. If you don't have one yet, the client will automatically create a new one for you and return it for you to save.

    _docId

    string

    REQUIRED

    The GeoDID id of the DID Document.

    _token

    string

    REQUIRED

    The Auth Token for the Powergate Instance that the Document is stored on.

    _path

    interface IDocumentInfo {
        geodidid: string;
        documentVal: any;
        parentid?: string;
    }
    Docker
    Docker-Compose
    Go 1.16
    https://github.com/textileio/powergate#localnet-mode
    https://github.com/AstralProtocol/wrapperTest

    bytes32

    OPTIONAL

    GeoDID Id of the parent. Must be set to 0 if no parent is to be added.

    childrenGeoDIDIDs

    bytes32[]

    OPTIONAL

    GeoDID IDs of the children. Must be set to [] if no children are to be added.

    cid

    bytes32

    REQUIRED

    CID of the GeoDID Document generated with its creation (check @astralprotocol/core)

    offChainStorage

    bytes32

    REQUIRED

    Bytes32 representation of the off-chain storage signature (must be pre-approved)

    geoDIDtype

    uint256

    REQUIRED

    0 for Collection type GeoDIDs, 1 for Item type GeoDIDs.

    Emits: SpatialAssetRegistered(owner, geoDIDId, cid, offChainStorage, geoDIDId, _canBeParent[geoDIDId]);

    | Event | Arguments | Condition |
    | --- | --- | --- |
    | `SpatialAssetRegistered` | `address indexed to, bytes32 indexed geoDIDId, bytes32 indexed cid, bytes32 offChainStorage, bytes32 root, bool canBeParent` | Successful registration of a GeoDID |
    | `ParentAdded` | `bytes32 indexed geoDIDId, bytes32 indexed parentGeoDIDId` | If parentGeoDIDId is different from 0 |
    | `ChildrenAdded` | `bytes32 indexed geoDIDId, bytes32 indexed childrenGeoDIDId` | If the childrenGeoDIDIds array is not empty and the GeoDIDs exist |

    | Event | Arguments | Condition |
    | --- | --- | --- |
    | `ChildrenAdded` | `bytes32 indexed geoDIDId, bytes32 indexed childrenGeoDIDId` | If the childrenGeoDIDIds array is not empty and the GeoDIDs exist |

    | Event | Arguments | Condition |
    | --- | --- | --- |
    | `ParentAdded` | `bytes32 indexed geoDIDId, bytes32 indexed parentGeoDIDId` | If parentGeoDIDId exists |

    | Event | Arguments | Condition |
    | --- | --- | --- |
    | `ChildrenRemoved` | `bytes32 indexed geoDIDId, bytes32 indexed childrenGeoDIDId` | If the childrenGeoDIDIds array is not empty and the GeoDIDs exist |

    | Event | Arguments | Condition |
    | --- | --- | --- |
    | `ParentRemoved` | `bytes32 indexed geoDIDId, bytes32 indexed parentGeoDIDId` | If parentGeoDIDId exists |

    | Event | Arguments | Condition |
    | --- | --- | --- |
    | `SpatialAssetDeactivated` | `bytes32 indexed geoDIDId, bytes32[] childrenToRemove` | If geoDIDId exists |

    | Name | Type | Attributes | Description |
    | --- | --- | --- | --- |
    | `offChainStorage` | `bytes32` | REQUIRED | Bytes32 representation of the off-chain storage signature to be enabled. |

    | Name | Type | Attributes | Description |
    | --- | --- | --- | --- |
    | `offChainStorage` | `bytes32` | REQUIRED | Bytes32 representation of the off-chain storage signature to be disabled. |
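    The off-chain storage signature passed to enableStorage and disableStorage is just an ASCII string hex-encoded into a bytes32 (the deploy script later in this guide builds it with web3.utils.asciiToHex('FILECOIN')). Below is a minimal sketch of that encoding and its inverse using plain Node.js Buffers instead of web3; the explicit right-padding to 32 bytes is an assumption matching Solidity's bytes32 layout, not part of the published packages:

    ```javascript
    // Sketch: build a bytes32 storage signature from an ASCII string,
    // right-padded with zero bytes to 32 bytes (64 hex chars).
    function stringToBytes32(s) {
      const hex = Buffer.from(s, 'ascii').toString('hex');
      return '0x' + hex.padEnd(64, '0');
    }

    // Inverse: strip the zero padding and decode back to ASCII.
    function bytes32ToString(b) {
      const hex = b.slice(2).replace(/(00)+$/, '');
      return Buffer.from(hex, 'hex').toString('ascii');
    }

    console.log(stringToBytes32('FILECOIN'));
    console.log(bytes32ToString(stringToBytes32('FILECOIN'))); // 'FILECOIN'
    ```

    This round trip makes it easy to check, off-chain, which storage signature a GeoDID was registered with.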

    | Name | Type | Attributes | Description |
    | --- | --- | --- | --- |
    | `owner` | `address` | REQUIRED | To be designated the owner of the GeoDID. Currently must be msg.sender. |
    | `geoDIDId` | `bytes32` | REQUIRED | GeoDID ID generated with the GeoDID creation (check @astralprotocol/core). |

    | Name | Type | Attributes | Description |
    | --- | --- | --- | --- |
    | `geoDIDId` | `bytes32` | REQUIRED | GeoDID ID generated with the GeoDID creation and registered in the smart contract. |
    | `childrenGeoDIDIds` | `bytes32[]` | OPTIONAL | GeoDID IDs of the children. Must be set to [] if no children are to be added (nothing is executed in the function). |

    | Name | Type | Attributes | Description |
    | --- | --- | --- | --- |
    | `geoDIDId` | `bytes32` | REQUIRED | GeoDID ID generated with the GeoDID creation (check @astralprotocol/core). |
    | `parentGeoDIDId` | `bytes32` | REQUIRED | GeoDID ID of the parent. It must exist. |

    | Name | Type | Attributes | Description |
    | --- | --- | --- | --- |
    | `geoDIDId` | `bytes32` | REQUIRED | GeoDID ID generated with the GeoDID creation (check @astralprotocol/core). |
    | `childrenGeoDIDIds` | `bytes32[]` | OPTIONAL | GeoDID IDs of the children. Must be set to [] if no children are to be removed. |

    | Name | Type | Attributes | Description |
    | --- | --- | --- | --- |
    | `geoDIDId` | `bytes32` | REQUIRED | GeoDID ID generated with the GeoDID creation (check @astralprotocol/core). |
    | `parentGeoDIDId` | `bytes32` | REQUIRED | GeoDID ID of the parent to remove. It must exist. |

    | Name | Type | Attributes | Description |
    | --- | --- | --- | --- |
    | `geoDIDId` | `bytes32` | REQUIRED | GeoDID ID generated with the GeoDID creation (check @astralprotocol/core). |
    | `childrenGeoDIDIds` | `bytes32[]` | OPTIONAL | GeoDID IDs of the children. Must be set to [] if no children are to be removed. |


      • https://github.com/textileio/powergate
      • Docker
      • Docker-Compose
      • Go 1.16
      • https://github.com/textileio/powergate#localnet-mode
      • Run git clone https://github.com/graphprotocol/graph-node/ (check the setup instructions for the Docker version on https://thegraph.com/docs/)
  • Have the development steps of @astralprotocol/contracts done previously (with Ganache)

  • In another terminal, inside the graph-node folder, run cd docker && docker-compose up. If using Docker for WSL, Docker must be running on Windows. If graph-node throws an error, try clearing the data/postgres folder (within the docker directory of graph-node) with sudo rm -rf data/postgres, and restart Docker if needed.
  • Generate subgraph TypeScript files with yarn codegen, then create and deploy the subgraph to the graph-node with yarn create-local && yarn deploy-local

  • You can query the subgraph and view the GeoDID tree in The Graph's playground at the locally provided endpoint.
  • Docker installation and setup instructions: https://thegraph.com/docs/
    interface IPinInfo {
        geodidid: string;
        cid: string;
        pinDate: Date;
        token: string;
    }
    interface LoadInfo {
        documentInfo: IDocumentInfo;
        powergateInstance: Powergate;
    }
    yarn add @astralprotocol/core @astralprotocol/contracts dotenv bs58 truffle @truffle/hdwallet-provider
    const HDWalletProvider = require("@truffle/hdwallet-provider");
    require('dotenv').config();
    
    // Create a .env file with your MNEMONIC and a ROPSTEN API key from INFURA
    // Must have the following format:
    // MNEMONIC="words here "
    // ROPSTEN_API_KEY=https://ropsten.infura.io/v3/key
    
    let mnemonic = process.env.MNEMONIC
    let ropstenURL = process.env.ROPSTEN_API_KEY
    
    
    let provider = new HDWalletProvider({
      mnemonic: {
        phrase: mnemonic,
      },
      providerOrUrl: ropstenURL,
    });
    
    module.exports = {
      networks: {
        development: {
          host: "127.0.0.1",
          port: 8545,
          network_id: "*",
        },
        ropsten: {
          provider: provider,
          network_id: "3",
        },
      },
      compilers: {
        solc: {
          version: "0.6.12",
        },
      },
    };
    scripts/deployGeoDIDs.js
    const { AstralClient } = require('@astralprotocol/core');
    const SpatialAssets = require("@astralprotocol/contracts/build/contracts/SpatialAssets.json")
    const bs58 = require('bs58')
    
    module.exports = async function (callback) {
      const stringToBytes = (string) => web3.utils.asciiToHex(string)
    
        // based on https://ethereum.stackexchange.com/questions/17094/how-to-store-ipfs-hash-using-bytes32
      // Return bytes32 hex string from base58 encoded ipfs hash,
      // stripping leading 2 bytes from 34 byte IPFS hash
      // Assume IPFS defaults: function:0x12=sha2, size:0x20=256 bits
      // E.g. "QmNSUYVKDSvPUnRLKmuxk9diJ6yS96r1TrAXzjTiBcCLAL" -->
      // "0x017dfd85d4f6cb4dcd715a88101f7b1f06cd1e009b2327a0809d01eb9c91f231"
      function getBytes32FromIpfsHash(ipfsListing) {
        return "0x"+bs58.decode(ipfsListing).slice(2).toString('hex')
      }
    
      try {
    
        const accounts = await web3.eth.getAccounts()
        const userAccount = accounts[0]
    
        // find contract in network 3 (Ropsten)
        const SpatialAssetsContract = new web3.eth.Contract(SpatialAssets.abi, SpatialAssets.networks['3'].address, {
          from: userAccount,
          data: SpatialAssets.deployedBytecode,
        });
    
        // update the endpoint to the latest
        const subgraphEndpoint = "https://api.thegraph.com/subgraphs/name/astralprotocol/spatialassetsfinalv1"
    
        const astral = await AstralClient.build(userAccount, subgraphEndpoint, "https://astralinstance.tk");
      
        const storage = stringToBytes('FILECOIN');
    
        // Creates a Genesis GeoDID 
        
        const genDocRes = await astral.createGenesisGeoDID('collection')
        console.log(genDocRes);
      
        // With the returned IDocumentInfo from the last function, we can pin it.
        // Since no token was specified the client will assign a new auth Token to the user.
        const results = await astral.pinDocument(genDocRes);
        console.log(results);
                  
        // register the geodid id and cid obtained. Type 0 because it is a collection
    
        console.log(results.geodidid)
        console.log(results.cid)
    
        const bytes32GeoDID= getBytes32FromIpfsHash(results.geodidid.substring(8));
        const bytes32Cid = getBytes32FromIpfsHash(results.cid);
      
        try {
          await SpatialAssetsContract.methods.registerSpatialAsset(userAccount, bytes32GeoDID, stringToBytes(''),[], bytes32Cid, storage,0).send()    
          .on('receipt', function(receipt){
          // receipt example
          console.log(receipt);
    
          })
          .on('error', function(error) { // If the transaction was rejected by the network with a receipt, the second parameter will be the receipt.
            console.log(error);
          });
        } 
        catch (err) {
          // Will throw an error if tx reverts
          console.log(err)
        }
    
        
        // With the Auth Token and the GeoDID ID we can load the document with the loadDocument function
        const loadResults = await astral.loadDocument(results.geodidid);
        console.log(loadResults);
    
      }
      catch(error) {
        console.log(error)
      }
    
        callback()
    };
    "deployGeoDIDs": "truffle exec scripts/deployGeoDIDs.js --network ropsten",
    yarn deployGeoDIDs
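    The getBytes32FromIpfsHash helper in the script above can also be reversed, to recover the CIDv0 string from an on-chain bytes32. Below is a sketch of the inverse (a hypothetical helper, not part of the published packages), with a self-contained base58 encoder so it needs no extra dependencies:

    ```javascript
    // Bitcoin/IPFS base58 alphabet
    const ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

    // Minimal base58 encoder (big-integer division method)
    function base58Encode(bytes) {
      let n = BigInt('0x' + bytes.toString('hex'));
      let out = '';
      while (n > 0n) {
        out = ALPHABET[Number(n % 58n)] + out;
        n /= 58n;
      }
      // Each leading zero byte is encoded as a leading '1'
      for (const b of bytes) {
        if (b !== 0) break;
        out = '1' + out;
      }
      return out;
    }

    // Inverse of getBytes32FromIpfsHash: re-attach the multihash prefix
    // (0x12 = sha2-256, 0x20 = 32 bytes) and base58-encode to a CIDv0.
    function getIpfsHashFromBytes32(bytes32Hex) {
      return base58Encode(Buffer.from('1220' + bytes32Hex.slice(2), 'hex'));
    }

    console.log(getIpfsHashFromBytes32(
      '0x017dfd85d4f6cb4dcd715a88101f7b1f06cd1e009b2327a0809d01eb9c91f231'
    ));
    ```

    This is useful for turning the cid field returned by the subgraph back into a hash you can resolve on IPFS.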
    function enableStorage(bytes32 offChainStorage) public
    function disableStorage(bytes32 offChainStorage) public
    function registerSpatialAsset (
        address owner, 
        bytes32 geoDIDId, 
        bytes32 parentGeoDIDId, 
        bytes32[] memory childrenGeoDIDIds, 
        bytes32 cid, 
        bytes32 offChainStorage, 
        uint256 geoDIDtype
    ) public
    function addChildrenGeoDIDs(
        bytes32 geoDIDId, 
        bytes32[] memory childrenGeoDIDIds
    ) public
    function addParentGeoDID(
        bytes32 geoDIDId, 
        bytes32 parentGeoDIDId
    ) public
    function removeChildrenGeoDIDs(
        bytes32 geoDIDId, 
        bytes32[] memory childrenGeoDIDIds
    ) public
    function removeParentGeoDID(
        bytes32 geoDIDId, 
        bytes32 parentGeoDIDId
    ) public
    function deactivateSpatialAsset(
        bytes32 geoDIDId, 
        bytes32[] memory childrenToRemove
    ) public
    yarn add -D @astralprotocol/core
    OR
    npm install -D @astralprotocol/core
    
    import AstralClient from '@astralprotocol/core';
    OR
    const AstralClient = require('@astralprotocol/core');
    testScript.js
    import AstralClient from '@astralprotocol/core';
    
    async function run(){
    
        // Create a new Astral Client Instance with the user's ethAddress
        // and a subgraph endpoint (check the latest one @astralprotocol/subgraph)
        let astral = new AstralClient(
            '0xa3e1c2602f628112E591A18004bbD59BDC3cb512', 
            'https://api.thegraph.com/subgraphs/name/astralprotocol/spatialassetsv06'
        );
        
        try{
        
            // Creates a Genesis GeoDID 
            const genDocRes = await astral.createGenesisGeoDID('collection')
            console.log(genDocRes);
    
            // With the returned IDocumentInfo from the last function, we can pin it.
            // Since no token was specified the client will assign a new auth Token to the user.
            const results = await astral.pinDocument(genDocRes);
            console.log(results);
    
            const token = results.token;
    
            // With the Auth Token and the GeoDID ID we can load the document with the loadDocument function
            const loadResults = await astral.loadDocument(results.geodidid, token);
            console.log(loadResults);
    
            console.log('\n');
            console.log('\n');
    
        // Creates a Child GeoDID Item of the previously created Genesis GeoDID
            const itemres = await astral.createChildGeoDID('item', results.geodidid, 'item1');
            console.log(itemres)
    
            console.log('\n');
    
            // With the returned IDocumentInfo from the last function, we can pin it.
            // This time we reuse the same token that was created earlier to pin the child document to the same instance.
            const itemresults = await astral.pinDocument(itemres, token);
            console.log(itemresults);
    
            console.log('\n');
    
            // With the Auth Token and the GeoDID ID we can load the document with the loadDocument function
            const loadItemResults = await astral.loadDocument(itemresults.geodidid, token);
            console.log(loadItemResults);
    
            console.log('\n');
    
            // Here we can display the string representation of the DID Document
            console.log(JSON.stringify(loadItemResults.documentInfo.documentVal));
    
        }catch(e){
            console.log(e);
        }
        
    }
    
    run();
    node testScript.js
    yarn add @astralprotocol/subgraph
    {
      geoDIDs {
        id
        owner
        cid
        storage
        root
        parent
        edges {
          id
          childGeoDID {
            id
          }
        }
        active
        type
      }
    }
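    The query above can be sent to the subgraph endpoint as a plain GraphQL-over-HTTP POST. Below is a minimal sketch using Node 18's built-in fetch; the endpoint is the one used earlier in this guide, so check @astralprotocol/subgraph for the latest deployment before relying on it:

    ```javascript
    // Endpoint from earlier in this guide; may be superseded by newer deployments.
    const SUBGRAPH = 'https://api.thegraph.com/subgraphs/name/astralprotocol/spatialassetsfinalv1';

    const GEODIDS_QUERY = `{
      geoDIDs {
        id
        owner
        cid
        storage
        root
        parent
        edges { id childGeoDID { id } }
        active
        type
      }
    }`;

    // GraphQL over HTTP is a POST with a JSON body containing the query string.
    function buildRequest(query) {
      return {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query }),
      };
    }

    async function fetchGeoDIDs() {
      const res = await fetch(SUBGRAPH, buildRequest(GEODIDS_QUERY));
      const { data } = await res.json();
      return data.geoDIDs;
    }
    ```

    The same request body works against a locally deployed graph-node endpoint during development.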
    Front-end packages and dApp interfaces to connect with spatial contracts.
    Turf.js

    You can deploy an instance by running yarn new-instance. It builds a GeoDID tree with hardcoded GeoDID IDs and CIDs.

  • You can test the removal of some links by running yarn remove-links.

  • Watch the changes in a locally deployed subgraph.

  • Run a coverage check by killing the ganache process in the first terminal and running yarn coverage

  • astralprotocol repository
    yarn add @astralprotocol/contracts
    git clone git@github.com:AstralProtocol/astralprotocol.git
    cd astralprotocol/packages/contracts
    .env
    MNEMONIC="mnemonic phrase goes here with testnet ether in address[0] on ropsten cool"
    ROPSTEN_API_KEY=https://ropsten.infura.io/v3/<PROJECT ID HERE>