
Expand section on "Serializing Large Datasets" #29

@gkellogg

Description

Of note are the discussions in w3c/json-ld-syntax#366 and rubensworks/jsonld-streaming-parser.js#65. Best practices should cover the principles of JSON-LD streaming, where keys appear in a strict order, as well as the use of the streaming document profile, so that clients understand the order-of-processing considerations when expanding JSON-LD and transforming it into RDF.
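
For illustration (the IRIs and vocabulary below are invented), a streaming-friendly document keeps `@context` as the first key and places `@id` and `@type` ahead of other properties in each node object, so a streaming processor can emit triples as it reads each key rather than buffering the whole node:

```json
{
  "@context": "https://schema.org/",
  "@id": "https://example.org/dataset/1",
  "@type": "Dataset",
  "name": "Example dataset",
  "hasPart": {
    "@id": "https://example.org/item/1",
    "@type": "CreativeWork",
    "name": "Item 1"
  }
}
```

Documents that follow these ordering conventions can be advertised with the streaming document profile (a profile parameter on the `application/ld+json` media type), so consumers know the key order can be relied upon.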

Alternatively, serializations can be limited to a flattened representation so that state does not need to be managed across a large set of embedded nodes (this is probably the most natural way to blindly serialize a large dataset, in any case); see the sketch below.
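
As a sketch (again with invented IRIs), a flattened document lists every node object once in a top-level `@graph` array and uses `@id` references instead of embedding, so a serializer can write node objects one at a time without tracking nesting state:

```json
{
  "@context": "https://schema.org/",
  "@graph": [
    {
      "@id": "https://example.org/dataset/1",
      "@type": "Dataset",
      "name": "Example dataset",
      "hasPart": [
        { "@id": "https://example.org/item/1" },
        { "@id": "https://example.org/item/2" }
      ]
    },
    { "@id": "https://example.org/item/1", "@type": "CreativeWork", "name": "Item 1" },
    { "@id": "https://example.org/item/2", "@type": "CreativeWork", "name": "Item 2" }
  ]
}
```

Because each entry in `@graph` is self-contained, the producer only needs to hold one node object in memory at a time.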

The core JSON-LD API algorithms are designed to operate on a complete document in memory, which is incompatible with the needs of medium-to-large datasets.

cc/ @wouterbeek @rubensworks
