This year was very difficult because of personal circumstances. Nonetheless, some progress was made.

A subset of Engelbart’s NLS/Augment ViewSpec commands was implemented in the form of another viewer. The program is also equipped with a resource retriever for obtaining local or remote resources. Input is processed by the stream-based Tero parser, which acts on a minimal HTML grammar to extract the text from paragraphs only. This plain text can then be re-rendered (like a “layout engine”) according to the current ViewSpec settings, which the user can toggle easily and quickly via adjacent hotkey presses.
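As a rough illustration of how such re-rendering could look (the ViewSpec flags and names below are invented for this sketch and are not the viewer’s actual command set), consider applying the settings to a list of already extracted paragraphs:

    import java.util.List;

    // Minimal sketch: paragraphs are assumed to be already extracted
    // by the parser; the flags are illustrative ViewSpec-like toggles.
    class ViewSpec
    {
        public boolean firstLineOnly = false;          // truncate each paragraph to its first line
        public int maxParagraphs = Integer.MAX_VALUE;  // clip the number of paragraphs shown

        public void render(List<String> paragraphs)
        {
            int count = Math.min(paragraphs.size(), this.maxParagraphs);

            for (int i = 0; i < count; i++)
            {
                String text = paragraphs.get(i);

                if (this.firstLineOnly)
                {
                    int newline = text.indexOf('\n');
                    text = newline >= 0 ? text.substring(0, newline) : text;
                }

                System.out.println(text);
                System.out.println();
            }
        }
    }

Toggling a hotkey would then simply flip one of the flags and call render() again on the same paragraph list.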

A structure-oriented XML editor was developed, but so far it is limited to a delete operation and its corresponding undo. This tool was primarily made to clean up XML files – in particular for removing non-significant white-space that is used solely for indentation to aid a human reader. The basis of this editor is Java Swing’s tree control.
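As a sketch of the clean-up such a delete operation targets, white-space-only text nodes could be stripped from a DOM tree with Java’s built-in XML facilities (note that blindly dropping such nodes would be wrong for mixed content, which this sketch ignores):

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;

    public class WhitespaceCleaner
    {
        // Recursively remove text nodes that consist of white-space only.
        public static void clean(Node node)
        {
            Node child = node.getFirstChild();
            while (child != null)
            {
                Node next = child.getNextSibling();
                if (child.getNodeType() == Node.TEXT_NODE &&
                    child.getTextContent().trim().isEmpty())
                {
                    node.removeChild(child);
                }
                else
                {
                    clean(child);
                }
                child = next;
            }
        }

        public static void main(String[] args) throws Exception
        {
            Document document = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(args[0]);
            clean(document.getDocumentElement());
            // Serializing the cleaned tree back to a file is omitted here.
        }
    }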

Converters were built for transforming XML into “SOF”, the “structure/semantics/standoff overlay/outline format”, and back from SOF to XML. Similarly, a text “condenser” and “uncondenser” for SOF were added in order to provide access to the raw text while keeping it separate from the inline markup, or, vice versa, to re-distribute the markup across the text spans. One tool variant performs both at once, the conversion from XML to SOF and the text condensing, for convenience.
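The general standoff idea, independent of the actual SOF syntax (which this hypothetical sketch does not attempt to reproduce), is to keep the raw text in one place and record the markup as position ranges pointing into it:

    import java.util.ArrayList;
    import java.util.List;

    // An annotation marks the text span [start, end) with a tag.
    class Annotation
    {
        public int start;
        public int end;
        public String tag;

        public Annotation(int start, int end, String tag)
        {
            this.start = start;
            this.end = end;
            this.tag = tag;
        }
    }

    class StandoffDocument
    {
        public StringBuilder text = new StringBuilder();
        public List<Annotation> annotations = new ArrayList<>();

        // Append plain text.
        public void appendText(String span)
        {
            text.append(span);
        }

        // Append text that was wrapped by an inline element.
        public void appendMarked(String span, String tag)
        {
            int start = text.length();
            text.append(span);
            annotations.add(new Annotation(start, text.length(), tag));
        }
    }

For example, condensing the inline-marked input “a <em>b</em> c” via appendText("a "), appendMarked("b", "em"), appendText(" c") yields the raw text “a b c” plus the single standoff annotation (2, 3, "em"); uncondensing walks the annotations and re-inserts the markup at the recorded positions.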

A new repository was started for collecting Tero parse grammar definitions. Several already existing grammar definitions were moved there.

A memory/storage capability was finally introduced. It holds local copies of resources and supplies them on retrieval requests. If no local copy is present yet, it may try to download the data from a remote location; if a copy is available, the program simply resolves the identifier to the data in local storage. Internally, it does not use a database or other optimization techniques for now. The tool is still rather primitive in terms of operations and, for not using queries or a fast lookup index, complex in terms of implementation, as it reads the entire resource list anew every time. It calculates hashes, but on the other hand neither records timestamps nor supports refreshing.
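The resolution logic could look roughly like the following sketch (class and method names are illustrative, and the storage directory is assumed to exist):

    import java.io.InputStream;
    import java.net.URI;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    public class Storage
    {
        private final Path directory;

        public Storage(Path directory)
        {
            this.directory = directory;
        }

        public Path retrieve(String identifier) throws Exception
        {
            Path local = directory.resolve(fileNameFor(identifier));

            if (Files.exists(local))
            {
                return local;  // resolve the identifier to the local copy
            }

            // No local copy yet: try to download from the remote location.
            try (InputStream in = URI.create(identifier).toURL().openStream())
            {
                Files.copy(in, local);
            }

            byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(local));
            // Recording the hash in the resource list is omitted here; as
            // noted above, timestamps and refreshing are not supported.
            return local;
        }

        private String fileNameFor(String identifier)
        {
            // Illustrative only: a real mapping needs to avoid collisions.
            return identifier.replaceAll("[^A-Za-z0-9._-]", "_");
        }
    }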

With the storage function in place, a new version of the earlier viewer was developed, which gains some persistence this way.

A simple visualizer for generic graphs was made, together with graph editing operations for adding, removing, moving, linking and unlinking nodes. Equivalent code-bases exist for JavaFX and HTML5’s canvas.
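The underlying model with these editing operations might look like this sketch (identifiers and coordinates are illustrative; the actual rendering for JavaFX or the canvas is left out):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class Graph
    {
        public static class GraphNode
        {
            public double x;
            public double y;

            public GraphNode(double x, double y)
            {
                this.x = x;
                this.y = y;
            }
        }

        private final Map<String, GraphNode> nodes = new HashMap<>();
        private final Map<String, Set<String>> edges = new HashMap<>();

        public void add(String id, double x, double y)
        {
            nodes.put(id, new GraphNode(x, y));
            edges.put(id, new HashSet<>());
        }

        public void remove(String id)
        {
            nodes.remove(id);
            edges.remove(id);
            // Also drop all links that pointed to the removed node.
            for (Set<String> targets : edges.values())
            {
                targets.remove(id);
            }
        }

        public void move(String id, double x, double y)
        {
            GraphNode node = nodes.get(id);
            node.x = x;
            node.y = y;
        }

        public void link(String from, String to)
        {
            edges.get(from).add(to);
        }

        public void unlink(String from, String to)
        {
            edges.get(from).remove(to);
        }
    }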

Continuing from pre-existing code-bases of various entry list/tree prototypes, three new variants were produced. The first variant enhanced an infinite hierarchical entry tree that comes with edit history by granting public read-access to the entries. The second variant enhanced this by allowing the entries to be re-ordered, and it also displays the name of the user who submitted a revision. The third variant enhanced the former by requiring consent to libre-free licensing of the content/contributions (to grow the collaborative result as a shared digital commons), and it enabled the user to change the parent of an entry, effectively moving it across the levels of the tree.
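The re-parenting operation of the third variant could be sketched like this (a hypothetical in-memory model, leaving persistence and revision history aside; a move is refused if it would make an entry its own ancestor):

    import java.util.ArrayList;
    import java.util.List;

    class Entry
    {
        public String name;
        public Entry parent;
        public List<Entry> children = new ArrayList<>();

        public Entry(String name)
        {
            this.name = name;
        }

        public boolean reparent(Entry newParent)
        {
            // Refuse moves that would create a cycle in the tree.
            for (Entry ancestor = newParent; ancestor != null; ancestor = ancestor.parent)
            {
                if (ancestor == this)
                {
                    return false;
                }
            }

            if (this.parent != null)
            {
                this.parent.children.remove(this);
            }

            newParent.children.add(this);
            this.parent = newParent;
            return true;
        }
    }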

In the augmented reality department, another variant of the geolocation Progressive Web App was spun off. It removes the retrieval of data anchored to positions in physical space from the server (data that would be displayed if the user, per device orientation (“compass”), looks towards such a position), and it also removes the option to record the user’s current location on the server side.

Within the pattern catalog management server Web package, a list control was introduced, so items can be added, removed and re-ordered (including snapshot-based revision history for these operations). Similarly, a static text control can be configured as an element on a pattern template/form. A separate variant of the package comes with user management: it restricts template creation to users who have the administrator role.
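The snapshot-based revision approach can be illustrated with a sketch along these lines (a full copy of the item list is stored before every mutation; names are invented for this example):

    import java.util.ArrayList;
    import java.util.List;

    class RevisionedList
    {
        private final List<String> items = new ArrayList<>();
        private final List<List<String>> revisions = new ArrayList<>();

        // Store a full copy of the current state as a new revision.
        private void snapshot()
        {
            revisions.add(new ArrayList<>(items));
        }

        public void add(String item)
        {
            snapshot();
            items.add(item);
        }

        public void remove(int index)
        {
            snapshot();
            items.remove(index);
        }

        public void reorder(int from, int to)
        {
            snapshot();
            items.add(to, items.remove(from));
        }
    }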

The audio messaging system received a function to create chat rooms, to which other users can later be invited; the recordings get grouped into such a room and appear there in a flat, consecutive, chronological list. The initial tree-structure interface, inherited from a different earlier experiment, was removed, as it turned out to be very exhausting to repeatedly check the many tree branches for new messages, while this arrangement did not help much with audio curation.

CRUD API methods were improved in the context of a project progress tracker server + client Web package. The client uses Twitter Bootstrap and an object-oriented representation of the data resources. In addition to the creation and display of project items, a separate API endpoint was added to collect profiles about people.
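The endpoint split might be pictured like this sketch using the JDK’s built-in HTTP server (paths, port and the empty JSON responses are assumptions for illustration, not the package’s actual interface):

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class TrackerServer
    {
        public static void main(String[] args) throws Exception
        {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            // One context for the project items …
            server.createContext("/projects", exchange -> {
                byte[] response = "[]".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, response.length);
                exchange.getResponseBody().write(response);
                exchange.close();
            });

            // … and a separate one for the profiles about people.
            server.createContext("/people", exchange -> {
                byte[] response = "[]".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, response.length);
                exchange.getResponseBody().write(response);
                exchange.close();
            });

            server.start();
        }
    }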

The previous navigator for change instructions was lacking in the area of rendering performance. This year, a new strategy was successfully adopted, which significantly speeds up the display update except in one rare corner case. Previously, there was noticeable waiting time for the user after clicking, caused by the load on the browser’s DOM from naively discarding all old elements and recreating the entire text as DOM nodes. Now, only the affected text range gets updated.
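The navigator itself runs in the browser, but the principle translates to any DOM implementation; the following sketch (using Java’s org.w3c.dom purely for illustration) replaces only the children that cover the affected range instead of rebuilding the whole container:

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;

    public class RangeUpdate
    {
        // Replace the children with indices [from, to) by new text spans,
        // leaving all other children of the container untouched.
        public static void updateRange(Element container, int from, int to, String[] newTexts)
        {
            Node reference = container.getChildNodes().item(from);

            for (int i = from; i < to; i++)
            {
                Node old = container.getChildNodes().item(from);
                reference = old.getNextSibling();
                container.removeChild(old);
            }

            Document document = container.getOwnerDocument();

            for (String text : newTexts)
            {
                Element span = document.createElement("span");
                span.setTextContent(text);
                // A null reference appends at the end, per the DOM spec.
                container.insertBefore(span, reference);
            }
        }
    }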

A nesting/staging (multi-level) stream-based Tero parser was achieved.
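Speculatively, the staging idea can be pictured as parser stages chained into a pipeline, each consuming the output of the previous one so that grammars can be layered (the string-based interface below is just a stand-in for the real token/event stream):

    interface Stage
    {
        String process(String input);
    }

    public class Pipeline
    {
        // Feed the input through all stages in order.
        public static String run(String input, Stage... stages)
        {
            String current = input;
            for (Stage stage : stages)
            {
                current = stage.process(current);
            }
            return current;
        }
    }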

Several tools were combined into a workflow configuration that constitutes the core loop of a first simple “system”. It uses the retriever to access local or remote resources, which automatically persists external documents to the storage. Typically, a list allows the user to open these in the viewer, and after a phase of reading, the user returns to the index.
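Reusing the hypothetical Storage class from the storage sketch above, the core loop might be pictured like this (the index entries and the viewer call are placeholders):

    import java.nio.file.Path;
    import java.util.List;
    import java.util.Scanner;

    public class CoreLoop
    {
        public static void main(String[] args) throws Exception
        {
            Storage storage = new Storage(Path.of("./storage"));
            List<String> index = List.of(
                "https://example.com/document-1.html",
                "https://example.com/document-2.html");

            Scanner input = new Scanner(System.in);

            while (true)
            {
                // Show the index and let the user pick an entry.
                for (int i = 0; i < index.size(); i++)
                {
                    System.out.println((i + 1) + ") " + index.get(i));
                }

                int choice = input.nextInt();
                if (choice < 1 || choice > index.size())
                {
                    break;
                }

                // Retrieval persists the document to storage automatically.
                Path local = storage.retrieve(index.get(choice - 1));

                // Open in the viewer; after reading, control returns here.
                System.out.println("Viewing " + local);
            }
        }
    }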

An attempt was started to slowly grow a directory of profiles about people as a data project, but it had to be put on hold to avoid competition and duplication.

A proofreading pass was conducted for “Xanadu Hypertext Documents” by Chip Morningstar, working from Alberto González Palomo’s text and correcting many issues left over from the OCR extraction of the original scan.

For the PlanetMath repositories, a parser grammar was defined for their LaTeX document metadata description commands. From the extracted data, the unique field names get reported (to give notice of potential updates/extensions), and using some of these attributes, a multi-dimensional index was automatically compiled.
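For illustration only (a regex stand-in for the actual Tero grammar), reporting the unique field names could look like this, assuming metadata commands of the form \pmtitle{…}, \pmtype{…} and so on:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.TreeSet;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class MetadataFieldNames
    {
        public static void main(String[] args) throws Exception
        {
            // Matches commands like \pmtitle{ and captures the name.
            Pattern command = Pattern.compile("\\\\(pm[a-z]+)\\{");
            TreeSet<String> fieldNames = new TreeSet<>();

            for (String file : args)
            {
                Matcher matcher = command.matcher(Files.readString(Path.of(file)));
                while (matcher.find())
                {
                    fieldNames.add(matcher.group(1));
                }
            }

            // Report the unique field names found across all input files.
            fieldNames.forEach(System.out::println);
        }
    }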

Copyright (C) 2023 Stephan Kreutzer. This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.