The Text Tonsorium is a workflow management system that not only executes workflows, but also composes workflows from building blocks. Each building block encapsulates a Natural Language Processing tool.
In contrast to other workflow management systems, the Text Tonsorium focuses on results rather than on tools, so a general understanding of how the desired result is described is useful. Still, you may also want to know which tools are currently integrated in the Text Tonsorium. This page gives you information about both.
A workflow design consists of one or more tools connected by data streams, see Figure 1. In this diagram, we depict tools as shapes (squares, circles, stars, etc.) and data streams as edges between the shapes.
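To make this picture concrete, here is a minimal sketch of how such a workflow design could be represented in code. The class names, fields and feature names are illustrative assumptions, not the Text Tonsorium's internal format.

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    """One building block: an NLP tool with its input and output specifications."""
    name: str
    inputs: list[dict]    # feature requirements of each incoming data stream
    outputs: list[dict]   # feature values promised for each outgoing data stream

@dataclass
class WorkflowDesign:
    """Tools (the shapes in Figure 1) connected by data streams (the edges)."""
    tools: list[Tool] = field(default_factory=list)
    streams: list[tuple[str, str]] = field(default_factory=list)  # (producer, consumer)

# A two-tool design: an OCR tool feeding a tokenizer (hypothetical names).
ocr = Tool("OCR", inputs=[{"format": "image"}], outputs=[{"format": "plain text"}])
tok = Tool("tokenizer", inputs=[{"format": "plain text"}], outputs=[{"type": "tokens"}])
design = WorkflowDesign(tools=[ocr, tok], streams=[("OCR", "tokenizer")])
```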
Broadly speaking, the Text Tonsorium can do two things with workflows: compose them and enact them. Here we show how both are done.
The diagram above is a gross simplification. The main omission is that the Text Tonsorium always attempts to find not just one, but all roads leading to a goal.
The Text Tonsorium finds all workflows by trying out all tools and all tool parameter combinations.
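The following sketch illustrates such an exhaustive search under strong simplifying assumptions: tool specifications are flat dictionaries of features, only linear pipelines are built, and the search depth is bounded. It is not the Text Tonsorium's actual algorithm, only a way to see why every tool and every combination gets tried.

```python
def compose_all(input_features, goal_features, tools, max_depth=5):
    """Enumerate every tool chain that turns data described by input_features
    into data described by goal_features (simplified: linear pipelines only)."""
    workflows = []

    def search(current, chain):
        if goal_features.items() <= current.items():   # goal reached
            workflows.append(chain)
            return
        if len(chain) == max_depth:
            return
        for tool in tools:
            # A tool is applicable if its input specification matches the data so far.
            if tool["input"].items() <= current.items():
                produced = {**current, **tool["output"]}
                search(produced, chain + [tool["name"]])

    search(dict(input_features), [])
    return workflows

# Hypothetical tool specifications for demonstration only.
tools = [
    {"name": "OCR",    "input": {"format": "image"}, "output": {"format": "text"}},
    {"name": "tagger", "input": {"format": "text"},  "output": {"format": "text", "annotation": "PoS"}},
]
print(compose_all({"format": "image"}, {"annotation": "PoS"}, tools))  # [['OCR', 'tagger']]
```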
The full set of workflows found by the Text Tonsorium is not presented to the user. In general, there will be workflows that do not make much sense to a user; many workflows will be pruned away.
In the following example, two tools compete to create output fulfilling the same goal. There are three tools that 'consume' the output. The consuming tools cannot take output from both competing tools. The question is: which of the competing tools should they choose?
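The snippet below only illustrates the combinatorics behind that question; the tool names are hypothetical and this is not the Text Tonsorium's selection logic. If every consumer must read from the same producer, there is one candidate design per competing producer; if each consumer could choose independently, the number of combinations would grow much faster.

```python
from itertools import product

producers = ["tagger_A", "tagger_B"]           # two tools fulfilling the same goal
consumers = ["lemmatiser", "parser", "NER"]    # three tools consuming that output

# One design per producer: all consumers wired to the same competing tool.
consistent_designs = [[(p, c) for c in consumers] for p in producers]

# For comparison: independent choices per consumer explode combinatorially.
all_combinations = list(product(producers, repeat=len(consumers)))

print(len(consistent_designs), len(all_combinations))  # 2 8
```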
Some tools should never occur in the same workflow design. For example, two Optical Character Recognition systems will, in general, output different numbers of tokens when given the same input, e.g. because one system sees white space where the other does not.
In the Text Tonsorium, tools that should not co-occur can be given a value for a 'smell' feature that spreads throughout the workflow. If two different smells collide, the workflow design is discarded.
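Here is a minimal sketch of how such a check could work, assuming a workflow design is reduced to the list of tools it contains and each tool carries at most one smell value; both assumptions are simplifications made for illustration.

```python
def is_acceptable(workflow, smells):
    """Discard a workflow design in which two different 'smell' values collide.

    workflow: list of tool names in one workflow design.
    smells:   mapping from tool name to its smell value (absent = odourless).
    """
    seen = set()
    for tool in workflow:
        smell = smells.get(tool)
        if smell is not None:
            seen.add(smell)
        if len(seen) > 1:       # two different smells have met: discard the design
            return False
    return True

smells = {"OCR_A": "smell-A", "OCR_B": "smell-B"}  # two OCR systems that must not co-occur
print(is_acceptable(["OCR_A", "tokenizer", "tagger"], smells))  # True
print(is_acceptable(["OCR_A", "OCR_B", "aligner"], smells))     # False
```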
The Text Tonsorium may compose many workflows that all lead to your goal. It will then ask you to choose one of the proposed workflows. In general, the more detail you add to your goal, the fewer solutions the Text Tonsorium will find, possibly none at all.
Data streams, as well as the input and output specifications of tools, are described in terms of 'features'. Features express things such as the language used in a text, the file format, and the type of content.
Users are confronted with features and feature values when they specify the goal of a workflow design. There is a second level of feature specification, a level that users normally aren't bothered with: feature values can be further specified with 'style descriptors'.
Feature values and style descriptors are always chosen from predefined sets of values, using drop-down lists.
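As an illustration, a goal could be specified as a mapping from features to chosen values, with a style descriptor refining a value where needed. The feature names and values below are examples based on the list that follows, not the exact vocabulary of the Text Tonsorium's interface.

```python
# Hypothetical goal specification: feature -> chosen value (+ optional style descriptor).
goal = {
    "language":        {"value": "da"},
    "file format":     {"value": "TEI P5"},
    "type of content": {"value": "lemmas", "style": "stand-off annotation"},
}

def output_satisfies(output_spec: dict, goal_spec: dict) -> bool:
    """True if a tool's output specification matches every feature value in the goal."""
    return all(feature in output_spec and output_spec[feature]["value"] == spec["value"]
               for feature, spec in goal_spec.items())
```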
This is the list of features currently defined in this instance of the Text Tonsorium:
- Subtype of resource, e.g. basis text, tokenisation, alphabetic list.
- Language of the text.
- The way that information is encoded for storage in a computer file.
- Time period of the spelling in the text resource.
- How results are presented for the user.
- Decorative tradition, for example typeface class.
- Whether data contains ambiguous elements.
- Special feature used to give unique identity to input and intermediate data, for example output of OCR software.
These are the currently active tools:
# | Name of the tool | Supported languages |
---|---|---|
1 | ANNIE DE (GATE cloud) | de |
2 | ANNIE EN (GATE cloud) | en |
3 | ANNIE FR (GATE cloud) | fr |
4 | ANNIE RO (GATE cloud) | ro |
5 | ANNIE RU (GATE cloud) | ru |
6 | Anno-splitter | |
7 | Bohnet parser | da, de, es, fr |
8 | Bohnet tagger | de, en, es, fr |
9 | Brill tagger | da, en, gml, la |
10 | cluster | |
11 | CoNLL 2009 to U | |
12 | CoNLL formatter | |
13 | CONLL to Penn Treebank | |
14 | CONLL to three columns | |
15 | CQP formatter | |
16 | CSTlemma | af, ast, be, bg, ca, cs, cy, da, de, el, en, es, et, fa, fo, fr, ga, gd, gl, gml, gv, hr, hu, is, it, ka, la, mk, nl, no, pl, pt, ro, ru, sk, sl, sq, sr, sv, uk |
17 | CSTner | da |
18 | daner | da |
19 | dependency2tree | |
20 | Diplom annotator | |
21 | Diplom fetch corrected text | da, gml, la, sv |
22 | Document similarity checker | |
23 | eSpeak | af, bg, bs, ca, cs, cy, da, de, el, en, eo, es, et, fi, fr, hi, hr, hu, hy, id, is, it, ka, kn, ku, la, lv, mk, ml, nl, pl, pt, ro, ru, sk, sq, sr, sv, sw, ta, tr, vi, zh |
24 | Frequencies | |
25 | html2text | |
26 | JSON pretty print | |
27 | JSON to ORG-mode | |
28 | JSON to TEI | |
29 | JSON to TSV | |
30 | KORP to Excel | |
31 | Lapos | da, gml, la |
32 | LemPoS | be, bg, cs, da, de, es, et, fa, fo, hr, hu, is, it, la, mk, nl, pl, pt, ro, ru, sk, sl, sq, sr, sv, uk |
33 | LibreOffice | |
34 | Normaliser | da |
35 | Normalize dipl | la |
36 | NP finder | da |
37 | OpenNLP Tagger | da, en |
38 | pdf2htmlEX | |
39 | PDFMiner | |
40 | plain to TEI | |
41 | PoS translator | da, la |
42 | PruneLemPos | |
43 | Repetitiveness checker | |
44 | RTFreader | |
45 | Sentence extractor | |
46 | Stanford CoreNLP | en |
47 | TEI annotator | |
48 | TEI extract tokens/sentences | |
49 | TEI to CoNLL-U | |
50 | TEI to Org-mode | |
51 | TEI tokenizer | |
52 | TEI-segmenter | |
53 | Tesseract-OCRv5 | af, br, bs, ca, co, cs, cy, da, de, en, eo, es, et, eu, fa, fi, fo, fr, ga, gl, hr, ht, hu, id, is, it, iu, jv, la, lb, lt, lv, ms, mt, nb, nl, nn, oc, pl, pt, ro, sk, sl, sq, sr, sv, sw, tr, uz, vi, yi |
54 | Token extractor | af, ar, ast, be, bg, bs, ca, cop, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fo, fr, ga, gd, gl, gml, got, gv, he, hi, hr, hu, hy, id, is, it, ja, ka, kn, ko, ku, la, lt, lv, mk, ml, mr, mt, nb, nl, nn, no, pl, pt, ro, ru, se, sk, sl, sq, sr, sv, sw, ta, te, tr, ug, uk, ur, vi, wo, zh |
55 | udpipe | af, ar, be, bg, ca, cop, cs, cu, da, de, el, en, es, et, eu, fa, fi, fr, ga, gd, gl, got, he, hi, hr, hu, hy, id, it, ja, ko, la, lt, lv, mr, mt, nb, nl, nn, pl, pt, ro, ru, se, sk, sl, sr, sv, ta, te, tr, ug, uk, ur, vi, wo, zh |
56 | vujiLoX | la |
**ANNIE DE/EN/FR/RO/RU (GATE cloud)**: ANNIE is a named entity recognition pipeline that identifies basic entity types, such as Person, Location, Organization, Money amounts, Time and Date expressions. It is the prototypical information extraction pipeline distributed with the GATE framework and forms the base of many more complex GATE-based IE applications.
**Anno-splitter**: Takes a TEI P5 document containing multiple stand-off annotation groups (spanGrp). Outputs one of the annotation groups.
**Bohnet parser**: Dependency parser, part of mate-tools.
**Bohnet tagger**: Part of Speech tagger that is distributed as part of mate-tools.
**Brill tagger**: Part-of-speech tagger: marks each word in a text with information about word class and morphological features.
**cluster**: Make clusters of lines, separated by empty lines. Each cluster contains all those lines that have the value in the first or second column in common with one or more other lines in the same cluster.
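As an illustration of the clustering rule above, here is a minimal sketch that groups tab-separated lines into connected components by shared first- or second-column values. The separator and the I/O handling are assumptions; this is not the tool's actual implementation.

```python
import sys
from collections import defaultdict

def cluster_lines(lines, sep="\t"):
    """Two lines end up in the same cluster if they share a value in column 1
    or column 2, directly or through a chain of other lines (union-find)."""
    parent = list(range(len(lines)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Map every value occurring in column 1 or 2 to the lines containing it.
    value_to_lines = defaultdict(list)
    for idx, line in enumerate(lines):
        for value in line.split(sep)[:2]:
            value_to_lines[value].append(idx)

    # Lines sharing a value belong to the same component.
    for indices in value_to_lines.values():
        for other in indices[1:]:
            union(indices[0], other)

    clusters = defaultdict(list)
    for idx, line in enumerate(lines):
        clusters[find(idx)].append(line)
    return list(clusters.values())

if __name__ == "__main__":
    lines = [ln.rstrip("\n") for ln in sys.stdin if ln.strip()]
    # Clusters are separated by empty lines, as in the description above.
    print("\n\n".join("\n".join(c) for c in cluster_lines(lines)))
```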
**CoNLL 2009 to U**: Convert CoNLL 2009 (14 columns) to CoNLL-U (10 columns).
**CoNLL formatter**: Converts input to CoNLL 2009 format.
**CONLL to Penn Treebank**: Convert syntax dependency annotation in CoNLL 2009 or CoNLL-U format to bracketed "Lisp-like" format.
**CONLL to three columns**: Convert a CONLL 2009 or CONLL-U file to a tab-separated file. On each line: <word> \t <lemma> \t <pos> \n
**CQP formatter**: Takes input containing words, tags and lemmas and creates output that can be read by the CQP software.
**CSTlemma**: Produces the dictionary look-up form (or lemma) for each word, inflected or not, in the input.
**CSTner**: Classifies names as proper names, locations (with sub-classes of street, city, land and other types of locations), and other names (called MISC).
**daner**: Named Entity Recognition for Danish, distributed by ITU NLP. Uses Stanford CoreNLP NER and the model from DKIE to tag incoming Danish plain text for named entities, in three classes: location, person, and organization names.
**dependency2tree**: Convert CoNLL output of a dependency parser into a LaTeX or Graphviz tree.
**Diplom annotator**: Store lemma in column 3 and/or word class in column 4 of an Org-mode input file that already has diplomatic and facsimile values in columns 7 and 8.
**Diplom fetch corrected text**: Fetch the column with corrected transcriptions. This column contains words with additions between parentheses. The parentheses are removed in the output.
**Document similarity checker**: Uses a statistical method to find phrases that occur in each of the input documents.
**eSpeak**: Text to speech software. Originally known as speak and originally written for Acorn/RISC_OS computers starting in 1995. This version is an enhancement and re-write, including a relaxation of the original memory and processing power constraints, and with support for additional languages.
**Frequencies**: Sorts input lines, collapses equal lines, appends a column with frequencies. Assumes that input is 1, 2 or 3 columns, separated by tabs.
**html2text**: A very simple script that loads from HTML, and then iterates over the DOM to correctly output plain text.
**JSON pretty print**: JSON pretty-print parser based on a recursive lexical analyser. The parser was based on the specification defined at json.org. The input file is parsed to build a JSON object. If the object is correct, it will be pretty-printed to standard output.
**JSON to ORG-mode**: Converts JSON output with tokens, lemmas and Part of Speech tags to a three-column ORG-mode table.
**JSON to TEI**: Reads a JSON file with fields for token ID, word, lemma and pos. Outputs a TEI P5 annotation file (spanGrp) containing either lemmas or Part of Speech tags.
**JSON to TSV**: Convert word-lemma-pos data from JSON to CQP format.
**KORP to Excel**: This tool generates a tab-separated file with all KWIC (keyword-in-context) results generated by the KORP tool at the address https://alf.hum.ku.dk/korp/. Input to the tool is the URL copied from the address line when KORP has performed a search.
**Lapos**: Fork of the Lookahead Part-Of-Speech (Lapos) Tagger.
**LemPoS**: Lemmatizes input text and adds PoS options to each lemma. Output can be ambiguous.
**LibreOffice**: A powerful office suite, here used to convert office documents to RTF or PDF.
**Normaliser**: Normalises older (1200-1900) Danish text to spelling rules as employed in ODS (Ordbog over det danske Sprog).
**Normalize dipl**: Fill the column to the left of the diplom column with normalized tokens, i.e. v -> u, j -> i and all lowercase.
**NP finder**: Collects words that constitute noun phrases.
**OpenNLP Tagger**: Part of Speech Tagger that marks tokens with their corresponding word type based on the token itself and the context of the token. Uses a probability model to predict the correct pos tag.
**pdf2htmlEX**: Converts PDF to HTML without losing text or format. (The produced HTML can hardly be interpreted by other tools.) Renders PDF files in HTML, utilizing modern Web technologies. It aims to provide an accurate rendering, while keeping optimized for Web display. Best for text-based PDF files, for example scientific papers with complicated formulas and figures. Text, fonts and formats are natively preserved in HTML such that you can still search and copy. The generated HTML file is static, with optional features powered by JavaScript.
**PDFMiner**: Extracts information from PDF documents. Focuses entirely on getting and analyzing text data.
**plain to TEI**: From a plain segmentized and tokenized text file that uses DEL characters to separate tokens that are written together in the input, create a TEI P5 Clarin Base Format text with attributes S and T for segment and token identification.
**PoS translator**: Translate from DSL's tag set to Menota.
**PruneLemPos**: A "Poor man's POS-tagger" that takes text input that has ambiguous lemma and PoS annotations and diminishes the ambiguity by using bigram HMM + Viterbi algorithm. No training data are involved! Works best with larger texts.
**Repetitiveness checker**: Uses a statistical method to find repetitions in a text.
**RTFreader**: Extracts segments from RTF-file or from plain text. Optionally tokenises. Keeps \f
**Sentence extractor**: From a TEI text enriched with T (token) and S (segment) attributes, extract the sentences and their offsets in the source.
**Stanford CoreNLP**: CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations. CoreNLP currently supports 8 languages: Arabic, Chinese, English, French, German, Hungarian, Italian, and Spanish.
**TEI annotator**: Add attributes for lemma and Part of Speech tag to <w> and <c> elements. (<w> and <c> elements must already exist.)
**TEI extract tokens/sentences**: Reads TEI P5 and produces token and sentence annotations. The annotations refer to the base text, but also include the tokens and sentences themselves in plain text.
**TEI to CoNLL-U**: Converts a TEI P5 document with annotations for lemma, pos (or msd) and syntactic dependencies to CoNLL-U 10 column format.
**TEI to Org-mode**: Convert TEI P5 stand-off annotation to a two-column file in Org-mode format. The first column contains a token, the second contains the annotation: POS-tag, word class, or lemma.
**TEI tokenizer**: Apply a primitive tokenisation to the contents of the <text> element in a TEI P5 document. Each word, punctuation mark and whitespace is marked up by w or c tags. S and T attributes indicate which primitive tokens should be combined to create higher level tokens.
**TEI-segmenter**: Reads tokens and sentences as annotations and produces segment annotations, where segments refer to tokens, not to the base text. Input and output are encoded in TEI P5.
**Tesseract-OCRv5**: Tesseract Open Source OCR Engine. Tesseract 4 adds a new neural net (LSTM) based OCR engine which is focused on line recognition, but also still supports the legacy Tesseract OCR engine of Tesseract 3 which works by recognizing character patterns.
**Token extractor**: From a TEI text enriched with T (token) and S (segment) attributes, extract tokens and their offsets in the input.
**udpipe**: Tokenizer, POS Tagger, Lemmatizer and Parser models for 94 treebanks of 61 languages of Universal Dependencies 2.5 Treebanks.
**vujiLoX**: Converts Latin text to lower case and transforms v to u and j to i.
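A one-line sketch of the transformation described above, assuming plain text input; the function name is illustrative, not the tool's interface.

```python
def vujilox(text: str) -> str:
    # Lower-case the Latin text, then normalise v -> u and j -> i.
    return text.lower().replace("v", "u").replace("j", "i")

print(vujilox("Jam Vivo"))  # "iam uiuo"
```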