Commit 6f10ef0

Merge branch 'development' into kbss-cvut/termit-ui#553-multilingual-annotation

ledsoft authored Nov 19, 2024
2 parents 58c10e2 + 742a017

Showing 17 changed files with 247 additions and 125 deletions.
24 changes: 12 additions & 12 deletions README.md
@@ -31,29 +31,29 @@ See the [docs folder](doc/index.md) for additional information on implementation
This section briefly lists the main technologies and principles used (or planned to be used) in the application.

- Spring Boot 3, Spring Framework 6, Spring Security, Spring Data (paging, filtering)
-- Jackson 2.13
+- Jackson Databind
- [JB4JSON-LD](https://github.com/kbss-cvut/jb4jsonld-jackson) - Java - JSON-LD (de)serialization library
- [JOPA](https://github.com/kbss-cvut/jopa) - persistence library for the Semantic Web
-- JUnit 5 (RT used 4), Mockito 4 (RT used 1), Hamcrest 2 (RT used 1)
-- Servlet API 4 (RT used 3.0.1)
-- JSON Web Tokens (CSRF protection not necessary for JWT)
+- JUnit 5, Mockito 4, Hamcrest 2
+- Jakarta Servlet API 4
+- JSON Web Tokens
- SLF4J + Logback
- CORS (for separate frontend)
- Java bean validation (JSR 380)


-## Ontology
+## Ontologies

-The ontology on which TermIt is based can be found in the `ontology` folder. For proper inference
-functionality, `termit-model.ttl`, the
-_popis-dat_ ontology model (http://onto.fel.cvut.cz/ontologies/slovnik/agendovy/popis-dat/model) and the SKOS vocabulary
-model
-(http://www.w3.org/TR/skos-reference/skos.rdf) need to be loaded into the repository used by TermIt (see `doc/setup.md`)
-for details.
+The ontology on which TermIt is based can be found in the `ontology` folder. It extends the
+_popis-dat_ ontology (http://onto.fel.cvut.cz/ontologies/slovnik/agendovy/popis-dat). TermIt vocabularies and terms
+use the SKOS vocabulary (http://www.w3.org/TR/skos-reference/skos.rdf).
+
+Relevant ontologies need to be loaded into the repository for proper inference functionality. See [setup.md](doc/setup.md)
+for more details.

## Monitoring

-We use [JavaMelody](https://github.com/javamelody/javamelody) for monitoring the application and its usage. The data are
+[JavaMelody](https://github.com/javamelody/javamelody) can be used for monitoring the application and its usage. The data are
available on the `/monitoring` endpoint and are secured using _basic_ authentication. Credentials are configured using
the `javamelody.init-parameters.authorized-users`
parameter in `application.yml` (see
14 changes: 5 additions & 9 deletions doc/implementation.md
@@ -43,23 +43,19 @@ follows:
Fulltext search currently supports multiple types of implementation:

* Simple substring matching on term and vocabulary label _(default)_
-* RDF4J with Lucene SAIL
* GraphDB with Lucene connector

Each implementation has its own search query which is loaded and used by `SearchDao`. In order for the more advanced
-implementations for Lucene to work, a corresponding Maven profile (**graphdb**, **rdf4j**) has to be selected. This
+implementation for Lucene to work, a corresponding Maven profile (**graphdb**) has to be selected. This
inserts the correct query into the resulting artifact during build. If none of the profiles is selected, the default
search is used.

Note that in case of GraphDB, corresponding Lucene connectors (`label_index` for labels and `defcom_index` for
-definitions and comments)
-have to be created as well.
+definitions and comments) have to be created as well.

### RDFS Inference in Tests

-The test in-memory repository is configured to be a SPIN SAIL with RDFS inferencing engine. Thus, basically all the
-inference features available in production are available in tests as well. However, the repository is by default left
-empty (without the model or SPIN rules) to facilitate test performance (inference in RDF4J is really slow). To load the
+The test in-memory repository is configured to be an RDF4J SAIL with an RDFS inferencing engine. The repository is by default left
+empty (without the model) to facilitate test performance (inference in RDF4J is really slow). To load the
TermIt model into the repository and thus enable RDFS inference, call the `enableRdfsInference`
-method available on both `BaseDaoTestRunner` and `BaseServiceTestRunner`. SPIN rules are currently not loaded as they
-don't seem to be used by any tests.
+method available on both `BaseDaoTestRunner` and `BaseServiceTestRunner`.
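
To make this concrete, below is a minimal sketch of a DAO test that opts into RDFS inference. `enableRdfsInference`, `BaseDaoTestRunner`, and the JOPA `EntityManager` are named in the surrounding docs; the exact method signature and test wiring here are assumptions, not a copy of TermIt's actual tests.

```java
import cz.cvut.kbss.jopa.model.EntityManager;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;

// Sketch only: BaseDaoTestRunner comes from TermIt's test infrastructure;
// the enableRdfsInference(em) signature is an assumption.
class ExampleDaoTest extends BaseDaoTestRunner {

    @Autowired
    private EntityManager em;

    @BeforeEach
    void setUp() {
        // Loads the TermIt model into the otherwise empty test repository,
        // enabling RDFS inference for the assertions below.
        enableRdfsInference(em);
    }

    @Test
    void queryReliesOnInferredStatements() {
        // assertions that depend on inferred statements (e.g., type hierarchy) go here
    }
}
```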
61 changes: 18 additions & 43 deletions doc/setup.md
@@ -6,7 +6,7 @@ This guide provides information on how to build and deploy TermIt.

### System Requirements

-* JDK 11 or newer (tested up to JDK 11 LTS)
+* JDK 17 or newer
* Apache Maven 3.5.x or newer


@@ -16,13 +16,11 @@ This guide provides information on how to build and deploy TermIt.

To build TermIt for **non**-development deployment, use Maven and select the `production` profile.

-In addition, full text search in TermIt supports three modes:
+In addition, full text search in TermIt supports two modes:
1. Default label-based substring matching
-2. RDF4J repository with Lucene index
-3. GraphDB repository with Lucene index
+2. GraphDB repository with Lucene indexes

-Options 2. and 3. have their respective Maven profiles - `rdf4j` and `graphdb`. Select one of them
-or let the system use the default one.
+Option 2. has its respective Maven profile - `graphdb`.

Moreover, TermIt can be packaged either as an executable JAR (using Spring Boot) or as a WAR that can be deployed in any Servlet API 4-compatible application server.
Maven profiles `standalone` (active by default) and `war` can be used to activate them respectively.
@@ -40,9 +38,10 @@ There is one parameter not used by the application itself, but by Spring - `spri
by the application:
* `lucene` - decides whether Lucene text indexing is enabled and should be used in full text search queries.
* `admin-registration-only` - decides whether new users can be registered only by application admin, or whether anyone can register.
-* `no-cache` - disables EhCache which is used to cache lists of resources and vocabularies for faster retrieval.
+* `no-cache` - disables Ehcache, which is used to cache lists of resources and vocabularies for faster retrieval, and the persistence cache.
* `development` - indicates that the application is running in development. This, for example, means that the mail server does not need to be configured.

-The `lucene` Spring profile is activated automatically by the `rdf4j` and `graphdb` Maven profiles. `admin-registration-only` and `no-cache` have to be added
+The `lucene` Spring profile is activated automatically by the `graphdb` Maven profile. `admin-registration-only` and `no-cache` have to be added
either in `application.yml` directly, or one can pass the parameter to Maven build, e.g.:

* `mvn clean package -P graphdb "-Dspring.profiles.active=lucene,admin-registration-only"`
@@ -51,7 +50,7 @@ either in `application.yml` directly, or one can pass the parameter to Maven bui
#### Example

* `mvn clean package -B -P production,graphdb "-Ddeployment=DEV"`
-* `clean package -B -P production,rdf4j,war "-Ddeployment=STAGE"`
+* `clean package -B -P production,graphdb,war "-Ddeployment=STAGE"`

The `deployment` parameter is used to parameterize log messages and JMX beans and is important in case multiple deployments
of TermIt are running in the same Tomcat.
@@ -74,20 +73,17 @@ or configure it permanently by setting the `MAVEN_OPTS` variable in System Setti

### System Requirements

-* JDK 11 or later (tested with JDK 11)
-* (WAR) Apache Tomcat 8.5 or 9.x (recommended) or any Servlet API 4-compatible application server
+* JDK 17 or later
+* (WAR) Apache Tomcat 10 or any Jakarta Servlet API 4-compatible application server
* _For deployment of a WAR build artifact._
-  * Do not use Apache Tomcat 10.x, it is based on the new Jakarta EE and TermIt would not work on it due to package namespace issues (`javax` -> `jakarta`)
+  * Do not use Apache Tomcat 9.x or older; it is based on the old Java EE and TermIt would not work on it due to package namespace issues (`javax` -> `jakarta`)

### Setup

Application deployment is simple - just deploy the WAR file (in case of the `war` Maven build profile) to an
application server or run the JAR file (in case of the `standalone` Maven build profile).

-What is important is the correct setup of the repository. We will describe two options:
-
-1. GraphDB
-2. RDF4J
+What is important is the correct setup of the repository.

#### GraphDB

@@ -99,16 +95,16 @@ In order to support inference used by the application, a custom ruleset has to b
4. Create the following Lucene connectors in GraphDB:
* *Label index*
* name: **label_index**
-    * Field name: **label**, **title**
-    * Property chain: **http://www.w3.org/2000/01/rdf-schema#label**, **http://purl.org/dc/terms/title**
+    * Field names: **prefLabel**, **altLabel**, **hiddenLabel**, **title**
+    * Property chains: **http://www.w3.org/2004/02/skos/core#prefLabel**, **http://www.w3.org/2004/02/skos/core#altLabel**, **http://www.w3.org/2004/02/skos/core#hiddenLabel**, **http://purl.org/dc/terms/title**
* Languages: _Leave empty (for indexing all languages) or specify the language tag - see below_
* Types: **http://www.w3.org/2004/02/skos/core#Concept**, **http://onto.fel.cvut.cz/ontologies/slovník/agendový/popis-dat/pojem/slovník**
* Analyzer: Analyzer appropriate for the system language, e.g. **org.apache.lucene.analysis.cz.CzechAnalyzer**
* *Definition and comment index*
* name: **defcom_index**
-    * Field names: **definition**, **comment**, **description**
+    * Field names: **definition**, **scopeNote**, **description**
* Languages: _Leave empty (for indexing all languages) or specify the language tag - see below_
-    * Property chains: **http://www.w3.org/2004/02/skos/core#definition**, **http://www.w3.org/2000/01/rdf-schema#comment**, **http://purl.org/dc/terms/description**
+    * Property chains: **http://www.w3.org/2004/02/skos/core#definition**, **http://www.w3.org/2004/02/skos/core#scopeNote**, **http://purl.org/dc/terms/description**
* Types and Analyzer as above

Language can be set for each connector. This is useful in case the data contain labels, definitions, and comments in multiple languages. In this case,
@@ -117,34 +113,13 @@ there is a term with label `území`@cs and `area`@en. Now, if no language is sp
look as follows: `<em>území</em> area`, which may not be desired. If the connector language is set to `cs`, the result snippet will contain
only `<em>území</em>`. See the [documentation](http://graphdb.ontotext.com/documentation/free/lucene-graphdb-connector.html) for more details.

-#### RDF4J
-
-In order to support the inference used by the application, new rules need to be added to RDF4J because its own RDFS rule engine does not
-support OWL stuff like inverse properties (which are used in the model).
-
-For RDF4J 2.x:
-1. Start by creating an RDF4J repository of type **RDFS+SPIN with Lucene support**
-2. Upload SPIN rules from `rulesets/rules-termit-spin.ttl` into the repository
-3. There is no need to configure Lucene connectors, it by default indexes all properties in RDF4J (alternatively, it is possible
-to upload a repository configuration directly into the system repository - see examples at [[1]](https://github.com/eclipse/rdf4j/tree/master/core/repository/api/src/main/resources/org/eclipse/rdf4j/repository/config)
-4. -----
-
-For RDF4J 3.x:
-1. Start by creating an RDF4J repository with RDFS and SPIN inference and Lucene support
-    * Copy repository configuration into the appropriate directory, as described at [[2]](https://rdf4j.eclipse.org/documentation/server-workbench-console/#repository-configuration)
-    * Native store with RDFS+SPIN and Lucene sample configuration is at [[3]](https://github.com/eclipse/rdf4j/blob/master/core/repository/api/src/main/resources/org/eclipse/rdf4j/repository/config/native-spin-rdfs-lucene.ttl)
-2. Upload SPIN rules from `rulesets/rules-termit-spin.ttl` into the repository
-3. There is no need to configure Lucene connectors, it by default indexes all properties in RDF4J
-4. -----

#### Common

TermIt needs the repository to provide some inference. Besides loading the appropriate rulesets (see above), it is also
necessary to load the ontological models into the repository.

5. Upload the following RDF files into the newly created repository:
* `ontology/termit-glosář.ttl`
* `ontology/termit-model.ttl`
* `ontology/sioc-ns.rdf`
* `http://onto.fel.cvut.cz/ontologies/slovník/agendový/popis-dat/model`
* `http://onto.fel.cvut.cz/ontologies/slovník/agendový/popis-dat/glosář`
* `https://www.w3.org/TR/skos-reference/skos.rdf`
@@ -203,4 +178,4 @@ TERMIT_SECURITY_PROVIDER=oidc
TermIt will automatically configure its security accordingly
(it is using Spring's [`ConditionalOnProperty`](https://www.baeldung.com/spring-conditionalonproperty)).

-**Note that termit-ui needs to be configured for mathcing authentication mode.**
+**Note that termit-ui needs to be configured for matching authentication mode.**
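
For illustration, the sketch below shows the `ConditionalOnProperty`-based switch between security configurations. It assumes that `TERMIT_SECURITY_PROVIDER` maps to a `termit.security.provider` property through Spring Boot's relaxed binding; the configuration class names are hypothetical, not TermIt's actual classes.

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Configuration;

// Hypothetical class names; only the property-driven selection mechanism is the point.
@Configuration
@ConditionalOnProperty(prefix = "termit.security", name = "provider", havingValue = "oidc")
class OidcSecurityConfigSketch {
    // OIDC-specific setup (e.g., JWT resource server pointing at the external provider) would go here.
}

@Configuration
@ConditionalOnProperty(prefix = "termit.security", name = "provider", havingValue = "internal",
        matchIfMissing = true)
class InternalSecurityConfigSketch {
    // Default internal (application-managed accounts) setup would go here.
}
```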
4 changes: 2 additions & 2 deletions pom.xml
@@ -11,7 +11,7 @@
</parent>

<artifactId>termit</artifactId>
-    <version>3.2.0</version>
+    <version>3.3.0</version>
<name>TermIt</name>
<description>Terminology manager based on Semantic Web technologies.</description>
<packaging>${packaging}</packaging>
@@ -394,7 +394,7 @@
</build>
</profile>

-    <!-- Profiles for storages. Important for correct full text search functionality -->
+    <!-- Profile for GraphDB storage with Lucene connectors. Important for correct full text search functionality -->
<profile>
<id>graphdb</id>
<properties>
src/main/java/cz/cvut/kbss/termit/event/BeforeAssetDeleteEvent.java
@@ -0,0 +1,19 @@
package cz.cvut.kbss.termit.event;

import cz.cvut.kbss.termit.model.Asset;
import org.springframework.context.ApplicationEvent;

/**
 * Event published before an asset is deleted.
 */
public class BeforeAssetDeleteEvent extends ApplicationEvent {

    final Asset<?> asset;

    public BeforeAssetDeleteEvent(Object source, Asset<?> asset) {
        super(source);
        this.asset = asset;
    }

    public Asset<?> getAsset() {
        return asset;
    }
}
src/main/java/cz/cvut/kbss/termit/model/changetracking/DeleteChangeRecord.java
@@ -0,0 +1,84 @@
package cz.cvut.kbss.termit.model.changetracking;

import cz.cvut.kbss.jopa.model.MultilingualString;
import cz.cvut.kbss.jopa.model.annotations.OWLAnnotationProperty;
import cz.cvut.kbss.jopa.model.annotations.OWLClass;
import cz.cvut.kbss.jopa.model.annotations.ParticipationConstraints;
import cz.cvut.kbss.jopa.vocabulary.RDFS;
import cz.cvut.kbss.termit.model.Asset;
import cz.cvut.kbss.termit.util.Vocabulary;
import jakarta.annotation.Nonnull;

import java.util.Objects;

/**
 * Represents a record of asset deletion.
 */
@OWLClass(iri = Vocabulary.s_c_smazani_entity)
public class DeleteChangeRecord extends AbstractChangeRecord {

    @ParticipationConstraints(nonEmpty = true)
    @OWLAnnotationProperty(iri = RDFS.LABEL)
    private MultilingualString label;

    /**
     * Creates a new instance.
     *
     * @param changedEntity the changed asset
     * @throws IllegalArgumentException If the label type is not String or MultilingualString
     */
    public DeleteChangeRecord(Asset<?> changedEntity) {
        super(changedEntity);

        if (changedEntity.getLabel() instanceof String stringLabel) {
            this.label = MultilingualString.create(stringLabel, null);
        } else if (changedEntity.getLabel() instanceof MultilingualString multilingualLabel) {
            this.label = multilingualLabel;
        } else {
            throw new IllegalArgumentException("Unsupported label type: " + changedEntity.getLabel().getClass());
        }
    }

    public DeleteChangeRecord() {
        super();
    }

    public MultilingualString getLabel() {
        return label;
    }

    public void setLabel(MultilingualString label) {
        this.label = label;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof DeleteChangeRecord that)) {
            return false;
        }
        if (!super.equals(o)) {
            return false;
        }
        return Objects.equals(label, that.label);
    }

    @Override
    public String toString() {
        return "DeleteChangeRecord{" +
                super.toString() +
                ", label=" + label +
                '}';
    }

    @Override
    public int compareTo(@Nonnull AbstractChangeRecord o) {
        if (o instanceof UpdateChangeRecord) {
            return 1;
        }
        if (o instanceof PersistChangeRecord) {
            return 1;
        }
        return super.compareTo(o);
    }
}
src/main/java/cz/cvut/kbss/termit/model/changetracking/PersistChangeRecord.java
@@ -42,6 +42,9 @@ public int compareTo(@Nonnull AbstractChangeRecord o) {
         if (o instanceof UpdateChangeRecord) {
             return -1;
         }
+        if (o instanceof DeleteChangeRecord) {
+            return -1;
+        }
         return super.compareTo(o);
     }
 }
src/main/java/cz/cvut/kbss/termit/model/changetracking/UpdateChangeRecord.java
@@ -105,6 +105,9 @@ public int compareTo(@Nonnull AbstractChangeRecord o) {
         if (o instanceof PersistChangeRecord) {
             return 1;
         }
+        if (o instanceof DeleteChangeRecord) {
+            return -1;
+        }
         return super.compareTo(o);
     }
 }
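
Taken together, the three `compareTo` implementations order change records of different types as persist before update before delete. A minimal sketch of the effect, assuming public no-arg constructors (required for JOPA entities) and unset timestamps so that only the type-based branches above decide:

```java
import cz.cvut.kbss.termit.model.changetracking.AbstractChangeRecord;
import cz.cvut.kbss.termit.model.changetracking.DeleteChangeRecord;
import cz.cvut.kbss.termit.model.changetracking.PersistChangeRecord;
import cz.cvut.kbss.termit.model.changetracking.UpdateChangeRecord;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ChangeRecordOrderingSketch {
    public static void main(String[] args) {
        // With one record of each type, every comparison is cross-type, so the
        // instanceof branches decide and super.compareTo (timestamps) is never reached.
        List<AbstractChangeRecord> records = new ArrayList<>(List.of(
                new DeleteChangeRecord(), new PersistChangeRecord(), new UpdateChangeRecord()));
        Collections.sort(records);
        // Prints: PersistChangeRecord, UpdateChangeRecord, DeleteChangeRecord
        records.forEach(r -> System.out.println(r.getClass().getSimpleName()));
    }
}
```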
src/main/java/cz/cvut/kbss/termit/persistence/dao/BaseAssetDao.java
@@ -21,6 +21,7 @@
 import cz.cvut.kbss.termit.dto.RecentlyCommentedAsset;
 import cz.cvut.kbss.termit.event.AssetPersistEvent;
 import cz.cvut.kbss.termit.event.AssetUpdateEvent;
+import cz.cvut.kbss.termit.event.BeforeAssetDeleteEvent;
 import cz.cvut.kbss.termit.exception.PersistenceException;
 import cz.cvut.kbss.termit.model.Asset;
 import cz.cvut.kbss.termit.model.User;
@@ -65,6 +66,12 @@ public T update(T entity) {
         return super.update(entity);
     }
 
+    @Override
+    public void remove(T entity) {
+        eventPublisher.publishEvent(new BeforeAssetDeleteEvent(this, entity));
+        super.remove(entity);
+    }
 
     /**
      * Finds unique last commented assets.
      *
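
Because the event is published just before the delete and Spring delivers application events synchronously by default, listeners still see the asset's data. A hypothetical listener sketch connecting this to the new `DeleteChangeRecord`; TermIt's actual change-tracking wiring may differ:

```java
import cz.cvut.kbss.termit.event.BeforeAssetDeleteEvent;
import cz.cvut.kbss.termit.model.changetracking.DeleteChangeRecord;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Hypothetical component: illustrates consuming the event, not TermIt's real listener.
@Component
public class AssetDeletionRecorderSketch {

    @EventListener
    public void onBeforeAssetDelete(BeforeAssetDeleteEvent event) {
        // The asset is still available here, so its (possibly multilingual) label
        // can be captured for the deletion record.
        DeleteChangeRecord record = new DeleteChangeRecord(event.getAsset());
        // A real implementation would also set author/timestamp and persist the record.
        System.out.println("Recording deletion of " + record.getLabel());
    }
}
```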