Semantic Services Gateway

The Ontotext Semantic Services use Kong as an API gateway. It handles service routing, JWT validation, throttling, and more.

The goal is to place all Semantic Services and applications behind an API gateway.

Administration

Kong exposes an administrative REST API on port 8001 by default.

Alternatively, instead of manually executing REST requests, you can use Konga, an open-source solution for managing multiple Kong instances through a web interface.
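
For example, once Kong is running, you can list the configured services and routes through the Admin API. This is only a sketch, assuming the Admin API is reachable on localhost:8001:

# List the configured services
curl -s http://localhost:8001/services

# List the configured routes
curl -s http://localhost:8001/routes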

Note

This port should not be exposed to the outside world.

Declarative Configuration

Kong can be deployed in two modes: with or without a database. The Ontotext Semantic Services use the latter, relying on Kong’s declarative configuration.

The declarative configuration file kong.yaml enables proxying of the Semantic Services components, along with JWT validation for the Semantic Objects.

The configuration registers the following upstream services (a sketch of the corresponding kong.yaml entries follows the list):

  • Semantic Objects on http://semantic-objects:8080
  • GraphDB on http://graphdb:7200
  • Workbench on http://workbench:3000
  • FusionAuth on http://fusionauth:9011
  • Grafana on http://grafana:3000
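
In kong.yaml, these are declared as entries under services, each with a route that Kong matches against incoming requests. The sketch below is illustrative only; the route paths and the _format_version value are assumptions and may differ from the shipped configuration:

_format_version: "2.1"

services:
  - name: semantic-objects
    url: http://semantic-objects:8080
    routes:
      - name: semantic-objects
        paths:
          - /rest

  - name: graphdb
    url: http://graphdb:7200
    routes:
      - name: graphdb
        paths:
          - /graphdb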

This configuration also manages the generation of correlation IDs on all requests made to Kong. This ensures traceability across all Semantic Services.
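
This is typically achieved with Kong’s bundled correlation-id plugin applied globally. A minimal sketch, where the header name and generator are assumptions rather than the shipped values:

plugins:
  - name: correlation-id
    config:
      header_name: X-Request-ID
      generator: uuid
      echo_downstream: true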

Before deploying Kong, make sure to update the following fields (see the sketch after this list):

  • jwt_secrets[0].key: the value validated against the iss claim of each JWT. Ideally, this should be the domain on which the Semantic Services are deployed, and it must match FusionAuth’s issuer.
  • jwt_secrets[0].secret: the secret key used to sign and validate JWTs.
  • Make sure each services[].url is accessible from Kong’s container network and update it if needed. Avoid using localhost, as this will loop back to Kong’s own container.
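
A rough sketch of the consumer and JWT credential entries in kong.yaml (the consumer name, issuer, and secret below are placeholders, not the shipped values):

consumers:
  - username: semantic-services

jwt_secrets:
  - consumer: semantic-services
    key: https://services.example.com
    secret: replace-with-the-jwt-signing-secret
    algorithm: HS256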

Note

See Kong’s documentation for DB-less mode with declarative configuration.

Deployment

This docker-compose.yaml can be used as an example of how to deploy Kong.

Note

Make sure there is a config folder containing the kong.yaml configuration next to the Docker Compose YAML so that it can be mounted into Kong’s container, or update the Docker Compose file with the correct location of the configuration file.
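
A minimal sketch of what such a Compose file might contain; the image tag, paths, and port mappings are illustrative and may differ from the shipped example:

version: "3.8"

services:
  kong:
    image: kong:2.8
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /kong/config/kong.yaml
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_ADMIN_LISTEN: 127.0.0.1:8001
    volumes:
      - ./config:/kong/config:ro
    ports:
      - "8000:8000"

Keeping KONG_ADMIN_LISTEN on 127.0.0.1 and not publishing port 8001 follows the earlier note about not exposing the administrative API.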

To deploy Kong with Docker Compose, execute the following shell command:

docker-compose up -d
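
Once the containers are up, you can verify that Kong started successfully. This is a sketch assuming the service is named kong in the Compose file:

# Check that the container is running
docker-compose ps

# Run Kong's built-in health check inside the container
docker-compose exec kong kong health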

Warning

This example, however, does not connect to any services. Kong should be deployed together with the Semantic Services in a Docker network where they can be reached. Alternatively, expose the services and make them accessible outside Docker.
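
For example, if the Semantic Services already run in a named Docker network, Kong’s Compose file can join it. The network name semantic-services below is an assumption:

services:
  kong:
    # ... the configuration shown above
    networks:
      - semantic-services

networks:
  semantic-services:
    external: true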

Note

For installing and deploying the full Ontotext Semantic Services, including security and monitoring, see the Installation section for available deployment scenarios.

Troubleshooting

The upstream server is timing out

If Kong times out before the upstream request finishes, the timeout values (60 seconds by default) need to be increased for the affected service.

For example, to update the Semantic Objects timeouts to 300 seconds, modify kong.yaml like this:

services:
  - name: semantic-objects
    url: http://semantic-objects:8080
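    # Kong timeout values are specified in milliseconds (300000 ms = 300 s)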
    connect_timeout: 300000
    read_timeout: 300000
    write_timeout: 300000