Platform Gateway
Ontotext Platform uses Kong as an API gateway. It performs service routing, JWT validation, throttling, and more.
The goal is to place all Platform services and applications behind an API gateway.
Administration
Kong exposes an administrative REST API on port 8001 by default.
Alternatively, instead of manually executing REST requests, you can use Konga, an open-source solution for managing multiple Kong instances through a web interface.
Note
This port should not be exposed to the outside world.
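One way to keep the Admin API private is to bind it to the loopback interface only. A minimal sketch using Kong's environment-variable configuration (the exact interface and port should be adjusted to your deployment):

```yaml
# Bind Kong's Admin API to loopback so it is unreachable
# from outside the host (or container) it runs on
KONG_ADMIN_LISTEN: "127.0.0.1:8001"
```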
Declarative Configuration
Kong can be deployed in two modes - with and without a database. The Ontotext Platform uses the latter with Kong’s declarative configuration.
The following declarative configuration kong.yaml enables proxying
of the Platform components along with JWT validation on the Semantic Objects Service:
```yaml
_format_version: "1.1"

consumers:
  - username: ontotext-platform-consumer
    custom_id: ontotext-platform-consumer

# Refer to https://docs.konghq.com/hub/kong-inc/jwt/
jwt_secrets:
  - consumer: ontotext-platform-consumer
    # This field is required to validate the iss claim.
    # Each token should carry an iss claim with the same value in order to be let through.
    key: "localhost"
    secret: "XXXXXXXXX"
    algorithm: HS256

plugins:
  # Refer to https://docs.konghq.com/hub/kong-inc/jwt/
  - name: jwt
    service: semantic-objects
    config:
      claims_to_verify: ["exp"]
      key_claim_name: iss
  # Refer to https://docs.konghq.com/hub/kong-inc/correlation-id/
  # This is configured to trigger for all services/routes and to return the ID to the client.
  - name: correlation-id
    config:
      # Match the header used by the Semantic Objects service
      header_name: X-Request-ID
      generator: uuid
      # Make Kong return the header to the clients
      echo_downstream: true

services:
  # Semantic Objects service configuration
  - name: semantic-objects
    url: http://semantic-objects:8080
    connect_timeout: 60000
    read_timeout: 60000
    write_timeout: 60000
    routes:
      - name: graphql
        paths: ["/graphql"]
        methods: ["GET", "POST"]
        strip_path: false
        preserve_host: false
      - name: soml
        paths: ["/soml"]
        methods: ["GET", "POST", "PUT", "DELETE"]
        strip_path: false
        preserve_host: false
      - name: soml-rbac
        paths: ["/soml-rbac"]
        methods: ["GET", "PUT"]
        strip_path: false
        preserve_host: false
      - name: good-to-go
        paths: ["/__gtg"]
        methods: ["GET"]
        strip_path: false
        preserve_host: false
      - name: healthchecks
        paths: ["/__health"]
        methods: ["GET"]
        strip_path: false
        preserve_host: false
      - name: troubleshoot
        paths: ["/__trouble"]
        methods: ["GET"]
        strip_path: false
        preserve_host: false
  # GraphDB
  - name: graphdb
    url: http://graphdb:7200
    connect_timeout: 60000
    read_timeout: 60000
    write_timeout: 60000
    routes:
      - name: graphdb
        paths: ["/graphdb"]
        methods: ["GET", "POST", "PUT", "DELETE"]
        strip_path: true
        preserve_host: false
  # GraphiQL playground
  - name: graphiql
    url: http://graphiql:8080
    connect_timeout: 60000
    read_timeout: 60000
    write_timeout: 60000
    routes:
      - name: graphiql
        paths: ["/graphiql"]
        methods: ["GET", "POST"]
        strip_path: false
        preserve_host: false
  # Security services
  - name: fusionauth
    url: http://fusionauth:9011
    connect_timeout: 60000
    read_timeout: 60000
    write_timeout: 60000
    routes:
      - name: fusionauth
        # FusionAuth is not easily proxied behind a subpath, so we have to route on root level
        paths:
          - /api/login
          - /oauth2
          - /admin
          - /login
          - /logout
          - /setup-wizard
          # Static resources
          - /images
          - /js
          - /fonts
          - /css
          - /ajax
        methods: ["GET", "POST", "PUT", "DELETE"]
        strip_path: false
        preserve_host: false
    plugins:
      # FusionAuth requires X-Forwarded-Port to be set when it is behind a proxy.
      # A post-function is used because request-transformer cannot set X-Forwarded-* headers.
      - name: post-function
        config:
          # Update this port according to how the Platform is deployed
          functions:
            - ngx.var.upstream_x_forwarded_port=8000;
  - name: grafana
    url: http://grafana:3000
    connect_timeout: 60000
    read_timeout: 60000
    write_timeout: 60000
    routes:
      - name: grafana
        paths: ["/grafana"]
        methods: ["GET", "POST", "PUT", "DELETE"]
        strip_path: true
        preserve_host: false
```
The configuration registers the following upstream services:
- Semantic Objects service on http://semantic-objects:8080
- GraphDB on http://graphdb:7200
- GraphiQL playground on http://graphiql:8080
- FusionAuth on http://fusionauth:9011
- Grafana on http://grafana:3000
This configuration also manages the generation of correlation IDs on all requests made to Kong. This ensures traceability across all Platform services.
Before deploying Kong, make sure to update the following fields:
- jwt_secrets[0].key - the consumer iss claim that will be validated for each JWT. Ideally, this should be the domain on which the Platform is deployed and should match FusionAuth's issuer.
- jwt_secrets[0].secret - the secret key used to sign and validate JWTs.
- each services[].url - make sure it is accessible from Kong's container network, and update it if needed. Avoid using localhost, as this will loop back to Kong's own container.
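For local testing, a token that the JWT plugin will accept can be minted with standard tools. A minimal sketch, assuming the key/secret pair from the configuration above (replace the placeholder secret with your real one before use):

```shell
#!/bin/sh
# Mint an HS256 JWT whose iss matches jwt_secrets[0].key ("localhost")
# and whose exp claim lies one hour in the future.
SECRET="XXXXXXXXX"   # must match jwt_secrets[0].secret

# base64url-encode stdin (strip padding and newlines)
b64url() { base64 | tr '+/' '-_' | tr -d '=\n'; }

header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"iss":"localhost","exp":%s}' "$(( $(date +%s) + 3600 ))" | b64url)
signature=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)

TOKEN="$header.$payload.$signature"
echo "$TOKEN"
```

The resulting token is then sent on requests through the gateway as an `Authorization: Bearer $TOKEN` header.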
Note
See Kong’s documentation for DB-less mode with declarative configuration.
Deployment
The following docker-compose.yaml can be used as an example on
how to deploy Kong:
```yaml
version: '3.6'
services:
  kong:
    image: kong:2.0.3-alpine
    restart: always
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: "/etc/kong/kong.yaml"
      KONG_MEM_CACHE_SIZE: "64m"
      KONG_NGINX_WORKER_PROCESSES: "4"
      KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
      KONG_PROXY_ACCESS_LOG: "/dev/stdout"
      KONG_ADMIN_ACCESS_LOG: "/dev/stdout"
      KONG_PROXY_ERROR_LOG: "/dev/stderr"
      KONG_ADMIN_ERROR_LOG: "/dev/stderr"
    ports:
      - 8000:8000
      - 8001:8001
      - 8443:8443
      - 8444:8444
    volumes:
      - "./config/kong.yaml:/etc/kong/kong.yaml"
```
Note
Make sure there is a config folder containing the kong.yaml configuration next to the
Docker compose YAML so it can be mounted in Kong's container, or update the Docker compose file
with the correct configuration file location.
To deploy Kong’s Docker compose, execute the following shell command:
```shell
docker-compose up -d
```
Warning
This example on its own does not connect to any services. Kong should be deployed together with the Platform services in a Docker network where they can be reached. Alternatively, expose the services and make them accessible outside of Docker.
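The shared-network approach can be sketched as a compose fragment; this is illustrative only, and the semantic-objects service definition (including its image name) is a placeholder for however that service is actually deployed:

```yaml
services:
  kong:
    # ...Kong configuration as above...
    networks: [platform]
  semantic-objects:
    image: example/semantic-objects   # hypothetical image name
    networks: [platform]
networks:
  platform: {}
```

With both containers on the platform network, the services[].url entries in kong.yaml can resolve the service names directly.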
Note
For deploying the full Ontotext Platform, including security and monitoring, see the Deployment section for available deployment scenarios.
Monitoring
Health checks
Kong provides a node status endpoint on its Admin API.
Troubleshooting
The upstream server is timing out
If you get an error message saying that Kong times out before the request finishes, the timeout
values (60 seconds, i.e., 60000 milliseconds, by default) need to be increased.
For example, to update the Semantic Objects service timeouts to 300 seconds, modify kong.yaml like this:
```yaml
services:
  - name: semantic-objects
    url: http://semantic-objects:8080
    connect_timeout: 300000
    read_timeout: 300000
    write_timeout: 300000
```