Merge branch 'main' into feat_opentelemetry-traces
develop7 authored Mar 12, 2024
2 parents 109dd88 + 00f5780 commit 3db0495
Showing 15 changed files with 119 additions and 48 deletions.
2 changes: 1 addition & 1 deletion .github/scripts/arm/docker-env/Dockerfile
@@ -1,6 +1,6 @@
# PostgREST docker hub image

FROM ubuntu:focal@sha256:bb1c41682308d7040f74d103022816d41c50d7b0c89e9d706a74b4e548636e54 AS postgrest
FROM ubuntu:focal@sha256:80ef4a44043dec4490506e6cc4289eeda2d106a70148b74b5ae91ee670e9c35d AS postgrest

RUN apt-get update -y \
&& apt install -y --no-install-recommends libpq-dev zlib1g-dev jq gcc libnuma-dev \
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -12,6 +12,7 @@ This project adheres to [Semantic Versioning](http://semver.org/).
- #3061, Apply all function settings as transaction-scoped settings - @taimoorzaeem
- #3171, #3046, Log schema cache stats to stderr - @steve-chavez
- #3210, Dump schema cache through admin API - @taimoorzaeem
- #2676, Performance improvement on bulk json inserts, around 10% increase on requests per second by removing `json_typeof` from write queries - @steve-chavez

### Fixed

@@ -21,6 +22,7 @@ This project adheres to [Semantic Versioning](http://semver.org/).
- #3224, Return status code 406 for non-accepted media type instead of code 415 - @wolfgangwalther
- #3160, Fix using select= query parameter for custom media type handlers - @wolfgangwalther
- #3237, Dump media handlers and timezones with --dump-schema - @wolfgangwalther
- #3323, Don't hide error on LISTEN channel failure - @steve-chavez

### Deprecated

10 changes: 5 additions & 5 deletions docs/references/configuration.rst
@@ -121,8 +121,8 @@ It's possible to reload PostgREST's configuration without restarting the server.

.. _config_reloading_signal:

Reload with signal
------------------
Configuration Reload with signal
--------------------------------

To reload the configuration via signal, send a SIGUSR2 signal to the server process.

@@ -132,8 +132,8 @@ To reload the configuration via signal, send a SIGUSR2 signal to the server process.
.. _config_reloading_notify:

Reload with NOTIFY
------------------
Configuration Reload with NOTIFY
--------------------------------

To reload the configuration from within the database, you can use a NOTIFY command.
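With the default `db-channel` of `pgrst`, this boils down to the following — a minimal sketch, assuming the default channel name:

```sql
NOTIFY pgrst, 'reload config';
```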

@@ -230,7 +230,7 @@ db-channel
**In-Database** `n/a`
=============== =======================

The name of the notification channel that PostgREST uses for :ref:`schema_reloading` and configuration reloading.
The name of the notification channel that PostgREST uses for :ref:`schema_reloading_notify` and :ref:`config_reloading_notify`.

.. _db-channel-enabled:

4 changes: 2 additions & 2 deletions docs/references/schema_cache.rst
@@ -63,8 +63,8 @@ There’s no downtime when reloading the schema cache. The reloading will happen

.. _schema_reloading_notify:

Reloading with NOTIFY
~~~~~~~~~~~~~~~~~~~~~
Schema Cache Reloading with NOTIFY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PostgREST also allows you to reload its schema cache through PostgreSQL `NOTIFY <https://www.postgresql.org/docs/current/sql-notify.html>`_.
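Analogously to the configuration reload — a sketch assuming the default `pgrst` channel; adjust to your `db-channel` setting:

```sql
NOTIFY pgrst, 'reload schema';
```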

7 changes: 3 additions & 4 deletions nix/tools/tests.nix
@@ -91,16 +91,15 @@ let
checkedShellScript
{
name = "postgrest-dump-schema";
docs = "Dump the loaded schema's SchemaCache as a yaml file.";
docs = "Dump the loaded schema's SchemaCache in JSON format.";
workingDir = "/";
withEnv = postgrest.env;
withPath = [ jq ];
}
''
${withTools.withPg} -f test/spec/fixtures.sql \
${withTools.withPg} -f test/spec/fixtures/load.sql \
${cabal-install}/bin/cabal v2-run ${devCabalOptions} --verbose=0 -- \
postgrest --dump-schema \
| ${yq}/bin/yq -y .
postgrest --dump-schema
'';

coverage =
13 changes: 7 additions & 6 deletions src/PostgREST/AppState.hs
@@ -495,15 +495,16 @@ listener appState observer = do
putIsListenerOn appState True
SQL.listen db $ SQL.toPgIdentifier dbChannel
SQL.waitForNotifications handleNotification db
_ ->
die $ "Could not listen for notifications on the " <> dbChannel <> " channel"
Left err -> do
observer $ DBListenerFail dbChannel err
exitFailure
where
handleFinally _ False _ = do
observer DBListenerFailNoRecoverObs
handleFinally dbChannel False err = do
observer $ DBListenerFailNoRecoverObs dbChannel err
killThread (getMainThreadId appState)
handleFinally dbChannel True _ = do
handleFinally dbChannel True err = do
-- if the thread dies, we try to recover
observer $ DBListenerFailRecoverObs dbChannel
observer $ DBListenerFailRecoverObs dbChannel err
putIsListenerOn appState False
-- assume the pool connection was also lost, call the connection worker
connectionWorker appState
20 changes: 14 additions & 6 deletions src/PostgREST/Observation.hs
@@ -10,6 +10,7 @@ module PostgREST.Observation

import qualified Data.ByteString.Lazy as LBS
import qualified Data.Text.Encoding as T
import qualified Hasql.Connection as SQL
import qualified Hasql.Pool as SQL
import qualified Network.Socket as NS
import Numeric (showFFloat)
@@ -36,8 +37,9 @@ data Observation
| ConnectionRetryObs Int
| ConnectionPgVersionErrorObs SQL.UsageError
| DBListenerStart Text
| DBListenerFailNoRecoverObs
| DBListenerFailRecoverObs Text
| DBListenerFail Text SQL.ConnectionError
| DBListenerFailNoRecoverObs Text (Either SomeException ())
| DBListenerFailRecoverObs Text (Either SomeException ())
| ConfigReadErrorObs
| ConfigReadErrorFatalObs SQL.UsageError Text
| ConfigReadErrorNotFatalObs SQL.UsageError
@@ -81,10 +83,12 @@ observationMessage = \case
jsonMessage usageErr
DBListenerStart channel -> do
"Listening for notifications on the " <> channel <> " channel"
DBListenerFailNoRecoverObs ->
"Automatic recovery disabled, exiting."
DBListenerFailRecoverObs channel ->
"Retrying listening for notifications on the " <> channel <> " channel.."
DBListenerFail channel err -> do
"Could not listen for notifications on the " <> channel <> " channel. " <> show err
DBListenerFailNoRecoverObs channel err ->
showListenerError err <> ". Automatic recovery disabled on the " <> channel <> " channel"
DBListenerFailRecoverObs channel err ->
showListenerError err <> ". Retrying listening for notifications on the " <> channel <> " channel.."
ConfigReadErrorObs ->
"An error ocurred when trying to query database settings for the config parameters"
ConfigReadErrorFatalObs usageErr hint ->
@@ -106,3 +110,7 @@ observationMessage = \case
showMillis x = toS $ showFFloat (Just 1) (x * 1000) ""

jsonMessage err = T.decodeUtf8 . LBS.toStrict . Error.errorPayload $ Error.PgError False err

showListenerError :: Either SomeException () -> Text
showListenerError (Left e) = show e
showListenerError (Right _) = "Failed getting notifications" -- this should not happen as the listener will never finish with a Right result
31 changes: 17 additions & 14 deletions src/PostgREST/Query/SqlFragment.hs
@@ -300,36 +300,39 @@ fromJsonBodyF :: Maybe LBS.ByteString -> [CoercibleField] -> Bool -> Bool -> Bool
fromJsonBodyF body fields includeSelect includeLimitOne includeDefaults =
(if includeSelect then "SELECT " <> namedCols <> " " else mempty) <>
"FROM (SELECT " <> jsonPlaceHolder <> " AS json_data) pgrst_payload, " <>
-- convert a json object into a json array, this way we can use json_to_recordset for all json payloads
-- Otherwise we'd have to use json_to_record for json objects and json_to_recordset for json arrays
-- We do this in SQL to avoid processing the JSON in application code
"LATERAL (SELECT CASE WHEN " <> jsonTypeofF <> "(pgrst_payload.json_data) = 'array' THEN pgrst_payload.json_data ELSE " <> jsonBuildArrayF <> "(pgrst_payload.json_data) END AS val) pgrst_uniform_json, " <>
(if includeDefaults
then "LATERAL (SELECT jsonb_agg(jsonb_build_object(" <> defsJsonb <> ") || elem) AS val from jsonb_array_elements(pgrst_uniform_json.val) elem) pgrst_json_defs, "
then if isJsonObject
then "LATERAL (SELECT " <> defsJsonb <> " || pgrst_payload.json_data AS val) pgrst_json_defs, "
else "LATERAL (SELECT jsonb_agg(" <> defsJsonb <> " || elem) AS val from jsonb_array_elements(pgrst_payload.json_data) elem) pgrst_json_defs, "
else mempty) <>
"LATERAL (SELECT " <> parsedCols <> " FROM " <>
(if null fields
-- When we are inserting no columns (e.g. using default values), we can't use our ordinary `json_to_recordset`
-- because it can't extract records with no columns (there's no valid syntax for the `AS (colName colType,...)`
-- part). But we still need to ensure as many rows are created as there are array elements.
then SQL.sql $ jsonArrayElementsF <> "(" <> finalBodyF <> ") _ "
(if null fields -- when json keys are empty, e.g. when payload is `{}` or `[{}, {}]`
then SQL.sql $
if isJsonObject
then "(values(1)) _ " -- only 1 row for an empty json object '{}'
else jsonArrayElementsF <> "(" <> finalBodyF <> ") _ " -- extract rows of a json array of empty objects `[{}, {}]`
else jsonToRecordsetF <> "(" <> SQL.sql finalBodyF <> ") AS _(" <> typedCols <> ") " <> if includeLimitOne then "LIMIT 1" else mempty
) <>
") pgrst_body "
where
namedCols = intercalateSnippet ", " $ fromQi . QualifiedIdentifier "pgrst_body" . cfName <$> fields
parsedCols = intercalateSnippet ", " $ pgFmtCoerceNamed <$> fields
typedCols = intercalateSnippet ", " $ pgFmtIdent . cfName <> const " " <> SQL.sql . encodeUtf8 . cfIRType <$> fields
defsJsonb = SQL.sql $ BS.intercalate "," fieldsWDefaults
defsJsonb = SQL.sql $ "jsonb_build_object(" <> BS.intercalate "," fieldsWDefaults <> ")"
fieldsWDefaults = mapMaybe (\case
CoercibleField{cfName=nam, cfDefault=Just def} -> Just $ encodeUtf8 (pgFmtLit nam <> ", " <> def)
CoercibleField{cfDefault=Nothing} -> Nothing
) fields
(finalBodyF, jsonTypeofF, jsonBuildArrayF, jsonArrayElementsF, jsonToRecordsetF) =
(finalBodyF, jsonArrayElementsF, jsonToRecordsetF) =
if includeDefaults
then ("pgrst_json_defs.val", "jsonb_typeof", "jsonb_build_array", "jsonb_array_elements", "jsonb_to_recordset")
else ("pgrst_uniform_json.val", "json_typeof", "json_build_array", "json_array_elements", "json_to_recordset")
then ("pgrst_json_defs.val", "jsonb_array_elements", if isJsonObject then "jsonb_to_record" else "jsonb_to_recordset")
else ("pgrst_payload.json_data", "json_array_elements", if isJsonObject then "json_to_record" else "json_to_recordset")
jsonPlaceHolder = SQL.encoderAndParam (HE.nullable $ if includeDefaults then HE.jsonbLazyBytes else HE.jsonLazyBytes) body
isJsonObject = -- light validation as pg's json_to_record(set) already validates that the body is valid JSON. We just need to know whether the body looks like an object or not.
let
insignificantWhitespace = [32,9,10,13] -- " \t\n\r" per https://datatracker.ietf.org/doc/html/rfc8259#section-2
in
LBS.take 1 (LBS.dropWhile (`elem` insignificantWhitespace) (fromMaybe mempty body)) == "{"

pgFmtOrderTerm :: QualifiedIdentifier -> CoercibleOrderTerm -> SQL.Snippet
pgFmtOrderTerm qi ot =
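The `test/pgbench/2676` fixtures below show the full before/after queries. Distilled to a standalone sketch (hypothetical single-column payloads, not the generated queries themselves): the old code normalized every payload to an array via `json_typeof`/`json_build_array` and always used `json_to_recordset`, while the new code picks the record function based on the payload's first non-whitespace byte:

```sql
-- Old shape: normalize objects into arrays, then always json_to_recordset
SELECT * FROM json_to_recordset(
  CASE WHEN json_typeof('{"id": 1}'::json) = 'array'
       THEN '{"id": 1}'::json
       ELSE json_build_array('{"id": 1}'::json)
  END
) AS _("id" bigint);

-- New shape: an object goes straight to json_to_record,
-- an array to json_to_recordset
SELECT * FROM json_to_record('{"id": 1}'::json) AS _("id" bigint);
SELECT * FROM json_to_recordset('[{"id": 1}, {"id": 2}]'::json) AS _("id" bigint);

-- With includeDefaults, defaults are likewise merged per shape using the
-- jsonb || operator (hypothetical default for "name"):
SELECT jsonb_build_object('name', 'unnamed') || '{"id": 1}'::jsonb AS val;
-- => {"id": 1, "name": "unnamed"}
```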
12 changes: 6 additions & 6 deletions test/memory/memory-tests.sh
@@ -106,13 +106,13 @@ jsonKeyTest "1M" "POST" "/rpc/leak?columns=blob" "27M"
jsonKeyTest "1M" "POST" "/leak?columns=blob" "20M"
jsonKeyTest "1M" "PATCH" "/leak?id=eq.1&columns=blob" "20M"

jsonKeyTest "10M" "POST" "/rpc/leak?columns=blob" "44M"
jsonKeyTest "10M" "POST" "/leak?columns=blob" "44M"
jsonKeyTest "10M" "PATCH" "/leak?id=eq.1&columns=blob" "44M"
jsonKeyTest "10M" "POST" "/rpc/leak?columns=blob" "32M"
jsonKeyTest "10M" "POST" "/leak?columns=blob" "32M"
jsonKeyTest "10M" "PATCH" "/leak?id=eq.1&columns=blob" "32M"

jsonKeyTest "50M" "POST" "/rpc/leak?columns=blob" "172M"
jsonKeyTest "50M" "POST" "/leak?columns=blob" "172M"
jsonKeyTest "50M" "PATCH" "/leak?id=eq.1&columns=blob" "172M"
jsonKeyTest "50M" "POST" "/rpc/leak?columns=blob" "72M"
jsonKeyTest "50M" "POST" "/leak?columns=blob" "72M"
jsonKeyTest "50M" "PATCH" "/leak?id=eq.1&columns=blob" "72M"

postJsonArrayTest "1000" "/perf_articles?columns=id,body" "20M"
postJsonArrayTest "10000" "/perf_articles?columns=id,body" "20M"
16 changes: 16 additions & 0 deletions test/pgbench/2676/new.sql
@@ -0,0 +1,16 @@
WITH pgrst_source AS (
INSERT INTO "test"."complex_items"("arr_data", "field-with_sep", "id", "name")
SELECT "pgrst_body"."arr_data", "pgrst_body"."field-with_sep", "pgrst_body"."id", "pgrst_body"."name"
FROM (SELECT '[{"id": 4, "name": "Vier"}, {"id": 5, "name": "Funf", "arr_data": null}, {"id": 6, "name": "Sechs", "arr_data": [1, 2, 3], "field-with_sep": 6}]'::json AS json_data) pgrst_payload,
LATERAL (SELECT "arr_data", "field-with_sep", "id", "name" FROM json_to_recordset(pgrst_payload.json_data) AS _("arr_data" integer[], "field-with_sep" integer, "id" bigint, "name" text) ) pgrst_body
RETURNING "test"."complex_items".*
)
SELECT
'' AS total_result_set,
pg_catalog.count(_postgrest_t) AS page_total,
array[]::text[] AS header,
coalesce(json_agg(_postgrest_t), '[]') AS body,
nullif(current_setting('response.headers', true), '') AS response_headers,
nullif(current_setting('response.status', true), '') AS response_status,
'' AS response_inserted
FROM (SELECT "complex_items".* FROM "pgrst_source" AS "complex_items") _postgrest_t;
17 changes: 17 additions & 0 deletions test/pgbench/2676/old.sql
@@ -0,0 +1,17 @@
WITH pgrst_source AS (
INSERT INTO "test"."complex_items"("arr_data", "field-with_sep", "id", "name")
SELECT "pgrst_body"."arr_data", "pgrst_body"."field-with_sep", "pgrst_body"."id", "pgrst_body"."name"
FROM (SELECT '[{"id": 4, "name": "Vier"}, {"id": 5, "name": "Funf", "arr_data": null}, {"id": 6, "name": "Sechs", "arr_data": [1, 2, 3], "field-with_sep": 6}]'::json AS json_data) pgrst_payload,
LATERAL (SELECT CASE WHEN json_typeof(pgrst_payload.json_data) = 'array' THEN pgrst_payload.json_data ELSE json_build_array(pgrst_payload.json_data) END AS val) pgrst_uniform_json,
LATERAL (SELECT "arr_data", "field-with_sep", "id", "name" FROM json_to_recordset(pgrst_uniform_json.val) AS _("arr_data" integer[], "field-with_sep" integer, "id" bigint, "name" text) ) pgrst_body
RETURNING "test"."complex_items".*
)
SELECT
'' AS total_result_set,
pg_catalog.count(_postgrest_t) AS page_total,
array[]::text[] AS header,
coalesce(json_agg(_postgrest_t), '[]') AS body,
nullif(current_setting('response.headers', true), '') AS response_headers,
nullif(current_setting('response.status', true), '') AS response_status,
'' AS response_inserted
FROM (SELECT "complex_items".* FROM "pgrst_source" AS "complex_items") _postgrest_t;
4 changes: 2 additions & 2 deletions test/pgbench/README.md
@@ -3,9 +3,9 @@
Can be used as:

```
postgrest-with-postgresql-15 -f test/pgbench/fixtures.sql pgbench -n -T 10 -f test/pgbench/1567/old.sql
postgrest-with-postgresql-15 -f test/pgbench/fixtures.sql pgbench -U postgres -n -T 10 -f test/pgbench/1567/old.sql
postgrest-with-postgresql-15 -f test/pgbench/fixtures.sql pgbench -n -T 10 -f test/pgbench/1567/new.sql
postgrest-with-postgresql-15 -f test/pgbench/fixtures.sql pgbench -U postgres -n -T 10 -f test/pgbench/1567/new.sql
```

## Directory structure
16 changes: 16 additions & 0 deletions test/spec/Feature/Query/InsertSpec.hs
@@ -400,6 +400,22 @@ spec actualPgVersion = do
`shouldRespondWith` [json|[{ id: 20 }]|]
{ matchStatus = 201 }

context "insignificant whitespace" $ do
it "ignores it and successfuly inserts with json payload" $ do
request methodPost "/json_table"
[("Prefer", "return=representation")]
"\t \n \r { \"data\": { \"foo\":\"bar\" } }\t \n \r "
`shouldRespondWith` [json|[{"data":{"foo":"bar"}}]|]
{ matchStatus = 201
}

request methodPost "/json_table"
[("Prefer", "return=representation")]
"\t \n \r [{ \"data\": { \"foo\":\"bar\" } }, \t \n \r {\"data\": 34}]\t \n \r "
`shouldRespondWith` [json|[{"data":{"foo":"bar"}}, {"data":34}]|]
{ matchStatus = 201
}

-- https://github.com/PostgREST/postgrest/issues/2861
context "bit and char columns with length" $ do
it "should insert to a bit column with length" $
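The whitespace exercised by these tests is RFC 8259's insignificant set (space, tab, line feed, carriage return) — the same bytes `isJsonObject` skips before peeking at the first character. PostgreSQL's own JSON parser tolerates it as well, so the light check loses nothing; a quick sketch:

```sql
-- Surrounding insignificant whitespace is still valid JSON input:
SELECT E'\t \n \r {"id": 1}\t \n \r '::json ->> 'id';  -- => 1
```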
4 changes: 2 additions & 2 deletions test/spec/Feature/Query/PlanSpec.hs
@@ -152,7 +152,7 @@ spec actualPgVersion = do
liftIO $ do
resHeaders `shouldSatisfy` elem ("Content-Type", "application/vnd.pgrst.plan+json; for=\"application/json\"; charset=utf-8")
resStatus `shouldBe` Status { statusCode = 200, statusMessage="OK" }
totalCost `shouldBe` 3.27
totalCost `shouldBe` 0.06

it "outputs the total cost for an update" $ do
r <- request methodPatch "/projects?id=eq.3"
@@ -165,7 +165,7 @@
liftIO $ do
resHeaders `shouldSatisfy` elem ("Content-Type", "application/vnd.pgrst.plan+json; for=\"application/json\"; charset=utf-8")
resStatus `shouldBe` Status { statusCode = 200, statusMessage="OK" }
totalCost `shouldBe` 12.45
totalCost `shouldBe` 8.23

it "outputs the total cost for a delete" $ do
r <- request methodDelete "/projects?id=in.(1,2,3)"
9 changes: 9 additions & 0 deletions test/spec/Feature/Query/UpdateSpec.hs
@@ -58,6 +58,15 @@ spec actualPgVersion = do
matchHeaders = [matchContentTypeJson]
}

context "insignificant whitespace" $ do
it "ignores it and successfuly updates with json payload" $ do
request methodPatch "/items?id=eq.1"
[("Prefer", "return=representation")]
"\t \n \r { \"id\": 99 } \t \n \r "
`shouldRespondWith` [json|[{"id":99}]|]
{ matchStatus = 200
}

context "in a nonempty table" $ do
it "can update a single item" $ do
patch "/items?id=eq.2"
