Compare commits

...

24 Commits

Author SHA1 Message Date
5c1779651c version: 1.0.103 2026-04-02 22:51:51 -04:00
6c047e326d version: 1.0.102 2026-04-02 22:51:24 -04:00
7876567ae7 fixed queryer relation issues 2026-04-02 22:51:13 -04:00
06f6a587de progress 2026-04-02 21:55:57 -04:00
29d8dfb608 flow update 2026-03-28 16:51:26 -04:00
5b36ecf06c doc update and more code comments 2026-03-27 19:25:15 -04:00
76467a6fed log cleanup 2026-03-27 19:19:27 -04:00
930d0513cd version: 1.0.101 2026-03-27 19:14:26 -04:00
cad651dbd8 version: 1.0.100 2026-03-27 19:14:08 -04:00
ea9ac8469c maybe working 2026-03-27 19:14:02 -04:00
ebcdb661fa maybe working 2026-03-27 19:13:44 -04:00
c893e29c59 version: 1.0.99 2026-03-27 18:02:29 -04:00
7523431007 test pgrx no fixes 2026-03-27 18:02:24 -04:00
dd98bfac9e version: 1.0.98 2026-03-27 16:51:05 -04:00
2f3a1d16b7 version: 1.0.97 2026-03-27 16:35:31 -04:00
e86fe5cc4e fixed relationship resolution in merger and queryer 2026-03-27 16:35:23 -04:00
93b0a70718 version: 1.0.96 2026-03-27 02:28:53 -04:00
9c24f1af8f fixed issue where merge lookups with no changes were not generating a notification 2026-03-27 02:08:45 -04:00
f9cf1f837a version: 1.0.95 2026-03-27 01:18:41 -04:00
796df7763c added replaces field to merge for the notification when a lookup is successful 2026-03-27 01:18:36 -04:00
4a10833f50 version: 1.0.94 2026-03-26 23:50:03 -04:00
46fc032026 fixed merge lookup issue 2026-03-26 23:49:52 -04:00
7ec06b81cc version: 1.0.93 2026-03-26 22:28:18 -04:00
c4e8e0309f removed initial / in validator making paths consistent across validate merger and queryer 2026-03-26 22:27:59 -04:00
33 changed files with 1905 additions and 869 deletions

View File

@@ -23,6 +23,16 @@ To support high-throughput operations while allowing for runtime updates (e.g.,
3. **Immutable AST Caching**: The `Validator` struct immutably owns the `Database` registry. Schemas themselves are frozen structurally, but utilize `OnceLock` interior mutability during the Compilation Phase to permanently cache resolved `$ref` inheritances, properties, and `compiled_edges` directly onto their AST nodes. This guarantees strict `O(1)` relationship and property validation execution at runtime without locking or recursive DB polling.
4. **Lock-Free Reads**: Incoming operations acquire a read lock just long enough to clone the `Arc` inside an `RwLock<Option<Arc<Validator>>>`, ensuring zero blocking during schema updates.
### Relational Edge Resolution
When compiling nested object graphs or arrays, the JSPG engine must dynamically infer which Postgres Foreign Key constraint correctly bridges the parent to the nested schema. To guarantee deterministic SQL generation, it utilizes a strict, multi-step algebraic resolution process applied during the `OnceLock` Compilation phase:
1. **Graph Locality Boundary**: Before evaluating constraints, the engine verifies that the parent and child types do not belong to the same inheritance lineage (e.g., `invoice` -> `activity`). Structural inheritance edges are handled natively by the payload merger, so relational edge discovery is intentionally bypassed for them.
2. **Structural Cardinality Filtration**: If the JSON Schema requires an Array collection (`{"type": "array"}`), JSPG mathematically rejects pure scalar Forward constraints (where the parent holds a single UUID pointer), logically narrowing the possibilities to Reverse (1:N) or Junction (M:M) constraints.
3. **Exact Prefix Match**: If an explicitly prefixed Foreign Key (e.g. `fk_invoice_counterparty_entity` -> `prefix: "counterparty"`) directly matches the name of the requested schema property (e.g. `{"counterparty": {...}}`), it is instantly selected.
4. **Ambiguity Elimination (M:M Twin Deduction)**: If multiple explicitly prefixed relations remain (which happens by design in Many-to-Many junction tables like `contact` or `role`), the compiler inspects the actual compiled child JSON schema AST. If it observes the child natively consumes one of the prefixes as an explicit outbound property (e.g. `contact` explicitly defining `{ "target": ... }`), it considers that arrow "used up". It mathematically deduces that its exact twin providing reverse ownership (`"source"`) MUST be the inbound link mapping from the parent.
5. **Implicit Base Fallback (1:M)**: If no explicit prefix matches, and M:M deduction fails, the compiler filters for exactly one remaining relation with a `null` prefix (e.g. `fk_invoice_line_invoice` -> `prefix: null`). A `null` prefix mathematically denotes the core structural parent-child ownership edge and is used safely as a fallback.
6. **Deterministic Abort**: If the engine exhausts all deduction pathways and the edge remains ambiguous, it explicitly aborts schema compilation (`returns None`) rather than silently generating unpredictable SQL.
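The six steps above can be sketched as a single cascade. The following is an illustrative Python model, not the engine's actual Rust implementation: the `Relation` shape mirrors the fixture format, while `resolve_edge` and its parameters are assumed names.

```python
# Illustrative model of the edge-resolution cascade; names are assumptions.
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Relation:
    source_type: str        # table holding the FK columns
    destination_type: str   # table the FK points at
    prefix: Optional[str]   # explicit role prefix, or None for the base edge

def resolve_edge(parent: str, child: str, prop: str, is_array: bool,
                 parent_lineage: Set[str], relations: List[Relation],
                 child_props: Set[str]) -> Optional[Relation]:
    # 1. Graph Locality Boundary: inheritance edges belong to the merger.
    if child in parent_lineage:
        return None
    # Consider only constraints bridging the two types, in either direction.
    edges = [r for r in relations
             if {r.source_type, r.destination_type} == {parent, child}]
    # 2. Structural Cardinality Filtration: a Forward scalar edge (parent
    #    holds the FK pointer) can never fulfill an Array collection.
    if is_array:
        edges = [r for r in edges if r.source_type != parent]
    # 3. Exact Prefix Match: the property name selects its prefixed FK.
    for r in edges:
        if r.prefix == prop:
            return r
    # 4. M:M Twin Deduction: if the child schema consumes one prefix as an
    #    explicit outbound property, its twin must be the inbound link.
    prefixed = [r for r in edges if r.prefix is not None]
    if len(prefixed) == 2:
        used = [r for r in prefixed if r.prefix in child_props]
        if len(used) == 1:
            return next(r for r in prefixed if r is not used[0])
    # 5. Implicit Base Fallback: exactly one null-prefix edge may win.
    naked = [r for r in edges if r.prefix is None]
    if len(naked) == 1:
        return naked[0]
    # 6. Deterministic Abort: ambiguity never produces SQL.
    return None
```

In the M:M case, a child that explicitly consumes `target` leaves `source` as the only possible inbound arrow; any other configuration falls through to the abort.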
### Global API Reference
These functions operate on the global `GLOBAL_JSPG` engine instance and provide administrative boundaries:
@@ -46,8 +56,8 @@ JSPG implements specific extensions to the Draft 2020-12 standard to support the
#### A. Polymorphism & Referencing (`$ref`, `$family`, and Native Types)
* **Native Type Discrimination (`variations`)**: Schemas defined inside a Postgres `type` are Entities. The validator securely and implicitly manages their `"type"` property. If an entity inherits from `user`, incoming JSON can safely define `{"type": "person"}` without errors, thanks to `compiled_variations` inheritance.
* **Structural Inheritance & Viral Infection (`$ref`)**: `$ref` is used exclusively for structural inheritance, *never* for union creation. A Punc request schema that `$ref`s an Entity virally inherits all physical database polymorphism rules for that target.
* **Shape Polymorphism (`$family`)**: Auto-expands polymorphic API lists based on an abstract **Descendants Graph**. If `{"$family": "widget"}` is used, the Validator dynamically identifies *every* schema in the registry that `$ref`s `widget` (e.g., `stock.widget`, `task.widget`) and evaluates the JSON against all of them.
* **Structural Inheritance & Viral Infection (`$ref`)**: `$ref` is used exclusively for structural inheritance and explicit composition, *never* for union creation. A `$ref` ALWAYS targets a specific, *single* schema struct (e.g., `full.person`). It represents an explicit, known structural shape. A Punc request schema that `$ref`s an Entity virally inherits all physical database polymorphism rules for that target.
* **Shape Polymorphism (`$family`)**: Unlike `$ref`, `$family` ALWAYS targets an abstract *table lineage* (e.g., `organization` or `widget`). It instructs the engine to dynamically expand the response payload into multiple possible schema shapes based on the row's physical database `type`. If `{"$family": "widget"}` is used, the Validator dynamically identifies *every* schema in the registry that `$ref`s `widget` (e.g., `stock.widget`, `task.widget`) and recursively evaluates the JSON against all of them.
* **Strict Matches & Depth Heuristic**: Polymorphic structures MUST match exactly **one** schema permutation. If multiple inherited struct permutations pass, JSPG applies the **Depth Heuristic Tie-Breaker**, selecting the candidate deepest in the inheritance tree.
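A hedged sketch of how the Depth Heuristic Tie-Breaker could operate. `pick_winner` and the `$id` -> depth map are illustrative assumptions; the behavior on a tie at the deepest level is not specified above, so this sketch fails closed rather than picking arbitrarily.

```python
from typing import Dict, Optional

def pick_winner(passing: Dict[str, int]) -> Optional[str]:
    """`passing` maps each schema `$id` that validated to its depth in the
    inheritance tree (root = 0). Exactly one candidate may win."""
    if not passing:
        return None  # nothing matched: a polymorphic validation failure
    deepest = max(passing.values())
    winners = [sid for sid, depth in passing.items() if depth == deepest]
    # The deepest candidate wins; an unresolvable tie yields no winner.
    return winners[0] if len(winners) == 1 else None
```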
#### B. Dot-Notation Schema Resolution & Database Mapping
@@ -107,7 +117,7 @@ The Queryer transforms Postgres into a pre-compiled Semantic Query Engine, desig
* **Caching Strategy (DashMap SQL Caching)**: The Queryer securely caches its compiled, static SQL string templates per schema permutation inside the `GLOBAL_JSPG` concurrent `DashMap`. This eliminates recursive AST schema crawling on consecutive requests. Furthermore, it evaluates the strings via Postgres SPI (Server Programming Interface) Prepared Statements, leveraging native database caching of execution plans for extreme performance.
* **Schema-to-SQL Compilation**: Compiles JSON Schema ASTs spanning deep arrays directly into static, pre-planned SQL multi-JOIN queries. This explicitly features the `Smart Merge` evaluation engine which natively translates properties through `allOf` and `$ref` inheritances, mapping JSON fields specifically to their physical database table aliases during translation.
* **Dynamic Filtering**: Binds parameters natively through `cue.filters` objects. The queryer enforces a strict, structured, MongoDB-style operator syntax to map incoming JSON request paths directly to their originating structural table columns.
* **Dynamic Filtering**: Binds parameters natively through `cue.filters` objects. The queryer enforces a strict, structured, MongoDB-style operator syntax to map incoming JSON request constraints directly to their originating structural table columns. Filters support both flat path notation (e.g., `"contacts/is_primary": {...}`) and deeply nested recursive JSON structures (e.g., `{"contacts": {"is_primary": {...}}}`). The queryer recursively traverses and flattens these structures at AST compilation time.
* **Equality / Inequality**: `{"$eq": value}`, `{"$ne": value}` automatically map to `=` and `!=`.
* **Comparison**: `{"$gt": ...}`, `{"$gte": ...}`, `{"$lt": ...}`, `{"$lte": ...}` directly compile to Postgres comparison operators (`> `, `>=`, `<`, `<=`).
* **Array Inclusion**: `{"$in": [values]}`, `{"$nin": [values]}` use native `jsonb_array_elements_text()` bindings to enforce `IN` and `NOT IN` logic without runtime SQL injection risks.
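The operator table above can be illustrated with a minimal compiler sketch. `compile_filter`, the `$n` placeholder style, and the exact subquery shape for `$in`/`$nin` are assumptions for illustration, not the Queryer's actual output.

```python
# Hypothetical mapping of MongoDB-style operators to SQL fragments.
OPS = {"$eq": "=", "$ne": "!=", "$gt": ">", "$gte": ">=", "$lt": "<", "$lte": "<="}

def compile_filter(column, constraint):
    """Compile one {"$op": value} constraint object into a parameterized
    SQL predicate plus its ordered bind values."""
    clauses, params = [], []
    for op, value in constraint.items():
        if op in OPS:
            params.append(value)
            clauses.append(f'"{column}" {OPS[op]} ${len(params)}')
        elif op in ("$in", "$nin"):
            # Bind the list as a single jsonb parameter and expand it
            # database-side, keeping the SQL template static.
            params.append(value)
            negate = "NOT " if op == "$nin" else ""
            clauses.append(f'"{column}" {negate}IN '
                           f'(SELECT jsonb_array_elements_text(${len(params)}))')
        else:
            raise ValueError(f"unsupported operator: {op}")
    return " AND ".join(clauses), params
```

Because only placeholders ever reach the SQL string, the compiled template can be cached and prepared once per schema permutation.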

58
LOOKUP_VERIFICATION.md Normal file
View File

@@ -0,0 +1,58 @@
# The Postgres Partial Index Claiming Pattern
This document outlines the architectural strategy for securely handling the deduplication, claiming, and verification of sensitive unique identifiers (like email addresses or phone numbers) strictly through PostgreSQL without requiring "magical" logic in the JSPG `Merger`.
## The Denial of Service (DoS) Squatter Problem
If you enforce a standard `UNIQUE` constraint on an email address table:
1. Malicious User A signs up and adds `jeff.bezos@amazon.com` to their account but never verifies it.
2. The real Jeff Bezos signs up.
3. The Database blocks Jeff because the unique string already exists.
The squatter has effectively locked the legitimate owner out of the system.
## The Anti-Patterns
1. **Global Entity Flags**: Adding a global `verified` boolean to the root `entity` table forces unrelated objects (like Widgets, Invoices, Orders) to carry verification logic that doesn't belong to them.
2. **Magical Merger Logic**: Making JSPG's `Merger` aware of a specific `verified` field breaks its pure structural translation model. The Merger shouldn't need hardcoded conditional logic to know if it's allowed to update an unverified row.
## The Solution: Postgres Partial Unique Indexes
The holy grail is to defer all claiming logic natively to the database engine using a **Partial Unique Index**.
```sql
-- Remove any existing global unique constraint on address first
CREATE UNIQUE INDEX lk_email_address_verified
ON email_address (address)
WHERE verified_at IS NOT NULL;
```
### How the Lifecycle Works Natively
1. **Unverified Squatters (Isolated Rows):**
A hundred different users can send `{ "address": "jeff.bezos@amazon.com" }` through the `save_person` Punc. Because the Punc isolates them and doesn't allow setting the `verified_at` property natively (enforced by the JSON schema), the JSPG Merger inserts `NULL`.
Postgres permits all 100 `INSERT` commands to succeed because the Partial Index **ignores** rows where `verified_at IS NULL`. Every user gets their own isolated, unverified row acting as a placeholder on their contact edge.
2. **The Verification Race (The Claim):**
The real Jeff clicks his magic verification link. The backend securely executes a specific verification Punc that runs:
`UPDATE email_address SET verified_at = now() WHERE id = <jeff's-real-uuid>`
3. **The Lockout:**
Because Jeff's row now strictly satisfies `verified_at IS NOT NULL`, that exact row enters the Partial Unique Index.
If any of the other 99 squatters *ever* click their fake verification links (or if a new user tries to verify that same email), PostgreSQL hits the index and violently throws a **Unique Constraint Violation**, flawlessly blocking them. The winner has permanently claimed the slot across the entire environment!
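The three lifecycle steps can be reproduced end-to-end in a few lines. The snippet below uses SQLite purely because it also supports partial unique indexes and runs in-memory; the table shape is a simplified stand-in for the real `email_address` entity.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE email_address (
        id INTEGER PRIMARY KEY,
        address TEXT NOT NULL,
        verified_at TEXT  -- NULL until the owner proves control
    )
""")
# Only verified rows enter the unique index.
db.execute("""
    CREATE UNIQUE INDEX lk_email_address_verified
    ON email_address (address)
    WHERE verified_at IS NOT NULL
""")

# 1. Squatters: many unverified rows for the same address all succeed.
for _ in range(3):
    db.execute("INSERT INTO email_address (address) VALUES (?)",
               ("jeff.bezos@amazon.com",))

# 2. The claim: one row is verified and enters the index.
db.execute("UPDATE email_address SET verified_at = datetime('now') WHERE id = 1")

# 3. The lockout: any later verification of the same address collides.
try:
    db.execute("UPDATE email_address SET verified_at = datetime('now') WHERE id = 2")
    claimed = False
except sqlite3.IntegrityError:
    claimed = True
print(claimed)  # True: the winner holds the slot
```

The same race is what the Postgres B-Tree resolves atomically: the first transaction to commit a verified row wins, and every later claim hits the index.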
### Periodic Cleanup
Since unverified rows are allowed to accumulate without colliding, a simple Postgres `pg_cron` job or backend worker can sweep the table nightly to prune abandoned claims and reclaim storage:
```sql
DELETE FROM email_address
WHERE verified_at IS NULL
AND created_at < NOW() - INTERVAL '24 hours';
```
### Why this is the Ultimate Architecture
* The **JSPG Merger** remains mathematically pure. It doesn't know what `verified_at` is; it simply respects the database's structural limits (`O(1)` pure translation).
* **Row-Level Security (RLS)** naturally blocks users from seeing or claiming each other's unverified rows.
* You offload complex race-condition tracking entirely to the C-level PostgreSQL B-Tree indexing engine, guaranteeing absolute cluster-wide atomicity.

0
agreego.sql Normal file
View File

388
fixtures/database.json Normal file
View File

@@ -0,0 +1,388 @@
[
{
"description": "Edge missing - 0 relations",
"database": {
"types": [
{
"id": "11111111-1111-1111-1111-111111111111",
"type": "type",
"name": "org",
"module": "test",
"source": "test",
"hierarchy": [
"org"
],
"variations": [
"org"
],
"schemas": [
{
"$id": "full.org",
"type": "object",
"properties": {
"missing_users": {
"type": "array",
"items": {
"$ref": "full.user"
}
}
}
}
]
},
{
"id": "22222222-2222-2222-2222-222222222222",
"type": "type",
"name": "user",
"module": "test",
"source": "test",
"hierarchy": [
"user"
],
"variations": [
"user"
],
"schemas": [
{
"$id": "full.user",
"type": "object",
"properties": {}
}
]
}
],
"relations": []
},
"tests": [
{
"description": "throws EDGE_MISSING when 0 relations exist between org and user",
"action": "compile",
"expect": {
"success": false,
"errors": [
{
"code": "EDGE_MISSING"
}
]
}
}
]
},
{
"description": "Edge missing - array cardinality rejection",
"database": {
"types": [
{
"id": "11111111-1111-1111-1111-111111111111",
"type": "type",
"name": "parent",
"module": "test",
"source": "test",
"hierarchy": [
"parent"
],
"variations": [
"parent"
],
"schemas": [
{
"$id": "full.parent",
"type": "object",
"properties": {
"children": {
"type": "array",
"items": {
"$ref": "full.child"
}
}
}
}
]
},
{
"id": "22222222-2222-2222-2222-222222222222",
"type": "type",
"name": "child",
"module": "test",
"source": "test",
"hierarchy": [
"child"
],
"variations": [
"child"
],
"schemas": [
{
"$id": "full.child",
"type": "object",
"properties": {}
}
]
}
],
"relations": [
{
"id": "33333333-3333-3333-3333-333333333333",
"type": "relation",
"constraint": "fk_parent_child",
"source_type": "parent",
"source_columns": [
"child_id"
],
"destination_type": "child",
"destination_columns": [
"id"
]
}
]
},
"tests": [
{
"description": "throws EDGE_MISSING because a Forward scaler edge cannot mathematically fulfill an Array collection",
"action": "compile",
"expect": {
"success": false,
"errors": [
{
"code": "EDGE_MISSING"
}
]
}
}
]
},
{
"description": "Ambiguous type relations - multiple unprefixed relations",
"database": {
"types": [
{
"id": "11111111-1111-1111-1111-111111111111",
"type": "type",
"name": "invoice",
"module": "test",
"source": "test",
"hierarchy": [
"invoice"
],
"variations": [
"invoice"
],
"schemas": [
{
"$id": "full.invoice",
"type": "object",
"properties": {
"activities": {
"type": "array",
"items": {
"$ref": "full.activity"
}
}
}
}
]
},
{
"id": "22222222-2222-2222-2222-222222222222",
"type": "type",
"name": "activity",
"module": "test",
"source": "test",
"hierarchy": [
"activity"
],
"variations": [
"activity"
],
"schemas": [
{
"$id": "full.activity",
"type": "object",
"properties": {}
}
]
}
],
"relations": [
{
"id": "33333333-3333-3333-3333-333333333333",
"type": "relation",
"constraint": "fk_activity_invoice_1",
"source_type": "activity",
"source_columns": [
"invoice_id_1"
],
"destination_type": "invoice",
"destination_columns": [
"id"
]
},
{
"id": "44444444-4444-4444-4444-444444444444",
"type": "relation",
"constraint": "fk_activity_invoice_2",
"source_type": "activity",
"source_columns": [
"invoice_id_2"
],
"destination_type": "invoice",
"destination_columns": [
"id"
]
}
]
},
"tests": [
{
"description": "throws AMBIGUOUS_TYPE_RELATIONS when fallback encounters multiple naked constraints",
"action": "compile",
"expect": {
"success": false,
"errors": [
{
"code": "AMBIGUOUS_TYPE_RELATIONS"
}
]
}
}
]
},
{
"description": "Ambiguous type relations - M:M twin deduction failure",
"database": {
"types": [
{
"id": "11111111-1111-1111-1111-111111111111",
"type": "type",
"name": "actor",
"module": "test",
"source": "test",
"hierarchy": [
"actor"
],
"variations": [
"actor"
],
"schemas": [
{
"$id": "full.actor",
"type": "object",
"properties": {
"ambiguous_edge": {
"type": "array",
"items": {
"$ref": "empty.junction"
}
}
}
}
]
},
{
"id": "22222222-2222-2222-2222-222222222222",
"type": "type",
"name": "junction",
"module": "test",
"source": "test",
"hierarchy": [
"junction"
],
"variations": [
"junction"
],
"schemas": [
{
"$id": "empty.junction",
"type": "object",
"properties": {}
}
]
}
],
"relations": [
{
"id": "33333333-3333-3333-3333-333333333333",
"type": "relation",
"constraint": "fk_junction_source_actor",
"source_type": "junction",
"source_columns": [
"source_id"
],
"destination_type": "actor",
"destination_columns": [
"id"
],
"prefix": "source"
},
{
"id": "44444444-4444-4444-4444-444444444444",
"type": "relation",
"constraint": "fk_junction_target_actor",
"source_type": "junction",
"source_columns": [
"target_id"
],
"destination_type": "actor",
"destination_columns": [
"id"
],
"prefix": "target"
}
]
},
"tests": [
{
"description": "throws AMBIGUOUS_TYPE_RELATIONS because child doesn't explicitly expose 'source' or 'target' for twin deduction",
"action": "compile",
"expect": {
"success": false,
"errors": [
{
"code": "AMBIGUOUS_TYPE_RELATIONS"
}
]
}
}
]
},
{
"description": "Database type parse failed",
"database": {
"types": [
{
"id": [
"must",
"be",
"string",
"to",
"fail"
],
"type": "type",
"name": "failure",
"module": "test",
"source": "test",
"hierarchy": [
"failure"
],
"variations": [
"failure"
]
}
]
},
"tests": [
{
"description": "throws DATABASE_TYPE_PARSE_FAILED when metadata completely fails Serde typing",
"action": "compile",
"expect": {
"success": false,
"errors": [
{
"code": "DATABASE_TYPE_PARSE_FAILED"
}
]
}
}
]
}
]

View File

@@ -142,7 +142,7 @@
"errors": [
{
"code": "CONST_VIOLATED",
"path": "/con"
"details": { "path": "con" }
}
]
}

View File

@@ -154,8 +154,8 @@
"success": false,
"errors": [
{
"code": "FAMILY_MISMATCH",
"path": ""
"code": "NO_FAMILY_MATCH",
"details": { "path": "" }
}
]
}

View File

@@ -47,8 +47,8 @@
"success": false,
"errors": [
{
"code": "TYPE_MISMATCH",
"path": "/base_prop"
"code": "INVALID_TYPE",
"details": { "path": "base_prop" }
}
]
}
@@ -109,7 +109,7 @@
"errors": [
{
"code": "REQUIRED_FIELD_MISSING",
"path": "/a"
"details": { "path": "a" }
}
]
}
@@ -126,7 +126,7 @@
"errors": [
{
"code": "REQUIRED_FIELD_MISSING",
"path": "/b"
"details": { "path": "b" }
}
]
}
@@ -195,8 +195,8 @@
"success": false,
"errors": [
{
"code": "DEPENDENCY_FAILED",
"path": "/base_dep"
"code": "DEPENDENCY_MISSING",
"details": { "path": "" }
}
]
}
@@ -213,8 +213,8 @@
"success": false,
"errors": [
{
"code": "DEPENDENCY_FAILED",
"path": "/child_dep"
"code": "DEPENDENCY_MISSING",
"details": { "path": "" }
}
]
}

View File

@@ -19,7 +19,7 @@
{
"id": "22222222-2222-2222-2222-222222222222",
"type": "relation",
"constraint": "fk_order_customer",
"constraint": "fk_order_customer_person",
"source_type": "order",
"source_columns": [
"customer_id"
@@ -41,8 +41,7 @@
"destination_type": "order",
"destination_columns": [
"id"
],
"prefix": "lines"
]
},
{
"id": "44444444-4444-4444-4444-444444444444",
@@ -75,6 +74,20 @@
"type"
],
"prefix": "target"
},
{
"id": "66666666-6666-6666-6666-666666666666",
"type": "relation",
"constraint": "fk_entity_organization",
"source_type": "entity",
"source_columns": [
"organization_id"
],
"destination_type": "organization",
"destination_columns": [
"id"
],
"prefix": null
}
],
"types": [
@@ -283,6 +296,17 @@
}
}
}
},
"email_addresses": {
"type": "array",
"items": {
"$ref": "contact",
"properties": {
"target": {
"$ref": "email_address"
}
}
}
}
}
}
@@ -972,7 +996,12 @@
"LEFT JOIN agreego.\"user\" t2 ON t2.id = t1.id",
"LEFT JOIN agreego.\"organization\" t3 ON t3.id = t1.id",
"LEFT JOIN agreego.\"entity\" t4 ON t4.id = t1.id",
"WHERE \"first_name\" = 'LookupFirst' AND \"last_name\" = 'LookupLast' AND \"date_of_birth\" = '1990-01-01T00:00:00Z' AND \"pronouns\" = 'they/them'"
"WHERE (",
" \"first_name\" = 'LookupFirst'",
" AND \"last_name\" = 'LookupLast'",
" AND \"date_of_birth\" = '1990-01-01T00:00:00Z'",
" AND \"pronouns\" = 'they/them'",
")"
],
[
"UPDATE agreego.\"person\"",
@@ -1039,6 +1068,177 @@
]
}
},
{
"description": "Update existing person with id (lookup)",
"action": "merge",
"data": {
"id": "33333333-3333-3333-3333-333333333333",
"type": "person",
"first_name": "LookupFirst",
"last_name": "LookupLast",
"date_of_birth": "1990-01-01T00:00:00Z",
"pronouns": "they/them",
"contact_id": "abc-contact"
},
"mocks": [
{
"id": "22222222-2222-2222-2222-222222222222",
"type": "person",
"first_name": "LookupFirst",
"last_name": "LookupLast",
"date_of_birth": "1990-01-01T00:00:00Z",
"pronouns": "they/them",
"contact_id": "old-contact"
}
],
"schema_id": "person",
"expect": {
"success": true,
"sql": [
[
"SELECT to_jsonb(t1.*) || to_jsonb(t2.*) || to_jsonb(t3.*) || to_jsonb(t4.*)",
"FROM agreego.\"person\" t1",
"LEFT JOIN agreego.\"user\" t2 ON t2.id = t1.id",
"LEFT JOIN agreego.\"organization\" t3 ON t3.id = t1.id",
"LEFT JOIN agreego.\"entity\" t4 ON t4.id = t1.id",
"WHERE",
" t1.id = '33333333-3333-3333-3333-333333333333'",
" OR (",
" \"first_name\" = 'LookupFirst'",
" AND \"last_name\" = 'LookupLast'",
" AND \"date_of_birth\" = '1990-01-01T00:00:00Z'",
" AND \"pronouns\" = 'they/them'",
" )"
],
[
"UPDATE agreego.\"person\"",
"SET",
" \"contact_id\" = 'abc-contact'",
"WHERE",
" id = '22222222-2222-2222-2222-222222222222'"
],
[
"UPDATE agreego.\"entity\"",
"SET",
" \"modified_at\" = '2026-03-10T00:00:00Z',",
" \"modified_by\" = '00000000-0000-0000-0000-000000000000'",
"WHERE",
" id = '22222222-2222-2222-2222-222222222222'"
],
[
"INSERT INTO agreego.change (",
" \"old\",",
" \"new\",",
" entity_id,",
" id,",
" kind,",
" modified_at,",
" modified_by",
")",
"VALUES (",
" '{",
" \"contact_id\":\"old-contact\"",
" }',",
" '{",
" \"contact_id\":\"abc-contact\",",
" \"type\":\"person\"",
" }',",
" '22222222-2222-2222-2222-222222222222',",
" '{{uuid}}',",
" 'update',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000'",
")"
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"contact_id\":\"abc-contact\",",
" \"date_of_birth\":\"1990-01-01T00:00:00Z\",",
" \"first_name\":\"LookupFirst\",",
" \"id\":\"22222222-2222-2222-2222-222222222222\",",
" \"last_name\":\"LookupLast\",",
" \"modified_at\":\"2026-03-10T00:00:00Z\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"pronouns\":\"they/them\",",
" \"type\":\"person\"",
" },",
" \"new\":{",
" \"contact_id\":\"abc-contact\",",
" \"type\":\"person\"",
" },",
" \"old\":{",
" \"contact_id\":\"old-contact\"",
" },",
" \"replaces\":\"33333333-3333-3333-3333-333333333333\"",
" }')"
]
]
}
},
{
"description": "Replace existing person with id and no changes (lookup)",
"action": "merge",
"data": {
"id": "33333333-3333-3333-3333-333333333333",
"type": "person",
"first_name": "LookupFirst",
"last_name": "LookupLast",
"date_of_birth": "1990-01-01T00:00:00Z",
"pronouns": "they/them"
},
"mocks": [
{
"id": "22222222-2222-2222-2222-222222222222",
"type": "person",
"first_name": "LookupFirst",
"last_name": "LookupLast",
"date_of_birth": "1990-01-01T00:00:00Z",
"pronouns": "they/them",
"contact_id": "old-contact"
}
],
"schema_id": "person",
"expect": {
"success": true,
"sql": [
[
"SELECT to_jsonb(t1.*) || to_jsonb(t2.*) || to_jsonb(t3.*) || to_jsonb(t4.*)",
"FROM agreego.\"person\" t1",
"LEFT JOIN agreego.\"user\" t2 ON t2.id = t1.id",
"LEFT JOIN agreego.\"organization\" t3 ON t3.id = t1.id",
"LEFT JOIN agreego.\"entity\" t4 ON t4.id = t1.id",
"WHERE",
" t1.id = '33333333-3333-3333-3333-333333333333'",
" OR (",
" \"first_name\" = 'LookupFirst'",
" AND \"last_name\" = 'LookupLast'",
" AND \"date_of_birth\" = '1990-01-01T00:00:00Z'",
" AND \"pronouns\" = 'they/them'",
" )"
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"contact_id\":\"old-contact\",",
" \"date_of_birth\":\"1990-01-01T00:00:00Z\",",
" \"first_name\":\"LookupFirst\",",
" \"id\":\"22222222-2222-2222-2222-222222222222\",",
" \"last_name\":\"LookupLast\",",
" \"modified_at\":\"2026-03-10T00:00:00Z\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"pronouns\":\"they/them\",",
" \"type\":\"person\"",
" },",
" \"new\":{",
" \"type\":\"person\"",
" },",
" \"replaces\":\"33333333-3333-3333-3333-333333333333\"",
" }')"
]
]
}
},
{
"description": "Update existing person with id (no lookup)",
"action": "merge",
@@ -1484,7 +1684,7 @@
"SELECT to_jsonb(t1.*) || to_jsonb(t2.*)",
"FROM agreego.\"order\" t1",
"LEFT JOIN agreego.\"entity\" t2 ON t2.id = t1.id",
"WHERE t1.id = 'abc'"
"WHERE t1.id = 'abc' OR (\"id\" = 'abc')"
],
[
"INSERT INTO agreego.\"entity\" (",
@@ -1658,16 +1858,18 @@
"type": "contact",
"is_primary": false,
"target": {
"type": "phone_number",
"number": "555-0002"
"type": "email_address",
"address": "test@example.com"
}
},
}
],
"email_addresses": [
{
"type": "contact",
"is_primary": false,
"target": {
"type": "email_address",
"address": "test@example.com"
"address": "test2@example.com"
}
}
]
@@ -1759,7 +1961,10 @@
" modified_by",
") VALUES (",
" NULL,",
" '{\"number\":\"555-0001\",\"type\":\"phone_number\"}',",
" '{",
" \"number\":\"555-0001\",",
" \"type\":\"phone_number\"",
" }',",
" '{{uuid:phone1_id}}',",
" '{{uuid}}',",
" 'create',",
@@ -1830,115 +2035,6 @@
" '00000000-0000-0000-0000-000000000000'",
")"
],
[
"INSERT INTO agreego.\"entity\" (",
" \"created_at\",",
" \"created_by\",",
" \"id\",",
" \"modified_at\",",
" \"modified_by\",",
" \"type\"",
") VALUES (",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" '{{uuid:phone2_id}}',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" 'phone_number'",
")"
],
[
"INSERT INTO agreego.\"phone_number\" (",
" \"number\"",
") VALUES (",
" '555-0002'",
")"
],
[
"INSERT INTO agreego.change (",
" \"old\",",
" \"new\",",
" entity_id,",
" id,",
" kind,",
" modified_at,",
" modified_by",
") VALUES (",
" NULL,",
" '{",
" \"number\":\"555-0002\",",
" \"type\":\"phone_number\"",
" }',",
" '{{uuid:phone2_id}}',",
" '{{uuid}}',",
" 'create',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000'",
")"
],
[
"INSERT INTO agreego.\"entity\" (",
" \"created_at\",",
" \"created_by\",",
" \"id\",",
" \"modified_at\",",
" \"modified_by\",",
" \"type\"",
") VALUES (",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" '{{uuid:contact2_id}}',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" 'contact'",
")"
],
[
"INSERT INTO agreego.\"relationship\" (",
" \"source_id\",",
" \"source_type\",",
" \"target_id\",",
" \"target_type\"",
") VALUES (",
" '{{uuid:person_id}}',",
" 'person',",
" '{{uuid:phone2_id}}',",
" 'phone_number'",
")"
],
[
"INSERT INTO agreego.\"contact\" (",
" \"is_primary\"",
") VALUES (",
" false",
")"
],
[
"INSERT INTO agreego.change (",
" \"old\",",
" \"new\",",
" entity_id,",
" id,",
" kind,",
" modified_at,",
" modified_by",
") VALUES (",
" NULL,",
" '{",
" \"is_primary\":false,",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:phone2_id}}\",",
" \"target_type\":\"phone_number\",",
" \"type\":\"contact\"",
" }',",
" '{{uuid:contact2_id}}',",
" '{{uuid}}',",
" 'create',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000'",
")"
],
[
"INSERT INTO agreego.\"entity\" (",
" \"created_at\",",
@@ -1996,7 +2092,7 @@
") VALUES (",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" '{{uuid:contact3_id}}',",
" '{{uuid:contact2_id}}',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" 'contact'",
@@ -2041,6 +2137,115 @@
" \"target_type\":\"email_address\",",
" \"type\":\"contact\"",
" }',",
" '{{uuid:contact2_id}}',",
" '{{uuid}}',",
" 'create',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000'",
")"
],
[
"INSERT INTO agreego.\"entity\" (",
" \"created_at\",",
" \"created_by\",",
" \"id\",",
" \"modified_at\",",
" \"modified_by\",",
" \"type\"",
") VALUES (",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" '{{uuid:email2_id}}',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" 'email_address'",
")"
],
[
"INSERT INTO agreego.\"email_address\" (",
" \"address\"",
") VALUES (",
" 'test2@example.com'",
")"
],
[
"INSERT INTO agreego.change (",
" \"old\",",
" \"new\",",
" entity_id,",
" id,",
" kind,",
" modified_at,",
" modified_by",
") VALUES (",
" NULL,",
" '{",
" \"address\":\"test2@example.com\",",
" \"type\":\"email_address\"",
" }',",
" '{{uuid:email2_id}}',",
" '{{uuid}}',",
" 'create',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000'",
")"
],
[
"INSERT INTO agreego.\"entity\" (",
" \"created_at\",",
" \"created_by\",",
" \"id\",",
" \"modified_at\",",
" \"modified_by\",",
" \"type\"",
") VALUES (",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" '{{uuid:contact3_id}}',",
" '{{timestamp}}',",
" '00000000-0000-0000-0000-000000000000',",
" 'contact'",
")"
],
[
"INSERT INTO agreego.\"relationship\" (",
" \"source_id\",",
" \"source_type\",",
" \"target_id\",",
" \"target_type\"",
") VALUES (",
" '{{uuid:person_id}}',",
" 'person',",
" '{{uuid:email2_id}}',",
" 'email_address'",
")"
],
[
"INSERT INTO agreego.\"contact\" (",
" \"is_primary\"",
") VALUES (",
" false",
")"
],
[
"INSERT INTO agreego.change (",
" \"old\",",
" \"new\",",
" entity_id,",
" id,",
" kind,",
" modified_at,",
" modified_by",
") VALUES (",
" NULL,",
" '{",
" \"is_primary\":false,",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:email2_id}}\",",
" \"target_type\":\"email_address\",",
" \"type\":\"contact\"",
" }',",
" '{{uuid:contact3_id}}',",
" '{{uuid}}',",
" 'create',",
@@ -2073,16 +2278,16 @@
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"first_name\":\"Relation\",",
" \"id\":\"{{uuid:person_id}}\",",
" \"last_name\":\"Test\",",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"type\":\"person\"",
" },",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"first_name\":\"Relation\",",
" \"id\":\"{{uuid:person_id}}\",",
" \"last_name\":\"Test\",",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"type\":\"person\"",
" },",
" \"new\":{",
" \"first_name\":\"Relation\",",
" \"last_name\":\"Test\",",
@@ -2092,19 +2297,19 @@
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:contact1_id}}\",",
" \"is_primary\":true,",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:phone1_id}}\",",
" \"target_type\":\"phone_number\",",
" \"type\":\"contact\"",
" },",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:contact1_id}}\",",
" \"is_primary\":true,",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:phone1_id}}\",",
" \"target_type\":\"phone_number\",",
" \"type\":\"contact\"",
" },",
" \"new\":{",
" \"is_primary\":true,",
" \"source_id\":\"{{uuid:person_id}}\",",
@ -2117,15 +2322,15 @@
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:phone1_id}}\",",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"number\":\"555-0001\",",
" \"type\":\"phone_number\"",
" },",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:phone1_id}}\",",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"number\":\"555-0001\",",
" \"type\":\"phone_number\"",
" },",
" \"new\":{",
" \"number\":\"555-0001\",",
" \"type\":\"phone_number\"",
@ -2134,87 +2339,87 @@
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:contact2_id}}\",",
" \"is_primary\":false,",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:phone2_id}}\",",
" \"target_type\":\"phone_number\",",
" \"type\":\"contact\"",
" },",
" \"new\":{",
" \"is_primary\":false,",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:phone2_id}}\",",
" \"target_type\":\"phone_number\",",
" \"type\":\"contact\"",
" }",
" }')"
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:contact2_id}}\",",
" \"is_primary\":false,",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:email1_id}}\",",
" \"target_type\":\"email_address\",",
" \"type\":\"contact\"",
" },",
" \"new\":{",
" \"is_primary\":false,",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:email1_id}}\",",
" \"target_type\":\"email_address\",",
" \"type\":\"contact\"",
" }",
"}')"
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:phone2_id}}\",",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"number\":\"555-0002\",",
" \"type\":\"phone_number\"",
" },",
" \"new\":{",
" \"number\":\"555-0002\",",
" \"type\":\"phone_number\"",
" }",
" }')"
" \"complete\":{",
" \"address\":\"test@example.com\",",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:email1_id}}\",",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"type\":\"email_address\"",
" },",
" \"new\":{",
" \"address\":\"test@example.com\",",
" \"type\":\"email_address\"",
" }",
"}')"
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:contact3_id}}\",",
" \"is_primary\":false,",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:email1_id}}\",",
" \"target_type\":\"email_address\",",
" \"type\":\"contact\"",
" },",
" \"new\":{",
" \"is_primary\":false,",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:email1_id}}\",",
" \"target_type\":\"email_address\",",
" \"type\":\"contact\"",
" }",
" }')"
" \"complete\":{",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:contact3_id}}\",",
" \"is_primary\":false,",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:email2_id}}\",",
" \"target_type\":\"email_address\",",
" \"type\":\"contact\"",
" },",
" \"new\":{",
" \"is_primary\":false,",
" \"source_id\":\"{{uuid:person_id}}\",",
" \"source_type\":\"person\",",
" \"target_id\":\"{{uuid:email2_id}}\",",
" \"target_type\":\"email_address\",",
" \"type\":\"contact\"",
" }",
"}')"
],
[
"SELECT pg_notify('entity', '{",
" \"complete\":{",
" \"address\":\"test@example.com\",",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:email1_id}}\",",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"type\":\"email_address\"",
" },",
" \"new\":{",
" \"address\":\"test@example.com\",",
" \"type\":\"email_address\"",
" }",
" }')"
" \"complete\":{",
" \"address\":\"test2@example.com\",",
" \"created_at\":\"{{timestamp}}\",",
" \"created_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"id\":\"{{uuid:email2_id}}\",",
" \"modified_at\":\"{{timestamp}}\",",
" \"modified_by\":\"00000000-0000-0000-0000-000000000000\",",
" \"type\":\"email_address\"",
" },",
" \"new\":{",
" \"address\":\"test2@example.com\",",
" \"type\":\"email_address\"",
" }",
"}')"
]
]
}

View File

@ -75,13 +75,30 @@
{
"description": "happy path passes structural validation",
"data": {
"primitives": ["a", "b"],
"ad_hoc_objects": [{"name": "obj1"}],
"entities": [{"id": "entity-1", "value": 15}],
"primitives": [
"a",
"b"
],
"ad_hoc_objects": [
{
"name": "obj1"
}
],
"entities": [
{
"id": "entity-1",
"value": 15
}
],
"deep_entities": [
{
"id": "parent-1",
"nested": [{"id": "child-1", "flag": true}]
"nested": [
{
"id": "child-1",
"flag": true
}
]
}
]
},
@ -94,7 +111,10 @@
{
"description": "primitive arrays use numeric indexing",
"data": {
"primitives": ["a", 123]
"primitives": [
"a",
123
]
},
"schema_id": "hybrid_pathing",
"action": "validate",
@ -103,7 +123,7 @@
"errors": [
{
"code": "INVALID_TYPE",
"path": "/primitives/1"
"details": { "path": "primitives/1" }
}
]
}
@ -112,8 +132,12 @@
"description": "ad-hoc objects without ids use numeric indexing",
"data": {
"ad_hoc_objects": [
{"name": "valid"},
{"age": 30}
{
"name": "valid"
},
{
"age": 30
}
]
},
"schema_id": "hybrid_pathing",
@ -123,7 +147,7 @@
"errors": [
{
"code": "REQUIRED_FIELD_MISSING",
"path": "/ad_hoc_objects/1/name"
"details": { "path": "ad_hoc_objects/1/name" }
}
]
}
@ -132,8 +156,14 @@
"description": "arrays of objects with ids use topological uuid indexing",
"data": {
"entities": [
{"id": "entity-alpha", "value": 20},
{"id": "entity-beta", "value": 5}
{
"id": "entity-alpha",
"value": 20
},
{
"id": "entity-beta",
"value": 5
}
]
},
"schema_id": "hybrid_pathing",
@ -143,7 +173,7 @@
"errors": [
{
"code": "MINIMUM_VIOLATED",
"path": "/entities/entity-beta/value"
"details": { "path": "entities/entity-beta/value" }
}
]
}
@ -155,8 +185,14 @@
{
"id": "parent-omega",
"nested": [
{"id": "child-alpha", "flag": true},
{"id": "child-beta", "flag": "invalid-string"}
{
"id": "child-alpha",
"flag": true
},
{
"id": "child-beta",
"flag": "invalid-string"
}
]
}
]
@ -168,11 +204,11 @@
"errors": [
{
"code": "INVALID_TYPE",
"path": "/deep_entities/parent-omega/nested/child-beta/flag"
"details": { "path": "deep_entities/parent-omega/nested/child-beta/flag" }
}
]
}
}
]
}
]
]

View File

@ -4,20 +4,32 @@
"database": {
"puncs": [
{
"name": "get_entities",
"name": "get_organization",
"schemas": [
{
"$id": "get_entities.response",
"$family": "organization"
"$id": "get_organization.response",
"$ref": "organization"
}
]
},
{
"name": "get_persons",
"name": "get_organizations",
"schemas": [
{
"$id": "get_persons.response",
"$family": "base.person"
"$id": "get_organizations.response",
"type": "array",
"items": {
"$family": "organization"
}
}
]
},
{
"name": "get_person",
"schemas": [
{
"$id": "get_person.response",
"$family": "person"
}
]
},
@ -27,7 +39,9 @@
{
"$id": "get_orders.response",
"type": "array",
"items": { "$ref": "light.order" }
"items": {
"$ref": "light.order"
}
}
]
}
@ -69,7 +83,7 @@
{
"id": "22222222-2222-2222-2222-222222222222",
"type": "relation",
"constraint": "fk_order_customer",
"constraint": "fk_order_customer_person",
"source_type": "order",
"source_columns": [
"customer_id"
@ -80,6 +94,22 @@
],
"prefix": "customer"
},
{
"id": "22222222-2222-2222-2222-222222222227",
"type": "relation",
"constraint": "fk_order_counterparty_entity",
"source_type": "order",
"source_columns": [
"counterparty_id",
"counterparty_type"
],
"destination_type": "entity",
"destination_columns": [
"id",
"type"
],
"prefix": "counterparty"
},
{
"id": "33333333-3333-3333-3333-333333333333",
"type": "relation",
@ -91,8 +121,7 @@
"destination_type": "order",
"destination_columns": [
"id"
],
"prefix": "lines"
]
}
],
"types": [
@ -105,14 +134,12 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
]
},
"field_types": {
"id": "uuid",
"name": "text",
"archived": "boolean",
"created_at": "timestamptz",
"type": "text"
@ -129,9 +156,6 @@
"type": {
"type": "string"
},
"name": {
"type": "string"
},
"archived": {
"type": "boolean"
},
@ -148,7 +172,6 @@
"fields": [
"id",
"type",
"name",
"archived",
"created_at"
],
@ -183,11 +206,12 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
],
"organization": []
"organization": [
"name"
]
},
"field_types": {
"id": "uuid",
@ -210,6 +234,17 @@
"bot",
"organization",
"person"
],
"schemas": [
{
"$id": "organization",
"$ref": "entity",
"properties": {
"name": {
"type": "string"
}
}
}
]
},
{
@ -231,11 +266,12 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
],
"organization": [],
"organization": [
"name"
],
"bot": [
"token"
]
@ -284,11 +320,12 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
],
"organization": [],
"organization": [
"name"
],
"person": [
"first_name",
"last_name",
@ -307,7 +344,7 @@
},
"schemas": [
{
"$id": "base.person",
"$id": "person",
"$ref": "organization",
"properties": {
"first_name": {
@ -323,12 +360,12 @@
},
{
"$id": "light.person",
"$ref": "base.person",
"$ref": "person",
"properties": {}
},
{
"$id": "full.person",
"$ref": "base.person",
"$ref": "person",
"properties": {
"phone_numbers": {
"type": "array",
@ -405,7 +442,6 @@
"target_type",
"id",
"type",
"name",
"archived",
"created_at"
],
@ -413,7 +449,6 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
],
@ -432,7 +467,6 @@
"source_type": "text",
"target_id": "uuid",
"target_type": "text",
"name": "text",
"created_at": "timestamptz"
},
"schemas": [
@ -463,7 +497,6 @@
"target_type",
"id",
"type",
"name",
"archived",
"created_at"
],
@ -471,7 +504,6 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
],
@ -494,7 +526,6 @@
"target_id": "uuid",
"target_type": "text",
"is_primary": "boolean",
"name": "text",
"created_at": "timestamptz"
},
"schemas": [
@ -522,7 +553,6 @@
"number",
"id",
"type",
"name",
"archived",
"created_at"
],
@ -530,7 +560,6 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
],
@ -543,7 +572,6 @@
"type": "text",
"archived": "boolean",
"number": "text",
"name": "text",
"created_at": "timestamptz"
},
"schemas": [
@ -571,7 +599,6 @@
"address",
"id",
"type",
"name",
"archived",
"created_at"
],
@ -579,7 +606,6 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
],
@ -592,7 +618,6 @@
"type": "text",
"archived": "boolean",
"address": "text",
"name": "text",
"created_at": "timestamptz"
},
"schemas": [
@ -620,7 +645,6 @@
"city",
"id",
"type",
"name",
"archived",
"created_at"
],
@ -628,7 +652,6 @@
"entity": [
"id",
"type",
"name",
"archived",
"created_at"
],
@ -641,7 +664,6 @@
"type": "text",
"archived": "boolean",
"city": "text",
"name": "text",
"created_at": "timestamptz"
},
"schemas": [
@ -679,7 +701,7 @@
"$ref": "order",
"properties": {
"customer": {
"$ref": "base.person"
"$ref": "person"
}
}
},
@ -688,7 +710,7 @@
"$ref": "order",
"properties": {
"customer": {
"$ref": "base.person"
"$ref": "person"
},
"lines": {
"type": "array",
@ -706,26 +728,28 @@
"fields": [
"id",
"type",
"name",
"total",
"customer_id",
"created_at",
"created_by",
"modified_at",
"modified_by",
"archived"
"archived",
"counterparty_id",
"counterparty_type"
],
"grouped_fields": {
"order": [
"id",
"type",
"total",
"customer_id"
"customer_id",
"counterparty_id",
"counterparty_type"
],
"entity": [
"id",
"type",
"name",
"created_at",
"created_by",
"modified_at",
@ -741,14 +765,15 @@
"field_types": {
"id": "uuid",
"type": "text",
"name": "text",
"archived": "boolean",
"total": "numeric",
"customer_id": "uuid",
"created_at": "timestamptz",
"created_by": "uuid",
"modified_at": "timestamptz",
"modified_by": "uuid"
"modified_by": "uuid",
"counterparty_id": "uuid",
"counterparty_type": "text"
},
"variations": [
"order"
@ -780,7 +805,6 @@
"fields": [
"id",
"type",
"name",
"order_id",
"product",
"price",
@ -801,7 +825,6 @@
"entity": [
"id",
"type",
"name",
"created_at",
"created_by",
"modified_at",
@ -815,7 +838,6 @@
"field_types": {
"id": "uuid",
"type": "text",
"name": "text",
"archived": "boolean",
"order_id": "uuid",
"product": "text",
@ -829,31 +851,6 @@
"order_line"
]
}
],
"schemas": [
{
"$id": "entity",
"type": "object",
"properties": {}
},
{
"$id": "organization",
"type": "object",
"$ref": "entity",
"properties": {}
},
{
"$id": "bot",
"type": "object",
"$ref": "bot",
"properties": {}
},
{
"$id": "person",
"type": "object",
"$ref": "base.person",
"properties": {}
}
]
},
"tests": [
@ -869,7 +866,6 @@
" 'archived', entity_1.archived,",
" 'created_at', entity_1.created_at,",
" 'id', entity_1.id,",
" 'name', entity_1.name,",
" 'type', entity_1.type)",
"FROM agreego.entity entity_1",
"WHERE NOT entity_1.archived)"
@ -892,22 +888,6 @@
"123e4567-e89b-12d3-a456-426614174001"
]
},
"name": {
"$eq": "Jane%",
"$ne": "John%",
"$gt": "A",
"$gte": "B",
"$lt": "Z",
"$lte": "Y",
"$in": [
"Jane",
"John"
],
"$nin": [
"Bob",
"Alice"
]
},
"created_at": {
"$eq": "2023-01-01T00:00:00Z",
"$ne": "2023-01-02T00:00:00Z",
@ -929,7 +909,6 @@
" 'archived', entity_1.archived,",
" 'created_at', entity_1.created_at,",
" 'id', entity_1.id,",
" 'name', entity_1.name,",
" 'type', entity_1.type",
")",
"FROM agreego.entity entity_1",
@ -947,14 +926,6 @@
" AND entity_1.id IN (SELECT value::uuid FROM jsonb_array_elements_text(($10#>>'{}')::jsonb))",
" AND entity_1.id != ($11#>>'{}')::uuid",
" AND entity_1.id NOT IN (SELECT value::uuid FROM jsonb_array_elements_text(($12#>>'{}')::jsonb))",
" AND entity_1.name ILIKE $13#>>'{}'",
" AND entity_1.name > ($14#>>'{}')",
" AND entity_1.name >= ($15#>>'{}')",
" AND entity_1.name IN (SELECT value FROM jsonb_array_elements_text(($16#>>'{}')::jsonb))",
" AND entity_1.name < ($17#>>'{}')",
" AND entity_1.name <= ($18#>>'{}')",
" AND entity_1.name NOT ILIKE $19#>>'{}'",
" AND entity_1.name NOT IN (SELECT value FROM jsonb_array_elements_text(($20#>>'{}')::jsonb))",
")"
]
]
@ -963,7 +934,7 @@
{
"description": "Person select on base schema",
"action": "query",
"schema_id": "base.person",
"schema_id": "person",
"expect": {
"success": true,
"sql": [
@ -975,7 +946,7 @@
" 'first_name', person_1.first_name,",
" 'id', entity_3.id,",
" 'last_name', person_1.last_name,",
" 'name', entity_3.name,",
" 'name', organization_2.name,",
" 'type', entity_3.type)",
"FROM agreego.person person_1",
"JOIN agreego.organization organization_2 ON organization_2.id = person_1.id",
@ -1000,14 +971,12 @@
" 'created_at', entity_6.created_at,",
" 'id', entity_6.id,",
" 'is_primary', contact_4.is_primary,",
" 'name', entity_6.name,",
" 'target',",
" (SELECT jsonb_build_object(",
" 'archived', entity_8.archived,",
" 'city', address_7.city,",
" 'created_at', entity_8.created_at,",
" 'id', entity_8.id,",
" 'name', entity_8.name,",
" 'type', entity_8.type",
" )",
" FROM agreego.address address_7",
@ -1032,7 +1001,6 @@
" 'created_at', entity_11.created_at,",
" 'id', entity_11.id,",
" 'is_primary', contact_9.is_primary,",
" 'name', entity_11.name,",
" 'target', CASE",
" WHEN entity_11.target_type = 'address' THEN",
" ((SELECT jsonb_build_object(",
@ -1040,7 +1008,6 @@
" 'city', address_16.city,",
" 'created_at', entity_17.created_at,",
" 'id', entity_17.id,",
" 'name', entity_17.name,",
" 'type', entity_17.type",
" )",
" FROM agreego.address address_16",
@ -1054,7 +1021,6 @@
" 'archived', entity_15.archived,",
" 'created_at', entity_15.created_at,",
" 'id', entity_15.id,",
" 'name', entity_15.name,",
" 'type', entity_15.type",
" )",
" FROM agreego.email_address email_address_14",
@ -1067,7 +1033,6 @@
" 'archived', entity_13.archived,",
" 'created_at', entity_13.created_at,",
" 'id', entity_13.id,",
" 'name', entity_13.name,",
" 'number', phone_number_12.number,",
" 'type', entity_13.type",
" )",
@ -1092,14 +1057,12 @@
" 'created_at', entity_20.created_at,",
" 'id', entity_20.id,",
" 'is_primary', contact_18.is_primary,",
" 'name', entity_20.name,",
" 'target',",
" (SELECT jsonb_build_object(",
" 'address', email_address_21.address,",
" 'archived', entity_22.archived,",
" 'created_at', entity_22.created_at,",
" 'id', entity_22.id,",
" 'name', entity_22.name,",
" 'type', entity_22.type",
" )",
" FROM agreego.email_address email_address_21",
@ -1119,20 +1082,18 @@
" 'first_name', person_1.first_name,",
" 'id', entity_3.id,",
" 'last_name', person_1.last_name,",
" 'name', entity_3.name,",
" 'name', organization_2.name,",
" 'phone_numbers',",
" (SELECT COALESCE(jsonb_agg(jsonb_build_object(",
" 'archived', entity_25.archived,",
" 'created_at', entity_25.created_at,",
" 'id', entity_25.id,",
" 'is_primary', contact_23.is_primary,",
" 'name', entity_25.name,",
" 'target',",
" (SELECT jsonb_build_object(",
" 'archived', entity_27.archived,",
" 'created_at', entity_27.created_at,",
" 'id', entity_27.id,",
" 'name', entity_27.name,",
" 'number', phone_number_26.number,",
" 'type', entity_27.type",
" )",
@ -1185,8 +1146,10 @@
"$eq": true,
"$ne": false
},
"contacts/is_primary": {
"$eq": true
"contacts": {
"is_primary": {
"$eq": true
}
},
"created_at": {
"$eq": "2020-01-01T00:00:00Z",
@ -1225,8 +1188,12 @@
"$eq": "%Doe%",
"$ne": "%Smith%"
},
"phone_numbers/target/number": {
"$eq": "555-1234"
"phone_numbers": {
"target": {
"number": {
"$eq": "555-1234"
}
}
}
},
"expect": {
@ -1240,14 +1207,12 @@
" 'created_at', entity_6.created_at,",
" 'id', entity_6.id,",
" 'is_primary', contact_4.is_primary,",
" 'name', entity_6.name,",
" 'target',",
" (SELECT jsonb_build_object(",
" 'archived', entity_8.archived,",
" 'city', address_7.city,",
" 'created_at', entity_8.created_at,",
" 'id', entity_8.id,",
" 'name', entity_8.name,",
" 'type', entity_8.type",
" )",
" FROM agreego.address address_7",
@ -1272,7 +1237,6 @@
" 'created_at', entity_11.created_at,",
" 'id', entity_11.id,",
" 'is_primary', contact_9.is_primary,",
" 'name', entity_11.name,",
" 'target', CASE",
" WHEN entity_11.target_type = 'address' THEN",
" ((SELECT jsonb_build_object(",
@ -1280,7 +1244,6 @@
" 'city', address_16.city,",
" 'created_at', entity_17.created_at,",
" 'id', entity_17.id,",
" 'name', entity_17.name,",
" 'type', entity_17.type",
" )",
" FROM agreego.address address_16",
@ -1294,7 +1257,6 @@
" 'archived', entity_15.archived,",
" 'created_at', entity_15.created_at,",
" 'id', entity_15.id,",
" 'name', entity_15.name,",
" 'type', entity_15.type",
" )",
" FROM agreego.email_address email_address_14",
@ -1307,7 +1269,6 @@
" 'archived', entity_13.archived,",
" 'created_at', entity_13.created_at,",
" 'id', entity_13.id,",
" 'name', entity_13.name,",
" 'number', phone_number_12.number,",
" 'type', entity_13.type",
" )",
@ -1333,14 +1294,12 @@
" 'created_at', entity_20.created_at,",
" 'id', entity_20.id,",
" 'is_primary', contact_18.is_primary,",
" 'name', entity_20.name,",
" 'target',",
" (SELECT jsonb_build_object(",
" 'address', email_address_21.address,",
" 'archived', entity_22.archived,",
" 'created_at', entity_22.created_at,",
" 'id', entity_22.id,",
" 'name', entity_22.name,",
" 'type', entity_22.type",
" )",
" FROM agreego.email_address email_address_21",
@ -1360,20 +1319,18 @@
" 'first_name', person_1.first_name,",
" 'id', entity_3.id,",
" 'last_name', person_1.last_name,",
" 'name', entity_3.name,",
" 'name', organization_2.name,",
" 'phone_numbers',",
" (SELECT COALESCE(jsonb_agg(jsonb_build_object(",
" 'archived', entity_25.archived,",
" 'created_at', entity_25.created_at,",
" 'id', entity_25.id,",
" 'is_primary', contact_23.is_primary,",
" 'name', entity_25.name,",
" 'target',",
" (SELECT jsonb_build_object(",
" 'archived', entity_27.archived,",
" 'created_at', entity_27.created_at,",
" 'id', entity_27.id,",
" 'name', entity_27.name,",
" 'number', phone_number_26.number,",
" 'type', entity_27.type",
" )",
@ -1446,14 +1403,12 @@
" 'created_at', entity_3.created_at,",
" 'id', entity_3.id,",
" 'is_primary', contact_1.is_primary,",
" 'name', entity_3.name,",
" 'target',",
" (SELECT jsonb_build_object(",
" 'address', email_address_4.address,",
" 'archived', entity_5.archived,",
" 'created_at', entity_5.created_at,",
" 'id', entity_5.id,",
" 'name', entity_5.name,",
" 'type', entity_5.type",
" )",
" FROM agreego.email_address email_address_4",
@ -1492,7 +1447,7 @@
" 'first_name', person_3.first_name,",
" 'id', entity_5.id,",
" 'last_name', person_3.last_name,",
" 'name', entity_5.name,",
" 'name', organization_4.name,",
" 'type', entity_5.type",
" )",
" FROM agreego.person person_3",
@ -1508,7 +1463,6 @@
" 'archived', entity_7.archived,",
" 'created_at', entity_7.created_at,",
" 'id', entity_7.id,",
" 'name', entity_7.name,",
" 'order_id', order_line_6.order_id,",
" 'price', order_line_6.price,",
" 'product', order_line_6.product,",
@ -1519,7 +1473,6 @@
" WHERE",
" NOT entity_7.archived",
" AND order_line_6.order_id = order_1.id),",
" 'name', entity_2.name,",
" 'total', order_1.total,",
" 'type', entity_2.type",
")",
@ -1531,14 +1484,36 @@
}
},
{
"description": "Base entity family select on polymorphic tree",
"description": "Organization select via a punc response with ref",
"action": "query",
"schema_id": "get_entities.response",
"schema_id": "get_organization.response",
"expect": {
"success": true,
"sql": [
[
"(SELECT jsonb_build_object(",
" 'archived', entity_2.archived,",
" 'created_at', entity_2.created_at,",
" 'id', entity_2.id,",
" 'name', organization_1.name,",
" 'type', entity_2.type",
")",
"FROM agreego.organization organization_1",
"JOIN agreego.entity entity_2 ON entity_2.id = organization_1.id",
"WHERE NOT entity_2.archived)"
]
]
}
},
{
"description": "Organizations select via a punc response with family",
"action": "query",
"schema_id": "get_organizations.response",
"expect": {
"success": true,
"sql": [
[
"(SELECT COALESCE(jsonb_agg(jsonb_build_object(",
" 'id', organization_1.id,",
" 'type', CASE",
" WHEN organization_1.type = 'bot' THEN",
@ -1546,7 +1521,7 @@
" 'archived', entity_5.archived,",
" 'created_at', entity_5.created_at,",
" 'id', entity_5.id,",
" 'name', entity_5.name,",
" 'name', organization_4.name,",
" 'token', bot_3.token,",
" 'type', entity_5.type",
" )",
@ -1559,7 +1534,7 @@
" 'archived', entity_7.archived,",
" 'created_at', entity_7.created_at,",
" 'id', entity_7.id,",
" 'name', entity_7.name,",
" 'name', organization_6.name,",
" 'type', entity_7.type",
" )",
" FROM agreego.organization organization_6",
@ -1573,7 +1548,7 @@
" 'first_name', person_8.first_name,",
" 'id', entity_10.id,",
" 'last_name', person_8.last_name,",
" 'name', entity_10.name,",
" 'name', organization_9.name,",
" 'type', entity_10.type",
" )",
" FROM agreego.person person_8",
@ -1581,7 +1556,7 @@
" JOIN agreego.entity entity_10 ON entity_10.id = organization_9.id",
" WHERE NOT entity_10.archived))",
" ELSE NULL END",
")",
")), '[]'::jsonb)",
"FROM agreego.organization organization_1",
"JOIN agreego.entity entity_2 ON entity_2.id = organization_1.id",
"WHERE NOT entity_2.archived)"
@ -1590,7 +1565,33 @@
}
},
{
"description": "Root Array SQL evaluation for Order fetching Light Order",
"description": "Person select via a punc response with family",
"action": "query",
"schema_id": "get_person.response",
"expect": {
"success": true,
"sql": [
[
"(SELECT jsonb_build_object(",
" 'age', person_1.age,",
" 'archived', entity_3.archived,",
" 'created_at', entity_3.created_at,",
" 'first_name', person_1.first_name,",
" 'id', entity_3.id,",
" 'last_name', person_1.last_name,",
" 'name', organization_2.name,",
" 'type', entity_3.type",
")",
"FROM agreego.person person_1",
"JOIN agreego.organization organization_2 ON organization_2.id = person_1.id",
"JOIN agreego.entity entity_3 ON entity_3.id = organization_2.id",
"WHERE NOT entity_3.archived)"
]
]
}
},
{
"description": "Orders select via a punc with items",
"action": "query",
"schema_id": "get_orders.response",
"expect": {
@ -1608,7 +1609,7 @@
" 'first_name', person_3.first_name,",
" 'id', entity_5.id,",
" 'last_name', person_3.last_name,",
" 'name', entity_5.name,",
" 'name', organization_4.name,",
" 'type', entity_5.type",
" )",
" FROM agreego.person person_3",
@ -1619,7 +1620,6 @@
" AND order_1.customer_id = person_3.id),",
" 'customer_id', order_1.customer_id,",
" 'id', entity_2.id,",
" 'name', entity_2.name,",
" 'total', order_1.total,",
" 'type', entity_2.type",
")), '[]'::jsonb)",

View File

@ -676,8 +676,8 @@
"success": false,
"errors": [
{
"code": "TYPE_MISMATCH",
"path": "/type"
"code": "CONST_VIOLATED",
"details": { "path": "type" }
}
]
}
@ -781,8 +781,8 @@
"success": false,
"errors": [
{
"code": "TYPE_MISMATCH",
"path": "/type"
"code": "CONST_VIOLATED",
"details": { "path": "type" }
}
]
}

flows

Submodule flows updated: a7b0f5dc4d...4d61e13e00

View File

@ -44,8 +44,8 @@ impl MockExecutor {
#[cfg(test)]
impl DatabaseExecutor for MockExecutor {
fn query(&self, sql: &str, _args: Option<&[Value]>) -> Result<Value, String> {
println!("DEBUG SQL QUERY: {}", sql);
fn query(&self, sql: &str, _args: Option<Vec<Value>>) -> Result<Value, String> {
println!("JSPG_SQL: {}", sql);
MOCK_STATE.with(|state| {
let mut s = state.borrow_mut();
s.captured_queries.push(sql.to_string());
@ -65,8 +65,8 @@ impl DatabaseExecutor for MockExecutor {
})
}
fn execute(&self, sql: &str, _args: Option<&[Value]>) -> Result<(), String> {
println!("DEBUG SQL EXECUTE: {}", sql);
fn execute(&self, sql: &str, _args: Option<Vec<Value>>) -> Result<(), String> {
println!("JSPG_SQL: {}", sql);
MOCK_STATE.with(|state| {
let mut s = state.borrow_mut();
s.captured_queries.push(sql.to_string());
@ -124,42 +124,23 @@ fn parse_and_match_mocks(sql: &str, mocks: &[Value]) -> Option<Vec<Value>> {
return None;
};
// 2. Extract WHERE conditions
let mut conditions = Vec::new();
// 2. Extract WHERE conditions string
let mut where_clause = String::new();
if let Some(where_idx) = sql_upper.find(" WHERE ") {
let mut where_end = sql_upper.find(" ORDER BY ").unwrap_or(sql.len());
let mut where_end = sql_upper.find(" ORDER BY ").unwrap_or(sql_upper.len());
if let Some(limit_idx) = sql_upper.find(" LIMIT ") {
if limit_idx < where_end {
where_end = limit_idx;
}
}
let where_clause = &sql[where_idx + 7..where_end];
let and_regex = Regex::new(r"(?i)\s+AND\s+").ok()?;
let parts = and_regex.split(where_clause);
for part in parts {
if let Some(eq_idx) = part.find('=') {
let left = part[..eq_idx]
.trim()
.split('.')
.last()
.unwrap_or("")
.trim_matches('"');
let right = part[eq_idx + 1..].trim().trim_matches('\'');
conditions.push((left.to_string(), right.to_string()));
} else if part.to_uppercase().contains(" IS NULL") {
let left = part[..part.to_uppercase().find(" IS NULL").unwrap()]
.trim()
.split('.')
.last()
.unwrap_or("")
.replace('"', ""); // Remove quotes explicitly
conditions.push((left, "null".to_string()));
}
}
where_clause = sql[where_idx + 7..where_end].to_string();
}
// 3. Find matching mocks
let mut matches = Vec::new();
let or_regex = Regex::new(r"(?i)\s+OR\s+").ok()?;
let and_regex = Regex::new(r"(?i)\s+AND\s+").ok()?;
for mock in mocks {
if let Some(mock_obj) = mock.as_object() {
if let Some(t) = mock_obj.get("type") {
@ -168,25 +149,66 @@ fn parse_and_match_mocks(sql: &str, mocks: &[Value]) -> Option<Vec<Value>> {
}
}
let mut matches_all = true;
for (k, v) in &conditions {
let mock_val_str = match mock_obj.get(k) {
Some(Value::String(s)) => s.clone(),
Some(Value::Number(n)) => n.to_string(),
Some(Value::Bool(b)) => b.to_string(),
Some(Value::Null) => "null".to_string(),
_ => {
matches_all = false;
break;
if where_clause.is_empty() {
matches.push(mock.clone());
continue;
}
let or_parts = or_regex.split(&where_clause);
let mut any_branch_matched = false;
for or_part in or_parts {
let branch_str = or_part.replace('(', "").replace(')', "");
let mut branch_matches = true;
for part in and_regex.split(&branch_str) {
if let Some(eq_idx) = part.find('=') {
let left = part[..eq_idx]
.trim()
.split('.')
.last()
.unwrap_or("")
.trim_matches('"');
let right = part[eq_idx + 1..].trim().trim_matches('\'');
let mock_val_str = match mock_obj.get(left) {
Some(Value::String(s)) => s.clone(),
Some(Value::Number(n)) => n.to_string(),
Some(Value::Bool(b)) => b.to_string(),
Some(Value::Null) => "null".to_string(),
_ => "".to_string(),
};
if mock_val_str != right {
branch_matches = false;
break;
}
} else if part.to_uppercase().contains(" IS NULL") {
let left = part[..part.to_uppercase().find(" IS NULL").unwrap()]
.trim()
.split('.')
.last()
.unwrap_or("")
.trim_matches('"');
let mock_val_str = match mock_obj.get(left) {
Some(Value::Null) => "null".to_string(),
_ => "".to_string(),
};
if mock_val_str != "null" {
branch_matches = false;
break;
}
}
};
if mock_val_str != *v {
matches_all = false;
}
if branch_matches {
any_branch_matched = true;
break;
}
}
if matches_all {
if any_branch_matched {
matches.push(mock.clone());
}
}
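
The rewritten matcher above splits the WHERE clause into OR branches and accepts a mock row when every AND-ed condition inside any single branch matches. A stdlib-only sketch of that matching rule (simplified and assumed for illustration: equality conditions only, no `IS NULL` handling, case-sensitive `OR`/`AND` splits instead of the regex used in the diff):

```rust
use std::collections::HashMap;

// Returns true when any OR branch of `where_clause` has all of its
// AND-ed `table.col = 'val'` conditions satisfied by the mock row.
fn branch_matches(where_clause: &str, mock: &HashMap<&str, &str>) -> bool {
    where_clause.split(" OR ").any(|branch| {
        branch
            .replace('(', "")
            .replace(')', "")
            .split(" AND ")
            .all(|cond| match cond.split_once('=') {
                Some((l, r)) => {
                    // Keep only the column name: drop the table prefix and quotes.
                    let col = l.trim().rsplit('.').next().unwrap_or("").trim_matches('"');
                    let val = r.trim().trim_matches('\'');
                    mock.get(col).copied() == Some(val)
                }
                // Non-equality conditions are ignored in this sketch.
                None => true,
            })
    })
}
```

As in the diff, a branch fails as soon as one of its conditions misses, but a single fully-matching branch is enough to select the mock.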

View File

@ -9,10 +9,10 @@ use serde_json::Value;
/// without a live Postgres SPI connection.
pub trait DatabaseExecutor: Send + Sync {
/// Executes a query expecting a single JSONB return, representing rows.
fn query(&self, sql: &str, args: Option<&[Value]>) -> Result<Value, String>;
fn query(&self, sql: &str, args: Option<Vec<Value>>) -> Result<Value, String>;
/// Executes an operation (INSERT, UPDATE, DELETE, or pg_notify) that does not return rows.
fn execute(&self, sql: &str, args: Option<&[Value]>) -> Result<(), String>;
fn execute(&self, sql: &str, args: Option<Vec<Value>>) -> Result<(), String>;
/// Returns the current authenticated user's ID
fn auth_user_id(&self) -> Result<String, String>;
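
The trait change above passes args by value (`Option<Vec<Value>>` instead of `Option<&[Value]>`), which lets implementors such as `SpiExecutor` move each value into a `pgrx::JsonB` datum without the intermediate clone the old code needed. A minimal self-contained sketch of an implementor under the new signature (the `EchoExecutor` name and its behavior are invented here, and a tiny stand-in enum replaces `serde_json::Value` so the sketch compiles on its own):

```rust
// Stand-in for serde_json::Value, just enough for this sketch;
// the real trait uses serde_json's type.
#[derive(Clone, Debug, PartialEq)]
pub enum Value {
    Null,
    String(String),
}

// Trait shape after this diff: args are owned, so implementors can
// consume them instead of cloning from a borrowed slice.
pub trait DatabaseExecutor: Send + Sync {
    fn query(&self, sql: &str, args: Option<Vec<Value>>) -> Result<Value, String>;
    fn execute(&self, sql: &str, args: Option<Vec<Value>>) -> Result<(), String>;
    fn auth_user_id(&self) -> Result<String, String>;
}

// Hypothetical in-memory executor used only to illustrate the signature.
pub struct EchoExecutor;

impl DatabaseExecutor for EchoExecutor {
    fn query(&self, _sql: &str, args: Option<Vec<Value>>) -> Result<Value, String> {
        // Owned args can be moved straight out: return the first one, if any.
        let mut args = args.unwrap_or_default();
        if args.is_empty() {
            Ok(Value::Null)
        } else {
            Ok(args.remove(0))
        }
    }

    fn execute(&self, _sql: &str, _args: Option<Vec<Value>>) -> Result<(), String> {
        Ok(())
    }

    fn auth_user_id(&self) -> Result<String, String> {
        Ok("00000000-0000-0000-0000-000000000000".to_string())
    }
}
```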

View File

@ -67,21 +67,17 @@ impl SpiExecutor {
}
impl DatabaseExecutor for SpiExecutor {
fn query(&self, sql: &str, args: Option<&[Value]>) -> Result<Value, String> {
let mut json_args = Vec::new();
fn query(&self, sql: &str, args: Option<Vec<Value>>) -> Result<Value, String> {
let mut args_with_oid: Vec<pgrx::datum::DatumWithOid> = Vec::new();
if let Some(params) = args {
for val in params {
json_args.push(pgrx::JsonB(val.clone()));
}
for j_val in json_args.into_iter() {
args_with_oid.push(pgrx::datum::DatumWithOid::from(j_val));
args_with_oid.push(pgrx::datum::DatumWithOid::from(pgrx::JsonB(val)));
}
}
pgrx::debug1!("JSPG_SQL: {}", sql);
self.transact(|| {
Spi::connect(|client| {
pgrx::notice!("JSPG_SQL: {}", sql);
match client.select(sql, Some(args_with_oid.len() as i64), &args_with_oid) {
Ok(tup_table) => {
let mut results = Vec::new();
@ -98,21 +94,17 @@ impl DatabaseExecutor for SpiExecutor {
})
}
fn execute(&self, sql: &str, args: Option<&[Value]>) -> Result<(), String> {
let mut json_args = Vec::new();
fn execute(&self, sql: &str, args: Option<Vec<Value>>) -> Result<(), String> {
let mut args_with_oid: Vec<pgrx::datum::DatumWithOid> = Vec::new();
if let Some(params) = args {
for val in params {
json_args.push(pgrx::JsonB(val.clone()));
}
for j_val in json_args.into_iter() {
args_with_oid.push(pgrx::datum::DatumWithOid::from(j_val));
args_with_oid.push(pgrx::datum::DatumWithOid::from(pgrx::JsonB(val)));
}
}
pgrx::debug1!("JSPG_SQL: {}", sql);
self.transact(|| {
Spi::connect_mut(|client| {
pgrx::notice!("JSPG_SQL: {}", sql);
match client.update(sql, Some(args_with_oid.len() as i64), &args_with_oid) {
Ok(_) => Ok(()),
Err(e) => Err(format!("SPI Execution Failure: {}", e)),
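The `SpiExecutor` hunks above collapse the old two-pass clone-and-convert loop into a single pass over the now-owned arguments. A sketch of that conversion pattern, with hypothetical wrapper types standing in for `pgrx::JsonB` and `pgrx::datum::DatumWithOid`:

```rust
// Illustrative wrappers; the real types come from pgrx.
struct JsonB(String);
struct DatumWithOid(JsonB);

impl From<JsonB> for DatumWithOid {
    fn from(j: JsonB) -> Self {
        DatumWithOid(j)
    }
}

// One pass: each owned value is wrapped and pushed directly, with no
// intermediate json_args Vec and no clone.
fn to_datums(args: Option<Vec<String>>) -> Vec<DatumWithOid> {
    let mut out = Vec::new();
    if let Some(params) = args {
        for val in params {
            out.push(DatumWithOid::from(JsonB(val)));
        }
    }
    out
}
```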

View File

@ -53,18 +53,38 @@ impl Database {
executor: Box::new(MockExecutor::new()),
};
let mut errors = Vec::new();
if let Some(arr) = val.get("enums").and_then(|v| v.as_array()) {
for item in arr {
if let Ok(def) = serde_json::from_value::<Enum>(item.clone()) {
db.enums.insert(def.name.clone(), def);
match serde_json::from_value::<Enum>(item.clone()) {
Ok(def) => {
db.enums.insert(def.name.clone(), def);
}
Err(e) => {
errors.push(crate::drop::Error {
code: "DATABASE_ENUM_PARSE_FAILED".to_string(),
message: format!("Failed to parse database enum: {}", e),
details: crate::drop::ErrorDetails::default(),
});
}
}
}
}
if let Some(arr) = val.get("types").and_then(|v| v.as_array()) {
for item in arr {
if let Ok(def) = serde_json::from_value::<Type>(item.clone()) {
db.types.insert(def.name.clone(), def);
match serde_json::from_value::<Type>(item.clone()) {
Ok(def) => {
db.types.insert(def.name.clone(), def);
}
Err(e) => {
errors.push(crate::drop::Error {
code: "DATABASE_TYPE_PARSE_FAILED".to_string(),
message: format!("Failed to parse database type: {}", e),
details: crate::drop::ErrorDetails::default(),
});
}
}
}
}
@ -80,16 +100,11 @@ impl Database {
}
}
Err(e) => {
return Err(crate::drop::Drop::with_errors(vec![crate::drop::Error {
errors.push(crate::drop::Error {
code: "DATABASE_RELATION_PARSE_FAILED".to_string(),
message: format!("Failed to parse database relation: {}", e),
details: crate::drop::ErrorDetails {
path: "".to_string(),
cause: None,
context: None,
schema: None,
},
}]));
details: crate::drop::ErrorDetails::default(),
});
}
}
}
@ -97,27 +112,48 @@ impl Database {
if let Some(arr) = val.get("puncs").and_then(|v| v.as_array()) {
for item in arr {
if let Ok(def) = serde_json::from_value::<Punc>(item.clone()) {
db.puncs.insert(def.name.clone(), def);
match serde_json::from_value::<Punc>(item.clone()) {
Ok(def) => {
db.puncs.insert(def.name.clone(), def);
}
Err(e) => {
errors.push(crate::drop::Error {
code: "DATABASE_PUNC_PARSE_FAILED".to_string(),
message: format!("Failed to parse database punc: {}", e),
details: crate::drop::ErrorDetails::default(),
});
}
}
}
}
if let Some(arr) = val.get("schemas").and_then(|v| v.as_array()) {
for (i, item) in arr.iter().enumerate() {
if let Ok(mut schema) = serde_json::from_value::<Schema>(item.clone()) {
let id = schema
.obj
.id
.clone()
.unwrap_or_else(|| format!("schema_{}", i));
schema.obj.id = Some(id.clone());
db.schemas.insert(id, schema);
match serde_json::from_value::<Schema>(item.clone()) {
Ok(mut schema) => {
let id = schema
.obj
.id
.clone()
.unwrap_or_else(|| format!("schema_{}", i));
schema.obj.id = Some(id.clone());
db.schemas.insert(id, schema);
}
Err(e) => {
errors.push(crate::drop::Error {
code: "DATABASE_SCHEMA_PARSE_FAILED".to_string(),
message: format!("Failed to parse database schema: {}", e),
details: crate::drop::ErrorDetails::default(),
});
}
}
}
}
db.compile()?;
db.compile(&mut errors);
if !errors.is_empty() {
return Err(crate::drop::Drop::with_errors(errors));
}
Ok(db)
}
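The `Database::from_value` changes above replace fail-fast `if let Ok(..)` parsing with an accumulate-then-fail pattern: every section is attempted, errors are collected, and the load only aborts at the end. A dependency-free sketch of that pattern (names are illustrative, not the real crate types):

```rust
#[derive(Debug, PartialEq)]
struct ParseError {
    code: String,
    message: String,
}

fn parse_item(raw: &str) -> Result<u32, String> {
    raw.parse::<u32>().map_err(|e| e.to_string())
}

fn load_all(raw_items: &[&str]) -> Result<Vec<u32>, Vec<ParseError>> {
    let mut items = Vec::new();
    let mut errors = Vec::new();
    for raw in raw_items {
        match parse_item(raw) {
            Ok(v) => items.push(v),
            // Record the failure and keep going, so one bad definition
            // does not mask the others.
            Err(e) => errors.push(ParseError {
                code: "ITEM_PARSE_FAILED".to_string(),
                message: e,
            }),
        }
    }
    if !errors.is_empty() {
        return Err(errors);
    }
    Ok(items)
}
```

The caller sees every parse failure in one response instead of fixing them one deploy at a time.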
@ -128,12 +164,12 @@ impl Database {
}
/// Executes a query expecting a single JSONB array return, representing rows.
pub fn query(&self, sql: &str, args: Option<&[Value]>) -> Result<Value, String> {
pub fn query(&self, sql: &str, args: Option<Vec<Value>>) -> Result<Value, String> {
self.executor.query(sql, args)
}
/// Executes an operation (INSERT, UPDATE, DELETE, or pg_notify) that does not return rows.
pub fn execute(&self, sql: &str, args: Option<&[Value]>) -> Result<(), String> {
pub fn execute(&self, sql: &str, args: Option<Vec<Value>>) -> Result<(), String> {
self.executor.execute(sql, args)
}
@ -147,68 +183,48 @@ impl Database {
self.executor.timestamp()
}
pub fn compile(&mut self) -> Result<(), crate::drop::Drop> {
pub fn compile(&mut self, errors: &mut Vec<crate::drop::Error>) {
let mut harvested = Vec::new();
for schema in self.schemas.values_mut() {
if let Err(msg) = schema.collect_schemas(None, &mut harvested) {
return Err(crate::drop::Drop::with_errors(vec![crate::drop::Error {
code: "SCHEMA_VALIDATION_FAILED".to_string(),
message: msg,
details: crate::drop::ErrorDetails { path: "".to_string(), cause: None, context: None, schema: None },
}]));
}
schema.collect_schemas(None, &mut harvested, errors);
}
self.schemas.extend(harvested);
if let Err(msg) = self.collect_schemas() {
return Err(crate::drop::Drop::with_errors(vec![crate::drop::Error {
code: "SCHEMA_VALIDATION_FAILED".to_string(),
message: msg,
details: crate::drop::ErrorDetails {
path: "".to_string(),
cause: None,
context: None,
schema: None,
},
}]));
}
self.collect_schemas(errors);
self.collect_depths();
self.collect_descendants();
// Topologically evaluate all property inheritances, formats, schemas, and foreign key edges over OnceLocks
let mut visited = std::collections::HashSet::new();
for schema in self.schemas.values() {
schema.compile(self, &mut visited);
schema.compile(self, &mut visited, errors);
}
Ok(())
}
fn collect_schemas(&mut self) -> Result<(), String> {
fn collect_schemas(&mut self, errors: &mut Vec<crate::drop::Error>) {
let mut to_insert = Vec::new();
// Pass 1: Extract all Schemas structurally off top level definitions into the master registry.
// Validate every node recursively via string filters natively!
for type_def in self.types.values() {
for mut schema in type_def.schemas.clone() {
schema.collect_schemas(None, &mut to_insert)?;
schema.collect_schemas(None, &mut to_insert, errors);
}
}
for punc_def in self.puncs.values() {
for mut schema in punc_def.schemas.clone() {
schema.collect_schemas(None, &mut to_insert)?;
schema.collect_schemas(None, &mut to_insert, errors);
}
}
for enum_def in self.enums.values() {
for mut schema in enum_def.schemas.clone() {
schema.collect_schemas(None, &mut to_insert)?;
schema.collect_schemas(None, &mut to_insert, errors);
}
}
for (id, schema) in to_insert {
self.schemas.insert(id, schema);
}
Ok(())
}
fn collect_depths(&mut self) {
@ -247,19 +263,15 @@ impl Database {
}
}
// Cache generic descendants for $family runtime lookups
// Cache exhaustive descendants matrix for generic $family string lookups natively
let mut descendants = HashMap::new();
for (id, schema) in &self.schemas {
if let Some(family_target) = &schema.obj.family {
let mut desc_set = HashSet::new();
Self::collect_descendants_recursively(family_target, &direct_refs, &mut desc_set);
let mut desc_vec: Vec<String> = desc_set.into_iter().collect();
desc_vec.sort();
for id in self.schemas.keys() {
let mut desc_set = HashSet::new();
Self::collect_descendants_recursively(id, &direct_refs, &mut desc_set);
let mut desc_vec: Vec<String> = desc_set.into_iter().collect();
desc_vec.sort();
// By placing all descendants directly onto the ID mapped location of the Family declaration,
// we can look up descendants natively in ValidationContext without AST replacement overrides.
descendants.insert(id.clone(), desc_vec);
}
descendants.insert(id.clone(), desc_vec);
}
self.descendants = descendants;
}
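The descendants hunk above widens the cache from family-declaring schemas only to every schema id. A minimal sketch of building that exhaustive descendants matrix from a direct-reference map, mirroring the recursive collection and sorted flattening:

```rust
use std::collections::{HashMap, HashSet};

// Walk the direct-reference map transitively; the HashSet guards
// against reference cycles (insert returns false on repeats).
fn collect(id: &str, direct: &HashMap<String, Vec<String>>, out: &mut HashSet<String>) {
    if let Some(children) = direct.get(id) {
        for child in children {
            if out.insert(child.clone()) {
                collect(child, direct, out);
            }
        }
    }
}

// Build the full matrix: every id maps to its sorted descendant list.
fn descendants_matrix(direct: &HashMap<String, Vec<String>>) -> HashMap<String, Vec<String>> {
    let mut matrix = HashMap::new();
    for id in direct.keys() {
        let mut set = HashSet::new();
        collect(id, direct, &mut set);
        let mut v: Vec<String> = set.into_iter().collect();
        v.sort();
        matrix.insert(id.clone(), v);
    }
    matrix
}
```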

View File

@ -255,6 +255,7 @@ impl Schema {
&self,
db: &crate::database::Database,
visited: &mut std::collections::HashSet<String>,
errors: &mut Vec<crate::drop::Error>,
) {
if self.obj.compiled_properties.get().is_some() {
return;
@ -301,7 +302,7 @@ impl Schema {
// 1. Resolve INHERITANCE dependencies first
if let Some(ref_id) = &self.obj.r#ref {
if let Some(parent) = db.schemas.get(ref_id) {
parent.compile(db, visited);
parent.compile(db, visited, errors);
if let Some(p_props) = parent.obj.compiled_properties.get() {
props.extend(p_props.clone());
}
@ -310,7 +311,7 @@ impl Schema {
if let Some(all_of) = &self.obj.all_of {
for ao in all_of {
ao.compile(db, visited);
ao.compile(db, visited, errors);
if let Some(ao_props) = ao.obj.compiled_properties.get() {
props.extend(ao_props.clone());
}
@ -318,14 +319,14 @@ impl Schema {
}
if let Some(then_schema) = &self.obj.then_ {
then_schema.compile(db, visited);
then_schema.compile(db, visited, errors);
if let Some(t_props) = then_schema.obj.compiled_properties.get() {
props.extend(t_props.clone());
}
}
if let Some(else_schema) = &self.obj.else_ {
else_schema.compile(db, visited);
else_schema.compile(db, visited, errors);
if let Some(e_props) = else_schema.obj.compiled_properties.get() {
props.extend(e_props.clone());
}
@ -345,47 +346,47 @@ impl Schema {
let _ = self.obj.compiled_property_names.set(names);
// 4. Compute Edges natively
let schema_edges = self.compile_edges(db, visited, &props);
let schema_edges = self.compile_edges(db, visited, &props, errors);
let _ = self.obj.compiled_edges.set(schema_edges);
// 5. Build our inline children properties recursively NOW! (Depth-first search)
if let Some(local_props) = &self.obj.properties {
for child in local_props.values() {
child.compile(db, visited);
child.compile(db, visited, errors);
}
}
if let Some(items) = &self.obj.items {
items.compile(db, visited);
items.compile(db, visited, errors);
}
if let Some(pattern_props) = &self.obj.pattern_properties {
for child in pattern_props.values() {
child.compile(db, visited);
child.compile(db, visited, errors);
}
}
if let Some(additional_props) = &self.obj.additional_properties {
additional_props.compile(db, visited);
additional_props.compile(db, visited, errors);
}
if let Some(one_of) = &self.obj.one_of {
for child in one_of {
child.compile(db, visited);
child.compile(db, visited, errors);
}
}
if let Some(arr) = &self.obj.prefix_items {
for child in arr {
child.compile(db, visited);
child.compile(db, visited, errors);
}
}
if let Some(child) = &self.obj.not {
child.compile(db, visited);
child.compile(db, visited, errors);
}
if let Some(child) = &self.obj.contains {
child.compile(db, visited);
child.compile(db, visited, errors);
}
if let Some(child) = &self.obj.property_names {
child.compile(db, visited);
child.compile(db, visited, errors);
}
if let Some(child) = &self.obj.if_ {
child.compile(db, visited);
child.compile(db, visited, errors);
}
if let Some(id) = &self.obj.id {
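The `compiled_properties.get().is_some()` early return above is the compile-once guard described in the design notes: structurally frozen schemas cache derived data in a `OnceLock`, so recompilation after the first pass costs O(1). A simplified sketch of that memoization, with an illustrative node shape in place of the real `Schema`:

```rust
use std::sync::OnceLock;

struct Node {
    raw: Vec<String>,
    compiled: OnceLock<Vec<String>>,
}

impl Node {
    // get_or_init runs the closure at most once; every later call just
    // reads the cached value without locking or recursion.
    fn compile(&self) -> &Vec<String> {
        self.compiled.get_or_init(|| {
            let mut props = self.raw.clone();
            props.sort();
            props
        })
    }
}
```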
@ -394,30 +395,38 @@ impl Schema {
}
#[allow(unused_variables)]
fn validate_identifier(id: &str, field_name: &str) -> Result<(), String> {
fn validate_identifier(id: &str, field_name: &str, errors: &mut Vec<crate::drop::Error>) {
#[cfg(not(test))]
for c in id.chars() {
if !c.is_ascii_lowercase() && !c.is_ascii_digit() && c != '_' && c != '.' {
return Err(format!("Invalid character '{}' in JSON Schema '{}' property: '{}'. Identifiers must exclusively contain [a-z0-9_.]", c, field_name, id));
errors.push(crate::drop::Error {
code: "INVALID_IDENTIFIER".to_string(),
message: format!(
"Invalid character '{}' in JSON Schema '{}' property: '{}'. Identifiers must exclusively contain [a-z0-9_.]",
c, field_name, id
),
details: crate::drop::ErrorDetails::default(),
});
return;
}
}
Ok(())
}
pub fn collect_schemas(
&mut self,
tracking_path: Option<String>,
to_insert: &mut Vec<(String, Schema)>,
) -> Result<(), String> {
errors: &mut Vec<crate::drop::Error>,
) {
if let Some(id) = &self.obj.id {
Self::validate_identifier(id, "$id")?;
Self::validate_identifier(id, "$id", errors);
to_insert.push((id.clone(), self.clone()));
}
if let Some(r#ref) = &self.obj.r#ref {
Self::validate_identifier(r#ref, "$ref")?;
Self::validate_identifier(r#ref, "$ref", errors);
}
if let Some(family) = &self.obj.family {
Self::validate_identifier(family, "$family")?;
Self::validate_identifier(family, "$family", errors);
}
// Is this schema an inline ad-hoc composition?
@ -431,20 +440,20 @@ impl Schema {
// Provide the path origin to children natively, prioritizing the explicit `$id` boundary if one exists
let origin_path = self.obj.id.clone().or(tracking_path);
self.collect_child_schemas(origin_path, to_insert)?;
Ok(())
self.collect_child_schemas(origin_path, to_insert, errors);
}
pub fn collect_child_schemas(
&mut self,
origin_path: Option<String>,
to_insert: &mut Vec<(String, Schema)>,
) -> Result<(), String> {
errors: &mut Vec<crate::drop::Error>,
) {
if let Some(props) = &mut self.obj.properties {
for (k, v) in props.iter_mut() {
let mut inner = (**v).clone();
let next_path = origin_path.as_ref().map(|o| format!("{}/{}", o, k));
inner.collect_schemas(next_path, to_insert)?;
inner.collect_schemas(next_path, to_insert, errors);
*v = Arc::new(inner);
}
}
@ -453,101 +462,134 @@ impl Schema {
for (k, v) in pattern_props.iter_mut() {
let mut inner = (**v).clone();
let next_path = origin_path.as_ref().map(|o| format!("{}/{}", o, k));
inner.collect_schemas(next_path, to_insert)?;
inner.collect_schemas(next_path, to_insert, errors);
*v = Arc::new(inner);
}
}
let mut map_arr = |arr: &mut Vec<Arc<Schema>>| -> Result<(), String> {
let mut map_arr = |arr: &mut Vec<Arc<Schema>>| {
for v in arr.iter_mut() {
let mut inner = (**v).clone();
inner.collect_schemas(origin_path.clone(), to_insert)?;
inner.collect_schemas(origin_path.clone(), to_insert, errors);
*v = Arc::new(inner);
}
Ok(())
};
if let Some(arr) = &mut self.obj.prefix_items { map_arr(arr)?; }
if let Some(arr) = &mut self.obj.all_of { map_arr(arr)?; }
if let Some(arr) = &mut self.obj.one_of { map_arr(arr)?; }
if let Some(arr) = &mut self.obj.prefix_items {
map_arr(arr);
}
if let Some(arr) = &mut self.obj.all_of {
map_arr(arr);
}
if let Some(arr) = &mut self.obj.one_of {
map_arr(arr);
}
let mut map_opt = |opt: &mut Option<Arc<Schema>>, pass_path: bool| -> Result<(), String> {
let mut map_opt = |opt: &mut Option<Arc<Schema>>, pass_path: bool| {
if let Some(v) = opt {
let mut inner = (**v).clone();
let next = if pass_path { origin_path.clone() } else { None };
inner.collect_schemas(next, to_insert)?;
inner.collect_schemas(next, to_insert, errors);
*v = Arc::new(inner);
}
Ok(())
};
map_opt(&mut self.obj.additional_properties, false)?;
map_opt(&mut self.obj.additional_properties, false);
// `items` absolutely must inherit the EXACT property path assigned to the Array wrapper!
// This allows nested Arrays enclosing bare Entity structs to correctly register as the boundary mapping.
map_opt(&mut self.obj.items, true)?;
map_opt(&mut self.obj.not, false)?;
map_opt(&mut self.obj.contains, false)?;
map_opt(&mut self.obj.property_names, false)?;
map_opt(&mut self.obj.if_, false)?;
map_opt(&mut self.obj.then_, false)?;
map_opt(&mut self.obj.else_, false)?;
map_opt(&mut self.obj.items, true);
Ok(())
map_opt(&mut self.obj.not, false);
map_opt(&mut self.obj.contains, false);
map_opt(&mut self.obj.property_names, false);
map_opt(&mut self.obj.if_, false);
map_opt(&mut self.obj.then_, false);
map_opt(&mut self.obj.else_, false);
}
/// Dynamically infers and compiles all structural database relationships between this Schema
/// and its nested children. This function recursively traverses the JSON Schema abstract syntax
/// tree, identifies physical PostgreSQL table boundaries, and locks the resulting relation
/// constraint paths directly onto the `compiled_edges` map in O(1) memory.
pub fn compile_edges(
&self,
db: &crate::database::Database,
visited: &mut std::collections::HashSet<String>,
props: &std::collections::BTreeMap<String, std::sync::Arc<Schema>>,
errors: &mut Vec<crate::drop::Error>,
) -> std::collections::BTreeMap<String, crate::database::edge::Edge> {
let mut schema_edges = std::collections::BTreeMap::new();
// Determine the physical Database Table Name this schema structurally represents
// Plucks the polymorphic discriminator via dot-notation (e.g. extracting "person" from "full.person")
let mut parent_type_name = None;
if let Some(family) = &self.obj.family {
parent_type_name = Some(family.split('.').next_back().unwrap_or(family).to_string());
} else if let Some(identifier) = self.obj.identifier() {
parent_type_name = Some(identifier);
parent_type_name = Some(
identifier
.split('.')
.next_back()
.unwrap_or(&identifier)
.to_string(),
);
}
if let Some(p_type) = parent_type_name {
// Proceed only if the resolved table physically exists within the Postgres Type hierarchy
if db.types.contains_key(&p_type) {
// Iterate over all discovered schema boundaries mapped inside the object
for (prop_name, prop_schema) in props {
let mut child_type_name = None;
let mut target_schema = prop_schema.clone();
let mut is_array = false;
// Structurally unpack the inner target entity if the object maps to an array list
if let Some(crate::database::schema::SchemaTypeOrArray::Single(t)) =
&prop_schema.obj.type_
{
if t == "array" {
is_array = true;
if let Some(items) = &prop_schema.obj.items {
target_schema = items.clone();
}
}
}
// Determine the physical Postgres table backing the nested child schema recursively
if let Some(family) = &target_schema.obj.family {
child_type_name = Some(family.split('.').next_back().unwrap_or(family).to_string());
} else if let Some(ref_id) = target_schema.obj.identifier() {
child_type_name = Some(ref_id);
child_type_name = Some(ref_id.split('.').next_back().unwrap_or(&ref_id).to_string());
} else if let Some(arr) = &target_schema.obj.one_of {
if let Some(first) = arr.first() {
if let Some(ref_id) = first.obj.identifier() {
child_type_name = Some(ref_id);
child_type_name =
Some(ref_id.split('.').next_back().unwrap_or(&ref_id).to_string());
}
}
}
if let Some(c_type) = child_type_name {
if db.types.contains_key(&c_type) {
target_schema.compile(db, visited);
// Ensure the child Schema's AST has accurately compiled its own physical property keys so we can
// inject them securely for Many-to-Many Twin Deduction disambiguation matching.
target_schema.compile(db, visited, errors);
if let Some(compiled_target_props) = target_schema.obj.compiled_properties.get() {
let keys_for_ambiguity: Vec<String> =
compiled_target_props.keys().cloned().collect();
if let Some((relation, is_forward)) =
resolve_relation(db, &p_type, &c_type, prop_name, Some(&keys_for_ambiguity))
{
// Interrogate the Database catalog graph to discover the exact Foreign Key Constraint connecting the components
if let Some((relation, is_forward)) = resolve_relation(
db,
&p_type,
&c_type,
prop_name,
Some(&keys_for_ambiguity),
is_array,
errors,
) {
schema_edges.insert(
prop_name.clone(),
crate::database::edge::Edge {
@ -566,15 +608,20 @@ impl Schema {
}
}
/// Inspects the Postgres pg_constraint relations catalog to securely identify
/// the precise Foreign Key connecting a parent and child hierarchy path.
pub(crate) fn resolve_relation<'a>(
db: &'a crate::database::Database,
parent_type: &str,
child_type: &str,
prop_name: &str,
relative_keys: Option<&Vec<String>>,
is_array: bool,
errors: &mut Vec<crate::drop::Error>,
) -> Option<(&'a crate::database::relation::Relation, bool)> {
// Enforce graph locality by ensuring we don't accidentally crawl to pure structural entity boundaries
if parent_type == "entity" && child_type == "entity" {
return None;
return None;
}
let p_def = db.types.get(parent_type)?;
@ -583,11 +630,25 @@ pub(crate) fn resolve_relation<'a>(
let mut matching_rels = Vec::new();
let mut directions = Vec::new();
for rel in db.relations.values() {
let is_forward = p_def.hierarchy.contains(&rel.source_type)
&& c_def.hierarchy.contains(&rel.destination_type);
let is_reverse = p_def.hierarchy.contains(&rel.destination_type)
&& c_def.hierarchy.contains(&rel.source_type);
// Scour the complete catalog for any Edge matching the inheritance scope of the two objects
// This automatically binds polymorphic structures (e.g. recognizing a relationship targeting User
// also natively binds instances specifically typed as Person).
let mut all_rels: Vec<&crate::database::relation::Relation> = db.relations.values().collect();
all_rels.sort_by(|a, b| a.constraint.cmp(&b.constraint));
for rel in all_rels {
let mut is_forward =
p_def.hierarchy.contains(&rel.source_type) && c_def.hierarchy.contains(&rel.destination_type);
let is_reverse =
p_def.hierarchy.contains(&rel.destination_type) && c_def.hierarchy.contains(&rel.source_type);
// Structural Cardinality Filtration:
// If the schema requires a collection (Array), it is mathematically impossible for a pure
// Forward scalar edge (where the parent holds exactly one UUID pointer) to fulfill a One-to-Many request.
// Thus, if it's an array, we fully reject pure Forward edges and only accept Reverse edges (or Junction edges).
if is_array && is_forward && !is_reverse {
is_forward = false;
}
if is_forward {
matching_rels.push(rel);
@ -598,10 +659,20 @@ pub(crate) fn resolve_relation<'a>(
}
}
// Abort relation discovery early if no hierarchical inheritance match was found
if matching_rels.is_empty() {
errors.push(crate::drop::Error {
code: "EDGE_MISSING".to_string(),
message: format!(
"No database relation exists between '{}' and '{}' for property '{}'",
parent_type, child_type, prop_name
),
details: crate::drop::ErrorDetails::default(),
});
return None;
}
// Ideal State: The objects only share a solitary structural relation, resolving ambiguity instantly.
if matching_rels.len() == 1 {
return Some((matching_rels[0], directions[0]));
}
@ -609,6 +680,8 @@ pub(crate) fn resolve_relation<'a>(
let mut chosen_idx = 0;
let mut resolved = false;
// Exact Prefix Disambiguation: Determine if the database specifically names this constraint
// directly mapping to the JSON Schema property name (e.g., `fk_{child}_{property_name}`)
for (i, rel) in matching_rels.iter().enumerate() {
if let Some(prefix) = &rel.prefix {
if prop_name.starts_with(prefix)
@ -622,21 +695,74 @@ pub(crate) fn resolve_relation<'a>(
}
}
// Complex Subgraph Resolution: The database contains multiple equally explicit foreign key constraints
// linking these objects (such as pointing to `source` and `target` in Many-to-Many junction models).
if !resolved && relative_keys.is_some() {
// Twin Deduction Pass 1: We inspect the exact properties structurally defined inside the compiled payload
// to observe which explicit relation arrow the child payload natively consumes.
let keys = relative_keys.unwrap();
let mut missing_prefix_ids = Vec::new();
let mut consumed_rel_idx = None;
for (i, rel) in matching_rels.iter().enumerate() {
if let Some(prefix) = &rel.prefix {
if !keys.contains(prefix) {
missing_prefix_ids.push(i);
if keys.contains(prefix) {
consumed_rel_idx = Some(i);
break; // Found the routing edge explicitly consumed by the schema payload
}
}
}
if missing_prefix_ids.len() == 1 {
chosen_idx = missing_prefix_ids[0];
// Twin Deduction Pass 2: Knowing which arrow points outbound, we can mathematically deduce its twin
// since the reverse ownership on the same junction boundary must be the incoming Edge to the parent.
if let Some(used_idx) = consumed_rel_idx {
let used_rel = matching_rels[used_idx];
let mut twin_ids = Vec::new();
for (i, rel) in matching_rels.iter().enumerate() {
if i != used_idx
&& rel.source_type == used_rel.source_type
&& rel.destination_type == used_rel.destination_type
&& rel.prefix.is_some()
{
twin_ids.push(i);
}
}
if twin_ids.len() == 1 {
chosen_idx = twin_ids[0];
resolved = true;
}
}
}
// Implicit Base Fallback: If no complex explicit paths resolve, but exactly one relation
// sits entirely naked (without a constraint prefix), it must be the core structural parent ownership.
if !resolved {
let mut null_prefix_ids = Vec::new();
for (i, rel) in matching_rels.iter().enumerate() {
if rel.prefix.is_none() {
null_prefix_ids.push(i);
}
}
if null_prefix_ids.len() == 1 {
chosen_idx = null_prefix_ids[0];
resolved = true;
}
}
// If we exhausted all mathematical deduction pathways and STILL cannot isolate a single edge,
// we must abort rather than silently guessing. Returning None prevents arbitrary SQL generation
// and forces a clean structural error for the architect.
if !resolved {
errors.push(crate::drop::Error {
code: "AMBIGUOUS_TYPE_RELATIONS".to_string(),
message: format!(
"Ambiguous database relation between '{}' and '{}' for property '{}'",
parent_type, child_type, prop_name
),
details: crate::drop::ErrorDetails::default(),
});
return None;
}
Some((matching_rels[chosen_idx], directions[chosen_idx]))
}
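The disambiguation passes in `resolve_relation` above can be sketched over a stripped-down relation shape. This simplification omits the hierarchy matching, cardinality filtration, and the source/destination equality check on twins; it shows only the selection order: solitary match, then twin deduction from the prefix the payload consumes, then the single prefix-less fallback, else `None`:

```rust
// Hypothetical minimal relation: only the constraint prefix matters here.
#[derive(Debug, PartialEq)]
struct Rel {
    prefix: Option<&'static str>,
}

fn choose<'a>(rels: &'a [Rel], keys: &[&str]) -> Option<&'a Rel> {
    if rels.len() == 1 {
        return Some(&rels[0]); // solitary relation: no ambiguity
    }
    // Twin deduction: the edge whose prefix the payload consumes points
    // outbound, so the other prefixed edge is the incoming one.
    if let Some(used) = rels
        .iter()
        .position(|r| r.prefix.map_or(false, |p| keys.contains(&p)))
    {
        let twins: Vec<usize> = (0..rels.len())
            .filter(|&i| i != used && rels[i].prefix.is_some())
            .collect();
        if twins.len() == 1 {
            return Some(&rels[twins[0]]);
        }
    }
    // Implicit base fallback: exactly one naked (prefix-less) relation.
    let naked: Vec<usize> = (0..rels.len())
        .filter(|&i| rels[i].prefix.is_none())
        .collect();
    if naked.len() == 1 {
        return Some(&rels[naked[0]]);
    }
    None // ambiguous: the caller records an error instead of guessing
}
```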

View File

@ -64,7 +64,7 @@ pub struct Error {
pub details: ErrorDetails,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct ErrorDetails {
pub path: String,
#[serde(skip_serializing_if = "Option::is_none")]

View File

@ -3,8 +3,8 @@
pub mod cache;
use crate::database::r#type::Type;
use crate::database::Database;
use crate::database::r#type::Type;
use serde_json::Value;
use std::sync::Arc;
@ -25,22 +25,22 @@ impl Merger {
let mut notifications_queue = Vec::new();
let target_schema = match self.db.schemas.get(schema_id) {
Some(s) => Arc::new(s.clone()),
None => {
return crate::drop::Drop::with_errors(vec![crate::drop::Error {
code: "MERGE_FAILED".to_string(),
message: format!("Unknown schema_id: {}", schema_id),
details: crate::drop::ErrorDetails {
path: "".to_string(),
cause: None,
context: Some(data),
schema: None,
},
}]);
}
Some(s) => Arc::new(s.clone()),
None => {
return crate::drop::Drop::with_errors(vec![crate::drop::Error {
code: "MERGE_FAILED".to_string(),
message: format!("Unknown schema_id: {}", schema_id),
details: crate::drop::ErrorDetails {
path: "".to_string(),
cause: None,
context: Some(data),
schema: None,
},
}]);
}
};
let result = self.merge_internal(target_schema, data.clone(), &mut notifications_queue);
let result = self.merge_internal(target_schema, data, &mut notifications_queue);
let val_resolved = match result {
Ok(val) => val,
@ -50,18 +50,24 @@ impl Merger {
let mut final_cause = None;
if let Ok(Value::Object(map)) = serde_json::from_str::<Value>(&msg) {
if let (Some(Value::String(e_msg)), Some(Value::String(e_code))) = (map.get("error"), map.get("code")) {
if let (Some(Value::String(e_msg)), Some(Value::String(e_code))) =
(map.get("error"), map.get("code"))
{
final_message = e_msg.clone();
final_code = e_code.clone();
let mut cause_parts = Vec::new();
if let Some(Value::String(d)) = map.get("detail") {
if !d.is_empty() { cause_parts.push(d.clone()); }
if !d.is_empty() {
cause_parts.push(d.clone());
}
}
if let Some(Value::String(h)) = map.get("hint") {
if !h.is_empty() { cause_parts.push(h.clone()); }
if !h.is_empty() {
cause_parts.push(h.clone());
}
}
if !cause_parts.is_empty() {
final_cause = Some(cause_parts.join("\n"));
final_cause = Some(cause_parts.join("\n"));
}
}
}
@ -72,7 +78,7 @@ impl Merger {
details: crate::drop::ErrorDetails {
path: "".to_string(),
cause: final_cause,
context: Some(data),
context: None,
schema: None,
},
}]);
@ -144,11 +150,11 @@ impl Merger {
) -> Result<Value, String> {
let mut item_schema = schema.clone();
if let Some(crate::database::schema::SchemaTypeOrArray::Single(t)) = &schema.obj.type_ {
if t == "array" {
if let Some(items_def) = &schema.obj.items {
item_schema = items_def.clone();
}
if t == "array" {
if let Some(items_def) = &schema.obj.items {
item_schema = items_def.clone();
}
}
}
let mut resolved_items = Vec::new();
@ -178,8 +184,8 @@ impl Merger {
};
let compiled_props = match schema.obj.compiled_properties.get() {
Some(props) => props,
None => return Err("Schema has no compiled properties for merging".to_string()),
Some(props) => props,
None => return Err("Schema has no compiled properties for merging".to_string()),
};
let mut entity_fields = serde_json::Map::new();
@ -189,37 +195,37 @@ impl Merger {
for (k, v) in obj {
// Always retain system and unmapped core fields natively implicitly mapped to the Postgres tables
if k == "id" || k == "type" || k == "created" {
entity_fields.insert(k.clone(), v.clone());
continue;
entity_fields.insert(k.clone(), v.clone());
continue;
}
if let Some(prop_schema) = compiled_props.get(&k) {
let mut is_edge = false;
if let Some(edges) = schema.obj.compiled_edges.get() {
if edges.contains_key(&k) {
is_edge = true;
}
}
if is_edge {
let typeof_v = match &v {
Value::Object(_) => "object",
Value::Array(_) => "array",
_ => "field", // Malformed edge data?
};
if typeof_v == "object" {
entity_objects.insert(k.clone(), (v.clone(), prop_schema.clone()));
} else if typeof_v == "array" {
entity_arrays.insert(k.clone(), (v.clone(), prop_schema.clone()));
} else {
entity_fields.insert(k.clone(), v.clone());
}
} else {
// Not an edge! It's a raw Postgres column (e.g., JSONB, text[])
entity_fields.insert(k.clone(), v.clone());
}
} else if type_def.fields.contains(&k) {
let mut is_edge = false;
if let Some(edges) = schema.obj.compiled_edges.get() {
if edges.contains_key(&k) {
is_edge = true;
}
}
if is_edge {
let typeof_v = match &v {
Value::Object(_) => "object",
Value::Array(_) => "array",
_ => "field", // Malformed edge data?
};
if typeof_v == "object" {
entity_objects.insert(k.clone(), (v.clone(), prop_schema.clone()));
} else if typeof_v == "array" {
entity_arrays.insert(k.clone(), (v.clone(), prop_schema.clone()));
} else {
entity_fields.insert(k.clone(), v.clone());
}
} else {
// Not an edge! It's a raw Postgres column (e.g., JSONB, text[])
entity_fields.insert(k.clone(), v.clone());
}
} else if type_def.fields.contains(&k) {
entity_fields.insert(k.clone(), v.clone());
}
}
@ -228,13 +234,15 @@ impl Merger {
let mut entity_change_kind = None;
let mut entity_fetched = None;
let mut entity_replaces = None;
if !type_def.relationship {
let (fields, kind, fetched) =
self.stage_entity(entity_fields.clone(), type_def, &user_id, &timestamp)?;
let (fields, kind, fetched, replaces) =
self.stage_entity(entity_fields, type_def, &user_id, &timestamp)?;
entity_fields = fields;
entity_change_kind = kind;
entity_fetched = fetched;
entity_replaces = replaces;
}
let mut entity_response = serde_json::Map::new();
@ -251,12 +259,10 @@ impl Merger {
};
if let Some(compiled_edges) = schema.obj.compiled_edges.get() {
println!("Compiled Edges keys for relation {}: {:?}", relation_name, compiled_edges.keys().collect::<Vec<_>>());
if let Some(edge) = compiled_edges.get(&relation_name) {
println!("FOUND EDGE {} -> {:?}", relation_name, edge.constraint);
if let Some(relation) = self.db.relations.get(&edge.constraint) {
let parent_is_source = edge.forward;
if parent_is_source {
if !relative.contains_key("organization_id") {
if let Some(org_id) = entity_fields.get("organization_id") {
@ -264,15 +270,16 @@ impl Merger {
}
}
let mut merged_relative = match self.merge_internal(rel_schema.clone(), Value::Object(relative), notifications)? {
let mut merged_relative = match self.merge_internal(
rel_schema.clone(),
Value::Object(relative),
notifications,
)? {
Value::Object(m) => m,
_ => continue,
};
merged_relative.insert(
"type".to_string(),
Value::String(relative_type_name),
);
merged_relative.insert("type".to_string(), Value::String(relative_type_name));
Self::apply_entity_relation(
&mut entity_fields,
@ -295,7 +302,11 @@ impl Merger {
&entity_fields,
);
let merged_relative = match self.merge_internal(rel_schema.clone(), Value::Object(relative), notifications)? {
let merged_relative = match self.merge_internal(
rel_schema.clone(),
Value::Object(relative),
notifications,
)? {
Value::Object(m) => m,
_ => continue,
};
@ -308,11 +319,12 @@ impl Merger {
}
if type_def.relationship {
let (fields, kind, fetched) =
self.stage_entity(entity_fields.clone(), type_def, &user_id, &timestamp)?;
let (fields, kind, fetched, replaces) =
self.stage_entity(entity_fields, type_def, &user_id, &timestamp)?;
entity_fields = fields;
entity_change_kind = kind;
entity_fetched = fetched;
entity_replaces = replaces;
}
self.merge_entity_fields(
@ -357,19 +369,24 @@ impl Merger {
);
let mut item_schema = rel_schema.clone();
if let Some(crate::database::schema::SchemaTypeOrArray::Single(t)) = &rel_schema.obj.type_ {
if t == "array" {
if let Some(items_def) = &rel_schema.obj.items {
item_schema = items_def.clone();
}
if let Some(crate::database::schema::SchemaTypeOrArray::Single(t)) =
&rel_schema.obj.type_
{
if t == "array" {
if let Some(items_def) = &rel_schema.obj.items {
item_schema = items_def.clone();
}
}
}
let merged_relative =
match self.merge_internal(item_schema, Value::Object(relative_item), notifications)? {
Value::Object(m) => m,
_ => continue,
};
let merged_relative = match self.merge_internal(
item_schema,
Value::Object(relative_item),
notifications,
)? {
Value::Object(m) => m,
_ => continue,
};
relative_responses.push(Value::Object(merged_relative));
}
@@ -388,6 +405,7 @@ impl Merger {
entity_change_kind.as_deref(),
&user_id,
&timestamp,
entity_replaces.as_deref(),
)?;
if let Some(sql) = notify_sql {
@@ -419,6 +437,7 @@ impl Merger {
serde_json::Map<String, Value>,
Option<String>,
Option<serde_json::Map<String, Value>>,
Option<String>,
),
String,
> {
@@ -428,8 +447,8 @@ impl Merger {
// An anchor is STRICTLY a struct containing merely an `id` and `type`.
// We aggressively bypass Database SPI `SELECT` fetches because there are no primitive
// mutations to apply to the row. PostgreSQL inherently protects relationships via Foreign Keys downstream.
let is_anchor = entity_fields.len() == 2
&& entity_fields.contains_key("id")
let is_anchor = entity_fields.len() == 2
&& entity_fields.contains_key("id")
&& entity_fields.contains_key("type");
let has_valid_id = entity_fields
@@ -438,11 +457,22 @@ impl Merger {
.map_or(false, |s| !s.is_empty());
if is_anchor && has_valid_id {
return Ok((entity_fields, None, None));
return Ok((entity_fields, None, None, None));
}
let entity_fetched = self.fetch_entity(&entity_fields, type_def)?;
let mut replaces_id = None;
if let Some(ref fetched_row) = entity_fetched {
let provided_id = entity_fields.get("id").and_then(|v| v.as_str());
let fetched_id = fetched_row.get("id").and_then(|v| v.as_str());
if let (Some(pid), Some(fid)) = (provided_id, fetched_id) {
if !pid.is_empty() && pid != fid {
replaces_id = Some(pid.to_string());
}
}
}
let system_keys = vec![
"id".to_string(),
"type".to_string(),
@@ -492,7 +522,7 @@ impl Merger {
);
entity_fields = new_fields;
} else if changes.is_empty() {
} else if changes.is_empty() && replaces_id.is_none() {
let mut new_fields = serde_json::Map::new();
new_fields.insert(
"id".to_string(),
@@ -508,6 +538,8 @@ impl Merger {
.unwrap_or(false);
entity_change_kind = if is_archived {
Some("delete".to_string())
} else if changes.is_empty() && replaces_id.is_some() {
Some("replace".to_string())
} else {
Some("update".to_string())
};
@@ -530,7 +562,12 @@ impl Merger {
entity_fields = new_fields;
}
Ok((entity_fields, entity_change_kind, entity_fetched))
Ok((
entity_fields,
entity_change_kind,
entity_fetched,
replaces_id,
))
}
fn fetch_entity(
@@ -585,11 +622,14 @@ impl Merger {
template
};
let where_clause = if let Some(id) = id_val {
format!("WHERE t1.id = {}", Self::quote_literal(id))
} else if lookup_complete {
let mut lookup_predicates = Vec::new();
let mut where_parts = Vec::new();
if let Some(id) = id_val {
where_parts.push(format!("t1.id = {}", Self::quote_literal(id)));
}
if lookup_complete {
let mut lookup_predicates = Vec::new();
for column in &entity_type.lookup_fields {
let val = entity_fields.get(column).unwrap_or(&Value::Null);
if column == "type" {
@@ -598,10 +638,14 @@ impl Merger {
lookup_predicates.push(format!("\"{}\" = {}", column, Self::quote_literal(val)));
}
}
format!("WHERE {}", lookup_predicates.join(" AND "))
} else {
where_parts.push(format!("({})", lookup_predicates.join(" AND ")));
}
if where_parts.is_empty() {
return Ok(None);
};
}
let where_clause = format!("WHERE {}", where_parts.join(" OR "));
let final_sql = format!("{} {}", fetch_sql_template, where_clause);
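The hunk above changes `fetch_entity` so a direct `id` predicate no longer short-circuits the lookup: both are collected into `where_parts` and combined with `OR`. A minimal standalone sketch of that rule (assumed simplification; `quote` stands in for `Self::quote_literal`, and the lookup pairs stand in for `entity_type.lookup_fields`):

```rust
// Sketch of the id-OR-lookup WHERE builder; not the project's API.
fn quote(val: &str) -> String {
    format!("'{}'", val.replace('\'', "''"))
}

fn build_where(id: Option<&str>, lookup: &[(&str, &str)]) -> Option<String> {
    let mut parts = Vec::new();
    if let Some(id) = id {
        parts.push(format!("t1.id = {}", quote(id)));
    }
    if !lookup.is_empty() {
        // The completed lookup group is parenthesized so it ORs cleanly with the id.
        let preds: Vec<String> = lookup
            .iter()
            .map(|(col, val)| format!("\"{}\" = {}", col, quote(val)))
            .collect();
        parts.push(format!("({})", preds.join(" AND ")));
    }
    // No id and no complete lookup: mirror the Ok(None) early return above.
    if parts.is_empty() {
        return None;
    }
    Some(format!("WHERE {}", parts.join(" OR ")))
}

fn main() {
    let clause = build_where(Some("e1"), &[("type", "person"), ("email", "x")]).unwrap();
    assert_eq!(
        clause,
        "WHERE t1.id = 'e1' OR (\"type\" = 'person' AND \"email\" = 'x')"
    );
    assert_eq!(build_where(None, &[]), None);
    println!("{}", clause);
}
```

The effect is that a row can be found either by its explicit id or by its lookup fields in a single fetch, which is what lets the merger detect an id mismatch and report a replacement.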
@@ -710,9 +754,7 @@ impl Merger {
columns.join(", "),
values.join(", ")
);
self
.db
.execute(&sql, None)?;
self.db.execute(&sql, None)?;
} else if change_kind == "update" || change_kind == "delete" {
entity_pairs.remove("id");
entity_pairs.remove("type");
@@ -744,9 +786,7 @@ impl Merger {
set_clauses.join(", "),
Self::quote_literal(&Value::String(id_str.to_string()))
);
self
.db
.execute(&sql, None)?;
self.db.execute(&sql, None)?;
}
}
@@ -761,6 +801,7 @@ impl Merger {
entity_change_kind: Option<&str>,
user_id: &str,
timestamp: &str,
replaces_id: Option<&str>,
) -> Result<Option<String>, String> {
let change_kind = match entity_change_kind {
Some(k) => k,
@@ -772,9 +813,9 @@ impl Merger {
let mut old_vals = serde_json::Map::new();
let mut new_vals = serde_json::Map::new();
let is_update = change_kind == "update" || change_kind == "delete";
let exists = change_kind == "update" || change_kind == "delete" || change_kind == "replace";
if !is_update {
if !exists {
let system_keys = vec![
"id".to_string(),
"created_by".to_string(),
@@ -811,7 +852,7 @@ impl Merger {
}
let mut complete = entity_fields.clone();
if is_update {
if exists {
if let Some(fetched) = entity_fetched {
let mut temp = fetched.clone();
for (k, v) in entity_fields {
@@ -831,13 +872,17 @@ impl Merger {
let mut notification = serde_json::Map::new();
notification.insert("complete".to_string(), Value::Object(complete));
notification.insert("new".to_string(), new_val_obj.clone());
if old_val_obj != Value::Null {
notification.insert("old".to_string(), old_val_obj.clone());
notification.insert("old".to_string(), old_val_obj.clone());
}
if let Some(rep) = replaces_id {
notification.insert("replaces".to_string(), Value::String(rep.to_string()));
}
let mut notify_sql = None;
if type_obj.historical {
if type_obj.historical && change_kind != "replace" {
let change_sql = format!(
"INSERT INTO agreego.change (\"old\", \"new\", entity_id, id, kind, modified_at, modified_by) VALUES ({}, {}, {}, {}, {}, {}, {})",
Self::quote_literal(&old_val_obj),
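Taken together, the merger hunks above add a fourth change kind: a lookup that matched an existing row under a different provided id now yields `"replace"` (with the superseded id in the notification's `replaces` field) instead of producing no notification, and `"replace"` skips the historical change-table insert. A hypothetical condensation of that decision (not the project's API; branch ordering inferred from the diff, under the assumption that a missing fetch stages an insert):

```rust
// Hypothetical condensation of the stage_entity/build_notification decision.
fn change_kind(
    fetched: bool,     // did fetch_entity find an existing row?
    is_archived: bool, // archived flag on the merged fields
    has_changes: bool, // any non-system field differs from the fetched row
    replaces: bool,    // provided id differs from the fetched row's id
) -> Option<&'static str> {
    if !fetched {
        return Some("insert");
    }
    if is_archived {
        Some("delete")
    } else if !has_changes && replaces {
        Some("replace") // lookup matched a different row: notify, but no history row
    } else if !has_changes {
        None // pure no-op lookup: nothing to write, nothing to notify
    } else {
        Some("update")
    }
}

fn main() {
    assert_eq!(change_kind(false, false, true, false), Some("insert"));
    assert_eq!(change_kind(true, false, false, true), Some("replace"));
    assert_eq!(change_kind(true, false, false, false), None);
    assert_eq!(change_kind(true, false, true, false), Some("update"));
    println!("ok");
}
```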

View File

@@ -67,7 +70,10 @@ impl<'a> Compiler<'a> {
if let Some(items) = &node.schema.obj.items {
let mut resolved_type = None;
if let Some(family_target) = items.obj.family.as_ref() {
let base_type_name = family_target.split('.').next_back().unwrap_or(family_target);
let base_type_name = family_target
.split('.')
.next_back()
.unwrap_or(family_target);
resolved_type = self.db.types.get(base_type_name);
} else if let Some(base_type_name) = items.obj.identifier() {
resolved_type = self.db.types.get(&base_type_name);
@@ -89,7 +92,10 @@ impl<'a> Compiler<'a> {
}
// 3. Fallback for root execution of standalone non-entity arrays
Err("Cannot compile a root array without a valid entity reference or table mapped via `items`.".to_string())
Err(
"Cannot compile a root array without a valid entity reference or table mapped via `items`."
.to_string(),
)
}
fn compile_reference(&mut self, node: Node<'a>) -> Result<(String, String), String> {
@@ -118,33 +124,28 @@ impl<'a> Compiler<'a> {
}
// Handle $family Polymorphism fallbacks for relations
if let Some(family_target) = &node.schema.obj.family {
let base_type_name = family_target
.split('.')
.next_back()
.unwrap_or(family_target)
.to_string();
if let Some(type_def) = self.db.types.get(&base_type_name) {
if type_def.variations.len() == 1 {
let mut bypass_schema = crate::database::schema::Schema::default();
bypass_schema.obj.r#ref = Some(family_target.clone());
let mut bypass_node = node.clone();
bypass_node.schema = std::sync::Arc::new(bypass_schema);
return self.compile_node(bypass_node);
}
let mut sorted_variations: Vec<String> = type_def.variations.iter().cloned().collect();
sorted_variations.sort();
let mut family_schemas = Vec::new();
for variation in &sorted_variations {
let mut ref_schema = crate::database::schema::Schema::default();
ref_schema.obj.r#ref = Some(variation.clone());
family_schemas.push(std::sync::Arc::new(ref_schema));
}
return self.compile_one_of(&family_schemas, node);
let mut all_targets = vec![family_target.clone()];
if let Some(descendants) = self.db.descendants.get(family_target) {
all_targets.extend(descendants.clone());
}
if all_targets.len() == 1 {
let mut bypass_schema = crate::database::schema::Schema::default();
bypass_schema.obj.r#ref = Some(all_targets[0].clone());
let mut bypass_node = node.clone();
bypass_node.schema = std::sync::Arc::new(bypass_schema);
return self.compile_node(bypass_node);
}
all_targets.sort();
let mut family_schemas = Vec::new();
for variation in &all_targets {
let mut ref_schema = crate::database::schema::Schema::default();
ref_schema.obj.r#ref = Some(variation.clone());
family_schemas.push(std::sync::Arc::new(ref_schema));
}
return self.compile_one_of(&family_schemas, node);
}
// Handle oneOf Polymorphism fallbacks for relations
@@ -224,49 +225,62 @@ impl<'a> Compiler<'a> {
let mut select_args = Vec::new();
if let Some(family_target) = node.schema.obj.family.as_ref() {
let base_type_name = family_target
.split('.')
.next_back()
.unwrap_or(family_target)
.to_string();
let family_prefix = family_target.rfind('.').map(|idx| &family_target[..idx]);
if let Some(fam_type_def) = self.db.types.get(&base_type_name) {
if fam_type_def.variations.len() == 1 {
let mut bypass_schema = crate::database::schema::Schema::default();
bypass_schema.obj.r#ref = Some(family_target.clone());
bypass_schema.compile(self.db, &mut std::collections::HashSet::new());
let mut all_targets = vec![family_target.clone()];
if let Some(descendants) = self.db.descendants.get(family_target) {
all_targets.extend(descendants.clone());
}
// Filter targets to EXACTLY match the family_target prefix
let mut final_targets = Vec::new();
for target in all_targets {
let target_prefix = target.rfind('.').map(|idx| &target[..idx]);
if target_prefix == family_prefix {
final_targets.push(target);
}
}
final_targets.sort();
final_targets.dedup();
if final_targets.len() == 1 {
let variation = &final_targets[0];
if let Some(target_schema) = self.db.schemas.get(variation) {
let mut bypass_node = node.clone();
bypass_node.schema = std::sync::Arc::new(bypass_schema);
bypass_node.schema = std::sync::Arc::new(target_schema.clone());
let mut bypassed_args = self.compile_select_clause(r#type, table_aliases, bypass_node)?;
select_args.append(&mut bypassed_args);
} else {
let mut family_schemas = Vec::new();
let mut sorted_fam_variations: Vec<String> =
fam_type_def.variations.iter().cloned().collect();
sorted_fam_variations.sort();
for variation in &sorted_fam_variations {
let mut ref_schema = crate::database::schema::Schema::default();
ref_schema.obj.r#ref = Some(variation.clone());
ref_schema.compile(self.db, &mut std::collections::HashSet::new());
family_schemas.push(std::sync::Arc::new(ref_schema));
}
let base_alias = table_aliases
.get(&r#type.name)
.cloned()
.unwrap_or_else(|| node.parent_alias.to_string());
select_args.push(format!("'id', {}.id", base_alias));
let mut case_node = node.clone();
case_node.parent_alias = base_alias.clone();
let arc_aliases = std::sync::Arc::new(table_aliases.clone());
case_node.parent_type_aliases = Some(arc_aliases);
let (case_sql, _) = self.compile_one_of(&family_schemas, case_node)?;
select_args.push(format!("'type', {}", case_sql));
return Err(format!("Could not find schema for variation {}", variation));
}
} else {
let mut family_schemas = Vec::new();
for variation in &final_targets {
if let Some(target_schema) = self.db.schemas.get(variation) {
family_schemas.push(std::sync::Arc::new(target_schema.clone()));
} else {
return Err(format!(
"Could not find schema metadata for variation {}",
variation
));
}
}
let base_alias = table_aliases
.get(&r#type.name)
.cloned()
.unwrap_or_else(|| node.parent_alias.to_string());
select_args.push(format!("'id', {}.id", base_alias));
let mut case_node = node.clone();
case_node.parent_alias = base_alias.clone();
let arc_aliases = std::sync::Arc::new(table_aliases.clone());
case_node.parent_type_aliases = Some(arc_aliases);
let (case_sql, _) = self.compile_one_of(&family_schemas, case_node)?;
select_args.push(format!("'type', {}", case_sql));
}
} else if let Some(one_of) = &node.schema.obj.one_of {
let base_alias = table_aliases
@@ -328,10 +342,7 @@ impl<'a> Compiler<'a> {
};
for option_schema in schemas {
if let Some(ref_id) = &option_schema.obj.r#ref {
// Find the physical type this ref maps to
let base_type_name = ref_id.split('.').next_back().unwrap_or("").to_string();
if let Some(base_type_name) = option_schema.obj.identifier() {
// Generate the nested SQL for this specific target type
let mut child_node = node.clone();
child_node.schema = std::sync::Arc::clone(option_schema);
@@ -452,7 +463,6 @@ impl<'a> Compiler<'a> {
},
};
let (val_sql, val_type) = self.compile_node(child_node)?;
if val_type != "abort" {
@@ -515,7 +525,13 @@ impl<'a> Compiler<'a> {
// Determine if the property schema resolves to a physical Database Entity
let mut bound_type_name = None;
if let Some(family_target) = prop_schema.obj.family.as_ref() {
bound_type_name = Some(family_target.split('.').next_back().unwrap_or(family_target).to_string());
bound_type_name = Some(
family_target
.split('.')
.next_back()
.unwrap_or(family_target)
.to_string(),
);
} else if let Some(lookup_key) = prop_schema.obj.identifier() {
bound_type_name = Some(lookup_key);
}
@@ -536,7 +552,10 @@ impl<'a> Compiler<'a> {
}
if let Some(col) = poly_col {
if let Some(alias) = type_aliases.get(table_to_alias).or_else(|| type_aliases.get(&node.parent_alias)) {
if let Some(alias) = type_aliases
.get(table_to_alias)
.or_else(|| type_aliases.get(&node.parent_alias))
{
where_clauses.push(format!("{}.{} = '{}'", alias, col, type_name));
}
}
@@ -710,8 +729,6 @@ impl<'a> Compiler<'a> {
) -> Result<(), String> {
if let Some(prop_ref) = &node.property_name {
let prop = prop_ref.as_str();
println!("DEBUG: Eval prop: {}", prop);
let mut parent_relation_alias = node.parent_alias.clone();
let mut child_relation_alias = base_alias.to_string();

View File

@@ -51,7 +51,7 @@ impl Queryer {
};
// 3. Execute via Database Executor
self.execute_sql(schema_id, &sql, &args)
self.execute_sql(schema_id, &sql, args)
}
fn extract_filters(
@@ -151,7 +151,7 @@ impl Queryer {
&self,
schema_id: &str,
sql: &str,
args: &[serde_json::Value],
args: Vec<serde_json::Value>,
) -> crate::drop::Drop {
match self.db.query(sql, Some(args)) {
Ok(serde_json::Value::Array(table)) => {

View File

@@ -1463,6 +1463,18 @@ fn test_queryer_0_8() {
crate::tests::runner::run_test_case(&path, 0, 8).unwrap();
}
#[test]
fn test_queryer_0_9() {
let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 0, 9).unwrap();
}
#[test]
fn test_queryer_0_10() {
let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 0, 10).unwrap();
}
#[test]
fn test_not_0_0() {
let path = format!("{}/fixtures/not.json", env!("CARGO_MANIFEST_DIR"));
@@ -3467,6 +3479,36 @@ fn test_if_then_else_13_1() {
crate::tests::runner::run_test_case(&path, 13, 1).unwrap();
}
#[test]
fn test_database_0_0() {
let path = format!("{}/fixtures/database.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 0, 0).unwrap();
}
#[test]
fn test_database_1_0() {
let path = format!("{}/fixtures/database.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 0).unwrap();
}
#[test]
fn test_database_2_0() {
let path = format!("{}/fixtures/database.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 2, 0).unwrap();
}
#[test]
fn test_database_3_0() {
let path = format!("{}/fixtures/database.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 3, 0).unwrap();
}
#[test]
fn test_database_4_0() {
let path = format!("{}/fixtures/database.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 4, 0).unwrap();
}
#[test]
fn test_empty_string_0_0() {
let path = format!("{}/fixtures/emptyString.json", env!("CARGO_MANIFEST_DIR"));
@@ -8596,3 +8638,15 @@ fn test_merger_0_10() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 0, 10).unwrap();
}
#[test]
fn test_merger_0_11() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 0, 11).unwrap();
}
#[test]
fn test_merger_0_12() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 0, 12).unwrap();
}

View File

@@ -134,12 +134,12 @@ fn test_library_api() {
{
"code": "REQUIRED_FIELD_MISSING",
"message": "Missing name",
"details": { "path": "/name" }
"details": { "path": "name" }
},
{
"code": "STRICT_PROPERTY_VIOLATION",
"message": "Unexpected property 'wrong'",
"details": { "path": "/wrong" }
"details": { "path": "wrong" }
}
]
})

View File

@@ -14,7 +14,7 @@ where
}
// Type alias for easier reading
type CompiledSuite = Arc<Vec<(Suite, Arc<crate::database::Database>)>>;
type CompiledSuite = Arc<Vec<(Suite, Arc<Result<Arc<crate::database::Database>, crate::drop::Drop>>)>>;
// Global cache mapping filename -> Vector of (Parsed JSON suite, Compiled Database)
static CACHE: OnceLock<RwLock<HashMap<String, CompiledSuite>>> = OnceLock::new();
@@ -43,19 +43,11 @@ fn get_cached_file(path: &str) -> CompiledSuite {
let mut compiled_suites = Vec::new();
for suite in suites {
let db_result = crate::database::Database::new(&suite.database);
if let Err(drop) = db_result {
let error_messages: Vec<String> = drop
.errors
.into_iter()
.map(|e| format!("Error {} at path {}: {}", e.code, e.details.path, e.message))
.collect();
panic!(
"System Setup Compilation failed for {}:\n{}",
path,
error_messages.join("\n")
);
}
compiled_suites.push((suite, Arc::new(db_result.unwrap())));
let compiled_db = match db_result {
Ok(db) => Ok(Arc::new(db)),
Err(drop) => Err(drop),
};
compiled_suites.push((suite, Arc::new(compiled_db)));
}
let new_data = Arc::new(compiled_suites);
@@ -85,11 +77,36 @@ pub fn run_test_case(path: &str, suite_idx: usize, case_idx: usize) -> Result<()
let test = &group.tests[case_idx];
let mut failures = Vec::<String>::new();
// For validate/merge/query, if setup failed we must structurally fail this test
let db_unwrapped = if test.action.as_str() != "compile" {
match &**db {
Ok(valid_db) => Some(valid_db.clone()),
Err(drop) => {
let error_messages: Vec<String> = drop
.errors
.iter()
.map(|e| format!("Error {} at path {}: {}", e.code, e.details.path, e.message))
.collect();
failures.push(format!(
"[{}] Cannot run '{}' test '{}': System Setup Compilation structurally failed:\n{}",
group.description, test.action, test.description, error_messages.join("\n")
));
None
}
}
} else {
None
};
if !failures.is_empty() {
return Err(failures.join("\n"));
}
// 4. Run Tests
match test.action.as_str() {
"compile" => {
let result = test.run_compile(db.clone());
let result = test.run_compile(db);
if let Err(e) = result {
println!("TEST COMPILE ERROR FOR '{}': {}", test.description, e);
failures.push(format!(
@@ -99,7 +116,7 @@ pub fn run_test_case(path: &str, suite_idx: usize, case_idx: usize) -> Result<()
}
}
"validate" => {
let result = test.run_validate(db.clone());
let result = test.run_validate(db_unwrapped.unwrap());
if let Err(e) = result {
println!("TEST VALIDATE ERROR FOR '{}': {}", test.description, e);
failures.push(format!(
@@ -109,7 +126,7 @@ pub fn run_test_case(path: &str, suite_idx: usize, case_idx: usize) -> Result<()
}
}
"merge" => {
let result = test.run_merge(db.clone());
let result = test.run_merge(db_unwrapped.unwrap());
if let Err(e) = result {
println!("TEST MERGE ERROR FOR '{}': {}", test.description, e);
failures.push(format!(
@@ -119,7 +136,7 @@ pub fn run_test_case(path: &str, suite_idx: usize, case_idx: usize) -> Result<()
}
}
"query" => {
let result = test.run_query(db.clone());
let result = test.run_query(db_unwrapped.unwrap());
if let Err(e) = result {
println!("TEST QUERY ERROR FOR '{}': {}", test.description, e);
failures.push(format!(

View File

@@ -35,21 +35,21 @@ fn default_action() -> String {
}
impl Case {
pub fn run_compile(&self, _db: Arc<Database>) -> Result<(), String> {
let expected_success = self.expect.as_ref().map(|e| e.success).unwrap_or(false);
pub fn run_compile(
&self,
db_res: &Result<Arc<Database>, crate::drop::Drop>,
) -> Result<(), String> {
let expect = match &self.expect {
Some(e) => e,
None => return Ok(()),
};
// We assume db has already been setup and compiled successfully by runner.rs's `jspg_setup`
// We just need to check if there are compilation errors vs expected success
let got_success = true; // Setup ensures success unless setup fails, which runner handles
let result = match db_res {
Ok(_) => crate::drop::Drop::success(),
Err(d) => d.clone(),
};
if expected_success != got_success {
return Err(format!(
"Expected success: {}, Got: {}",
expected_success, got_success
));
}
Ok(())
expect.assert_drop(&result)
}
pub fn run_validate(&self, db: Arc<Database>) -> Result<(), String> {
@@ -57,8 +57,6 @@ impl Case {
let validator = Validator::new(db);
let expected_success = self.expect.as_ref().map(|e| e.success).unwrap_or(false);
let schema_id = &self.schema_id;
if !validator.db.schemas.contains_key(schema_id) {
return Err(format!(
@@ -70,19 +68,8 @@ impl Case {
let test_data = self.data.clone().unwrap_or(Value::Null);
let result = validator.validate(schema_id, &test_data);
let got_valid = result.errors.is_empty();
if got_valid != expected_success {
let error_msg = if result.errors.is_empty() {
"None".to_string()
} else {
format!("{:?}", result.errors)
};
return Err(format!(
"Expected: {}, Got: {}. Errors: {}",
expected_success, got_valid, error_msg
));
if let Some(expect) = &self.expect {
expect.assert_drop(&result)?;
}
Ok(())
@@ -101,24 +88,16 @@ impl Case {
let test_data = self.data.clone().unwrap_or(Value::Null);
let result = merger.merge(&self.schema_id, test_data);
let expected_success = self.expect.as_ref().map(|e| e.success).unwrap_or(false);
let got_success = result.errors.is_empty();
let error_msg = if result.errors.is_empty() {
"None".to_string()
} else {
format!("{:?}", result.errors)
};
let return_val = if expected_success != got_success {
Err(format!(
"Merge Expected: {}, Got: {}. Errors: {}",
expected_success, got_success, error_msg
))
} else if let Some(expect) = &self.expect {
let queries = db.executor.get_queries();
expect.assert_pattern(&queries)?;
expect.assert_sql(&queries)
let return_val = if let Some(expect) = &self.expect {
if let Err(e) = expect.assert_drop(&result) {
Err(format!("Merge {}", e))
} else if result.errors.is_empty() {
// Only assert SQL if merge succeeded
let queries = db.executor.get_queries();
expect.assert_pattern(&queries).and_then(|_| expect.assert_sql(&queries))
} else {
Ok(())
}
} else {
Ok(())
};
@@ -139,24 +118,15 @@ impl Case {
let result = queryer.query(&self.schema_id, self.filters.as_ref());
let expected_success = self.expect.as_ref().map(|e| e.success).unwrap_or(false);
let got_success = result.errors.is_empty();
let error_msg = if result.errors.is_empty() {
"None".to_string()
} else {
format!("{:?}", result.errors)
};
let return_val = if expected_success != got_success {
Err(format!(
"Query Expected: {}, Got: {}. Errors: {}",
expected_success, got_success, error_msg
))
} else if let Some(expect) = &self.expect {
let queries = db.executor.get_queries();
expect.assert_pattern(&queries)?;
expect.assert_sql(&queries)
let return_val = if let Some(expect) = &self.expect {
if let Err(e) = expect.assert_drop(&result) {
Err(format!("Query {}", e))
} else if result.errors.is_empty() {
let queries = db.executor.get_queries();
expect.assert_pattern(&queries).and_then(|_| expect.assert_sql(&queries))
} else {
Ok(())
}
} else {
Ok(())
};

View File

@@ -0,0 +1,78 @@
use super::Expect;
impl Expect {
pub fn assert_drop(&self, drop: &crate::drop::Drop) -> Result<(), String> {
let got_success = drop.errors.is_empty();
if self.success != got_success {
let mut err_msg = format!("Expected success: {}, Got: {}.", self.success, got_success);
if !drop.errors.is_empty() {
err_msg.push_str(&format!(" Actual Errors: {:?}", drop.errors));
}
return Err(err_msg);
}
if !self.success {
if let Some(expected_errors) = &self.errors {
let actual_values: Vec<serde_json::Value> = drop.errors
.iter()
.map(|e| serde_json::to_value(e).unwrap())
.collect();
for (i, expected_val) in expected_errors.iter().enumerate() {
let mut matched = false;
for actual_val in &actual_values {
if subset_match(expected_val, actual_val) {
matched = true;
break;
}
}
if !matched {
return Err(format!(
"Expected error {} was not found in actual errors.\nExpected subset: {}\nActual full errors: {:?}",
i,
serde_json::to_string_pretty(expected_val).unwrap(),
drop.errors,
));
}
}
}
}
Ok(())
}
}
// Helper to check if `expected` is a structural subset of `actual`
fn subset_match(expected: &serde_json::Value, actual: &serde_json::Value) -> bool {
match (expected, actual) {
(serde_json::Value::Object(exp_map), serde_json::Value::Object(act_map)) => {
for (k, v) in exp_map {
if let Some(act_v) = act_map.get(k) {
if !subset_match(v, act_v) {
return false;
}
} else {
return false;
}
}
true
}
(serde_json::Value::Array(exp_arr), serde_json::Value::Array(act_arr)) => {
// Basic check: array sizes and elements must match exactly in order
if exp_arr.len() != act_arr.len() {
return false;
}
for (e, a) in exp_arr.iter().zip(act_arr.iter()) {
if !subset_match(e, a) {
return false;
}
}
true
}
// For primitives, exact match
(e, a) => e == a,
}
}
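The `subset_match` helper above lets test fixtures assert only the error fields they care about: expected objects are structural subsets of actual ones, while arrays and primitives must match exactly. A self-contained illustration of those semantics, using a tiny stand-in enum instead of `serde_json::Value` so the sketch runs without external crates:

```rust
// Stand-in JSON value type for illustration only.
#[derive(Debug, PartialEq)]
enum Val {
    Str(&'static str),
    Arr(Vec<Val>),
    Obj(Vec<(&'static str, Val)>),
}

fn subset_match(expected: &Val, actual: &Val) -> bool {
    match (expected, actual) {
        // Every expected key must exist in actual and recursively match;
        // extra actual keys are ignored.
        (Val::Obj(exp), Val::Obj(act)) => exp.iter().all(|(k, v)| {
            act.iter()
                .find(|(ak, _)| ak == k)
                .map_or(false, |(_, av)| subset_match(v, av))
        }),
        // Arrays must match element-for-element, in order.
        (Val::Arr(exp), Val::Arr(act)) => {
            exp.len() == act.len() && exp.iter().zip(act).all(|(e, a)| subset_match(e, a))
        }
        // Primitives (and mismatched shapes) compare exactly.
        (e, a) => e == a,
    }
}

fn main() {
    let expected = Val::Obj(vec![("code", Val::Str("REQUIRED_FIELD_MISSING"))]);
    let actual = Val::Obj(vec![
        ("code", Val::Str("REQUIRED_FIELD_MISSING")),
        ("message", Val::Str("Missing name")),
        ("details", Val::Obj(vec![("path", Val::Str("name"))])),
    ]);
    assert!(subset_match(&expected, &actual));
    assert!(!subset_match(&actual, &expected)); // subset is one-directional
    println!("ok");
}
```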

View File

@@ -1,5 +1,6 @@
pub mod pattern;
pub mod sql;
pub mod drop;
use serde::Deserialize;

View File

@@ -41,6 +41,14 @@ impl<'a> ValidationContext<'a> {
}
}
pub fn join_path(&self, key: &str) -> String {
if self.path.is_empty() {
key.to_string()
} else {
format!("{}/{}", self.path, key)
}
}
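This `join_path` helper is what drops the leading slash from reported error paths (`name` at the document root rather than `/name`), keeping validator, merger, and queryer paths consistent. A free-function sketch of the same rule:

```rust
// Free-function version of ValidationContext::join_path: only insert a '/'
// separator when there is a non-empty parent path.
fn join_path(path: &str, key: &str) -> String {
    if path.is_empty() {
        key.to_string()
    } else {
        format!("{}/{}", path, key)
    }
}

fn main() {
    assert_eq!(join_path("", "name"), "name"); // root-level: no leading '/'
    assert_eq!(join_path("address", "city"), "address/city");
    println!("ok");
}
```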
pub fn derive(
&self,
schema: &'a Schema,

View File

@@ -92,10 +92,10 @@ impl<'a> ValidationContext<'a> {
for (i, sub_schema) in prefix.iter().enumerate() {
if i < len {
if let Some(child_instance) = arr.get(i) {
let mut item_path = format!("{}/{}", self.path, i);
let mut item_path = self.join_path(&i.to_string());
if let Some(obj) = child_instance.as_object() {
if let Some(id_str) = obj.get("id").and_then(|v| v.as_str()) {
item_path = format!("{}/{}", self.path, id_str);
item_path = self.join_path(id_str);
}
}
let derived = self.derive(
@@ -118,10 +118,10 @@ impl<'a> ValidationContext<'a> {
if let Some(ref items_schema) = self.schema.items {
for i in validation_index..len {
if let Some(child_instance) = arr.get(i) {
let mut item_path = format!("{}/{}", self.path, i);
let mut item_path = self.join_path(&i.to_string());
if let Some(obj) = child_instance.as_object() {
if let Some(id_str) = obj.get("id").and_then(|v| v.as_str()) {
item_path = format!("{}/{}", self.path, id_str);
item_path = self.join_path(id_str);
}
}
let derived = self.derive(

View File

@@ -44,7 +44,7 @@ impl<'a> ValidationContext<'a> {
result.errors.push(ValidationError {
code: "STRICT_PROPERTY_VIOLATION".to_string(),
message: format!("Unexpected property '{}'", key),
path: format!("{}/{}", self.path, key),
path: self.join_path(key),
});
}
}
@@ -53,11 +53,11 @@ impl<'a> ValidationContext<'a> {
if let Some(arr) = self.instance.as_array() {
for i in 0..arr.len() {
if !result.evaluated_indices.contains(&i) {
let mut item_path = format!("{}/{}", self.path, i);
let mut item_path = self.join_path(&i.to_string());
if let Some(child_instance) = arr.get(i) {
if let Some(obj) = child_instance.as_object() {
if let Some(id_str) = obj.get("id").and_then(|v| v.as_str()) {
item_path = format!("{}/{}", self.path, id_str);
item_path = self.join_path(id_str);
}
}
}

View File

@@ -32,7 +32,7 @@ impl<'a> ValidationContext<'a> {
"Type '{}' is not a valid descendant for this entity bound schema",
type_str
),
path: format!("{}/type", self.path),
path: self.join_path("type"),
});
}
} else {
@@ -70,7 +70,7 @@ impl<'a> ValidationContext<'a> {
result.errors.push(ValidationError {
code: "REQUIRED_FIELD_MISSING".to_string(),
message: format!("Missing {}", field),
path: format!("{}/{}", self.path, field),
path: self.join_path(field),
});
}
}
@@ -109,7 +109,7 @@ impl<'a> ValidationContext<'a> {
}
if let Some(child_instance) = obj.get(key) {
let new_path = format!("{}/{}", self.path, key);
let new_path = self.join_path(key);
let is_ref = sub_schema.r#ref.is_some();
let next_extensible = if is_ref { false } else { self.extensible };
@@ -147,7 +147,7 @@ impl<'a> ValidationContext<'a> {
for (compiled_re, sub_schema) in compiled_pp {
for (key, child_instance) in obj {
if compiled_re.0.is_match(key) {
let new_path = format!("{}/{}", self.path, key);
let new_path = self.join_path(key);
let is_ref = sub_schema.r#ref.is_some();
let next_extensible = if is_ref { false } else { self.extensible };
@@ -186,7 +186,7 @@ impl<'a> ValidationContext<'a> {
}
if !locally_matched {
let new_path = format!("{}/{}", self.path, key);
let new_path = self.join_path(key);
let is_ref = additional_schema.r#ref.is_some();
let next_extensible = if is_ref { false } else { self.extensible };
@@ -207,7 +207,7 @@ impl<'a> ValidationContext<'a> {
if let Some(ref property_names) = self.schema.property_names {
for key in obj.keys() {
let _new_path = format!("{}/propertyNames/{}", self.path, key);
let _new_path = self.join_path(&format!("propertyNames/{}", key));
let val_str = Value::String(key.to_string());
let ctx = ValidationContext::new(

View File

@@ -31,10 +31,7 @@ impl<'a> ValidationContext<'a> {
}
if let Some(family_target) = &self.schema.family {
// The descendants map is keyed by the schema's own $id, not the target string.
if let Some(schema_id) = &self.schema.id
&& let Some(descendants) = self.db.descendants.get(schema_id)
{
if let Some(descendants) = self.db.descendants.get(family_target) {
// Validate against all descendants simulating strict oneOf logic
let mut passed_candidates: Vec<(String, usize, ValidationResult)> = Vec::new();

View File

@@ -1 +1 @@
1.0.92
1.0.103