Compare commits

14 commits:

eb91b65e65
8bf3649465
9fe5a34163
f5bf21eb58
9dcafed406
ffd6c27da3
4941dc6069
a8a15a82ef
8dcc714963
f87ac81f3b
8ca9017cc4
10c57e59ec
ef4571767c
29bd25eaff

GEMINI.md (23 changed lines)
@ -7,14 +7,14 @@

  JSPG operates by deeply integrating the JSON Schema Draft 2020-12 specification directly into the Postgres session lifecycle. It is built around three core pillars:

  * **Validator**: In-memory, near-instant JSON structural validation and type polymorphism routing.
  * **Merger**: Automatically traverse and UPSERT deeply nested JSON graphs into normalized relational tables.
- * **Queryer**: Compile JSON Schemas into static, cached SQL SPI `SELECT` plans for fetching full entities or isolated "Stems".
+ * **Queryer**: Compile JSON Schemas into static, cached SQL SPI `SELECT` plans for fetching full entities or isolated ad-hoc object boundaries.

  ### 🎯 Goals

  1. **Draft 2020-12 Compliance**: Attempt to adhere to the official JSON Schema Draft 2020-12 specification.
  2. **Ultra-Fast Execution**: Compile schemas into optimized in-memory validation trees and cached SQL SPIs to bypass Postgres Query Builder overheads.
  3. **Connection-Bound Caching**: Leverage the PostgreSQL session lifecycle using an **Atomic Swap** pattern. Schemas are 100% frozen, completely eliminating locks during read access.
  4. **Structural Inheritance**: Support object-oriented schema design via Implicit Keyword Shadowing and virtual `$family` references natively mapped to Postgres table constraints.
- 5. **Reactive Beats**: Provide natively generated "Stems" (isolated payload fragments) for dynamic websocket reactivity.
+ 5. **Reactive Beats**: Provide ultra-fast natively generated flat payloads mapping directly to the Dart topological state for dynamic websocket reactivity.

  ### Concurrency & Threading ("Immutable Graphs")

  To support high-throughput operations while allowing for runtime updates (e.g., during hot-reloading), JSPG uses an **Atomic Swap** pattern:
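The Atomic Swap idea above (frozen snapshots, pointer replacement on update) can be sketched in plain Rust. This is an illustrative sketch, not JSPG's actual code: the `SchemaRegistry`/`AtomicSlot` names are made up, and a real lock-free implementation would swap an atomic pointer (e.g. via the `arc-swap` crate) rather than briefly taking an `RwLock` to clone the `Arc`.

```rust
use std::sync::{Arc, RwLock};

#[derive(Debug)]
struct SchemaRegistry {
    version: u64,
    schemas: Vec<String>, // frozen: never mutated after publication
}

struct AtomicSlot {
    current: RwLock<Arc<SchemaRegistry>>,
}

impl AtomicSlot {
    fn new(initial: SchemaRegistry) -> Self {
        Self { current: RwLock::new(Arc::new(initial)) }
    }

    // Readers take a cheap Arc clone; the registry itself is immutable,
    // so no lock is held while the snapshot is actually in use.
    fn load(&self) -> Arc<SchemaRegistry> {
        Arc::clone(&self.current.read().unwrap())
    }

    // Writers build a complete replacement off to the side, then swap the
    // pointer in one step. In-flight readers keep their old snapshot.
    fn swap(&self, next: SchemaRegistry) {
        *self.current.write().unwrap() = Arc::new(next);
    }
}

fn main() {
    let slot = AtomicSlot::new(SchemaRegistry { version: 1, schemas: vec!["person".into()] });
    let snapshot = slot.load(); // reader pinned to v1
    slot.swap(SchemaRegistry { version: 2, schemas: vec!["person".into(), "order".into()] });
    assert_eq!(snapshot.version, 1);    // old snapshot still valid
    assert_eq!(slot.load().version, 2); // new readers see v2
    println!("ok");
}
```

The key property is the one the README claims: a hot-reload never blocks or invalidates a request that is mid-flight on the previous schema graph.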
@ -118,22 +118,11 @@ The Queryer transforms Postgres into a pre-compiled Semantic Query Engine, desig

  * **Multi-Table Branching**: If the Physical Table is a parent to other tables (e.g. `organization` has variations `["organization", "bot", "person"]`), the compiler generates a dynamic `CASE WHEN type = '...' THEN ...` query, expanding into `JOIN`s for each variation.
  * **Single-Table Bypass**: If the Physical Table is a leaf node with only one variation (e.g. `person` has variations `["person"]`), the compiler cleanly bypasses `CASE` generation and compiles a simple `SELECT` across the base table, as all schema extensions (e.g. `light.person`, `full.person`) are guaranteed to reside in the exact same physical row.
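The branching decision above can be sketched as a tiny SQL-string builder. This is a hypothetical illustration of the idea only: the function name, the `_row.payload` aliasing, and the exact SQL shape are assumptions, not the real JSPG compiler output.

```rust
// Hypothetical sketch: one variation => plain SELECT (single-table bypass);
// several variations => one CASE arm per variation (multi-table branching).
fn compile_select(table: &str, variations: &[&str]) -> String {
    if variations.len() == 1 {
        // Leaf node: every schema extension lives in the same physical row.
        format!("SELECT * FROM {}", table)
    } else {
        let arms: Vec<String> = variations
            .iter()
            .map(|v| format!("WHEN {t}.type = '{v}' THEN {v}_row.payload", t = table, v = v))
            .collect();
        format!("SELECT CASE {} END FROM {}", arms.join(" "), table)
    }
}

fn main() {
    assert_eq!(compile_select("person", &["person"]), "SELECT * FROM person");
    let sql = compile_select("organization", &["organization", "bot", "person"]);
    assert!(sql.contains("WHEN organization.type = 'bot'"));
    println!("{}", sql);
}
```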

- ### The Stem Engine
+ ### Ad-Hoc Schema Promotion

- Rather than over-fetching heavy Entity payloads and trimming them, Punc Framework Websockets depend on isolated subgraphs defined as **Stems**.
- A `Stem` is a declaration of an **Entity Type boundary** that exists somewhere within the compiled JSON Schema graph, expressed using **`gjson` multipath syntax** (e.g., `contacts.#.phone_numbers.#`).
-
- Because `pg_notify` (Beats) fire rigidly from physical Postgres tables (e.g. `{"type": "phone_number"}`), the Go Framework only ever needs to know: "Does the schema `with_contacts.person` contain the `phone_number` Entity anywhere inside its tree, and if so, what is the gjson path to iterate its payload?"
-
- * **Initialization:** During startup (`jspg_stems()`), the database crawls all Schemas and maps out every physical Entity Type it references. It builds a highly optimized `HashMap<String, HashMap<String, Arc<Stem>>>` providing strictly `O(1)` memory lookups mapping `Schema ID -> { Stem Path -> Entity Type }`.
- * **GJSON Pathing:** Unlike standard JSON Pointers, stems utilize `.#` array iterator syntax. The Go web server consumes this native path (e.g. `lines.#`) across the raw Postgres JSON byte payload, extracting all active UUIDs in one massive sub-millisecond sweep without unmarshaling Go ASTs.
- * **Polymorphic Condition Selectors:** When trailing paths would otherwise collide because of abstract polymorphic type definitions (e.g., a `target` property bounded by a `oneOf` taking either `phone_number` or `email_address`), JSPG natively appends evaluated `gjson` type conditions into the path (e.g. `contacts.#.target#(type=="phone_number")`). This guarantees `O(1)` key uniqueness in the HashMap while retaining extreme array extraction speeds natively without runtime AST evaluation.
- * **Identifier Prioritization:** When determining if a nested object boundary is an Entity, JSPG natively prioritizes defined `$id` tags over `$ref` inheritance pointers to prevent polymorphic boundaries from devolving into their generic base classes.
- * **Cyclical Deduplication:** Because Punc relationships often reference back on themselves via deeply nested classes, the Stem Engine applies intelligent path deduplication. If the active `current_path` already ends with the target entity string, it traverses the inheritance properties without appending the entity to the stem path again, eliminating infinite powerset loops.
- * **Relationship Path Squashing:** When calculating string paths structurally, JSPG intentionally **omits** properties natively named `target` or `source` if they belong to a native database `relationship` table override.
- * **The Go Router**: The Golang Punc framework uses this exact mapping to register WebSocket Beat frequencies exclusively on the Entity types discovered.
- * **The Queryer Execution**: When the Go framework asks JSPG to hydrate a partial `phone_number` stem for the `with_contacts.person` schema, instead of jumping through string paths, the SQL Compiler simply reaches into the Schema's AST using the `phone_number` Type string, pulls out exactly that entity's mapping rules, and returns a fully correlated `SELECT` block! This natively handles nested array properties injected via `oneOf` or array references, efficiently bypassing runtime powerset expansion.
- * **Performance:** These Stem execution structures are fully statically compiled via SPI and map perfectly to `O(1)` real-time routing logic on the application tier.
+ To seamlessly support deeply nested, inline Object definitions that don't declare an explicit `$id`, JSPG aggressively promotes them to standalone topological entities during the database compilation phase.
+
+ * **Hash Generation:** While evaluating the unified graph, if the compiler enters an `Object` or `Array` structure completely lacking an `$id`, it dynamically calculates a localized hash alias representing exactly its structural constraints.
+ * **Promotion:** This inline chunk is mathematically elevated to its own `$id` in the `db.schemas` cache registry. This guarantees that `O(1)` WebSockets or isolated queries can natively target any arbitrary sub-object of a massive database topology directly without recursively re-parsing its parent's AST block on every read.
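The hash-based promotion above can be sketched as: hash a canonical rendering of the anonymous structure and mint a synthetic `$id` from it. The `anon.` prefix, the `promote_anonymous` name, and the use of `DefaultHasher` are illustrative assumptions; the real compiler's canonicalization and naming scheme are not specified here.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: derive a synthetic `$id` for an inline schema from a
// canonical string rendering of its structural constraints, so identical
// inline shapes always collapse to the same promoted entity.
fn promote_anonymous(canonical_constraints: &str) -> String {
    let mut hasher = DefaultHasher::new();
    canonical_constraints.hash(&mut hasher);
    format!("anon.{:016x}", hasher.finish())
}

fn main() {
    let shape = r#"{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}"#;
    let id_a = promote_anonymous(shape);
    let id_b = promote_anonymous(shape);
    assert_eq!(id_a, id_b); // same structure, same promoted id
    assert!(id_a.starts_with("anon."));
    println!("{}", id_a);
}
```

Once such an id exists in the `db.schemas` registry, a WebSocket or isolated query can target the sub-object directly by that key instead of re-walking the parent schema.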

  ## 5. Testing & Execution Architecture
fixtures/paths.json (new file, 178 lines)

@ -0,0 +1,178 @@
[
  {
    "description": "Hybrid Array Pathing",
    "database": {
      "schemas": [
        {
          "$id": "hybrid_pathing",
          "type": "object",
          "properties": {
            "primitives": {
              "type": "array",
              "items": { "type": "string" }
            },
            "ad_hoc_objects": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": { "name": { "type": "string" } },
                "required": ["name"]
              }
            },
            "entities": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "id": { "type": "string" },
                  "value": { "type": "number", "minimum": 10 }
                }
              }
            },
            "deep_entities": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "id": { "type": "string" },
                  "nested": {
                    "type": "array",
                    "items": {
                      "type": "object",
                      "properties": {
                        "id": { "type": "string" },
                        "flag": { "type": "boolean" }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      ]
    },
    "tests": [
      {
        "description": "happy path passes structural validation",
        "data": {
          "primitives": ["a", "b"],
          "ad_hoc_objects": [{"name": "obj1"}],
          "entities": [{"id": "entity-1", "value": 15}],
          "deep_entities": [
            {
              "id": "parent-1",
              "nested": [{"id": "child-1", "flag": true}]
            }
          ]
        },
        "schema_id": "hybrid_pathing",
        "action": "validate",
        "expect": { "success": true }
      },
      {
        "description": "primitive arrays use numeric indexing",
        "data": { "primitives": ["a", 123] },
        "schema_id": "hybrid_pathing",
        "action": "validate",
        "expect": {
          "success": false,
          "errors": [
            { "code": "INVALID_TYPE", "path": "/primitives/1" }
          ]
        }
      },
      {
        "description": "ad-hoc objects without ids use numeric indexing",
        "data": {
          "ad_hoc_objects": [
            {"name": "valid"},
            {"age": 30}
          ]
        },
        "schema_id": "hybrid_pathing",
        "action": "validate",
        "expect": {
          "success": false,
          "errors": [
            { "code": "REQUIRED_FIELD_MISSING", "path": "/ad_hoc_objects/1/name" }
          ]
        }
      },
      {
        "description": "arrays of objects with ids use topological uuid indexing",
        "data": {
          "entities": [
            {"id": "entity-alpha", "value": 20},
            {"id": "entity-beta", "value": 5}
          ]
        },
        "schema_id": "hybrid_pathing",
        "action": "validate",
        "expect": {
          "success": false,
          "errors": [
            { "code": "MINIMUM_VIOLATED", "path": "/entities/entity-beta/value" }
          ]
        }
      },
      {
        "description": "deeply nested entity arrays retain full topological paths",
        "data": {
          "deep_entities": [
            {
              "id": "parent-omega",
              "nested": [
                {"id": "child-alpha", "flag": true},
                {"id": "child-beta", "flag": "invalid-string"}
              ]
            }
          ]
        },
        "schema_id": "hybrid_pathing",
        "action": "validate",
        "expect": {
          "success": false,
          "errors": [
            { "code": "INVALID_TYPE", "path": "/deep_entities/parent-omega/nested/child-beta/flag" }
          ]
        }
      }
    ]
  }
]
@ -20,6 +20,16 @@
          "$family": "base.person"
        }
      ]
    },
+   {
+     "name": "get_orders",
+     "schemas": [
+       {
+         "$id": "get_orders.response",
+         "type": "array",
+         "items": { "$ref": "light.order" }
+       }
+     ]
+   }
  ],
  "enums": [],

@ -664,6 +674,15 @@
        }
      }
    },
+   {
+     "$id": "light.order",
+     "$ref": "order",
+     "properties": {
+       "customer": {
+         "$ref": "base.person"
+       }
+     }
+   },
    {
      "$id": "full.order",
      "$ref": "order",

@ -1003,6 +1022,7 @@
  " JOIN agreego.entity entity_6 ON entity_6.id = relationship_5.id",
  " WHERE",
  " NOT entity_6.archived",
+ " AND relationship_5.target_type = 'address'",
  " AND relationship_5.source_id = entity_3.id),",
  " 'age', person_1.age,",
  " 'archived', entity_3.archived,",

@ -1094,6 +1114,7 @@
  " JOIN agreego.entity entity_20 ON entity_20.id = relationship_19.id",
  " WHERE",
  " NOT entity_20.archived",
+ " AND relationship_19.target_type = 'email_address'",
  " AND relationship_19.source_id = entity_3.id),",
  " 'first_name', person_1.first_name,",
  " 'id', entity_3.id,",

@ -1127,6 +1148,7 @@
  " JOIN agreego.entity entity_25 ON entity_25.id = relationship_24.id",
  " WHERE",
  " NOT entity_25.archived",
+ " AND relationship_24.target_type = 'phone_number'",
  " AND relationship_24.source_id = entity_3.id),",
  " 'type', entity_3.type",
  ")",

@ -1163,7 +1185,7 @@
  "$eq": true,
  "$ne": false
  },
- "contacts.#.is_primary": {
+ "contacts/is_primary": {
  "$eq": true
  },
  "created_at": {

@ -1203,7 +1225,7 @@
  "$eq": "%Doe%",
  "$ne": "%Smith%"
  },
- "phone_numbers.#.target.number": {
+ "phone_numbers/target/number": {
  "$eq": "555-1234"
  }
  },

@ -1240,6 +1262,7 @@
  " JOIN agreego.entity entity_6 ON entity_6.id = relationship_5.id",
  " WHERE",
  " NOT entity_6.archived",
+ " AND relationship_5.target_type = 'address'",
  " AND relationship_5.source_id = entity_3.id),",
  " 'age', person_1.age,",
  " 'archived', entity_3.archived,",

@ -1332,6 +1355,7 @@
  " JOIN agreego.entity entity_20 ON entity_20.id = relationship_19.id",
  " WHERE",
  " NOT entity_20.archived",
+ " AND relationship_19.target_type = 'email_address'",
  " AND relationship_19.source_id = entity_3.id),",
  " 'first_name', person_1.first_name,",
  " 'id', entity_3.id,",

@ -1366,6 +1390,7 @@
  " JOIN agreego.entity entity_25 ON entity_25.id = relationship_24.id",
  " WHERE",
  " NOT entity_25.archived",
+ " AND relationship_24.target_type = 'phone_number'",
  " AND relationship_24.source_id = entity_3.id),",
  " 'type', entity_3.type",
  ")",

@ -1441,7 +1466,9 @@
  "FROM agreego.contact contact_1",
  "JOIN agreego.relationship relationship_2 ON relationship_2.id = contact_1.id",
  "JOIN agreego.entity entity_3 ON entity_3.id = relationship_2.id",
- "WHERE NOT entity_3.archived)"
+ "WHERE",
+ " NOT entity_3.archived",
+ " AND relationship_2.target_type = 'email_address')"
  ]
  ]
  }

@ -1561,6 +1588,47 @@
  ]
  ]
  }
  },
+ {
+   "description": "Root Array SQL evaluation for Order fetching Light Order",
+   "action": "query",
+   "schema_id": "get_orders.response",
+   "expect": {
+     "success": true,
+     "sql": [
+       [
+         "(SELECT COALESCE(jsonb_agg(jsonb_build_object(",
+         " 'archived', entity_2.archived,",
+         " 'created_at', entity_2.created_at,",
+         " 'customer',",
+         " (SELECT jsonb_build_object(",
+         " 'age', person_3.age,",
+         " 'archived', entity_5.archived,",
+         " 'created_at', entity_5.created_at,",
+         " 'first_name', person_3.first_name,",
+         " 'id', entity_5.id,",
+         " 'last_name', person_3.last_name,",
+         " 'name', entity_5.name,",
+         " 'type', entity_5.type",
+         " )",
+         " FROM agreego.person person_3",
+         " JOIN agreego.organization organization_4 ON organization_4.id = person_3.id",
+         " JOIN agreego.entity entity_5 ON entity_5.id = organization_4.id",
+         " WHERE",
+         " NOT entity_5.archived",
+         " AND order_1.customer_id = person_3.id),",
+         " 'customer_id', order_1.customer_id,",
+         " 'id', entity_2.id,",
+         " 'name', entity_2.name,",
+         " 'total', order_1.total,",
+         " 'type', entity_2.type",
+         ")), '[]'::jsonb)",
+         "FROM agreego.order order_1",
+         "JOIN agreego.entity entity_2 ON entity_2.id = order_1.id",
+         "WHERE NOT entity_2.archived)"
+       ]
+     ]
+   }
+ }
  ]
  }
@ -9,6 +9,61 @@ impl SpiExecutor {
      pub fn new() -> Self {
          Self {}
      }

+     fn transact<F, R>(&self, f: F) -> Result<R, String>
+     where
+         F: FnOnce() -> Result<R, String>,
+     {
+         unsafe {
+             let oldcontext = pgrx::pg_sys::CurrentMemoryContext;
+             let oldowner = pgrx::pg_sys::CurrentResourceOwner;
+             pgrx::pg_sys::BeginInternalSubTransaction(std::ptr::null());
+             pgrx::pg_sys::MemoryContextSwitchTo(oldcontext);
+
+             let runner = std::panic::AssertUnwindSafe(move || {
+                 let res = f();
+
+                 pgrx::pg_sys::ReleaseCurrentSubTransaction();
+                 pgrx::pg_sys::MemoryContextSwitchTo(oldcontext);
+                 pgrx::pg_sys::CurrentResourceOwner = oldowner;
+
+                 res
+             });
+
+             pgrx::PgTryBuilder::new(runner)
+                 .catch_rust_panic(|cause| {
+                     pgrx::pg_sys::RollbackAndReleaseCurrentSubTransaction();
+                     pgrx::pg_sys::MemoryContextSwitchTo(oldcontext);
+                     pgrx::pg_sys::CurrentResourceOwner = oldowner;
+
+                     // Rust panics are fatal bugs, not validation errors. Rethrow so they bubble up.
+                     cause.rethrow()
+                 })
+                 .catch_others(|cause| {
+                     pgrx::pg_sys::RollbackAndReleaseCurrentSubTransaction();
+                     pgrx::pg_sys::MemoryContextSwitchTo(oldcontext);
+                     pgrx::pg_sys::CurrentResourceOwner = oldowner;
+
+                     let error_msg = match &cause {
+                         pgrx::pg_sys::panic::CaughtError::PostgresError(e)
+                         | pgrx::pg_sys::panic::CaughtError::ErrorReport(e) => {
+                             let json_err = serde_json::json!({
+                                 "error": e.message(),
+                                 "code": format!("{:?}", e.sql_error_code()),
+                                 "detail": e.detail(),
+                                 "hint": e.hint()
+                             });
+                             json_err.to_string()
+                         }
+                         _ => format!("{:?}", cause),
+                     };
+
+                     pgrx::warning!("JSPG Caught Native Postgres Error: {}", error_msg);
+                     Err(error_msg)
+                 })
+                 .execute()
+         }
+     }
  }

  impl DatabaseExecutor for SpiExecutor {

@ -24,7 +79,7 @@ impl DatabaseExecutor for SpiExecutor {
          }
      }

-     pgrx::PgTryBuilder::new(|| {
+     self.transact(|| {
          Spi::connect(|client| {
              pgrx::notice!("JSPG_SQL: {}", sql);
              match client.select(sql, Some(args_with_oid.len() as i64), &args_with_oid) {

@ -41,11 +96,6 @@ impl DatabaseExecutor for SpiExecutor {
              }
          })
      })
-     .catch_others(|cause| {
-         pgrx::warning!("JSPG Caught Native Postgres Error: {:?}", cause);
-         Err(format!("{:?}", cause))
-     })
-     .execute()
  }

  fn execute(&self, sql: &str, args: Option<&[Value]>) -> Result<(), String> {

@ -60,7 +110,7 @@ impl DatabaseExecutor for SpiExecutor {
          }
      }

-     pgrx::PgTryBuilder::new(|| {
+     self.transact(|| {
          Spi::connect_mut(|client| {
              pgrx::notice!("JSPG_SQL: {}", sql);
              match client.update(sql, Some(args_with_oid.len() as i64), &args_with_oid) {

@ -69,44 +119,43 @@ impl DatabaseExecutor for SpiExecutor {
              }
          })
      })
-     .catch_others(|cause| {
-         pgrx::warning!("JSPG Caught Native Postgres Error: {:?}", cause);
-         Err(format!("{:?}", cause))
-     })
-     .execute()
  }

  fn auth_user_id(&self) -> Result<String, String> {
-     Spi::connect(|client| {
-         let mut tup_table = client
-             .select(
-                 "SELECT COALESCE(current_setting('auth.user_id', true), 'ffffffff-ffff-ffff-ffff-ffffffffffff')",
-                 None,
-                 &[],
-             )
-             .map_err(|e| format!("SPI Select Error: {}", e))?;
+     self.transact(|| {
+         Spi::connect(|client| {
+             let mut tup_table = client
+                 .select(
+                     "SELECT COALESCE(current_setting('auth.user_id', true), 'ffffffff-ffff-ffff-ffff-ffffffffffff')",
+                     None,
+                     &[],
+                 )
+                 .map_err(|e| format!("SPI Select Error: {}", e))?;

-         let row = tup_table
-             .next()
-             .ok_or("No user id setting returned from context".to_string())?;
-         let user_id: Option<String> = row.get(1).map_err(|e| e.to_string())?;
+             let row = tup_table
+                 .next()
+                 .ok_or("No user id setting returned from context".to_string())?;
+             let user_id: Option<String> = row.get(1).map_err(|e| e.to_string())?;

-         user_id.ok_or("Missing user_id".to_string())
+             user_id.ok_or("Missing user_id".to_string())
+         })
      })
  }

  fn timestamp(&self) -> Result<String, String> {
-     Spi::connect(|client| {
-         let mut tup_table = client
-             .select("SELECT clock_timestamp()::text", None, &[])
-             .map_err(|e| format!("SPI Select Error: {}", e))?;
+     self.transact(|| {
+         Spi::connect(|client| {
+             let mut tup_table = client
+                 .select("SELECT clock_timestamp()::text", None, &[])
+                 .map_err(|e| format!("SPI Select Error: {}", e))?;

-         let row = tup_table
-             .next()
-             .ok_or("No clock timestamp returned".to_string())?;
-         let timestamp: Option<String> = row.get(1).map_err(|e| e.to_string())?;
+             let row = tup_table
+                 .next()
+                 .ok_or("No clock timestamp returned".to_string())?;
+             let timestamp: Option<String> = row.get(1).map_err(|e| e.to_string())?;

-         timestamp.ok_or("Missing timestamp".to_string())
+             timestamp.ok_or("Missing timestamp".to_string())
+         })
      })
  }
  }
@ -507,10 +507,8 @@ impl Schema {
      let mut parent_type_name = None;
      if let Some(family) = &self.obj.family {
          parent_type_name = Some(family.split('.').next_back().unwrap_or(family).to_string());
-     } else if let Some(id) = &self.obj.id {
-         parent_type_name = Some(id.split('.').next_back().unwrap_or("").to_string());
-     } else if let Some(ref_id) = &self.obj.r#ref {
-         parent_type_name = Some(ref_id.split('.').next_back().unwrap_or("").to_string());
+     } else if let Some(identifier) = self.obj.identifier() {
+         parent_type_name = Some(identifier);
      }

      if let Some(p_type) = parent_type_name {

@ -531,12 +529,12 @@

      if let Some(family) = &target_schema.obj.family {
          child_type_name = Some(family.split('.').next_back().unwrap_or(family).to_string());
-     } else if let Some(ref_id) = target_schema.obj.r#ref.as_ref() {
-         child_type_name = Some(ref_id.split('.').next_back().unwrap_or("").to_string());
+     } else if let Some(ref_id) = target_schema.obj.identifier() {
+         child_type_name = Some(ref_id);
      } else if let Some(arr) = &target_schema.obj.one_of {
          if let Some(first) = arr.first() {
-             if let Some(ref_id) = first.obj.id.as_ref().or(first.obj.r#ref.as_ref()) {
-                 child_type_name = Some(ref_id.split('.').next_back().unwrap_or("").to_string());
+             if let Some(ref_id) = first.obj.identifier() {
+                 child_type_name = Some(ref_id);
              }
          }
      }

@ -697,6 +695,16 @@ impl<'de> Deserialize<'de> for Schema {
      }
  }

+ impl SchemaObject {
+     pub fn identifier(&self) -> Option<String> {
+         if let Some(lookup_key) = self.id.as_ref().or(self.r#ref.as_ref()) {
+             Some(lookup_key.split('.').next_back().unwrap_or("").to_string())
+         } else {
+             None
+         }
+     }
+ }

  #[derive(Debug, Clone, Serialize, Deserialize)]
  #[serde(untagged)]
  pub enum SchemaTypeOrArray {
@ -45,12 +45,33 @@ impl Merger {
      let val_resolved = match result {
          Ok(val) => val,
          Err(msg) => {
+             let mut final_code = "MERGE_FAILED".to_string();
+             let mut final_message = msg.clone();
+             let mut final_cause = None;
+
+             if let Ok(Value::Object(map)) = serde_json::from_str::<Value>(&msg) {
+                 if let (Some(Value::String(e_msg)), Some(Value::String(e_code))) = (map.get("error"), map.get("code")) {
+                     final_message = e_msg.clone();
+                     final_code = e_code.clone();
+                     let mut cause_parts = Vec::new();
+                     if let Some(Value::String(d)) = map.get("detail") {
+                         if !d.is_empty() { cause_parts.push(d.clone()); }
+                     }
+                     if let Some(Value::String(h)) = map.get("hint") {
+                         if !h.is_empty() { cause_parts.push(h.clone()); }
+                     }
+                     if !cause_parts.is_empty() {
+                         final_cause = Some(cause_parts.join("\n"));
+                     }
+                 }
+             }
+
              return crate::drop::Drop::with_errors(vec![crate::drop::Error {
-                 code: "MERGE_FAILED".to_string(),
-                 message: msg,
+                 code: final_code,
+                 message: final_message,
                  details: crate::drop::ErrorDetails {
                      path: "".to_string(),
-                     cause: None,
+                     cause: final_cause,
                      context: Some(data),
                      schema: None,
                  },

@ -691,8 +712,7 @@ impl Merger {
      );
      self
          .db
-         .execute(&sql, None)
-         .map_err(|e| format!("SPI Error in INSERT: {:?}", e))?;
+         .execute(&sql, None)?;
  } else if change_kind == "update" || change_kind == "delete" {
      entity_pairs.remove("id");
      entity_pairs.remove("type");

@ -726,8 +746,7 @@
      );
      self
          .db
-         .execute(&sql, None)
-         .map_err(|e| format!("SPI Error in UPDATE: {:?}", e))?;
+         .execute(&sql, None)?;
      }
  }

@ -830,10 +849,7 @@ impl Merger {
          Self::quote_literal(&Value::String(user_id.to_string()))
      );

-     self
-         .db
-         .execute(&change_sql, None)
-         .map_err(|e| format!("Executor Error in change: {:?}", e))?;
+     self.db.execute(&change_sql, None)?;
  }

  if type_obj.notify {
@ -63,37 +63,33 @@ impl<'a> Compiler<'a> {
|
||||
}
|
||||
|
||||
fn compile_array(&mut self, node: Node<'a>) -> Result<(String, String), String> {
|
||||
// 1. Array of DB Entities (`$ref` or `$family` pointing to a table limit)
|
||||
if let Some(items) = &node.schema.obj.items {
|
||||
let next_path = if node.ast_path.is_empty() {
|
||||
String::from("#")
|
||||
} else {
|
||||
format!("{}.#", node.ast_path)
|
||||
};
|
||||
|
||||
if let Some(ref_id) = &items.obj.r#ref {
|
||||
if let Some(type_def) = self.db.types.get(ref_id) {
|
||||
let mut entity_node = node.clone();
|
||||
entity_node.ast_path = next_path;
|
||||
entity_node.schema = std::sync::Arc::clone(items);
|
||||
return self.compile_entity(type_def, entity_node, true);
|
||||
}
|
||||
let mut resolved_type = None;
|
||||
if let Some(family_target) = items.obj.family.as_ref() {
|
||||
let base_type_name = family_target.split('.').next_back().unwrap_or(family_target);
|
||||
resolved_type = self.db.types.get(base_type_name);
|
||||
} else if let Some(base_type_name) = items.obj.identifier() {
|
||||
resolved_type = self.db.types.get(&base_type_name);
|
||||
}
|
||||
|
||||
let mut next_node = node.clone();
|
||||
next_node.depth += 1;
|
||||
next_node.ast_path = next_path;
|
||||
next_node.schema = std::sync::Arc::clone(items);
|
||||
let (item_sql, _) = self.compile_node(next_node)?;
|
||||
if let Some(type_def) = resolved_type {
|
||||
let mut entity_node = node.clone();
|
||||
entity_node.schema = std::sync::Arc::clone(items);
|
||||
return self.compile_entity(type_def, entity_node, true);
|
||||
}
|
||||
}
|
||||
|
||||
// 2. Arrays of mapped Native Postgres Columns (e.g. `jsonb`, `text[]`)
|
||||
if let Some(prop) = &node.property_name {
|
||||
return Ok((
|
||||
format!("(SELECT jsonb_agg({}) FROM TODO)", item_sql),
|
||||
format!("{}.{}", node.parent_alias, prop),
|
||||
"array".to_string(),
|
||||
));
|
||||
}
|
||||
|
||||
Ok((
|
||||
"SELECT jsonb_agg(TODO) FROM TODO".to_string(),
|
||||
"array".to_string(),
|
||||
))
|
||||
// 3. Fallback for root execution of standalone non-entity arrays
|
||||
Err("Cannot compile a root array without a valid entity reference or table mapped via `items`.".to_string())
|
||||
}
|
||||
|
||||
fn compile_reference(&mut self, node: Node<'a>) -> Result<(String, String), String> {
|
||||
@ -102,14 +98,7 @@ impl<'a> Compiler<'a> {
|
||||
|
||||
if let Some(family_target) = node.schema.obj.family.as_ref() {
|
||||
resolved_type = self.db.types.get(family_target);
|
||||
} else if let Some(lookup_key) = node
|
||||
.schema
|
||||
.obj
|
||||
.id
|
||||
.as_ref()
|
||||
.or(node.schema.obj.r#ref.as_ref())
|
||||
{
|
||||
let base_type_name = lookup_key.split('.').next_back().unwrap_or("").to_string();
|
||||
} else if let Some(base_type_name) = node.schema.obj.identifier() {
|
||||
resolved_type = self.db.types.get(&base_type_name);
|
||||
}
|
||||
|
||||
@ -448,22 +437,21 @@ impl<'a> Compiler<'a> {
|
||||
}
|
||||
}
|
||||
|
||||
let mut child_node = node.clone();
|
||||
child_node.parent_alias = owner_alias.clone();
|
||||
let arc_aliases = std::sync::Arc::new(table_aliases.clone());
|
||||
child_node.parent_type_aliases = Some(arc_aliases);
|
||||
child_node.parent_type = Some(r#type);
|
||||
child_node.parent_schema = Some(std::sync::Arc::clone(&node.schema));
|
||||
child_node.property_name = Some(prop_key.clone());
|
||||
child_node.depth += 1;
|
||||
let next_path = if node.ast_path.is_empty() {
|
||||
prop_key.clone()
|
||||
} else {
|
||||
format!("{}.{}", node.ast_path, prop_key)
|
||||
let child_node = Node {
|
||||
schema: std::sync::Arc::clone(prop_schema),
|
||||
parent_alias: owner_alias.clone(),
|
||||
parent_type_aliases: Some(std::sync::Arc::new(table_aliases.clone())),
|
||||
parent_type: Some(r#type),
|
||||
parent_schema: Some(std::sync::Arc::clone(&node.schema)),
|
||||
property_name: Some(prop_key.clone()),
|
||||
depth: node.depth + 1,
|
||||
ast_path: if node.ast_path.is_empty() {
|
||||
prop_key.clone()
|
||||
} else {
|
||||
format!("{}/{}", node.ast_path, prop_key)
|
||||
},
|
||||
};
|
||||
|
||||
child_node.ast_path = next_path;
|
||||
child_node.schema = std::sync::Arc::clone(prop_schema);
|
||||
|
||||
let (val_sql, val_type) = self.compile_node(child_node)?;
|
||||
|
||||
@ -491,9 +479,17 @@ impl<'a> Compiler<'a> {
|
||||
.unwrap_or_else(|| base_alias.clone());
|
||||
|
||||
let mut where_clauses = Vec::new();
|
||||
where_clauses.push(format!("NOT {}.archived", entity_alias));
|
||||
|
||||
// Dynamically apply the 'active-only' default ONLY if the client
|
||||
// didn't explicitly request to filter on 'archived' themselves!
|
||||
let has_archived_override = self.filter_keys.iter().any(|k| k == "archived");
|
||||
|
||||
if !has_archived_override {
|
||||
where_clauses.push(format!("NOT {}.archived", entity_alias));
|
||||
}
|
||||
|
||||
self.compile_filter_conditions(r#type, type_aliases, &node, &base_alias, &mut where_clauses);
|
||||
self.compile_polymorphic_bounds(r#type, type_aliases, &node, &mut where_clauses);
|
||||
self.compile_relation_conditions(
|
||||
r#type,
|
||||
type_aliases,
|
||||
@ -505,6 +501,54 @@ impl<'a> Compiler<'a> {
|
||||
Ok(where_clauses)
|
||||
}
+
+   fn compile_polymorphic_bounds(
+       &self,
+       _type: &crate::database::r#type::Type,
+       type_aliases: &std::collections::HashMap<String, String>,
+       node: &Node,
+       where_clauses: &mut Vec<String>,
+   ) {
+       if let Some(edges) = node.schema.obj.compiled_edges.get() {
+           if let Some(props) = node.schema.obj.compiled_properties.get() {
+               for (prop_name, edge) in edges {
+                   if let Some(prop_schema) = props.get(prop_name) {
+                       // Determine if the property schema resolves to a physical Database Entity
+                       let mut bound_type_name = None;
+                       if let Some(family_target) = prop_schema.obj.family.as_ref() {
+                           bound_type_name = Some(family_target.split('.').next_back().unwrap_or(family_target).to_string());
+                       } else if let Some(lookup_key) = prop_schema.obj.identifier() {
+                           bound_type_name = Some(lookup_key);
+                       }
+
+                       if let Some(type_name) = bound_type_name {
+                           // Ensure this type actually exists
+                           if self.db.types.contains_key(&type_name) {
+                               if let Some(relation) = self.db.relations.get(&edge.constraint) {
+                                   let mut poly_col = None;
+                                   let mut table_to_alias = "";
+
+                                   if edge.forward && relation.source_columns.len() > 1 {
+                                       poly_col = Some(&relation.source_columns[1]); // e.g., target_type
+                                       table_to_alias = &relation.source_type; // e.g., relationship
+                                   } else if !edge.forward && relation.destination_columns.len() > 1 {
+                                       poly_col = Some(&relation.destination_columns[1]); // e.g., source_type
+                                       table_to_alias = &relation.destination_type; // e.g., relationship
+                                   }
+
+                                   if let Some(col) = poly_col {
+                                       if let Some(alias) = type_aliases.get(table_to_alias).or_else(|| type_aliases.get(&node.parent_alias)) {
+                                           where_clauses.push(format!("{}.{} = '{}'", alias, col, type_name));
+                                       }
+                                   }
+                               }
+                           }
+                       }
+                   }
+               }
+           }
+       }
+   }
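The core branch in `compile_polymorphic_bounds` picks which discriminator column to constrain based on edge direction: a forward edge reads the second source column (e.g. `target_type`), a reverse edge the second destination column (e.g. `source_type`). A hedged sketch of just that selection, with invented column names:

```rust
/// Pick the polymorphic discriminator column for a join edge.
/// Mirrors the direction branch in `compile_polymorphic_bounds`;
/// the concrete column names are illustrative.
fn poly_column<'a>(
    forward: bool,
    source_columns: &'a [String],
    destination_columns: &'a [String],
) -> Option<&'a String> {
    if forward && source_columns.len() > 1 {
        Some(&source_columns[1]) // e.g., target_type
    } else if !forward && destination_columns.len() > 1 {
        Some(&destination_columns[1]) // e.g., source_type
    } else {
        None // single-column relation: nothing polymorphic to bound
    }
}
```

The `len() > 1` guard is what makes the bound opt-in: plain single-column foreign keys produce no extra `WHERE` clause at all.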

    fn resolve_filter_alias(
        r#type: &crate::database::r#type::Type,
        type_aliases: &std::collections::HashMap<String, String>,
@@ -580,15 +624,15 @@ impl<'a> Compiler<'a> {
        let op = parts.next().unwrap_or("$eq");

        let field_name = if node.ast_path.is_empty() {
-           if full_field_path.contains('.') || full_field_path.contains('#') {
+           if full_field_path.contains('/') {
                continue;
            }
            full_field_path
        } else {
-           let prefix = format!("{}.", node.ast_path);
+           let prefix = format!("{}/", node.ast_path);
            if full_field_path.starts_with(&prefix) {
                let remainder = &full_field_path[prefix.len()..];
-               if remainder.contains('.') || remainder.contains('#') {
+               if remainder.contains('/') {
                    continue;
                }
                remainder

@@ -54,6 +54,45 @@ impl Queryer {
        self.execute_sql(schema_id, &sql, &args)
    }

+   fn extract_filters(
+       prefix: String,
+       val: &serde_json::Value,
+       entries: &mut Vec<(String, serde_json::Value)>,
+   ) -> Result<(), String> {
+       if let Some(obj) = val.as_object() {
+           let mut is_op_obj = false;
+           if let Some(first_key) = obj.keys().next() {
+               if first_key.starts_with('$') {
+                   is_op_obj = true;
+               }
+           }
+
+           if is_op_obj {
+               for (op, op_val) in obj {
+                   if !op.starts_with('$') {
+                       return Err(format!("Filter operator must start with '$', got: {}", op));
+                   }
+                   entries.push((format!("{}:{}", prefix, op), op_val.clone()));
+               }
+           } else {
+               for (k, v) in obj {
+                   let next_prefix = if prefix.is_empty() {
+                       k.clone()
+                   } else {
+                       format!("{}/{}", prefix, k)
+                   };
+                   Self::extract_filters(next_prefix, v, entries)?;
+               }
+           }
+       } else {
+           return Err(format!(
+               "Filter for path '{}' must be an operator object like {{$eq: ...}} or a nested map.",
+               prefix
+           ));
+       }
+       Ok(())
+   }
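`extract_filters` turns an arbitrarily nested filter map into flat `path:$op` entries: a map whose first key starts with `$` is treated as an operator object, anything else recurses with a `/`-joined prefix. A self-contained sketch of the same recursion, using a tiny stand-in `Val` enum instead of `serde_json::Value` so it needs no external crates:

```rust
use std::collections::BTreeMap;

/// Minimal stand-in for a JSON value (sketch only).
#[derive(Clone, Debug, PartialEq)]
enum Val {
    Num(i64),
    Map(BTreeMap<String, Val>),
}

/// Flatten nested filters into `path:$op` entries, mirroring
/// `Queryer::extract_filters` above.
fn flatten(prefix: String, val: &Val, out: &mut Vec<(String, Val)>) -> Result<(), String> {
    match val {
        Val::Map(obj) => {
            // A map whose first key starts with '$' is an operator object.
            let is_op_obj = obj.keys().next().map_or(false, |k| k.starts_with('$'));
            if is_op_obj {
                for (op, op_val) in obj {
                    if !op.starts_with('$') {
                        return Err(format!("Filter operator must start with '$', got: {}", op));
                    }
                    out.push((format!("{}:{}", prefix, op), op_val.clone()));
                }
            } else {
                // Otherwise recurse, joining path segments with '/'.
                for (k, v) in obj {
                    let next = if prefix.is_empty() { k.clone() } else { format!("{}/{}", prefix, k) };
                    flatten(next, v, out)?;
                }
            }
            Ok(())
        }
        _ => Err(format!("Filter for path '{}' must be an operator object or a nested map.", prefix)),
    }
}
```

So a client filter like `{"author": {"age": {"$eq": 7}}}` flattens to a single entry keyed `author/age:$eq`, which is what the compiler later matches against `node.ast_path` with `/` separators.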

    fn parse_filter_entries(
        &self,
        filters_map: Option<&serde_json::Map<String, serde_json::Value>>,
@@ -61,19 +100,7 @@ impl Queryer {
        let mut filter_entries: Vec<(String, serde_json::Value)> = Vec::new();
        if let Some(fm) = filters_map {
            for (key, val) in fm {
-               if let Some(obj) = val.as_object() {
-                   for (op, op_val) in obj {
-                       if !op.starts_with('$') {
-                           return Err(format!("Filter operator must start with '$', got: {}", op));
-                       }
-                       filter_entries.push((format!("{}:{}", key, op), op_val.clone()));
-                   }
-               } else {
-                   return Err(format!(
-                       "Filter for field '{}' must be an object with operators like $eq, $in, etc.",
-                       key
-                   ));
-               }
+               Self::extract_filters(key.clone(), val, &mut filter_entries)?;
            }
        }
        filter_entries.sort_by(|a, b| a.0.cmp(&b.0));

@@ -1457,6 +1457,12 @@ fn test_queryer_0_7() {
    crate::tests::runner::run_test_case(&path, 0, 7).unwrap();
}

+#[test]
+fn test_queryer_0_8() {
+    let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
+    crate::tests::runner::run_test_case(&path, 0, 8).unwrap();
+}
+
#[test]
fn test_not_0_0() {
    let path = format!("{}/fixtures/not.json", env!("CARGO_MANIFEST_DIR"));
@@ -2921,6 +2927,36 @@ fn test_minimum_1_6() {
    crate::tests::runner::run_test_case(&path, 1, 6).unwrap();
}

+#[test]
+fn test_paths_0_0() {
+    let path = format!("{}/fixtures/paths.json", env!("CARGO_MANIFEST_DIR"));
+    crate::tests::runner::run_test_case(&path, 0, 0).unwrap();
+}
+
+#[test]
+fn test_paths_0_1() {
+    let path = format!("{}/fixtures/paths.json", env!("CARGO_MANIFEST_DIR"));
+    crate::tests::runner::run_test_case(&path, 0, 1).unwrap();
+}
+
+#[test]
+fn test_paths_0_2() {
+    let path = format!("{}/fixtures/paths.json", env!("CARGO_MANIFEST_DIR"));
+    crate::tests::runner::run_test_case(&path, 0, 2).unwrap();
+}
+
+#[test]
+fn test_paths_0_3() {
+    let path = format!("{}/fixtures/paths.json", env!("CARGO_MANIFEST_DIR"));
+    crate::tests::runner::run_test_case(&path, 0, 3).unwrap();
+}
+
+#[test]
+fn test_paths_0_4() {
+    let path = format!("{}/fixtures/paths.json", env!("CARGO_MANIFEST_DIR"));
+    crate::tests::runner::run_test_case(&path, 0, 4).unwrap();
+}
+
#[test]
fn test_one_of_0_0() {
    let path = format!("{}/fixtures/oneOf.json", env!("CARGO_MANIFEST_DIR"));

@@ -91,12 +91,17 @@ impl<'a> ValidationContext<'a> {
        if let Some(ref prefix) = self.schema.prefix_items {
            for (i, sub_schema) in prefix.iter().enumerate() {
                if i < len {
-                   let path = format!("{}/{}", self.path, i);
                    if let Some(child_instance) = arr.get(i) {
+                       let mut item_path = format!("{}/{}", self.path, i);
+                       if let Some(obj) = child_instance.as_object() {
+                           if let Some(id_str) = obj.get("id").and_then(|v| v.as_str()) {
+                               item_path = format!("{}/{}", self.path, id_str);
+                           }
+                       }
                        let derived = self.derive(
                            sub_schema,
                            child_instance,
-                           &path,
+                           &item_path,
                            HashSet::new(),
                            self.extensible,
                            false,
@@ -112,12 +117,17 @@ impl<'a> ValidationContext<'a> {

        if let Some(ref items_schema) = self.schema.items {
            for i in validation_index..len {
-               let path = format!("{}/{}", self.path, i);
                if let Some(child_instance) = arr.get(i) {
+                   let mut item_path = format!("{}/{}", self.path, i);
+                   if let Some(obj) = child_instance.as_object() {
+                       if let Some(id_str) = obj.get("id").and_then(|v| v.as_str()) {
+                           item_path = format!("{}/{}", self.path, id_str);
+                       }
+                   }
                    let derived = self.derive(
                        items_schema,
                        child_instance,
-                       &path,
+                       &item_path,
                        HashSet::new(),
                        self.extensible,
                        false,

@@ -53,10 +53,18 @@ impl<'a> ValidationContext<'a> {
        if let Some(arr) = self.instance.as_array() {
            for i in 0..arr.len() {
                if !result.evaluated_indices.contains(&i) {
+                   let mut item_path = format!("{}/{}", self.path, i);
+                   if let Some(child_instance) = arr.get(i) {
+                       if let Some(obj) = child_instance.as_object() {
+                           if let Some(id_str) = obj.get("id").and_then(|v| v.as_str()) {
+                               item_path = format!("{}/{}", self.path, id_str);
+                           }
+                       }
+                   }
                    result.errors.push(ValidationError {
                        code: "STRICT_ITEM_VIOLATION".to_string(),
                        message: format!("Unexpected item at index {}", i),
-                       path: format!("{}/{}", self.path, i),
+                       path: item_path,
                    });
                }
            }
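All three validator hunks apply the same path rule: an array element is addressed by its object's string `id` when it has one, and only falls back to the positional index otherwise. Distilled into one hypothetical helper (the real code works on `serde_json` objects; here the `id` lookup result is passed in directly):

```rust
/// Compute the instance path for an array element, preferring a stable
/// string `id` over the positional index. `obj_id` stands in for
/// `obj.get("id").and_then(|v| v.as_str())` in the validator above.
fn item_path(parent: &str, index: usize, obj_id: Option<&str>) -> String {
    match obj_id {
        Some(id) => format!("{}/{}", parent, id),
        None => format!("{}/{}", parent, index),
    }
}
```

Keying paths by `id` keeps error locations stable when items are reordered, which matters for the reactive payloads described in the project goals.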

@@ -14,16 +14,13 @@ impl<'a> ValidationContext<'a> {
        let current = self.instance;
        if let Some(obj) = current.as_object() {
            // Entity implicit type validation
-           // Use the specific schema id or ref as a fallback
-           if let Some(identifier) = self.schema.id.as_ref().or(self.schema.r#ref.as_ref()) {
+           if let Some(schema_identifier) = self.schema.identifier() {
                // Kick in if the data object has a type field
                if let Some(type_val) = obj.get("type")
                    && let Some(type_str) = type_val.as_str()
                {
-                   // Get the string or the final segment as the base
-                   let base = identifier.split('.').next_back().unwrap_or("").to_string();
-                   // Check if the base is a global type name
-                   if let Some(type_def) = self.db.types.get(&base) {
+                   // Check if the identifier is a global type name
+                   if let Some(type_def) = self.db.types.get(&schema_identifier) {
                        // Ensure the instance type is a variation of the global type
                        if type_def.variations.contains(type_str) {
                            // Ensure it passes strict mode
@@ -40,7 +37,7 @@ impl<'a> ValidationContext<'a> {
            }
        } else {
            // Ad-Hoc schemas natively use strict schema discriminator strings instead of variation inheritance
-           if type_str == identifier {
+           if type_str == schema_identifier.as_str() {
                result.evaluated_keys.insert("type".to_string());
            }
        }
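The two branches above encode one rule: if the schema's identifier names a registered global type, the instance's `type` string must be one of that type's variations; otherwise (an ad-hoc schema) it must match the identifier exactly. A hedged standalone sketch of that decision, with the type registry reduced to a plain map:

```rust
use std::collections::{HashMap, HashSet};

/// Check an instance's `type` discriminator against a schema identifier.
/// `types` maps global type names to their variation sets; names and
/// signature are illustrative, not JSPG's internal API.
fn type_matches(
    types: &HashMap<String, HashSet<String>>,
    schema_identifier: &str,
    type_str: &str,
) -> bool {
    match types.get(schema_identifier) {
        // Entity schema: accept any registered variation of the global type.
        Some(variations) => variations.contains(type_str),
        // Ad-hoc schema: require an exact discriminator match.
        None => type_str == schema_identifier,
    }
}
```

Note the entity branch is strict too: an unknown variation of a known global type fails, rather than falling through to the exact-match rule.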
@@ -128,14 +125,9 @@ impl<'a> ValidationContext<'a> {

        // Entity Bound Implicit Type Interception
        if key == "type"
-           && let Some(schema_bound) = sub_schema.id.as_ref().or(sub_schema.r#ref.as_ref())
+           && let Some(schema_bound) = sub_schema.identifier()
        {
-           let physical_type_name = schema_bound
-               .split('.')
-               .next_back()
-               .unwrap_or("")
-               .to_string();
-           if let Some(type_def) = self.db.types.get(&physical_type_name)
+           if let Some(type_def) = self.db.types.get(&schema_bound)
                && let Some(instance_type) = child_instance.as_str()
                && type_def.variations.contains(instance_type)
            {