Compare commits

...

10 Commits

Author SHA1 Message Date
797a0a5460 version: 1.0.58 2026-03-12 19:32:42 -04:00
cfcb259eab final queryer and merger test suite installed; docs fully up to date 2026-03-12 18:37:50 -04:00
f666e608da more tests progress 2026-03-12 18:24:13 -04:00
732034bbc7 more tests progress 2026-03-12 17:46:38 -04:00
5b183a1aba test cleanup 2026-03-11 23:19:51 -04:00
be1367930d test cleanups passing 2026-03-11 21:55:37 -04:00
44ba3e0e18 test cleanup 2026-03-11 21:11:31 -04:00
e692fc52ee all tests passing again 2026-03-11 20:38:53 -04:00
641f7b5d92 all tests passing again 2026-03-11 18:38:37 -04:00
c007d7d479 test checkpoint 2026-03-11 17:46:08 -04:00
14 changed files with 3000 additions and 1104 deletions

View File

@ -38,6 +38,8 @@ The Validator provides strict, schema-driven evaluation for the "Punc" architect
### Custom Features & Deviations
JSPG implements specific extensions to the Draft 2020-12 standard to support the Punc architecture's object-oriented needs while heavily optimizing for zero-runtime lookups.
* **Caching Strategy**: The Validator caches the pre-compiled `Database` registry in memory upon initialization (`jspg_setup`). This registry holds the comprehensive graph of schema boundaries, Types, ENUMs, and Foreign Key relationships, acting as the Single Source of Truth for all validation operations without polling Postgres.
#### A. Polymorphism & Referencing (`$ref`, `$family`, and Native Types)
* **Native Type Discrimination (`variations`)**: Schemas defined inside a Postgres `type` are Entities. The Validator implicitly manages their `"type"` property: if an entity inherits from `user`, incoming JSON can safely define `{"type": "person"}` without errors, thanks to `compiled_variations` inheritance.
* **Structural Inheritance & Viral Infection (`$ref`)**: `$ref` is used exclusively for structural inheritance, *never* for union creation. A Punc request schema that `$ref`s an Entity virally inherits all physical database polymorphism rules for that target.
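A minimal, hypothetical sketch of the variation check this implies (the hierarchy names are taken from examples elsewhere in this doc; the real Validator works against its compiled registry, not a bare set):

```rust
// Hypothetical sketch (names assumed): a schema's compiled variations
// include every type name in its inheritance chain, so a payload that
// declares a descendant type still validates.
use std::collections::HashSet;

fn type_is_allowed(declared: &str, compiled_variations: &HashSet<&str>) -> bool {
    compiled_variations.contains(declared)
}

fn main() {
    // `user` inherits variations from the whole hierarchy, so "person"
    // is an acceptable value for a payload validated against `user`.
    let variations: HashSet<&str> = ["entity", "user", "person"].into_iter().collect();
    println!("{}", type_is_allowed("person", &variations));
}
```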
@ -71,6 +73,7 @@ The Merger provides an automated, high-performance graph synchronization engine
### Core Features
* **Caching Strategy**: The Merger leverages the `Validator`'s in-memory `Database` registry to instantly resolve Foreign Key mapping graphs. It additionally utilizes the concurrent `GLOBAL_JSPG` application memory (`DashMap`) to cache statically constructed SQL `SELECT` strings used during deduplication (`lk_`) and difference tracking calculations.
* **Deep Graph Merging**: The Merger walks arbitrary levels of deeply nested JSON schemas (e.g. tracking an `order`, its `customer`, and an array of its `lines`). It intelligently discovers the correct parent-to-child or child-to-parent Foreign Keys stored in the registry and automatically maps the UUIDs across the relationships during UPSERT.
* **Prefix Foreign Key Matching**: Handles the scenario where multiple relations point to the same table by using database Foreign Key constraint prefixes (`fk_`). For example, if a schema has `shipping_address` and `billing_address`, the Merger automatically resolves against `fk_shipping_address_entity` vs. `fk_billing_address_entity` to correctly route object properties.
* **Dynamic Deduplication & Lookups**: If a nested object is provided without an `id`, the Merger uses the Postgres `lk_` index constraints defined in the schema registry (e.g. `lk_person` mapped to `first_name` and `last_name`). It dynamically queries these unique constraints to discover the correct UUID and perform an UPDATE instead of an INSERT, preventing data duplication.
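As a rough illustration (helper names assumed, not the Merger's actual API), the dedup lookup reduces to building one equality predicate per `lk_` column and selecting the existing row's id:

```rust
// Sketch of an lk_ dedup lookup: quote each value SQL-style and join the
// predicates with AND, mirroring the predicate construction in the Merger.
fn quote_literal(v: &str) -> String {
    // Double embedded single quotes, SQL-style.
    format!("'{}'", v.replace('\'', "''"))
}

fn build_lookup_sql(table: &str, lookup_fields: &[(&str, &str)]) -> String {
    let predicates: Vec<String> = lookup_fields
        .iter()
        .map(|(col, val)| format!("\"{}\" = {}", col, quote_literal(val)))
        .collect();
    format!(
        "SELECT id FROM agreego.{} WHERE {}",
        table,
        predicates.join(" AND ")
    )
}

fn main() {
    // lk_person maps to first_name + last_name, per the registry example.
    let sql = build_lookup_sql("person", &[("first_name", "Ada"), ("last_name", "Lovelace")]);
    println!("{}", sql);
}
```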
@ -89,9 +92,15 @@ The Merger provides an automated, high-performance graph synchronization engine
The Queryer transforms Postgres into a pre-compiled Semantic Query Engine via the `jspg_query(schema_id text, cue jsonb)` API, designed to serve the exact shape of Punc responses directly via SQL.
### Core Features
* **Caching Strategy (DashMap SQL Caching)**: The Queryer securely caches its compiled, static SQL string templates per schema permutation inside the `GLOBAL_JSPG` concurrent `DashMap`. This eliminates recursive AST schema crawling on consecutive requests. Furthermore, it evaluates the strings via Postgres SPI (Server Programming Interface) Prepared Statements, leveraging native database caching of execution plans for extreme performance.
* **Schema-to-SQL Compilation**: Compiles JSON Schema ASTs spanning deep arrays directly into static, pre-planned SQL multi-JOIN queries. This explicitly features the `Smart Merge` evaluation engine which natively translates properties through `allOf` and `$ref` inheritances, mapping JSON fields specifically to their physical database table aliases during translation.
* **DashMap SQL Caching**: Executes compiled SQL via Postgres SPI execution, securely caching the static string compilation templates per schema permutation inside the `GLOBAL_JSPG` application memory, drastically reducing repetitive schema crawling.
* **Dynamic Filtering**: Binds parameters natively through `cue.filters` objects, dynamically handling value formatting (e.g. parsing `uuid` values or formatting date-times) and safely mapping escaped `ILIKE` operations to the originating structural table.
* **Dynamic Filtering**: Binds parameters natively through `cue.filters` objects. The queryer enforces a strict, structured, MongoDB-style operator syntax to map incoming JSON request paths directly to their originating structural table columns.
* **Equality / Inequality**: `{"$eq": value}`, `{"$ne": value}` automatically map to `=` and `!=`.
* **Comparison**: `{"$gt": ...}`, `{"$gte": ...}`, `{"$lt": ...}`, `{"$lte": ...}` compile directly to the Postgres comparison operators (`>`, `>=`, `<`, `<=`).
* **Array Inclusion**: `{"$in": [values]}`, `{"$nin": [values]}` use native `jsonb_array_elements_text()` bindings to enforce `IN` and `NOT IN` logic without runtime SQL injection risks.
* **Text Matching (ILIKE)**: Compiles `$eq` or `$ne` against string fields containing the `%` character into native Postgres `ILIKE` and `NOT ILIKE` partial substring matches.
* **Type Casting**: Safely resolves dynamic combinations by casting values into the physical database types mapped in the schema (e.g. `uuid` bindings to `::uuid`, date-times to `::timestamptz`, and numbers to `::numeric`).
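The operator mapping above can be sketched as a plain Rust `match` (simplified from the compiler; the real implementation also chooses parameter casts from column metadata, and `is_ilike` is derived from the column's Postgres type and enum status):

```rust
// Map a MongoDB-style filter operator to its SQL operator. Text columns
// that allow partial matching swap equality for ILIKE / NOT ILIKE.
fn sql_operator(op: &str, is_ilike: bool) -> &'static str {
    match op {
        "$ne" if is_ilike => "NOT ILIKE",
        "$ne" => "!=",
        "$gt" => ">",
        "$gte" => ">=",
        "$lt" => "<",
        "$lte" => "<=",
        // `$eq` (and any unrecognized operator) falls back to equality.
        _ if is_ilike => "ILIKE",
        _ => "=",
    }
}

fn main() {
    println!("{}", sql_operator("$gte", false));
    println!("{}", sql_operator("$eq", true));
}
```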
### 4. The Stem Engine
Rather than over-fetching heavy Entity payloads and trimming them, Punc Framework Websockets depend on isolated subgraphs defined as **Stems**.

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,6 +1,9 @@
#[cfg(test)]
use crate::database::executors::DatabaseExecutor;
#[cfg(test)]
use regex::Regex;
#[cfg(test)]
use serde_json::Value;
#[cfg(test)]
use std::cell::RefCell;
@ -9,6 +12,7 @@ pub struct MockState {
pub captured_queries: Vec<String>,
pub query_responses: Vec<Result<Value, String>>,
pub execute_responses: Vec<Result<(), String>>,
pub mocks: Vec<Value>,
}
#[cfg(test)]
@ -18,6 +22,7 @@ impl MockState {
captured_queries: Default::default(),
query_responses: Default::default(),
execute_responses: Default::default(),
mocks: Default::default(),
}
}
}
@ -44,6 +49,15 @@ impl DatabaseExecutor for MockExecutor {
MOCK_STATE.with(|state| {
let mut s = state.borrow_mut();
s.captured_queries.push(sql.to_string());
if !s.mocks.is_empty() {
if let Some(matches) = parse_and_match_mocks(sql, &s.mocks) {
if !matches.is_empty() {
return Ok(Value::Array(matches));
}
}
}
if s.query_responses.is_empty() {
return Ok(Value::Array(vec![]));
}
@ -76,6 +90,13 @@ impl DatabaseExecutor for MockExecutor {
MOCK_STATE.with(|state| state.borrow().captured_queries.clone())
}
#[cfg(test)]
fn set_mocks(&self, mocks: Vec<Value>) {
MOCK_STATE.with(|state| {
state.borrow_mut().mocks = mocks;
});
}
#[cfg(test)]
fn reset_mocks(&self) {
MOCK_STATE.with(|state| {
@ -83,6 +104,93 @@ impl DatabaseExecutor for MockExecutor {
s.captured_queries.clear();
s.query_responses.clear();
s.execute_responses.clear();
s.mocks.clear();
});
}
}
#[cfg(test)]
fn parse_and_match_mocks(sql: &str, mocks: &[Value]) -> Option<Vec<Value>> {
let sql_upper = sql.to_uppercase();
if !sql_upper.starts_with("SELECT") {
return None;
}
// 1. Extract table name
let table_regex = Regex::new(r#"(?i)\s+FROM\s+(?:[a-zA-Z_]\w*\.)?"?([a-zA-Z_]\w*)"?"#).ok()?;
let table = if let Some(caps) = table_regex.captures(sql) {
caps.get(1)?.as_str()
} else {
return None;
};
// 2. Extract WHERE conditions
let mut conditions = Vec::new();
if let Some(where_idx) = sql_upper.find(" WHERE ") {
let mut where_end = sql_upper.find(" ORDER BY ").unwrap_or(sql.len());
if let Some(limit_idx) = sql_upper.find(" LIMIT ") {
if limit_idx < where_end {
where_end = limit_idx;
}
}
let where_clause = &sql[where_idx + 7..where_end];
let and_regex = Regex::new(r"(?i)\s+AND\s+").ok()?;
let parts = and_regex.split(where_clause);
for part in parts {
if let Some(eq_idx) = part.find('=') {
let left = part[..eq_idx]
.trim()
.split('.')
.last()
.unwrap_or("")
.trim_matches('"');
let right = part[eq_idx + 1..].trim().trim_matches('\'');
conditions.push((left.to_string(), right.to_string()));
} else if part.to_uppercase().contains(" IS NULL") {
let left = part[..part.to_uppercase().find(" IS NULL").unwrap()]
.trim()
.split('.')
.last()
.unwrap_or("")
.replace('"', ""); // Remove quotes explicitly
conditions.push((left, "null".to_string()));
}
}
}
// 3. Find matching mocks
let mut matches = Vec::new();
for mock in mocks {
if let Some(mock_obj) = mock.as_object() {
if let Some(t) = mock_obj.get("type") {
if t.as_str() != Some(table) {
continue;
}
}
let mut matches_all = true;
for (k, v) in &conditions {
let mock_val_str = match mock_obj.get(k) {
Some(Value::String(s)) => s.clone(),
Some(Value::Number(n)) => n.to_string(),
Some(Value::Bool(b)) => b.to_string(),
Some(Value::Null) => "null".to_string(),
_ => {
matches_all = false;
break;
}
};
if mock_val_str != *v {
matches_all = false;
break;
}
}
if matches_all {
matches.push(mock.clone());
}
}
}
Some(matches)
}
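A simplified, std-only sketch of the condition parsing above, under the assumption that predicates are joined by uppercase `" AND "` (the real helper uses case-insensitive regexes and additionally handles `IS NULL`, `ORDER BY`, and `LIMIT` trimming):

```rust
// Split a WHERE clause on " AND ", then each predicate on '=', keeping
// only the bare column name (alias prefix and quotes stripped) and the
// unquoted value — the same shape the mock matcher compares against.
fn parse_conditions(where_clause: &str) -> Vec<(String, String)> {
    where_clause
        .split(" AND ")
        .filter_map(|part| {
            let (left, right) = part.split_once('=')?;
            // Drop the table alias (everything before the last '.') and quotes.
            let col = left.trim().rsplit('.').next()?.trim_matches('"').to_string();
            let val = right.trim().trim_matches('\'').to_string();
            Some((col, val))
        })
        .collect()
}

fn main() {
    let conds = parse_conditions(r#"t1."first_name" = 'Ada' AND t1.age = 37"#);
    println!("{:?}", conds);
}
```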

View File

@ -25,4 +25,7 @@ pub trait DatabaseExecutor: Send + Sync {
#[cfg(test)]
fn reset_mocks(&self);
#[cfg(test)]
fn set_mocks(&self, mocks: Vec<Value>);
}

View File

@ -396,6 +396,19 @@ impl Merger {
let mut lookup_complete = false;
if !entity_type.lookup_fields.is_empty() {
lookup_complete = true;
for column in &entity_type.lookup_fields {
match entity_fields.get(column) {
Some(Value::Null) | None => {
lookup_complete = false;
break;
}
Some(Value::String(s)) if s.is_empty() => {
lookup_complete = false;
break;
}
_ => {}
}
}
}
if id_val.is_none() && !lookup_complete {
@ -434,11 +447,7 @@ impl Merger {
if column == "type" {
lookup_predicates.push(format!("t1.\"{}\" = {}", column, Self::quote_literal(val)));
} else {
if val.as_str() == Some("") || val.is_null() {
lookup_predicates.push(format!("\"{}\" IS NULL", column));
} else {
lookup_predicates.push(format!("\"{}\" = {}", column, Self::quote_literal(val)));
}
lookup_predicates.push(format!("\"{}\" = {}", column, Self::quote_literal(val)));
}
}
format!("WHERE {}", lookup_predicates.join(" AND "))

View File

@ -202,76 +202,21 @@ impl SqlCompiler {
is_stem_query: bool,
depth: usize,
) -> Result<(String, String), String> {
// We are compiling a query block for an Entity.
let mut select_args = Vec::new();
// Mapping table hierarchy to aliases, e.g., ["person", "user", "organization", "entity"]
let local_ctx = format!("{}_{}", parent_alias, prop_name.unwrap_or("obj"));
// e.g., parent_t1_contact -> we'll use t1 for the first of this block, t2 for the second, etc.
// Actually, local_ctx can just be exactly that prop's unique path.
let mut table_aliases = std::collections::HashMap::new();
let mut from_clauses = Vec::new();
for (i, table_name) in type_def.hierarchy.iter().enumerate() {
let alias = format!("{}_t{}", local_ctx, i + 1);
table_aliases.insert(table_name.clone(), alias.clone());
// 1. Build FROM clauses and table aliases
let (mut table_aliases, from_clauses) = self.build_hierarchy_from_clauses(type_def, &local_ctx);
if i == 0 {
from_clauses.push(format!("agreego.{} {}", table_name, alias));
} else {
// Join to previous
let prev_alias = format!("{}_t{}", local_ctx, i);
from_clauses.push(format!(
"JOIN agreego.{} {} ON {}.id = {}.id",
table_name, alias, alias, prev_alias
));
}
}
// Now, let's map properties from the schema to the correct table alias using grouped_fields
// grouped_fields is { "person": ["first_name", ...], "user": ["password"], ... }
let grouped_fields = type_def.grouped_fields.as_ref().and_then(|v| v.as_object());
let merged_props = self.get_merged_properties(schema);
for (prop_key, prop_schema) in &merged_props {
// Find which table owns this property
// Find which table owns this property
let mut owner_alias = table_aliases
.get("entity")
.cloned()
.unwrap_or_else(|| format!("{}_t_err", parent_alias));
if let Some(gf) = grouped_fields {
for (t_name, fields_val) in gf {
if let Some(fields_arr) = fields_val.as_array() {
if fields_arr.iter().any(|v| v.as_str() == Some(prop_key)) {
owner_alias = table_aliases
.get(t_name)
.cloned()
.unwrap_or_else(|| parent_alias.to_string());
break;
}
}
}
}
// Now we know `owner_alias`, e.g., `parent_t1` or `parent_t3`.
// Walk the property to get its SQL value
let (val_sql, val_type) = self.walk_schema(
prop_schema,
&owner_alias,
Some(prop_key),
filter_keys,
is_stem_query,
depth + 1,
)?;
if val_type == "abort" {
continue;
}
select_args.push(format!("'{}', {}", prop_key, val_sql));
}
// 2. Map properties and build jsonb_build_object args
let select_args = self.map_properties_to_aliases(
schema,
type_def,
&table_aliases,
parent_alias,
filter_keys,
is_stem_query,
depth,
)?;
let jsonb_obj_sql = if select_args.is_empty() {
"jsonb_build_object()".to_string()
@ -279,91 +224,16 @@ impl SqlCompiler {
format!("jsonb_build_object({})", select_args.join(", "))
};
let base_alias = table_aliases
.get(&type_def.name)
.cloned()
.unwrap_or_else(|| "err".to_string());
// 3. Build WHERE clauses
let mut where_clauses = self.build_filter_where_clauses(
schema,
type_def,
&table_aliases,
parent_alias,
prop_name,
filter_keys,
)?;
let mut where_clauses = Vec::new();
where_clauses.push(format!("NOT {}.archived", base_alias));
// Filter Mapping - Only append filters if this is the ROOT table query (i.e. parent_alias is "t1")
// Because cue.filters operates strictly on top-level root properties right now.
if parent_alias == "t1" {
for (i, filter_key) in filter_keys.iter().enumerate() {
// Find which table owns this filter key
let mut filter_alias = base_alias.clone(); // default to root table (e.g. t3 entity)
if let Some(gf) = type_def.grouped_fields.as_ref().and_then(|v| v.as_object()) {
for (t_name, fields_val) in gf {
if let Some(fields_arr) = fields_val.as_array() {
if fields_arr.iter().any(|v| v.as_str() == Some(filter_key)) {
filter_alias = table_aliases
.get(t_name)
.cloned()
.unwrap_or_else(|| base_alias.clone());
break;
}
}
}
}
let mut is_ilike = false;
let mut cast = "";
// Use PostgreSQL column type metadata for exact argument casting
if let Some(field_types) = type_def.field_types.as_ref().and_then(|v| v.as_object()) {
if let Some(pg_type_val) = field_types.get(filter_key) {
if let Some(pg_type) = pg_type_val.as_str() {
if pg_type == "uuid" {
cast = "::uuid";
} else if pg_type == "boolean" || pg_type == "bool" {
cast = "::boolean";
} else if pg_type.contains("timestamp")
|| pg_type == "timestamptz"
|| pg_type == "date"
{
cast = "::timestamptz";
} else if pg_type == "numeric"
|| pg_type.contains("int")
|| pg_type == "real"
|| pg_type == "double precision"
{
cast = "::numeric";
} else if pg_type == "text" || pg_type.contains("char") {
// Determine if this is an enum in the schema locally to avoid ILIKE on strict enums
let mut is_enum = false;
if let Some(props) = &schema.obj.properties {
if let Some(ps) = props.get(filter_key) {
is_enum = ps.obj.enum_.is_some();
}
}
if !is_enum {
is_ilike = true;
}
}
}
}
}
// Add to WHERE clause using 1-indexed args pointer: $1, $2
if is_ilike {
let param = format!("${}#>>'{{}}'", i + 1);
where_clauses.push(format!("{}.{} ILIKE {}", filter_alias, filter_key, param));
} else {
let param = format!("(${}#>>'{{}}'){}", i + 1, cast);
where_clauses.push(format!("{}.{} = {}", filter_alias, filter_key, param));
}
}
}
// Resolve FK relationship constraint if this is a nested subquery
if let Some(_prop) = prop_name {
// MOCK relation resolution (will integrate with `get_entity_relation` properly)
// By default assume FK is parent_id on child
where_clauses.push(format!("{}.parent_id = {}.id", base_alias, parent_alias));
}
// Wrap the object in the final array or object SELECT
let selection = if is_array {
format!("COALESCE(jsonb_agg({}), '[]'::jsonb)", jsonb_obj_sql)
} else {
@ -387,6 +257,219 @@ impl SqlCompiler {
))
}
fn build_hierarchy_from_clauses(
&self,
type_def: &crate::database::r#type::Type,
local_ctx: &str,
) -> (std::collections::HashMap<String, String>, Vec<String>) {
let mut table_aliases = std::collections::HashMap::new();
let mut from_clauses = Vec::new();
for (i, table_name) in type_def.hierarchy.iter().enumerate() {
let alias = format!("{}_t{}", local_ctx, i + 1);
table_aliases.insert(table_name.clone(), alias.clone());
if i == 0 {
from_clauses.push(format!("agreego.{} {}", table_name, alias));
} else {
let prev_alias = format!("{}_t{}", local_ctx, i);
from_clauses.push(format!(
"JOIN agreego.{} {} ON {}.id = {}.id",
table_name, alias, alias, prev_alias
));
}
}
(table_aliases, from_clauses)
}
fn map_properties_to_aliases(
&self,
schema: &crate::database::schema::Schema,
type_def: &crate::database::r#type::Type,
table_aliases: &std::collections::HashMap<String, String>,
parent_alias: &str,
filter_keys: &[String],
is_stem_query: bool,
depth: usize,
) -> Result<Vec<String>, String> {
let mut select_args = Vec::new();
let grouped_fields = type_def.grouped_fields.as_ref().and_then(|v| v.as_object());
let merged_props = self.get_merged_properties(schema);
for (prop_key, prop_schema) in &merged_props {
let mut owner_alias = table_aliases
.get("entity")
.cloned()
.unwrap_or_else(|| format!("{}_t_err", parent_alias));
if let Some(gf) = grouped_fields {
for (t_name, fields_val) in gf {
if let Some(fields_arr) = fields_val.as_array() {
if fields_arr.iter().any(|v| v.as_str() == Some(prop_key)) {
owner_alias = table_aliases
.get(t_name)
.cloned()
.unwrap_or_else(|| parent_alias.to_string());
break;
}
}
}
}
let (val_sql, val_type) = self.walk_schema(
prop_schema,
&owner_alias,
Some(prop_key),
filter_keys,
is_stem_query,
depth + 1,
)?;
if val_type != "abort" {
select_args.push(format!("'{}', {}", prop_key, val_sql));
}
}
Ok(select_args)
}
fn build_filter_where_clauses(
&self,
schema: &crate::database::schema::Schema,
type_def: &crate::database::r#type::Type,
table_aliases: &std::collections::HashMap<String, String>,
parent_alias: &str,
prop_name: Option<&str>,
filter_keys: &[String],
) -> Result<Vec<String>, String> {
let base_alias = table_aliases
.get(&type_def.name)
.cloned()
.unwrap_or_else(|| "err".to_string());
let mut where_clauses = Vec::new();
where_clauses.push(format!("NOT {}.archived", base_alias));
if parent_alias == "t1" {
for (i, filter_key) in filter_keys.iter().enumerate() {
let mut parts = filter_key.split(':');
let field_name = parts.next().unwrap_or(filter_key);
let op = parts.next().unwrap_or("$eq");
let mut filter_alias = base_alias.clone();
if let Some(gf) = type_def.grouped_fields.as_ref().and_then(|v| v.as_object()) {
for (t_name, fields_val) in gf {
if let Some(fields_arr) = fields_val.as_array() {
if fields_arr.iter().any(|v| v.as_str() == Some(field_name)) {
filter_alias = table_aliases
.get(t_name)
.cloned()
.unwrap_or_else(|| base_alias.clone());
break;
}
}
}
}
let mut is_ilike = false;
let mut cast = "";
if let Some(field_types) = type_def.field_types.as_ref().and_then(|v| v.as_object()) {
if let Some(pg_type_val) = field_types.get(field_name) {
if let Some(pg_type) = pg_type_val.as_str() {
if pg_type == "uuid" {
cast = "::uuid";
} else if pg_type == "boolean" || pg_type == "bool" {
cast = "::boolean";
} else if pg_type.contains("timestamp")
|| pg_type == "timestamptz"
|| pg_type == "date"
{
cast = "::timestamptz";
} else if pg_type == "numeric"
|| pg_type.contains("int")
|| pg_type == "real"
|| pg_type == "double precision"
{
cast = "::numeric";
} else if pg_type == "text" || pg_type.contains("char") {
let mut is_enum = false;
if let Some(props) = &schema.obj.properties {
if let Some(ps) = props.get(field_name) {
is_enum = ps.obj.enum_.is_some();
}
}
if !is_enum {
is_ilike = true;
}
}
}
}
}
let param_index = i + 1;
let p_val = format!("${}#>>'{{}}'", param_index);
if op == "$in" || op == "$nin" {
let sql_op = if op == "$in" { "IN" } else { "NOT IN" };
let subquery = format!(
"(SELECT value{} FROM jsonb_array_elements_text(({})::jsonb))",
cast, p_val
);
where_clauses.push(format!(
"{}.{} {} {}",
filter_alias, field_name, sql_op, subquery
));
} else {
let sql_op = match op {
"$eq" => {
if is_ilike {
"ILIKE"
} else {
"="
}
}
"$ne" => {
if is_ilike {
"NOT ILIKE"
} else {
"!="
}
}
"$gt" => ">",
"$gte" => ">=",
"$lt" => "<",
"$lte" => "<=",
_ => {
if is_ilike {
"ILIKE"
} else {
"="
}
}
};
let param_sql = if is_ilike && (op == "$eq" || op == "$ne") {
p_val
} else {
format!("({}){}", p_val, cast)
};
where_clauses.push(format!(
"{}.{} {} {}",
filter_alias, field_name, sql_op, param_sql
));
}
}
}
if let Some(_prop) = prop_name {
where_clauses.push(format!("{}.parent_id = {}.id", base_alias, parent_alias));
}
Ok(where_clauses)
}
fn compile_inline_object(
&self,
props: &std::collections::BTreeMap<String, std::sync::Arc<crate::database::schema::Schema>>,

View File

@ -24,61 +24,105 @@ impl Queryer {
stem_opt: Option<&str>,
filters: Option<&serde_json::Value>,
) -> crate::drop::Drop {
let filters_map: Option<&serde_json::Map<String, serde_json::Value>> =
filters.and_then(|f| f.as_object());
let filters_map = filters.and_then(|f| f.as_object());
// Generate Permutation Cache Key: schema_id + sorted filter keys
let mut filter_keys: Vec<String> = Vec::new();
if let Some(fm) = filters_map {
for key in fm.keys() {
filter_keys.push(key.clone());
// 1. Process filters into structured $op keys and linear values
let (filter_keys, args) = match self.parse_filter_entries(filters_map) {
Ok(res) => res,
Err(msg) => {
return crate::drop::Drop::with_errors(vec![crate::drop::Error {
code: "FILTER_PARSE_FAILED".to_string(),
message: msg,
details: crate::drop::ErrorDetails {
path: schema_id.to_string(),
},
}]);
}
}
filter_keys.sort();
};
let stem_key = stem_opt.unwrap_or("/");
let cache_key = format!("{}(Stem:{}):{}", schema_id, stem_key, filter_keys.join(","));
let sql = if let Some(cached_sql) = self.cache.get(&cache_key) {
cached_sql.value().clone()
} else {
// Compile the massive base SQL string
let compiler = compiler::SqlCompiler::new(self.db.clone());
match compiler.compile(schema_id, stem_opt, &filter_keys) {
Ok(compiled_sql) => {
self.cache.insert(cache_key.clone(), compiled_sql.clone());
compiled_sql
}
Err(e) => {
return crate::drop::Drop::with_errors(vec![crate::drop::Error {
code: "QUERY_COMPILATION_FAILED".to_string(),
message: e,
details: crate::drop::ErrorDetails {
path: schema_id.to_string(),
},
}]);
}
}
// 2. Fetch from cache or compile
let sql = match self.get_or_compile_sql(&cache_key, schema_id, stem_opt, &filter_keys) {
Ok(sql) => sql,
Err(drop) => return drop,
};
// 2. Prepare the execution arguments from the filters
let mut args: Vec<serde_json::Value> = Vec::new();
// 3. Execute via Database Executor
self.execute_sql(schema_id, &sql, &args)
}
fn parse_filter_entries(
&self,
filters_map: Option<&serde_json::Map<String, serde_json::Value>>,
) -> Result<(Vec<String>, Vec<serde_json::Value>), String> {
let mut filter_entries: Vec<(String, serde_json::Value)> = Vec::new();
if let Some(fm) = filters_map {
for (_i, key) in filter_keys.iter().enumerate() {
if let Some(val) = fm.get(key) {
args.push(val.clone());
for (key, val) in fm {
if let Some(obj) = val.as_object() {
for (op, op_val) in obj {
if !op.starts_with('$') {
return Err(format!("Filter operator must start with '$', got: {}", op));
}
filter_entries.push((format!("{}:{}", key, op), op_val.clone()));
}
} else {
return Err(format!(
"Filter for field '{}' must be an object with operators like $eq, $in, etc.",
key
));
}
}
}
filter_entries.sort_by(|a, b| a.0.cmp(&b.0));
// 3. Execute via Database Executor
match self.db.query(&sql, Some(&args)) {
let filter_keys: Vec<String> = filter_entries.iter().map(|(k, _)| k.clone()).collect();
let args: Vec<serde_json::Value> = filter_entries.into_iter().map(|(_, v)| v).collect();
Ok((filter_keys, args))
}
fn get_or_compile_sql(
&self,
cache_key: &str,
schema_id: &str,
stem_opt: Option<&str>,
filter_keys: &[String],
) -> Result<String, crate::drop::Drop> {
if let Some(cached_sql) = self.cache.get(cache_key) {
return Ok(cached_sql.value().clone());
}
let compiler = compiler::SqlCompiler::new(self.db.clone());
match compiler.compile(schema_id, stem_opt, filter_keys) {
Ok(compiled_sql) => {
self
.cache
.insert(cache_key.to_string(), compiled_sql.clone());
Ok(compiled_sql)
}
Err(e) => Err(crate::drop::Drop::with_errors(vec![crate::drop::Error {
code: "QUERY_COMPILATION_FAILED".to_string(),
message: e,
details: crate::drop::ErrorDetails {
path: schema_id.to_string(),
},
}])),
}
}
fn execute_sql(
&self,
schema_id: &str,
sql: &str,
args: &[serde_json::Value],
) -> crate::drop::Drop {
match self.db.query(sql, Some(args)) {
Ok(serde_json::Value::Array(table)) => {
if table.is_empty() {
crate::drop::Drop::success_with_val(serde_json::Value::Null)
} else {
// We expect the query to return a single JSONB column, already unpacked from row[0]
crate::drop::Drop::success_with_val(table.first().unwrap().clone())
}
}

View File

@ -1434,39 +1434,39 @@ fn test_queryer_0_3() {
}
#[test]
fn test_queryer_1_0() {
fn test_queryer_0_4() {
let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 0).unwrap();
crate::tests::runner::run_test_case(&path, 0, 4).unwrap();
}
#[test]
fn test_queryer_1_1() {
fn test_queryer_0_5() {
let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 1).unwrap();
crate::tests::runner::run_test_case(&path, 0, 5).unwrap();
}
#[test]
fn test_queryer_1_2() {
fn test_queryer_0_6() {
let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 2).unwrap();
crate::tests::runner::run_test_case(&path, 0, 6).unwrap();
}
#[test]
fn test_queryer_1_3() {
fn test_queryer_0_7() {
let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 3).unwrap();
crate::tests::runner::run_test_case(&path, 0, 7).unwrap();
}
#[test]
fn test_queryer_1_4() {
fn test_queryer_0_8() {
let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 4).unwrap();
crate::tests::runner::run_test_case(&path, 0, 8).unwrap();
}
#[test]
fn test_queryer_1_5() {
fn test_queryer_0_9() {
let path = format!("{}/fixtures/queryer.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 5).unwrap();
crate::tests::runner::run_test_case(&path, 0, 9).unwrap();
}
#[test]
@ -8514,31 +8514,43 @@ fn test_merger_0_0() {
}
#[test]
fn test_merger_1_0() {
fn test_merger_0_1() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 0).unwrap();
crate::tests::runner::run_test_case(&path, 0, 1).unwrap();
}
#[test]
fn test_merger_1_1() {
fn test_merger_0_2() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 1).unwrap();
crate::tests::runner::run_test_case(&path, 0, 2).unwrap();
}
#[test]
fn test_merger_1_2() {
fn test_merger_0_3() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 1, 2).unwrap();
crate::tests::runner::run_test_case(&path, 0, 3).unwrap();
}
#[test]
fn test_merger_2_0() {
fn test_merger_0_4() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 2, 0).unwrap();
crate::tests::runner::run_test_case(&path, 0, 4).unwrap();
}
#[test]
fn test_merger_2_1() {
fn test_merger_0_5() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 2, 1).unwrap();
crate::tests::runner::run_test_case(&path, 0, 5).unwrap();
}
#[test]
fn test_merger_0_6() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 0, 6).unwrap();
}
#[test]
fn test_merger_0_7() {
let path = format!("{}/fixtures/merger.json", env!("CARGO_MANIFEST_DIR"));
crate::tests::runner::run_test_case(&path, 0, 7).unwrap();
}

View File

@ -95,17 +95,6 @@ pub fn run_test_case(path: &str, suite_idx: usize, case_idx: usize) -> Result<()
let mut failures = Vec::<String>::new();
// 4. Run Tests
// Provide fallback for legacy expectations if `expect` block was missing despite migration script
let _expected_success = test
.expect
.as_ref()
.map(|e| e.success)
.unwrap_or(test.valid.unwrap_or(false));
let _expected_errors = test
.expect
.as_ref()
.and_then(|e| e.errors.clone())
.unwrap_or(test.expect_errors.clone().unwrap_or(vec![]));
match test.action.as_str() {
"validate" => {

View File

@ -31,10 +31,6 @@ pub struct TestCase {
pub mocks: Option<serde_json::Value>,
pub expect: Option<ExpectBlock>,
// Legacy support for older tests to avoid migrating them all instantly
pub valid: Option<bool>,
pub expect_errors: Option<Vec<serde_json::Value>>,
}
fn default_action() -> String {
@ -59,18 +55,7 @@ impl TestCase {
let validator = Validator::new(db);
let expected_success = self
.expect
.as_ref()
.map(|e| e.success)
.unwrap_or(self.valid.unwrap_or(false));
// _expected_errors is preserved for future diffing if needed
let _expected_errors = self
.expect
.as_ref()
.and_then(|e| e.errors.clone())
.unwrap_or(self.expect_errors.clone().unwrap_or(vec![]));
let expected_success = self.expect.as_ref().map(|e| e.success).unwrap_or(false);
let schema_id = &self.schema_id;
if !validator.db.schemas.contains_key(schema_id) {
@ -102,6 +87,12 @@ impl TestCase {
}
pub fn run_merge(&self, db: Arc<Database>) -> Result<(), String> {
if let Some(mocks) = &self.mocks {
if let Some(arr) = mocks.as_array() {
db.executor.set_mocks(arr.clone());
}
}
use crate::merger::Merger;
let merger = Merger::new(db.clone());
@ -134,6 +125,12 @@ impl TestCase {
}
pub fn run_query(&self, db: Arc<Database>) -> Result<(), String> {
if let Some(mocks) = &self.mocks {
if let Some(arr) = mocks.as_array() {
db.executor.set_mocks(arr.clone());
}
}
use crate::queryer::Queryer;
let queryer = Queryer::new(db.clone());

View File

@ -1,4 +1,13 @@
use regex::Regex;
use serde::Deserialize;
use std::collections::HashMap;
#[derive(Debug, Deserialize)]
#[serde(untagged)]
pub enum SqlExpectation {
Single(String),
Multi(Vec<String>),
}
#[derive(Debug, Deserialize)]
pub struct ExpectBlock {
@ -6,7 +15,7 @@ pub struct ExpectBlock {
pub result: Option<serde_json::Value>,
pub errors: Option<Vec<serde_json::Value>>,
#[serde(default)]
pub sql: Option<Vec<String>>,
pub sql: Option<Vec<SqlExpectation>>,
}
impl ExpectBlock {
@ -29,8 +38,7 @@ impl ExpectBlock {
));
}
use regex::Regex;
use std::collections::HashMap;
let ws_re = Regex::new(r"\s+").unwrap();
let types = HashMap::from([
(
@ -53,9 +61,27 @@ impl ExpectBlock {
// Placeholder regex: {{type:name}} or {{type}}
let ph_rx = Regex::new(r"\{\{([a-z]+)(?:[:]([^}]+))?\}\}").unwrap();
for (i, pattern_str) in patterns.iter().enumerate() {
let aline = &actual[i];
let mut pp = regex::escape(pattern_str);
let clean_str = |s: &str| -> String {
let mut s = ws_re.replace_all(s, " ").into_owned();
for token in ["(", ")", ",", "{", "}", "\"", "=", "'"] {
s = s.replace(&format!(" {}", token), token);
s = s.replace(&format!("{} ", token), token);
}
s.trim().to_string()
};
for (i, pattern_expect) in patterns.iter().enumerate() {
let aline_raw = &actual[i];
let aline = clean_str(aline_raw);
let pattern_str_raw = match pattern_expect {
SqlExpectation::Single(s) => s.clone(),
SqlExpectation::Multi(m) => m.join(" "),
};
let pattern_str = clean_str(&pattern_str_raw);
let mut pp = regex::escape(&pattern_str);
pp = pp.replace(r"\{\{", "{{").replace(r"\}\}", "}}");
let mut cap_names = HashMap::new(); // cg_X -> var_name
@ -96,7 +122,7 @@ impl ExpectBlock {
Err(e) => return Err(format!("Bad constructed regex: {} -> {}", final_rx_str, e)),
};
if let Some(captures) = final_rx.captures(aline) {
if let Some(captures) = final_rx.captures(&aline) {
for (cg_name, var_name) in cap_names {
if let Some(m) = captures.name(&cg_name) {
let matched_str = m.as_str();

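The hunk above introduces two pieces of matching machinery: a `clean_str` normalization that makes SQL comparison whitespace-insensitive, and the `ph_rx` placeholder grammar (`{{type}}` or `{{type:name}}`). The stdlib-only sketch below illustrates both in isolation; the free functions, the `split_whitespace` substitution for the `\s+` regex, and the hand-rolled placeholder parse are illustrative assumptions, not the extension's actual API.

```rust
// Sketch only: approximates the clean_str closure, using split_whitespace()
// in place of the \s+ Regex (an assumption; the result is the same for the
// SQL strings being compared).
fn clean_str(s: &str) -> String {
    let mut out = s.split_whitespace().collect::<Vec<_>>().join(" ");
    // Drop spaces adjacent to punctuation so pure formatting differences
    // between expected and actual SQL don't cause spurious mismatches.
    for token in ["(", ")", ",", "{", "}", "\"", "=", "'"] {
        out = out.replace(&format!(" {}", token), token);
        out = out.replace(&format!("{} ", token), token);
    }
    out.trim().to_string()
}

// Sketch only: a hand-rolled parse of a single placeholder, mirroring what
// the regex r"\{\{([a-z]+)(?:[:]([^}]+))?\}\}" captures.
fn parse_placeholder(s: &str) -> Option<(&str, Option<&str>)> {
    let inner = s.strip_prefix("{{")?.strip_suffix("}}")?;
    match inner.split_once(':') {
        Some((ty, name)) => Some((ty, Some(name))),
        None => Some((inner, None)),
    }
}

fn main() {
    assert_eq!(clean_str("SELECT  a ,\n b FROM ( t )"), "SELECT a,b FROM(t)");
    assert_eq!(
        parse_placeholder("{{uuid:user_id}}"),
        Some(("uuid", Some("user_id")))
    );
    assert_eq!(parse_placeholder("{{int}}"), Some(("int", None)));
    println!("ok");
}
```

Normalizing both sides before constructing the regex (rather than loosening the pattern itself) keeps the `regex::escape` / unescape step for `{{ }}` simple, since the escaped pattern only ever has to match against an already-canonicalized line.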

@ -1,62 +0,0 @@
Compiling jspg v0.1.0 (/Users/awgneo/Repositories/thoughtpatterns/cellular/jspg)
Finished `test` profile [unoptimized + debuginfo] target(s) in 26.14s
Running unittests src/lib.rs (target/debug/deps/jspg-99ace086c3537f5a)
running 1 test
 Using PgConfig("pg18") and `pg_config` from /opt/homebrew/opt/postgresql@18/bin/pg_config
 Building extension with features pg_test pg18
 Running command "/opt/homebrew/bin/cargo" "build" "--lib" "--features" "pg_test pg18" "--no-default-features" "--message-format=json-render-diagnostics"
Compiling jspg v0.1.0 (/Users/awgneo/Repositories/thoughtpatterns/cellular/jspg)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 7.10s
 Installing extension
 Copying control file to /opt/homebrew/share/postgresql@18/extension/jspg.control
 Copying shared library to /opt/homebrew/lib/postgresql@18/jspg.dylib
 Discovered 351 SQL entities: 1 schemas (1 unique), 350 functions, 0 types, 0 enums, 0 sqls, 0 ords, 0 hashes, 0 aggregates, 0 triggers
 Rebuilding pgrx_embed, in debug mode, for SQL generation with features pg_test pg18
Compiling jspg v0.1.0 (/Users/awgneo/Repositories/thoughtpatterns/cellular/jspg)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 10.63s
 Writing SQL entities to /opt/homebrew/share/postgresql@18/extension/jspg--0.1.0.sql
 Finished installing jspg
[2026-03-01 22:54:19.068 EST] [82952] [69a509eb.14408]: LOG: starting PostgreSQL 18.1 (Homebrew) on aarch64-apple-darwin25.2.0, compiled by Apple clang version 17.0.0 (clang-1700.6.3.2), 64-bit
[2026-03-01 22:54:19.070 EST] [82952] [69a509eb.14408]: LOG: listening on IPv6 address "::1", port 32218
[2026-03-01 22:54:19.070 EST] [82952] [69a509eb.14408]: LOG: listening on IPv4 address "127.0.0.1", port 32218
[2026-03-01 22:54:19.071 EST] [82952] [69a509eb.14408]: LOG: listening on Unix socket "/Users/awgneo/Repositories/thoughtpatterns/cellular/jspg/target/test-pgdata/.s.PGSQL.32218"
[2026-03-01 22:54:19.077 EST] [82958] [69a509eb.1440e]: LOG: database system was shut down at 2026-03-01 22:49:02 EST
 Creating database pgrx_tests
thread 'tests::pg_test_typed_refs_0' (29092254) panicked at /Users/awgneo/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/pgrx-tests-0.16.1/src/framework.rs:166:9:
Postgres Messages:
[2026-03-01 22:54:19.068 EST] [82952] [69a509eb.14408]: LOG: starting PostgreSQL 18.1 (Homebrew) on aarch64-apple-darwin25.2.0, compiled by Apple clang version 17.0.0 (clang-1700.6.3.2), 64-bit
[2026-03-01 22:54:19.070 EST] [82952] [69a509eb.14408]: LOG: listening on IPv6 address "::1", port 32218
[2026-03-01 22:54:19.070 EST] [82952] [69a509eb.14408]: LOG: listening on IPv4 address "127.0.0.1", port 32218
[2026-03-01 22:54:19.071 EST] [82952] [69a509eb.14408]: LOG: listening on Unix socket "/Users/awgneo/Repositories/thoughtpatterns/cellular/jspg/target/test-pgdata/.s.PGSQL.32218"
[2026-03-01 22:54:19.081 EST] [82952] [69a509eb.14408]: LOG: database system is ready to accept connections

Test Function Messages:
[2026-03-01 22:54:20.058 EST] [82982] [69a509ec.14426]: LOG: statement: START TRANSACTION
[2026-03-01 22:54:20.058 EST] [82982] [69a509ec.14426]: LOG: statement: SELECT "tests"."test_typed_refs_0"();
[2026-03-01 22:54:20.062 EST] [82982] [69a509ec.14426]: ERROR: called `Result::unwrap()` on an `Err` value: "[Entity inheritance and native type discrimination] Test 'Valid person against organization schema (implicit type allowance)' failed. Expected: true, Got: false. Errors: [ValidationError { code: \"CONST_VIOLATED\", message: \"Value does not match const\", path: \"/type\" }, ValidationError { code: \"STRICT_PROPERTY_VIOLATION\", message: \"Unexpected property 'first_name'\", path: \"/first_name\" }, ValidationError { code: \"STRICT_PROPERTY_VIOLATION\", message: \"Unexpected property 'first_name'\", path: \"/first_name\" }, ValidationError { code: \"STRICT_PROPERTY_VIOLATION\", message: \"Unexpected property 'first_name'\", path: \"/first_name\" }]\n[Entity inheritance and native type discrimination] Test 'Valid organization against organization schema' failed. Expected: true, Got: false. Errors: [ValidationError { code: \"CONST_VIOLATED\", message: \"Value does not match const\", path: \"/type\" }]\n[Entity inheritance and native type discrimination] Test 'Invalid entity against organization schema (ancestor not allowed)' failed. Expected: false, Got: true. Errors: []"
[2026-03-01 22:54:20.062 EST] [82982] [69a509ec.14426]: STATEMENT: SELECT "tests"."test_typed_refs_0"();
[2026-03-01 22:54:20.062 EST] [82982] [69a509ec.14426]: LOG: statement: ROLLBACK

Client Error:
called `Result::unwrap()` on an `Err` value: "[Entity inheritance and native type discrimination] Test 'Valid person against organization schema (implicit type allowance)' failed. Expected: true, Got: false. Errors: [ValidationError { code: \"CONST_VIOLATED\", message: \"Value does not match const\", path: \"/type\" }, ValidationError { code: \"STRICT_PROPERTY_VIOLATION\", message: \"Unexpected property 'first_name'\", path: \"/first_name\" }, ValidationError { code: \"STRICT_PROPERTY_VIOLATION\", message: \"Unexpected property 'first_name'\", path: \"/first_name\" }, ValidationError { code: \"STRICT_PROPERTY_VIOLATION\", message: \"Unexpected property 'first_name'\", path: \"/first_name\" }]\n[Entity inheritance and native type discrimination] Test 'Valid organization against organization schema' failed. Expected: true, Got: false. Errors: [ValidationError { code: \"CONST_VIOLATED\", message: \"Value does not match const\", path: \"/type\" }]\n[Entity inheritance and native type discrimination] Test 'Invalid entity against organization schema (ancestor not allowed)' failed. Expected: false, Got: true. Errors: []"
postgres location: fixtures.rs
rust location: <unknown>
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test tests::pg_test_typed_refs_0 ... FAILED
failures:
failures:
tests::pg_test_typed_refs_0
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 343 filtered out; finished in 21.82s
error: test failed, to rerun pass `--lib`


@ -1 +1 @@
1.0.57
1.0.58