Compare commits

...

10 Commits

22 changed files with 1845 additions and 1605 deletions

49
.agent/workflows/jspg.md Normal file

@@ -0,0 +1,49 @@
---
description: jspg work preparation
---
This workflow will get you up to speed on JSPG, a custom JSON-Schema-based cargo pgrx Postgres validation extension. Everything referenced below lives in the jspg directory/project.
Read this entire workflow and commit to every section of work in a task list, so that you don't stop halfway through before reviewing all of the directories and files mentioned. Do not ask for confirmation after generating this task list; proceed through every section in your list.
Analyze the files and directories directly; do not use cat, find, or the terminal to discover or read any of them. Analyze every file mentioned. If a directory or a /* wildcard is mentioned, analyze the directory, every single file at its root, and recursively every subdirectory and every single file in every subdirectory, capturing not just critical files but the entirety of what is requested. I state again: DO NOT review just a cherry-picked subset of files in any folder or wildcard specified. Review 100% of all files discovered recursively!
Section 1: Documentation
- GEMINI.md at the root
Section 2: Flow file for cmd interface
- flow at the root
Section 3: Source
- src/*
Section 4: Test Fixtures
- Just review some of the *.json files in tests/fixtures/*
Section 5: Build
- build.rs
Section 6: Cargo TOML
- Cargo.toml
Section 7: Some PUNC Syntax
Now, review some punc type and enum source in the api project (under api/), specifically these files:
- punc/sql/tables.sql
- punc/sql/domains.sql
- punc/sql/indexes.sql
- punc/sql/functions/entity.sql
- punc/sql/functions/puncs.sql
- punc/sql/puncs/entity.sql
- punc/sql/puncs/persons.sql
- punc/sql/puncs/puncs.sql
- punc/sql/puncs/job.sql
Now you are ready to help me work on this extension.


@@ -9,7 +9,7 @@ It is designed to serve as the validation engine for the "Punc" architecture, wh
 1. **Draft 2020-12 Compliance**: Attempt to adhere to the official JSON Schema Draft 2020-12 specification.
 2. **Ultra-Fast Validation**: Compile schemas into an optimized in-memory representation for near-instant validation during high-throughput workloads.
 3. **Connection-Bound Caching**: Leverage the PostgreSQL session lifecycle to maintain a per-connection schema cache, eliminating the need for repetitive parsing.
-4. **Structural Inheritance**: Support object-oriented schema design via Implicit Keyword Shadowing and virtual `.family` schemas.
+4. **Structural Inheritance**: Support object-oriented schema design via Implicit Keyword Shadowing and virtual `$family` references.
 5. **Punc Integration**: validation is aware of the "Punc" context (request/response) and can validate `cue` objects efficiently.
 ## 🔌 API Reference
@@ -27,7 +27,7 @@ Loads and compiles the entire schema registry into the session's memory, atomica
 * **Behavior**:
 * Parses all inputs into an internal schema graph.
 * Resolves all internal references (`$ref`).
-* Generates virtual `.family` schemas for type hierarchies.
+* Generates virtual union schemas for type hierarchies referenced via `$family`.
 * Compiles schemas into validators.
 * **Returns**: `{"response": "success"}` or an error object.
@@ -78,16 +78,17 @@ Standard JSON Schema composition (`allOf`) is additive (Intersection), meaning c
 * **Composition (`allOf`)**: When using `allOf`, standard intersection rules apply. No shadowing occurs; all constraints from all branches must pass. This is used for mixins or interfaces.
-### 2. Virtual Family Schemas (`.family`)
+### 2. Virtual Family References (`$family`)
 To support polymorphic fields (e.g., a field that accepts any "User" type), JSPG generates virtual schemas representing type hierarchies.
-* **Mechanism**: When caching types, if a type defines a `hierarchy` (e.g., `["entity", "organization", "person"]`), JSPG generates a schema like `organization.family` which is a `oneOf` containing refs to all valid descendants.
+* **Mechanism**: When caching types, if a type defines a `hierarchy` (e.g., `["entity", "organization", "person"]`), JSPG generates a virtual `oneOf` family containing refs to all valid descendants. These can be pointed to exclusively by using `{"$family": "organization"}`. Because `$family` is a macro-pointer that swaps in the virtual union, it **must** be used exclusively in its schema object; you cannot define other properties alongside it.
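The family-generation mechanism described above can be sketched in plain Rust (stdlib only; the function and variable names here are illustrative, not the extension's actual API): given each type's `hierarchy` list, collect descendants per ancestor, yielding the member set behind each virtual `$family` union.

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Build an ancestor -> descendants map from (type name, hierarchy) pairs,
/// mirroring the virtual `$family` union generation described above.
fn build_families(types: &[(&str, &[&str])]) -> BTreeMap<String, BTreeSet<String>> {
    let mut families: BTreeMap<String, BTreeSet<String>> = BTreeMap::new();
    for (name, hierarchy) in types {
        for ancestor in *hierarchy {
            families
                .entry(ancestor.to_string())
                .or_default()
                .insert(name.to_string());
        }
    }
    families
}

fn main() {
    let person_hier = ["entity", "organization", "person"];
    let org_hier = ["entity", "organization"];
    let types: Vec<(&str, &[&str])> =
        vec![("person", &person_hier[..]), ("organization", &org_hier[..])];
    let families = build_families(&types);
    // The "organization" family holds every type listing it as an ancestor,
    // i.e. the members of the oneOf behind {"$family": "organization"}.
    println!("{:?}", families["organization"]);
}
```

A type that names itself in its own `hierarchy` becomes a member of its own family, which is what makes `{"$family": "organization"}` accept a plain organization as well as its descendants.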
 ### 3. Strict by Default & Extensibility
 JSPG enforces a "Secure by Default" philosophy. All schemas are treated as if `unevaluatedProperties: false` (and `unevaluatedItems: false`) is set, unless explicitly overridden.
-* **Strictness**: By default, any property in the instance data that is not explicitly defined in the schema causes a validation error. This prevents clients from sending undeclared fields.
+* **Strictness**: By default, any property or array item in the instance data that is not explicitly defined in the schema causes a validation error. This prevents clients from sending undeclared fields or extra array elements.
-* **Extensibility (`extensible: true`)**: To allow additional, undefined properties, you must add `"extensible": true` to the schema. This is useful for types that are designed to be open for extension.
+* **Extensibility (`extensible: true`)**: To allow a free-for-all of additional, undefined properties or extra array items, you must add `"extensible": true` to the schema. This globally disables the strictness check for that object or array, useful for types designed to be completely open.
+* **Structured Additional Properties (`additionalProperties: {...}`)**: Instead of a boolean free-for-all, you can define `additionalProperties` as a schema object (e.g., `{"type": "string"}`). This maintains strictness (no arbitrary keys) but allows any extra keys as long as their values match the defined structure.
 * **Ref Boundaries**: Strictness is reset when crossing `$ref` boundaries. The referenced schema's strictness is determined by its own definition (strict by default unless `extensible: true`), ignoring the caller's state.
 * **Inheritance**: Strictness is inherited. A schema extending a strict parent will also be strict unless it declares itself `extensible: true`. Conversely, a schema extending a loose parent will also be loose unless it declares itself `extensible: false`.
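The inheritance and `$ref`-boundary rules above reduce to a small resolution function. A minimal stdlib sketch (the struct and function names are illustrative, not the extension's internals):

```rust
/// A schema's own `extensible` keyword, if declared.
#[derive(Clone, Copy)]
struct SchemaFlags {
    extensible: Option<bool>,
}

/// Inheritance: a child inherits the parent's effective strictness
/// unless it declares `extensible` itself.
fn effective_extensible(schema: SchemaFlags, inherited: bool) -> bool {
    schema.extensible.unwrap_or(inherited)
}

/// Ref boundaries: the caller's state is ignored; the referenced schema
/// falls back to the strict default (`false`) unless it opts out.
fn effective_across_ref(target: SchemaFlags) -> bool {
    target.extensible.unwrap_or(false)
}

fn main() {
    let silent = SchemaFlags { extensible: None };
    let open = SchemaFlags { extensible: Some(true) };
    assert!(!effective_extensible(silent, false)); // strict parent -> strict child
    assert!(effective_extensible(silent, true));   // loose parent -> loose child
    assert!(!effective_across_ref(silent));        // $ref resets to the strict default
    assert!(effective_across_ref(open));
    println!("strictness rules hold");
}
```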


@@ -3,38 +3,38 @@ use std::fs::File;
 use std::io::Write;
 use std::path::Path;
+fn to_safe_identifier(name: &str) -> String {
+    let mut safe = String::new();
+    for (i, c) in name.chars().enumerate() {
+        if c.is_uppercase() {
+            if i > 0 {
+                safe.push('_');
+            }
+            safe.push(c.to_ascii_lowercase());
+        } else if c == '-' || c == '.' {
+            safe.push('_');
+        } else {
+            safe.push(c);
+        }
+    }
+    safe
+}
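A quick sanity check of the sanitizer added above (copied verbatim so it runs standalone): camelCase becomes snake_case, and `-`/`.` are replaced with underscores so fixture filenames produce valid Rust test identifiers.

```rust
/// Copy of the build-script helper above, so this compiles standalone.
fn to_safe_identifier(name: &str) -> String {
    let mut safe = String::new();
    for (i, c) in name.chars().enumerate() {
        if c.is_uppercase() {
            if i > 0 {
                safe.push('_');
            }
            safe.push(c.to_ascii_lowercase());
        } else if c == '-' || c == '.' {
            safe.push('_');
        } else {
            safe.push(c);
        }
    }
    safe
}

fn main() {
    assert_eq!(to_safe_identifier("dynamicRef"), "dynamic_ref");
    assert_eq!(to_safe_identifier("some-file.json"), "some_file_json");
    assert_eq!(to_safe_identifier("Upper"), "upper"); // no leading underscore
    println!("sanitizer ok");
}
```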
 fn main() {
     println!("cargo:rerun-if-changed=tests/fixtures");
     println!("cargo:rerun-if-changed=Cargo.toml");
-    // File 1: src/tests.rs for #[pg_test]
+    // File 1: src/tests/fixtures.rs for #[pg_test]
-    let pg_dest_path = Path::new("src/tests.rs");
+    let pg_dest_path = Path::new("src/tests/fixtures.rs");
     let mut pg_file = File::create(&pg_dest_path).unwrap();
-    // File 2: tests/tests.rs for standard #[test] integration
+    // File 2: tests/fixtures.rs for standard #[test] integration
-    let std_dest_path = Path::new("tests/tests.rs");
+    let std_dest_path = Path::new("tests/fixtures.rs");
     let mut std_file = File::create(&std_dest_path).unwrap();
     // Write headers
     writeln!(std_file, "use jspg::util;").unwrap();
-    // Helper for snake_case conversion
-    // let _to_snake_case = |s: &str| -> String {
-    //     s.chars().fold(String::new(), |mut acc, c| {
-    //         if c.is_uppercase() {
-    //             if !acc.is_empty() {
-    //                 acc.push('_');
-    //             }
-    //             acc.push(c.to_ascii_lowercase());
-    //         } else if c == '-' || c == ' ' || c == '.' || c == '/' || c == ':' {
-    //             acc.push('_');
-    //         } else if c.is_alphanumeric() {
-    //             acc.push(c);
-    //         }
-    //         acc
-    //     })
-    // };
     // Walk tests/fixtures directly
     let fixtures_path = "tests/fixtures";
     if Path::new(fixtures_path).exists() {
@@ -51,24 +51,7 @@ fn main() {
     if let Some(arr) = val.as_array() {
     for (i, _item) in arr.iter().enumerate() {
     // Use deterministic names: test_(unknown)_{index}
-    // We sanitize the filename to be a valid identifier
-    // Use manual snake_case logic since we don't want to add a build-dependency just yet if not needed,
-    // but `dynamicRef` -> `dynamic_ref` requires parsing.
-    // Let's implement a simple camelToSnake helper.
-    let mut safe_filename = String::new();
-    for (i, c) in file_name.chars().enumerate() {
-        if c.is_uppercase() {
-            if i > 0 {
-                safe_filename.push('_');
-            }
-            safe_filename.push(c.to_ascii_lowercase());
-        } else if c == '-' || c == '.' {
-            safe_filename.push('_');
-        } else {
-            safe_filename.push(c);
-        }
-    }
+    let safe_filename = to_safe_identifier(file_name);
     let fn_name = format!("test_{}_{}", safe_filename, i);
     // Write to src/tests.rs (PG Test)

71
flow

@@ -15,25 +15,28 @@ CARGO_DEPENDENCIES=(cargo-pgrx==0.16.1)
 GITEA_ORGANIZATION="cellular"
 GITEA_REPOSITORY="jspg"
-pgrx-prepare() {
+pgrx-up() {
     info "Initializing pgrx..."
     # Explicitly point to the postgresql@${POSTGRES_VERSION} pg_config, don't rely on 'which'
     local POSTGRES_CONFIG_PATH="/opt/homebrew/opt/postgresql@${POSTGRES_VERSION}/bin/pg_config"
     if [ ! -x "$POSTGRES_CONFIG_PATH" ]; then
-        error "pg_config not found or not executable at $POSTGRES_CONFIG_PATH."
         warning "Ensure postgresql@${POSTGRES_VERSION} is installed correctly via Homebrew."
-        return 2
+        abort "pg_config not found or not executable at $POSTGRES_CONFIG_PATH." 2
     fi
     if cargo pgrx init --pg"$POSTGRES_VERSION"="$POSTGRES_CONFIG_PATH"; then
-        success "pgrx initialized successfully."
-    else
-        error "Failed to initialize pgrx. Check PostgreSQL development packages are installed and $POSTGRES_CONFIG_PATH is valid."
-        return 2
+        success "pgrx initialized successfully." && return 0
     fi
+    abort "Failed to initialize pgrx. Check PostgreSQL development packages are installed and $POSTGRES_CONFIG_PATH is valid." 2
 }
+pgrx-down() {
+    info "Taking pgrx down..."
+}
 build() {
     local version
     version=$(get-version) || return $?
@@ -51,11 +54,10 @@ build() {
     info "Creating tarball: ${tarball_path}"
     # Set COPYFILE_DISABLE=1 to prevent macOS tar from including ._ metadata files
     if COPYFILE_DISABLE=1 tar --exclude='.git*' --exclude='./target' --exclude='./package' --exclude='./flows' --exclude='./flow' -czf "${tarball_path}" .; then
-        success "Successfully created source tarball: ${tarball_path}"
-    else
-        error "Failed to create source tarball."
-        return 2
+        success "Successfully created source tarball: ${tarball_path}" && return 0
     fi
+    abort "Failed to create source tarball." 2
 }
 install() {
@@ -66,8 +68,7 @@ install() {
     # Run the pgrx install command
     if ! cargo pgrx install; then
-        error "cargo pgrx install command failed."
-        return 2
+        abort "cargo pgrx install command failed." 2
     fi
     success "PGRX extension v$version successfully built and installed."
@@ -76,36 +77,28 @@ install() {
     pg_sharedir=$("$POSTGRES_CONFIG_PATH" --sharedir)
     local pg_config_status=$?
     if [ $pg_config_status -ne 0 ] || [ -z "$pg_sharedir" ]; then
-        error "Failed to determine PostgreSQL shared directory using pg_config."
-        return 2
+        abort "Failed to determine PostgreSQL shared directory using pg_config." 2
     fi
     local installed_control_path="${pg_sharedir}/extension/jspg.control"
     # Modify the control file
     if [ ! -f "$installed_control_path" ]; then
-        error "Installed control file not found: '$installed_control_path'"
-        return 2
+        abort "Installed control file not found: '$installed_control_path'" 2
     fi
     info "Modifying control file for non-superuser access: ${installed_control_path}"
     # Use sed -i '' for macOS compatibility
     if sed -i '' '/^superuser = false/d' "$installed_control_path" && \
        echo 'trusted = true' >> "$installed_control_path"; then
-        success "Control file modified successfully."
-    else
-        error "Failed to modify control file: ${installed_control_path}"
-        return 2
+        success "Control file modified successfully." && return 0
     fi
+    abort "Failed to modify control file: ${installed_control_path}" 2
 }
-test-jspg() {
+test() {
     info "Running jspg tests..."
-    cargo pgrx test "pg${POSTGRES_VERSION}" "$@" || return $?
+    cargo test --tests "$@" || return $?
-}
-test-validator() {
-    info "Running validator tests..."
-    cargo test -p boon --features "pgrx/pg${POSTGRES_VERSION}" "$@" || return $?
 }
 clean() {
@@ -114,27 +107,27 @@ clean() {
 }
 jspg-usage() {
-    printf "prepare\tCheck OS, Cargo, and PGRX dependencies.\n"
+    echo "up|Check OS, Cargo, and PGRX dependencies."
-    printf "install\tBuild and install the extension locally (after prepare).\n"
+    echo "install|Build and install the extension locally (after up)."
-    printf "reinstall\tClean, build, and install the extension locally (after prepare).\n"
+    echo "reinstall|Clean, build, and install the extension locally (after up)."
-    printf "test-jspg\t\tRun pgrx integration tests.\n"
+    echo "test-jspg|Run pgrx integration tests."
-    printf "test-validator\t\tRun validator integration tests.\n"
+    echo "test-validator|Run validator integration tests."
-    printf "clean\t\tRemove pgrx build artifacts.\n"
+    echo "clean|Remove pgrx build artifacts."
 }
 jspg-flow() {
     case "$1" in
-        prepare) prepare && cargo-prepare && pgrx-prepare; return $?;;
+        up) up && rust-up && pgrx-up; return $?;;
+        down) pgrx-down && rust-down && down; return $?;;
         build) build; return $?;;
         install) install; return $?;;
         reinstall) clean && install; return $?;;
-        test-jspg) test-jspg "${@:2}"; return $?;;
+        test) test "${@:2}"; return $?;;
-        test-validator) test-validator "${@:2}"; return $?;;
         clean) clean; return $?;;
-        *) return 1 ;;
+        *) return 127 ;;
     esac
 }
-register-flow "jspg-usage" "jspg-flow"
+register-flow "jspg"
 dispatch "$@"

2
flows

Submodule flows updated: 404da626c7...a7b0f5dc4d


@@ -113,6 +113,9 @@ impl Compiler {
         Self::compile_recursive(Arc::make_mut(s));
     }
 }
+    if let Some(add_props) = &mut schema.additional_properties {
+        Self::compile_recursive(Arc::make_mut(add_props));
+    }
     // ... Recurse logic ...
     if let Some(items) = &mut schema.items {
@@ -323,6 +326,11 @@ impl Compiler {
         Self::compile_index(sub_schema, registry, current_base.clone(), sub);
     }
 }
+    if let Some(add_props) = &schema.additional_properties {
+        let mut sub = child_pointer.clone();
+        sub.push("additionalProperties".to_string());
+        Self::compile_index(add_props, registry, current_base.clone(), sub);
+    }
     if let Some(contains) = &schema.contains {
         let mut sub = child_pointer.clone();
         sub.push("contains".to_string());
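The indexing pass builds a JSON-Pointer-style path by pushing segments such as `additionalProperties` onto a `Vec<String>`. A minimal stdlib sketch of that pointer construction (the helper name is illustrative; escaping follows RFC 6901):

```rust
/// Render a pointer path the way the indexing pass accumulates it:
/// each keyword or property name is pushed as one segment.
fn to_pointer(segments: &[String]) -> String {
    let mut out = String::new();
    for seg in segments {
        out.push('/');
        // Per RFC 6901, '~' must be escaped as ~0 before '/' becomes ~1.
        out.push_str(&seg.replace('~', "~0").replace('/', "~1"));
    }
    out
}

fn main() {
    let mut child_pointer: Vec<String> = vec!["properties".into(), "tags".into()];
    child_pointer.push("additionalProperties".to_string());
    println!("{}", to_pointer(&child_pointer)); // /properties/tags/additionalProperties
}
```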

118
src/context.rs Normal file

@@ -0,0 +1,118 @@
use crate::error::ValidationError;
use crate::instance::ValidationInstance;
use crate::result::ValidationResult;
use crate::schema::Schema;
use crate::validator::Validator;
use std::collections::HashSet;
pub struct ValidationContext<'a, I: ValidationInstance<'a>> {
pub validator: &'a Validator,
pub root: &'a Schema,
pub schema: &'a Schema,
pub instance: I,
pub path: String,
pub depth: usize,
pub scope: Vec<String>,
pub overrides: HashSet<String>,
pub extensible: bool,
pub reporter: bool,
}
impl<'a, I: ValidationInstance<'a>> ValidationContext<'a, I> {
pub fn new(
validator: &'a Validator,
root: &'a Schema,
schema: &'a Schema,
instance: I,
scope: Vec<String>,
overrides: HashSet<String>,
extensible: bool,
reporter: bool,
) -> Self {
let effective_extensible = schema.extensible.unwrap_or(extensible);
Self {
validator,
root,
schema,
instance,
path: String::new(),
depth: 0,
scope,
overrides,
extensible: effective_extensible,
reporter,
}
}
pub fn derive(
&self,
schema: &'a Schema,
instance: I,
path: &str,
scope: Vec<String>,
overrides: HashSet<String>,
extensible: bool,
reporter: bool,
) -> Self {
let effective_extensible = schema.extensible.unwrap_or(extensible);
Self {
validator: self.validator,
root: self.root,
schema,
instance,
path: path.to_string(),
depth: self.depth + 1,
scope,
overrides,
extensible: effective_extensible,
reporter,
}
}
pub fn derive_for_schema(&self, schema: &'a Schema, reporter: bool) -> Self {
self.derive(
schema,
self.instance,
&self.path,
self.scope.clone(),
HashSet::new(),
self.extensible,
reporter,
)
}
pub fn validate(&self) -> Result<ValidationResult, ValidationError> {
let mut effective_scope = self.scope.clone();
if let Some(id) = &self.schema.obj.id {
let current_base = self.scope.last().map(|s| s.as_str()).unwrap_or("");
let mut new_base = id.clone();
if !current_base.is_empty() {
if let Ok(base_url) = url::Url::parse(current_base) {
if let Ok(joined) = base_url.join(id) {
new_base = joined.to_string();
}
}
}
effective_scope.push(new_base);
let shadow = ValidationContext {
validator: self.validator,
root: self.root,
schema: self.schema,
instance: self.instance,
path: self.path.clone(),
depth: self.depth,
scope: effective_scope,
overrides: self.overrides.clone(),
extensible: self.extensible,
reporter: self.reporter,
};
return shadow.validate_scoped();
}
self.validate_scoped()
}
}
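The `$id` handling in `validate` above resolves a schema's id against the current base URI before pushing it onto the scope stack. A stripped-down stdlib sketch of that stack behavior, using a naive join in place of the `url` crate's RFC 3986 resolution (so it is an approximation, not the real algorithm):

```rust
/// Naive base resolution: absolute ids (containing "://") replace the base,
/// relative ids are joined onto the base's directory. The real code uses
/// url::Url::join for full RFC 3986 semantics.
fn resolve(base: &str, id: &str) -> String {
    if id.contains("://") || base.is_empty() {
        id.to_string()
    } else {
        let dir = base.rsplit_once('/').map(|(head, _)| head).unwrap_or(base);
        format!("{}/{}", dir, id)
    }
}

fn main() {
    // Each nested `$id` is resolved against the innermost scope entry.
    let mut scope: Vec<String> = Vec::new();
    for id in ["https://example.test/schemas/root.json", "person.json"] {
        let base = scope.last().map(|s| s.as_str()).unwrap_or("");
        scope.push(resolve(base, id));
    }
    println!("{:?}", scope);
    // last entry: https://example.test/schemas/person.json
}
```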


@@ -13,7 +13,7 @@ pub struct Drop {
     #[serde(skip_serializing_if = "Option::is_none")]
     pub response: Option<Value>,
-    #[serde(default)]
+    #[serde(default, skip_serializing_if = "Vec::is_empty")]
     pub errors: Vec<Error>,
 }
@@ -29,7 +29,7 @@ impl Drop {
     pub fn success() -> Self {
         Self {
             type_: "drop".to_string(),
-            response: Some(serde_json::json!({ "result": "success" })), // Or appropriate success response
+            response: Some(serde_json::json!("success")),
             errors: vec![],
         }
     }
@@ -53,8 +53,6 @@ impl Drop {
 #[derive(Debug, Serialize, Deserialize, Clone)]
 pub struct Error {
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub punc: Option<String>,
     pub code: String,
     pub message: String,
     pub details: ErrorDetails,

6
src/error.rs Normal file

@@ -0,0 +1,6 @@
#[derive(Debug, Clone, serde::Serialize)]
pub struct ValidationError {
pub code: String,
pub message: String,
pub path: String,
}

98
src/instance.rs Normal file

@@ -0,0 +1,98 @@
use serde_json::Value;
use std::collections::HashSet;
use std::ptr::NonNull;
pub trait ValidationInstance<'a>: Copy + Clone {
fn as_value(&self) -> &'a Value;
fn child_at_key(&self, key: &str) -> Option<Self>;
fn child_at_index(&self, idx: usize) -> Option<Self>;
fn prune_object(&self, _keys: &HashSet<String>) {}
fn prune_array(&self, _indices: &HashSet<usize>) {}
}
#[derive(Clone, Copy)]
pub struct ReadOnlyInstance<'a>(pub &'a Value);
impl<'a> ValidationInstance<'a> for ReadOnlyInstance<'a> {
fn as_value(&self) -> &'a Value {
self.0
}
fn child_at_key(&self, key: &str) -> Option<Self> {
self.0.get(key).map(ReadOnlyInstance)
}
fn child_at_index(&self, idx: usize) -> Option<Self> {
self.0.get(idx).map(ReadOnlyInstance)
}
}
#[derive(Clone, Copy)]
pub struct MutableInstance {
ptr: NonNull<Value>,
}
impl MutableInstance {
pub fn new(val: &mut Value) -> Self {
Self {
ptr: NonNull::from(val),
}
}
}
impl<'a> ValidationInstance<'a> for MutableInstance {
fn as_value(&self) -> &'a Value {
unsafe { self.ptr.as_ref() }
}
fn child_at_key(&self, key: &str) -> Option<Self> {
unsafe {
if let Some(obj) = self.ptr.as_ref().as_object() {
if obj.contains_key(key) {
let parent_mut = &mut *self.ptr.as_ptr();
if let Some(child_val) = parent_mut.get_mut(key) {
return Some(MutableInstance::new(child_val));
}
}
}
None
}
}
fn child_at_index(&self, idx: usize) -> Option<Self> {
unsafe {
if let Some(arr) = self.ptr.as_ref().as_array() {
if idx < arr.len() {
let parent_mut = &mut *self.ptr.as_ptr();
if let Some(child_val) = parent_mut.get_mut(idx) {
return Some(MutableInstance::new(child_val));
}
}
}
None
}
}
fn prune_object(&self, keys: &HashSet<String>) {
unsafe {
let val_mut = &mut *self.ptr.as_ptr();
if let Some(obj) = val_mut.as_object_mut() {
obj.retain(|k, _| keys.contains(k));
}
}
}
fn prune_array(&self, indices: &HashSet<usize>) {
unsafe {
let val_mut = &mut *self.ptr.as_ptr();
if let Some(arr) = val_mut.as_array_mut() {
let mut i = 0;
arr.retain(|_| {
let keep = indices.contains(&i);
i += 1;
keep
});
}
}
}
}
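`prune_array` above keeps only evaluated indices by threading an external counter through `Vec::retain`, since `retain` does not expose the element's index. Here is that idiom in isolation (stdlib only; the function name is illustrative):

```rust
use std::collections::HashSet;

/// Keep only elements whose index is in `indices`, mirroring prune_array:
/// `retain` visits elements in order, so a counter recovers each index.
fn prune_by_index<T>(items: &mut Vec<T>, indices: &HashSet<usize>) {
    let mut i = 0;
    items.retain(|_| {
        let keep = indices.contains(&i);
        i += 1;
        keep
    });
}

fn main() {
    let mut vals = vec!["a", "b", "c", "d"];
    let evaluated: HashSet<usize> = [0, 2].into_iter().collect();
    prune_by_index(&mut vals, &evaluated);
    println!("{:?}", vals); // ["a", "c"]
}
```

This is the masking behavior: anything the schema did not evaluate is stripped from the instance in place.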


@@ -11,8 +11,13 @@ mod schema;
 pub mod util;
 mod validator;
-use crate::schema::Schema;
-use serde_json::{Value, json};
+pub mod context;
+pub mod error;
+pub mod instance;
+pub mod result;
+pub(crate) mod rules;
+use serde_json::json;
 use std::sync::{Arc, RwLock};
 lazy_static::lazy_static! {
@@ -25,80 +30,13 @@ lazy_static::lazy_static! {
 }
 #[pg_extern(strict)]
-fn cache_json_schemas(enums: JsonB, types: JsonB, puncs: JsonB) -> JsonB {
+pub fn cache_json_schemas(enums: JsonB, types: JsonB, puncs: JsonB) -> JsonB {
-    // 1. Build a new Registry LOCALLY (on stack)
-    let mut registry = registry::Registry::new();
-    // Generate Family Schemas from Types
-    {
-        let mut family_map: std::collections::HashMap<String, std::collections::HashSet<String>> =
-            std::collections::HashMap::new();
-        if let Value::Array(arr) = &types.0 {
-            for item in arr {
-                if let Some(name) = item.get("name").and_then(|v| v.as_str()) {
-                    if let Some(hierarchy) = item.get("hierarchy").and_then(|v| v.as_array()) {
-                        for ancestor in hierarchy {
-                            if let Some(anc_str) = ancestor.as_str() {
-                                family_map
-                                    .entry(anc_str.to_string())
-                                    .or_default()
-                                    .insert(name.to_string());
-                            }
-                        }
-                    }
-                }
-            }
-        }
-        for (family_name, members) in family_map {
-            let id = format!("{}.family", family_name);
-            // Object Union (for polymorphic object validation)
-            // This allows the schema to match ANY of the types in the family hierarchy
-            let object_refs: Vec<Value> = members.iter().map(|s| json!({ "$ref": s })).collect();
-            let schema_json = json!({
-                "$id": id,
-                "oneOf": object_refs
-            });
-            if let Ok(schema) = serde_json::from_value::<Schema>(schema_json) {
-                registry.add(schema);
-            }
-        }
-        // Helper to parse and cache a list of items
-        let mut cache_items = |items: JsonB| {
-            if let Value::Array(arr) = items.0 {
-                for item in arr {
-                    // For now, we assume the item structure matches what the generator expects
-                    // or what `json_schemas.sql` sends.
-                    // The `Schema` struct in `schema.rs` is designed to deserialize standard JSON Schema.
-                    // However, the input here is an array of objects that *contain* a `schemas` array.
-                    // We need to extract those inner schemas.
-                    if let Some(schemas_val) = item.get("schemas") {
-                        if let Value::Array(schemas) = schemas_val {
-                            for schema_val in schemas {
-                                // Deserialize into our robust Schema struct to ensure validity/parsing
-                                if let Ok(schema) = serde_json::from_value::<Schema>(schema_val.clone()) {
-                                    // Registry handles compilation
-                                    registry.add(schema);
-                                }
-                            }
-                        }
-                    }
-                }
-            }
-        };
-        cache_items(enums);
-        cache_items(types);
-        cache_items(puncs); // public/private distinction logic to come later
-    }
-    // 2. Wrap in Validator and Arc
-    let new_validator = validator::Validator::new(registry);
+    // 1 & 2. Build Registry, Families, and Wrap in Validator all in one shot
+    let new_validator = crate::validator::Validator::from_punc_definition(
+        Some(&enums.0),
+        Some(&types.0),
+        Some(&puncs.0),
+    );
     let new_arc = Arc::new(new_validator);
     // 3. ATOMIC SWAP
@@ -107,11 +45,12 @@ fn cache_json_schemas(enums: JsonB, types: JsonB, puncs: JsonB) -> JsonB {
     *lock = Some(new_arc);
     }
-    JsonB(json!({ "response": "success" }))
+    let drop = crate::drop::Drop::success();
+    JsonB(serde_json::to_value(drop).unwrap())
 }
 #[pg_extern(strict, parallel_safe)]
-fn mask_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
+pub fn mask_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
     // 1. Acquire Snapshot
     let validator_arc = {
     let lock = GLOBAL_VALIDATOR.read().unwrap();
@@ -135,7 +74,6 @@ fn mask_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
     .errors
     .into_iter()
     .map(|e| crate::drop::Error {
-        punc: None,
         code: e.code,
         message: e.message,
         details: crate::drop::ErrorDetails { path: e.path },
@@ -148,7 +86,6 @@ fn mask_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
     Err(e) => {
     // Schema Not Found or other fatal error
     let error = crate::drop::Error {
-        punc: None,
         code: e.code,
         message: e.message,
         details: crate::drop::ErrorDetails { path: e.path },
@@ -158,19 +95,20 @@ fn mask_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
     }
     }
     } else {
-        JsonB(json!({
-            "punc": null,
-            "errors": [{
-                "code": "VALIDATOR_NOT_INITIALIZED",
-                "message": "JSON Schemas have not been cached yet. Run cache_json_schemas()",
-                "details": { "path": "" }
-            }]
-        }))
+        let error = crate::drop::Error {
+            code: "VALIDATOR_NOT_INITIALIZED".to_string(),
+            message: "JSON Schemas have not been cached yet. Run cache_json_schemas()".to_string(),
+            details: crate::drop::ErrorDetails {
+                path: "".to_string(),
+            },
+        };
+        let drop = crate::drop::Drop::with_errors(vec![error]);
+        JsonB(serde_json::to_value(drop).unwrap())
     }
 }
 #[pg_extern(strict, parallel_safe)]
-fn validate_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
+pub fn validate_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
     // 1. Acquire Snapshot
     let validator_arc = {
     let lock = GLOBAL_VALIDATOR.read().unwrap();
@@ -189,7 +127,6 @@ fn validate_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
     .errors
     .into_iter()
     .map(|e| crate::drop::Error {
-        punc: None,
         code: e.code,
         message: e.message,
         details: crate::drop::ErrorDetails { path: e.path },
@@ -201,7 +138,6 @@ fn validate_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
     }
     Err(e) => {
     let error = crate::drop::Error {
-        punc: None,
         code: e.code,
         message: e.message,
         details: crate::drop::ErrorDetails { path: e.path },
@@ -211,19 +147,20 @@ fn validate_json_schema(schema_id: &str, instance: JsonB) -> JsonB {
     }
     }
     } else {
-        JsonB(json!({
-            "punc": null,
-            "errors": [{
-                "code": "VALIDATOR_NOT_INITIALIZED",
-                "message": "JSON Schemas have not been cached yet. Run cache_json_schemas()",
-                "details": { "path": "" }
-            }]
-        }))
+        let error = crate::drop::Error {
+            code: "VALIDATOR_NOT_INITIALIZED".to_string(),
+            message: "JSON Schemas have not been cached yet. Run cache_json_schemas()".to_string(),
+            details: crate::drop::ErrorDetails {
+                path: "".to_string(),
+            },
+        };
+        let drop = crate::drop::Drop::with_errors(vec![error]);
+        JsonB(serde_json::to_value(drop).unwrap())
    }
 }
 #[pg_extern(strict, parallel_safe)]
-fn json_schema_cached(schema_id: &str) -> bool {
+pub fn json_schema_cached(schema_id: &str) -> bool {
     if let Some(validator) = GLOBAL_VALIDATOR.read().unwrap().as_ref() {
         match validator.validate(schema_id, &serde_json::Value::Null) {
             Err(e) if e.code == "SCHEMA_NOT_FOUND" => false,
@@ -235,18 +172,23 @@ fn json_schema_cached(schema_id: &str) -> bool {
 }
 #[pg_extern(strict)]
-fn clear_json_schemas() -> JsonB {
+pub fn clear_json_schemas() -> JsonB {
     let mut lock = GLOBAL_VALIDATOR.write().unwrap();
     *lock = None;
-    JsonB(json!({ "response": "success" }))
+    let drop = crate::drop::Drop::success();
+    JsonB(serde_json::to_value(drop).unwrap())
 }
 #[pg_extern(strict, parallel_safe)]
-fn show_json_schemas() -> JsonB {
+pub fn show_json_schemas() -> JsonB {
-    if let Some(_validator) = GLOBAL_VALIDATOR.read().unwrap().as_ref() {
-        JsonB(json!({ "response": "success", "status": "active" }))
+    if let Some(validator) = GLOBAL_VALIDATOR.read().unwrap().as_ref() {
+        let mut keys = validator.get_schema_ids();
+        keys.sort();
+        let drop = crate::drop::Drop::success_with_val(json!(keys));
+        JsonB(serde_json::to_value(drop).unwrap())
     } else {
-        JsonB(json!({ "response": "success", "status": "empty" }))
+        let drop = crate::drop::Drop::success_with_val(json!([]));
+        JsonB(serde_json::to_value(drop).unwrap())
     }
 }
@ -254,7 +196,7 @@ fn show_json_schemas() -> JsonB {
#[pg_schema] #[pg_schema]
mod tests { mod tests {
use pgrx::prelude::*; use pgrx::prelude::*;
include!("tests.rs"); include!("tests/fixtures.rs");
} }
#[cfg(test)] #[cfg(test)]

src/result.rs Normal file

@ -0,0 +1,27 @@
use crate::error::ValidationError;
use std::collections::HashSet;
#[derive(Debug, Default, Clone, serde::Serialize)]
pub struct ValidationResult {
pub errors: Vec<ValidationError>,
#[serde(skip)]
pub evaluated_keys: HashSet<String>,
#[serde(skip)]
pub evaluated_indices: HashSet<usize>,
}
impl ValidationResult {
pub fn new() -> Self {
Self::default()
}
pub fn merge(&mut self, other: ValidationResult) {
self.errors.extend(other.errors);
self.evaluated_keys.extend(other.evaluated_keys);
self.evaluated_indices.extend(other.evaluated_indices);
}
pub fn is_valid(&self) -> bool {
self.errors.is_empty()
}
}
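
The new `src/result.rs` accumulator is small enough to sketch in isolation. A minimal stand-in (with `errors` simplified to plain strings, since `ValidationError` lives elsewhere in the crate) shows how `merge` folds a subschema's outcome into the parent result, unioning the evaluated key/index sets, presumably for `unevaluated*`-style checks:

```rust
use std::collections::HashSet;

// Simplified stand-in for src/result.rs: errors are plain strings here,
// whereas the real struct stores crate::error::ValidationError values.
#[derive(Debug, Default)]
pub struct ValidationResult {
    pub errors: Vec<String>,
    pub evaluated_keys: HashSet<String>,
    pub evaluated_indices: HashSet<usize>,
}

impl ValidationResult {
    // Fold a child result into this one: errors accumulate, and the
    // evaluated key/index sets union together.
    pub fn merge(&mut self, other: ValidationResult) {
        self.errors.extend(other.errors);
        self.evaluated_keys.extend(other.evaluated_keys);
        self.evaluated_indices.extend(other.evaluated_indices);
    }

    pub fn is_valid(&self) -> bool {
        self.errors.is_empty()
    }
}
```

Because `#[serde(skip)]` is applied to both sets in the real struct, only `errors` survives serialization; the sets exist purely for intra-validation bookkeeping.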

src/rules.rs Normal file

File diff suppressed because it is too large

@@ -33,6 +33,11 @@ pub struct SchemaObject {
     pub properties: Option<BTreeMap<String, Arc<Schema>>>,
     #[serde(rename = "patternProperties")]
     pub pattern_properties: Option<BTreeMap<String, Arc<Schema>>>,
+    #[serde(rename = "additionalProperties")]
+    pub additional_properties: Option<Arc<Schema>>,
+    #[serde(rename = "$family")]
+    pub family: Option<String>,
     pub required: Option<Vec<String>>,

     // dependencies can be schema dependencies or property dependencies
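
For context on what the new `additional_properties` field enables: `additionalProperties` applies a subschema to every instance key not matched by `properties` (or, in the full draft, `patternProperties`). A hypothetical, std-only sketch of the key-selection half of that rule (not code from the JSPG source):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Hypothetical helper: collect the instance keys that additionalProperties
// would apply to, i.e. keys not declared under `properties`.
// patternProperties matching is omitted for brevity.
pub fn additional_keys(
    declared: &BTreeSet<String>,
    instance: &BTreeMap<String, bool>,
) -> Vec<String> {
    instance
        .keys()
        .filter(|k| !declared.contains(*k))
        .cloned()
        .collect()
}
```

Each key this helper yields would then be validated against the `additional_properties` subschema rather than rejected outright, which is the behavioral difference from a strict-properties check.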


@@ -155,6 +155,24 @@ fn test_puncs_7() {
     crate::util::run_test_file_at_index(&path, 7).unwrap();
 }

+#[pg_test]
+fn test_additional_properties_0() {
+    let path = format!("{}/tests/fixtures/additionalProperties.json", env!("CARGO_MANIFEST_DIR"));
+    crate::util::run_test_file_at_index(&path, 0).unwrap();
+}
+
+#[pg_test]
+fn test_additional_properties_1() {
+    let path = format!("{}/tests/fixtures/additionalProperties.json", env!("CARGO_MANIFEST_DIR"));
+    crate::util::run_test_file_at_index(&path, 1).unwrap();
+}
+
+#[pg_test]
+fn test_additional_properties_2() {
+    let path = format!("{}/tests/fixtures/additionalProperties.json", env!("CARGO_MANIFEST_DIR"));
+    crate::util::run_test_file_at_index(&path, 2).unwrap();
+}
+
 #[pg_test]
 fn test_exclusive_minimum_0() {
     let path = format!("{}/tests/fixtures/exclusiveMinimum.json", env!("CARGO_MANIFEST_DIR"));


@@ -50,74 +50,12 @@ pub fn run_test_file_at_index(path: &str, index: usize) -> Result<(), String> {
     let group = &suite[index];
     let mut failures = Vec::<String>::new();

-    // Create Local Registry for this test group
-    let mut registry = crate::registry::Registry::new();
-
-    // Helper to register items with 'schemas'
-    let register_schemas = |registry: &mut crate::registry::Registry, items_val: Option<&Value>| {
-        if let Some(val) = items_val {
-            if let Value::Array(arr) = val {
-                for item in arr {
-                    if let Some(schemas_val) = item.get("schemas") {
-                        if let Value::Array(schemas) = schemas_val {
-                            for schema_val in schemas {
-                                if let Ok(schema) =
-                                    serde_json::from_value::<crate::schema::Schema>(schema_val.clone())
-                                {
-                                    registry.add(schema);
-                                }
-                            }
-                        }
-                    }
-                }
-            }
-        }
-    };
-
-    // 1. Register Family Schemas if 'types' is present
-    if let Some(types_val) = &group.types {
-        if let Value::Array(arr) = types_val {
-            let mut family_map: std::collections::HashMap<String, std::collections::HashSet<String>> =
-                std::collections::HashMap::new();
-            for item in arr {
-                if let Some(name) = item.get("name").and_then(|v| v.as_str()) {
-                    if let Some(hierarchy) = item.get("hierarchy").and_then(|v| v.as_array()) {
-                        for ancestor in hierarchy {
-                            if let Some(anc_str) = ancestor.as_str() {
-                                family_map
-                                    .entry(anc_str.to_string())
-                                    .or_default()
-                                    .insert(name.to_string());
-                            }
-                        }
-                    }
-                }
-            }
-            for (family_name, members) in family_map {
-                let id = format!("{}.family", family_name);
-                let object_refs: Vec<Value> = members
-                    .iter()
-                    .map(|s| serde_json::json!({ "$ref": s }))
-                    .collect();
-                let schema_json = serde_json::json!({
-                    "$id": id,
-                    "oneOf": object_refs
-                });
-                if let Ok(schema) = serde_json::from_value::<crate::schema::Schema>(schema_json) {
-                    registry.add(schema);
-                }
-            }
-        }
-    }
-
-    // 2. Register items directly
-    register_schemas(&mut registry, group.enums.as_ref());
-    register_schemas(&mut registry, group.types.as_ref());
-    register_schemas(&mut registry, group.puncs.as_ref());
+    // Create Validator Instance and parse enums, types, and puncs automatically
+    let mut validator = Validator::from_punc_definition(
+        group.enums.as_ref(),
+        group.types.as_ref(),
+        group.puncs.as_ref(),
+    );

     // 3. Register root 'schemas' if present (generic test support)
     // Some tests use a raw 'schema' or 'schemas' field at the group level
@@ -126,12 +64,12 @@ pub fn run_test_file_at_index(path: &str, index: usize) -> Result<(), String> {
             Ok(mut schema) => {
                 let id_clone = schema.obj.id.clone();
                 if id_clone.is_some() {
-                    registry.add(schema);
+                    validator.registry.add(schema);
                 } else {
                     // Fallback ID if none provided in schema
                     let id = format!("test:{}:{}", path, index);
                     schema.obj.id = Some(id);
-                    registry.add(schema);
+                    validator.registry.add(schema);
                 }
             }
             Err(e) => {
@@ -143,9 +81,6 @@ pub fn run_test_file_at_index(path: &str, index: usize) -> Result<(), String> {
         }
     }

-    // Create Validator Instance (Takes ownership of registry)
-    let validator = Validator::new(registry);
-
     // 4. Run Tests
     for (_test_index, test) in group.tests.iter().enumerate() {
         let mut schema_id = test.schema_id.clone();
@@ -251,79 +186,13 @@ pub fn run_test_file(path: &str) -> Result<(), String> {
     let mut failures = Vec::<String>::new();

     for (group_index, group) in suite.into_iter().enumerate() {
-        // Create Isolated Registry for this test group
-        let mut registry = crate::registry::Registry::new();
-
-        // Helper to register items with 'schemas'
-        let register_schemas = |registry: &mut crate::registry::Registry, items_val: Option<Value>| {
-            if let Some(val) = items_val {
-                if let Value::Array(arr) = val {
-                    for item in arr {
-                        if let Some(schemas_val) = item.get("schemas") {
-                            if let Value::Array(schemas) = schemas_val {
-                                for schema_val in schemas {
-                                    if let Ok(schema) =
-                                        serde_json::from_value::<crate::schema::Schema>(schema_val.clone())
-                                    {
-                                        registry.add(schema);
-                                    }
-                                }
-                            }
-                        }
-                    }
-                }
-            }
-        };
-
-        // 1. Register Family Schemas if 'types' is present
-        if let Some(types_val) = &group.types {
-            if let Value::Array(arr) = types_val {
-                let mut family_map: std::collections::HashMap<String, std::collections::HashSet<String>> =
-                    std::collections::HashMap::new();
-                for item in arr {
-                    if let Some(name) = item.get("name").and_then(|v| v.as_str()) {
-                        // Default hierarchy contains self if not specified?
-                        // Usually hierarchy is explicit in these tests.
-                        if let Some(hierarchy) = item.get("hierarchy").and_then(|v| v.as_array()) {
-                            for ancestor in hierarchy {
-                                if let Some(anc_str) = ancestor.as_str() {
-                                    family_map
-                                        .entry(anc_str.to_string())
-                                        .or_default()
-                                        .insert(name.to_string());
-                                }
-                            }
-                        }
-                    }
-                }
-                for (family_name, members) in family_map {
-                    let id = format!("{}.family", family_name);
-                    let object_refs: Vec<Value> = members
-                        .into_iter()
-                        .map(|s| serde_json::json!({ "$ref": s }))
-                        .collect();
-                    let schema_json = serde_json::json!({
-                        "$id": id,
-                        "oneOf": object_refs
-                    });
-                    if let Ok(schema) = serde_json::from_value::<crate::schema::Schema>(schema_json) {
-                        registry.add(schema);
-                    }
-                }
-            }
-        }
-
-        // Register 'types', 'enums', and 'puncs' if present (JSPG style)
-        register_schemas(&mut registry, group.types);
-        register_schemas(&mut registry, group.enums);
-        register_schemas(&mut registry, group.puncs);
-
-        // Register main 'schema' if present (Standard style)
-        // Ensure ID is a valid URI to avoid Url::parse errors in Compiler
+        // Create Validator Instance and parse enums, types, and puncs automatically
+        let mut validator = Validator::from_punc_definition(
+            group.enums.as_ref(),
+            group.types.as_ref(),
+            group.puncs.as_ref(),
+        );

         let unique_id = format!("test:{}:{}", path, group_index);
         // Register main 'schema' if present (Standard style)
@@ -336,12 +205,9 @@ pub fn run_test_file(path: &str) -> Result<(), String> {
             if schema.obj.id.is_none() {
                 schema.obj.id = Some(unique_id.clone());
             }
-            registry.add(schema);
+            validator.registry.add(schema);
         }

-        // Create Instance (Takes Ownership)
-        let validator = Validator::new(registry);
-
         for test in group.tests {
             // Use explicit schema_id from test, or default to unique_id
             let schema_id = test.schema_id.as_deref().unwrap_or(&unique_id).to_string();

File diff suppressed because it is too large

@@ -156,6 +156,24 @@ fn test_puncs_7() {
     util::run_test_file_at_index(&path, 7).unwrap();
 }

+#[test]
+fn test_additional_properties_0() {
+    let path = format!("{}/tests/fixtures/additionalProperties.json", env!("CARGO_MANIFEST_DIR"));
+    util::run_test_file_at_index(&path, 0).unwrap();
+}
+
+#[test]
+fn test_additional_properties_1() {
+    let path = format!("{}/tests/fixtures/additionalProperties.json", env!("CARGO_MANIFEST_DIR"));
+    util::run_test_file_at_index(&path, 1).unwrap();
+}
+
+#[test]
+fn test_additional_properties_2() {
+    let path = format!("{}/tests/fixtures/additionalProperties.json", env!("CARGO_MANIFEST_DIR"));
+    util::run_test_file_at_index(&path, 2).unwrap();
+}
+
 #[test]
 fn test_exclusive_minimum_0() {
     let path = format!("{}/tests/fixtures/exclusiveMinimum.json", env!("CARGO_MANIFEST_DIR"));

tests/fixtures/additionalProperties.json vendored Normal file

@ -0,0 +1,132 @@
[
{
"description": "additionalProperties validates properties not matched by properties",
"schema": {
"$schema": "https://json-schema.org/draft/2020-12/schema",
"properties": {
"foo": {
"type": "string"
},
"bar": {
"type": "number"
}
},
"additionalProperties": {
"type": "boolean"
}
},
"tests": [
{
"description": "defined properties are valid",
"data": {
"foo": "value",
"bar": 123
},
"valid": true
},
{
"description": "additional property matching schema is valid",
"data": {
"foo": "value",
"is_active": true,
"hidden": false
},
"valid": true
},
{
"description": "additional property not matching schema is invalid",
"data": {
"foo": "value",
"is_active": 1
},
"valid": false
}
]
},
{
"description": "extensible: true with additionalProperties still validates structure",
"schema": {
"$schema": "https://json-schema.org/draft/2020-12/schema",
"properties": {
"foo": {
"type": "string"
}
},
"extensible": true,
"additionalProperties": {
"type": "integer"
}
},
"tests": [
{
"description": "additional property matching schema is valid",
"data": {
"foo": "hello",
"count": 5,
"age": 42
},
"valid": true
},
{
"description": "additional property not matching schema is invalid despite extensible: true",
"data": {
"foo": "hello",
"count": "five"
},
"valid": false
}
]
},
{
"description": "complex additionalProperties with object and array items",
"schema": {
"$schema": "https://json-schema.org/draft/2020-12/schema",
"properties": {
"type": {
"type": "string"
}
},
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
}
},
"tests": [
{
"description": "valid array of strings",
"data": {
"type": "my_type",
"group_a": [
"field1",
"field2"
],
"group_b": [
"field3"
]
},
"valid": true
},
{
"description": "invalid array of integers",
"data": {
"type": "my_type",
"group_a": [
1,
2
]
},
"valid": false
},
{
"description": "invalid non-array type",
"data": {
"type": "my_type",
"group_a": "field1"
},
"valid": false
}
]
}
]


@@ -1067,7 +1067,7 @@
             "schemas": [
                 {
                     "$id": "polymorphic_org_punc.request",
-                    "$ref": "organization.family"
+                    "$family": "organization"
                 }
             ]
         },
@@ -1080,6 +1080,21 @@
                     "$ref": "organization"
                 }
             ]
+        },
+        {
+            "name": "invalid_family_punc",
+            "public": false,
+            "schemas": [
+                {
+                    "$id": "invalid_family_punc.request",
+                    "$family": "organization",
+                    "properties": {
+                        "extra": {
+                            "type": "string"
+                        }
+                    }
+                }
+            ]
         }
     ],
     "tests": [
@@ -1240,6 +1255,23 @@
                     "path": "/first_name"
                 }
             ]
+        },
+        {
+            "description": "invalid schema due to family exclusivity violation",
+            "schema_id": "invalid_family_punc.request",
+            "data": {
+                "id": "org-2",
+                "type": "organization",
+                "name": "Strict Corp",
+                "extra": "value"
+            },
+            "valid": false,
+            "expect_errors": [
+                {
+                    "code": "INVALID_SCHEMA",
+                    "path": ""
+                }
+            ]
         }
     ]
 },
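
The `$family` fixtures above replace the expansion the old `src/util.rs` test harness performed by hand: each type's `hierarchy` lists its ancestors, and every ancestor gets a `<name>.family` schema that is a `oneOf` over its member types. The grouping step, extracted from that removed harness code as a standalone sketch (the type names in the test are hypothetical):

```rust
use std::collections::{HashMap, HashSet};

// Mirrors the family_map construction removed from src/util.rs: for each
// (type name, ancestor hierarchy) pair, record the type as a member of every
// ancestor's family. Each family then becomes a "<family>.family" oneOf schema.
pub fn build_family_map(types: &[(&str, &[&str])]) -> HashMap<String, HashSet<String>> {
    let mut family_map: HashMap<String, HashSet<String>> = HashMap::new();
    for (name, hierarchy) in types {
        for ancestor in *hierarchy {
            family_map
                .entry((*ancestor).to_string())
                .or_default()
                .insert((*name).to_string());
        }
    }
    family_map
}
```

With `$family` now a first-class schema keyword, this mapping presumably moves inside `Validator::from_punc_definition`, so fixtures only declare hierarchies and the validator derives the family `oneOf` schemas itself.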

tests/lib.rs Normal file

@ -0,0 +1,113 @@
use jspg::*;
use pgrx::JsonB;
use serde_json::json;
#[test]
fn test_library_api() {
// 1. Initially, schemas are not cached.
assert!(!json_schema_cached("test_schema"));
// Expected uninitialized drop format: errors + null response
let uninitialized_drop = validate_json_schema("test_schema", JsonB(json!({})));
assert_eq!(
uninitialized_drop.0,
json!({
"type": "drop",
"errors": [{
"code": "VALIDATOR_NOT_INITIALIZED",
"message": "JSON Schemas have not been cached yet. Run cache_json_schemas()",
"details": { "path": "" }
}]
})
);
// 2. Cache schemas
let puncs = json!([]);
let types = json!([{
"schemas": [{
"$id": "test_schema",
"type": "object",
"properties": {
"name": { "type": "string" }
},
"required": ["name"]
}]
}]);
let enums = json!([]);
let cache_drop = cache_json_schemas(JsonB(enums), JsonB(types), JsonB(puncs));
assert_eq!(
cache_drop.0,
json!({
"type": "drop",
"response": "success"
})
);
// 3. Check schemas are cached
assert!(json_schema_cached("test_schema"));
let show_drop = show_json_schemas();
assert_eq!(
show_drop.0,
json!({
"type": "drop",
"response": ["test_schema"]
})
);
// 4. Validate Happy Path
let happy_drop = validate_json_schema("test_schema", JsonB(json!({"name": "Neo"})));
assert_eq!(
happy_drop.0,
json!({
"type": "drop",
"response": "success"
})
);
// 5. Validate Unhappy Path
let unhappy_drop = validate_json_schema("test_schema", JsonB(json!({"wrong": "data"})));
assert_eq!(
unhappy_drop.0,
json!({
"type": "drop",
"errors": [
{
"code": "REQUIRED_FIELD_MISSING",
"message": "Missing name",
"details": { "path": "/name" }
},
{
"code": "STRICT_PROPERTY_VIOLATION",
"message": "Unexpected property 'wrong'",
"details": { "path": "/wrong" }
}
]
})
);
// 6. Mask Happy Path
let mask_drop = mask_json_schema(
"test_schema",
JsonB(json!({"name": "Neo", "extra": "data"})),
);
assert_eq!(
mask_drop.0,
json!({
"type": "drop",
"response": {"name": "Neo"}
})
);
// 7. Clear Schemas
let clear_drop = clear_json_schemas();
assert_eq!(
clear_drop.0,
json!({
"type": "drop",
"response": "success"
})
);
assert!(!json_schema_cached("test_schema"));
}


@@ -1 +1 @@
-1.0.51
+1.0.55