ZenoXCare — marketing
On-device AI
ZenoXCare embeds an optional MLC WebLLM path that loads SmolLM2-360M-Instruct weights in the browser. Once the engine is enabled and its weights are cached, it powers privacy-preserving phrasing, short explanations, and offline-friendly discovery without sending prompts to a cloud LLM.
Integrators and installable shells can read the canonical stack descriptor, which lists model ids, policy, and discovery URLs.
Set NEXT_PUBLIC_LOCAL_ASSIST_ENGINE=1 to activate the bundled WebLLM path and the floating assist entry points on supported surfaces. To serve model weights from an optional mirror, set NEXT_PUBLIC_WEBLLM_MODEL_BASE_URL.
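As a minimal sketch of how an integrator might gate the on-device path on these two environment variables (the helper name and config shape here are illustrative assumptions, not part of ZenoXCare's actual source):

```typescript
interface AssistConfig {
  enabled: boolean;        // whether the bundled WebLLM path should load
  modelBaseUrl?: string;   // optional weight mirror; undefined = default source
}

// Hypothetical helper: reads the documented flags from an env-like record.
// In a Next.js client bundle these would be inlined from process.env at build time.
function resolveAssistConfig(env: Record<string, string | undefined>): AssistConfig {
  // NEXT_PUBLIC_LOCAL_ASSIST_ENGINE=1 activates the bundled WebLLM path.
  const enabled = env["NEXT_PUBLIC_LOCAL_ASSIST_ENGINE"] === "1";
  // NEXT_PUBLIC_WEBLLM_MODEL_BASE_URL optionally points at a weight mirror.
  const modelBaseUrl = env["NEXT_PUBLIC_WEBLLM_MODEL_BASE_URL"] || undefined;
  // Only when `enabled` is true would the app go on to initialize the
  // in-browser engine and mount the floating assist entry points.
  return { enabled, modelBaseUrl };
}
```

Keeping the gate in one pure function makes it easy to leave the WebLLM bundle out of the critical path: the heavy engine code can be dynamically imported only when `enabled` is true.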
Standards, certifications, and deterministic compliance answers remain documented on Standards & compliance.