{"id":109914,"date":"2026-03-16T07:40:51","date_gmt":"2026-03-16T07:40:51","guid":{"rendered":"https:\/\/www.weshop.ai\/blog\/?p=109914"},"modified":"2026-03-16T07:40:52","modified_gmt":"2026-03-16T07:40:52","slug":"nano-banana-2-semantic-fidelity-diffusion-model","status":"publish","type":"post","link":"https:\/\/www.weshop.ai\/blog\/nano-banana-2-semantic-fidelity-diffusion-model\/","title":{"rendered":"Nano Banana 2 and the Semantic Fidelity Problem: Why This Diffusion Model Actually Understands What It Generates"},"content":{"rendered":"\n<p>Every diffusion model claims to &#8220;understand&#8221; prompts. Most of them are lying. They pattern-match tokens against training distributions and produce statistically plausible pixel arrangements \u2014 which is why your AI-generated soda can reads &#8220;Coa-Ccla&#8221; and your luxury perfume label melts into hieroglyphics. <strong>Nano Banana 2<\/strong> breaks that pattern by introducing genuine semantic fidelity: a next-generation architecture that parses scene composition, material physics, and typographic intent before a single pixel gets denoised.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image size-large\">\n<figure class=\"aligncenter\"><img loading=\"eager\" fetchpriority=\"high\" src=\"https:\/\/www.weshop.ai\/blog\/wp-content\/uploads\/2026\/03\/20260227_1_a6016f2b-8283-46ff-9b54-f4b49c87cc05_896x1200.jpg\" alt=\"Original reference image before nano banana 2 AI processing by WeShop AI\"\/><\/figure>\n<\/div>\n\n\n<p class=\"has-text-align-center\"><em>Before \u2014 Original reference input<\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image size-large\">\n<figure class=\"aligncenter\"><img decoding=\"async\" 
src=\"https:\/\/ai-global-image.weshop.com\/super-agent\/generated-assets\/f9d2d9a5-6fae-40ca-aa68-62dac72c7456.png\" alt=\"AI-generated result with enhanced semantic fidelity by nano banana 2 by WeShop AI\"\/><\/figure>\n<\/div>\n\n\n<p class=\"has-text-align-center\"><em>After \u2014 Nano Banana 2 output with full semantic reconstruction<\/em><\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link has-vivid-purple-background-color has-background wp-element-button\" href=\"https:\/\/www.weshop.ai\/tools\/nano-banana2\" style=\"border-radius:10px;background-color:#7530fe\" target=\"_blank\" rel=\"noopener noreferrer\">Try Nano Banana 2 Free \u2192<\/a><\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">The Science Behind Nano Banana 2: Semantic Parsing Before Denoising<\/h2>\n\n\n\n<p>Traditional diffusion models operate on a single loop: noise \u2192 denoise \u2192 output. The text encoder (typically CLIP or T5) converts your prompt into a latent vector, and the U-Net iteratively removes noise from a random seed until something visually coherent emerges. The problem? &#8220;Visually coherent&#8221; and &#8220;semantically correct&#8221; occupy different galaxies. A model can produce a photorealistic coffee cup while completely mangling the brand name printed on it \u2014 because the denoising process treats text glyphs as texture patterns, not symbolic information.<\/p>\n\n\n\n<p>Nano Banana 2 inserts an intermediate semantic parsing layer between the text encoder and the denoising backbone. Think of it as a scene graph compiler. Before any pixel generation begins, the model constructs an internal representation of object relationships, spatial hierarchies, and material properties. 
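<\/p>\n\n\n\n<p>Conceptually, such a scene graph can be pictured as a tree of objects with attached material properties. The sketch below is purely illustrative (it models the idea, not Nano Banana 2&#8217;s actual internals):<\/p>\n\n\n\n
```python
# Illustrative only: a toy scene-graph node, NOT Nano Banana 2's real internals.
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str                # e.g. "glass", "gold foil"
    refraction_index: float  # governs refraction and specular response
    specular: float          # 0.0 matte .. 1.0 mirror-like

@dataclass
class SceneNode:
    label: str               # object identity, e.g. "glass bottle"
    material: Material
    children: list = field(default_factory=list)  # spatial hierarchy

    def add(self, child: "SceneNode") -> "SceneNode":
        self.children.append(child)
        return child

# "glass bottle with gold foil label" parsed as a two-node hierarchy
bottle = SceneNode("glass bottle", Material("glass", 1.5, 0.9))
label = bottle.add(SceneNode("gold foil label", Material("gold foil", 0.47, 1.0)))
```
\n\n\n\n<p>A denoiser conditioned on a structure like this can keep the label&#8217;s curvature tied to its parent bottle, which is exactly the kind of relationship a flat token embedding loses.<\/p>\n\n\n\n<p>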
A prompt mentioning &#8220;glass bottle with gold foil label&#8221; doesn&#8217;t just trigger &#8220;glass-like&#8221; and &#8220;gold-like&#8221; texture patches \u2014 it activates a material physics module that models refraction indices, specular highlights consistent with gold leaf, and label curvature matching the bottle&#8217;s radius.<\/p>\n\n\n\n<p>This architectural shift has three measurable consequences. First, <strong>lighting consistency<\/strong>: shadow directions across all objects in a scene follow a single unified light source model, eliminating the &#8220;floating object&#8221; problem where AI-generated elements cast shadows in contradictory directions. Second, <strong>material property preservation<\/strong>: fabric looks like fabric, metal looks like metal, and skin maintains subsurface scattering properties across the entire image \u2014 not just in high-attention regions. Third, <strong>edge integrity<\/strong>: the boundary between foreground and background emerges clean, without the telltale AI smoothing artifacts that plague competing models.<\/p>\n\n\n<div class=\"wp-block-image size-large\">\n<figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/ai-global-image.weshop.com\/super-agent\/generated-assets\/b1f6d0c9-78bc-48f4-8bb6-3b673da3dcb0.png\" alt=\"Vibrant soda advertisement generated by nano banana 2 showing semantic text and scene understanding by WeShop AI\"\/><\/figure>\n<\/div>\n\n\n<p class=\"has-text-align-center\"><em>A soda brand advertisement generated entirely by Nano Banana 2 \u2014 notice the coherent liquid dynamics, label typography, and unified lighting across the surreal scene composition. 
The model parsed &#8220;wave of soda&#8221; as both a physical fluid simulation and a compositional metaphor.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Why Semantic Fidelity Matters for Commercial Nano Banana 2 Workflows<\/h2>\n\n\n\n<p>The gap between &#8220;impressive AI demo&#8221; and &#8220;production-ready commercial asset&#8221; comes down to a single question: can a brand director approve this without Photoshop intervention? Previous-generation models required 30-60 minutes of post-production per image to fix lighting inconsistencies, smooth out material artifacts, and correct text rendering. Nano Banana 2 collapses that pipeline to near-zero \u2014 not by being &#8220;good enough&#8221; but by solving the underlying computational problem that caused those artifacts in the first place.<\/p>\n\n\n\n<p>Consider a real-world e-commerce scenario. A cosmetics brand needs 200 product shots across 15 different scene settings for a seasonal campaign. Traditional photography: $40,000+ for studio time, lighting crew, and post-production. Previous AI generators: technically possible, but each output needs manual correction for inconsistent shadows, melted brand logos, and &#8220;plastic skin&#8221; on model-adjacent elements. 
Nano Banana 2 generates all 200 images from structured prompts, with brand-compliant outputs straight from the model \u2014 because the semantic parser ensures every logo, label, and material property matches the prompt&#8217;s intent, not just its statistical neighborhood.<\/p>\n\n\n<div class=\"wp-block-image size-large\">\n<figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/ai-global-image.weshop.com\/super-agent\/generated-assets\/2766b129-68db-45ec-91cc-e1386c703e53.png\" alt=\"Industrial design concept sketches generated with nano banana 2 precision rendering by WeShop AI\"\/><\/figure>\n<\/div>\n\n\n<p class=\"has-text-align-center\"><em>Industrial design concept sketches rendered by Nano Banana 2 \u2014 multiple viewing angles, consistent line weight, and accurate perspective construction. The model treats technical illustration as a structured spatial problem, not a stylistic filter.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Technical Deep Dive: The Nano Banana 2 Light Reconstruction Engine<\/h2>\n\n\n\n<p>One of Nano Banana 2&#8217;s most consequential architectural innovations is what the engineering team calls the &#8220;Light Field Reconstruction Module&#8221; \u2014 a sub-network that infers a complete 3D lighting environment from the text prompt before denoising begins. While conventional diffusion models approximate lighting through learned texture patterns (which is why studio lights often appear to come from multiple contradictory sources), this module constructs a parametric light field that governs every pixel in the output.<\/p>\n\n\n\n<p>The module operates in three stages. Stage one: <strong>light source inference<\/strong>. The semantic parser identifies lighting cues in the prompt (&#8220;soft morning light,&#8221; &#8220;overhead studio lighting,&#8221; &#8220;golden hour&#8221;) and maps them to a physical light model \u2014 position, color temperature, intensity falloff, and diffusion characteristics. 
Stage two: <strong>shadow casting<\/strong>. Every identified object in the scene graph receives a shadow projection consistent with the inferred light source. Stage three: <strong>ambient occlusion and global illumination<\/strong>. Contact shadows, reflected light, and inter-object color bleeding are calculated as a final pass before the denoising loop begins.<\/p>\n\n\n\n<p>The result? When you generate a product shot with &#8220;soft diffused studio lighting from the upper left,&#8221; every shadow, highlight, and reflection in the output follows that exact specification. The glass reflects the light source at the physically correct angle. The fabric casts soft shadows with appropriate penumbra. And the background receives the correct amount of light falloff. This isn&#8217;t post-hoc correction \u2014 it&#8217;s baked into the generation process itself.<\/p>\n\n\n<div class=\"wp-block-image size-large\">\n<figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/ai-global-image.weshop.com\/super-agent\/generated-assets\/9d688d7a-2b95-4f28-95e5-84d1e78c3392.png\" alt=\"Y2K style YouTube cover design with bold typography generated by nano banana 2 by WeShop AI\"\/><\/figure>\n<\/div>\n\n\n<p class=\"has-text-align-center\"><em>A Y2K-aesthetic YouTube cover generated by Nano Banana 2 \u2014 high saturation, retro pop magazine layout, and integrated typography that maintains readability. The model treated text elements as first-class compositional objects, not afterthought overlays.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Actionable Scene Guide: Getting Commercial-Grade Results from Nano Banana 2<\/h2>\n\n\n\n<p>The semantic parsing architecture means your prompt strategy needs to evolve beyond simple description. Here&#8217;s how to structure prompts that leverage Nano Banana 2&#8217;s full capabilities:<\/p>\n\n\n\n<p><strong>1. 
Specify material properties explicitly.<\/strong> Instead of &#8220;a leather bag on a marble table,&#8221; write &#8220;full-grain leather handbag with visible pore texture, resting on Calacatta marble with grey veining.&#8221; The semantic parser will activate specific material property nodes for each descriptor \u2014 pore texture implies a bump map, Calacatta implies a specific veining pattern and translucency.<\/p>\n\n\n\n<p><strong>2. Define your lighting environment as a physical setup.<\/strong> &#8220;Beautiful lighting&#8221; is meaningless to a semantic parser. &#8220;Single softbox at 45 degrees upper left, fill card on the right, white seamless backdrop with 2-stop gradient&#8221; gives the light reconstruction module explicit parameters to work with. The more specific your lighting language, the more photographic your results.<\/p>\n\n\n\n<p><strong>3. Use compositional hierarchy.<\/strong> Nano Banana 2&#8217;s scene graph compiler understands foreground\/midground\/background relationships. Structure your prompt accordingly: &#8220;foreground: product hero shot, slightly left of center. Midground: supporting props at 60% scale. Background: out-of-focus environment with warm color temperature.&#8221; This mirrors how a commercial photographer thinks about scene construction \u2014 and the model responds accordingly.<\/p>\n\n\n\n<p><strong>4. Leverage the post-generation pipeline.<\/strong> Nano Banana 2 outputs are designed to integrate with WeShop&#8217;s broader toolkit. 
Feed your generated images into <a href=\"\/blog\/death-of-blurry-photo-neural-reconstruction\">AI Photo Enhancement<\/a> for resolution upscaling, then into AI Change Background for scene variation \u2014 creating a complete commercial asset pipeline without touching Photoshop.<\/p>\n\n\n<div class=\"wp-block-image size-large\">\n<figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/ai-global-image.weshop.com\/super-agent\/generated-assets\/0ca3d11a-6648-48d9-8bbf-c15b1df6f8c3.png\" alt=\"Retro cartoon VLOG cover poster with integrated text created by nano banana 2 by WeShop AI\"\/><\/figure>\n<\/div>\n\n\n<p class=\"has-text-align-center\"><em>A retro cartoon VLOG cover poster \u2014 Nano Banana 2 handled the integrated text, character design, and vintage color grading as a unified composition rather than separate layers. The stylistic consistency across typography, illustration, and background demonstrates the scene graph compiler at work.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">A Technology Forecast: Where Semantic Diffusion Goes Next<\/h2>\n\n\n\n<p>Nano Banana 2 represents a phase transition in generative AI \u2014 from statistical image synthesis to semantic scene construction. The implications extend far beyond prettier pictures. When a model genuinely understands scene composition, it opens doors to parametric editing (change one material property without re-generating the entire image), multi-frame consistency (generate a product from 12 angles with physically consistent lighting), and real-time collaborative generation (multiple users defining different scene elements that the model integrates coherently).<\/p>\n\n\n\n<p>The commercial implications are equally significant. E-commerce platforms that currently rely on template-based product photography will shift to AI-native content pipelines where a single structured prompt generates an entire seasonal campaign. 
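<\/p>\n\n\n\n<p>As a concrete illustration, such a structured prompt could be expressed as plain data (the field names below are hypothetical, not an official Nano Banana 2 schema):<\/p>\n\n\n\n
```python
# Hypothetical example of a "structured prompt" for a seasonal campaign.
# Field names are illustrative; they are not an official Nano Banana 2 schema.
import json

prompt = {
    "foreground": "product hero shot, slightly left of center",
    "midground": "supporting props at 60% scale",
    "background": "out-of-focus environment, warm color temperature",
    "lighting": {
        "key": "single softbox at 45 degrees upper left",
        "fill": "fill card on the right",
        "backdrop": "white seamless with 2-stop gradient",
    },
    # one entry per scene setting in the campaign
    "variants": [f"scene-{i:02d}" for i in range(1, 16)],
}

# Serialized prompts are versionable and diffable, unlike ad-hoc prose briefs.
serialized = json.dumps(prompt, indent=2)
```
\n\n\n\n<p>Treating the brief as data rather than prose is what makes &#8220;generation parameters&#8221; reviewable and repeatable in the same way a photographer&#8217;s shot list is.<\/p>\n\n\n\n<p>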
Marketing teams will stop thinking in terms of &#8220;photo shoots&#8221; and start thinking in terms of &#8220;generation parameters.&#8221; And the quality bar for commercial AI imagery \u2014 currently set by the best outputs from cherry-picked generations \u2014 will become the <em>baseline<\/em> for every output.<\/p>\n\n\n\n<p>For teams already integrating AI into their creative workflows, the action item is clear: invest in prompt engineering as a core competency. The gap between mediocre and exceptional AI-generated imagery is no longer a model quality problem \u2014 it&#8217;s a prompt architecture problem. Nano Banana 2 has the engine. Your prompts are the steering wheel.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Expert FAQ<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">How does Nano Banana 2 handle text rendering differently from DALL-E or Midjourney?<\/h3>\n\n\n\n<p>Most diffusion models treat text as texture \u2014 they&#8217;ve learned what letters &#8220;look like&#8221; from training data but don&#8217;t understand them as symbolic units. Nano Banana 2&#8217;s semantic parser identifies text elements in prompts and processes them through a dedicated glyph rendering pipeline that ensures character accuracy, consistent font weight, and proper kerning. It&#8217;s not perfect for every script and language yet, but it represents a fundamental architectural advantage over models that treat &#8220;write SALE&#8221; and &#8220;make it look text-like&#8221; as the same operation.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Can Nano Banana 2 replace professional product photography entirely?<\/h3>\n\n\n\n<p>For approximately 80% of standard e-commerce product imagery \u2014 yes, today. The remaining 20% involves edge cases like extreme close-up macro photography, specific fabric drape under motion, or legally required &#8220;actual product&#8221; representations. 
The practical approach is to use Nano Banana 2 for hero shots, lifestyle scenes, and multi-variant product imagery, while reserving traditional photography for regulatory-compliant reference shots and premium editorial content.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What&#8217;s the optimal prompt length for getting the best results from Nano Banana 2?<\/h3>\n\n\n\n<p>Counter-intuitively, more specific prompts outperform longer prompts. A 40-word prompt with precise material, lighting, and compositional specifications will generate better results than a 150-word prompt that repeats stylistic preferences. The semantic parser weights structural information (spatial relationships, material properties, lighting parameters) much higher than aesthetic modifiers. Write prompts like a creative director briefing a photographer, not like a Pinterest mood board caption.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How does Nano Banana 2 integrate with other AI image editing tools?<\/h3>\n\n\n\n<p>Nano Banana 2 is designed as the generation stage in a multi-tool pipeline. Typical professional workflows feed outputs into <a href=\"\/blog\/ai-pose-generator-skeleton-aware-models\">AI Pose Generator<\/a> for character pose adjustments, AI Photo Enhancement for resolution upscaling to print-ready DPI, and AI Change Background for rapid scene variation. The clean edge integrity of Nano Banana 2 outputs makes downstream processing significantly more reliable \u2014 tools don&#8217;t have to compensate for AI artifacts before applying their own transformations.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Is there a meaningful quality difference between Nano Banana 2 and Nano Banana Pro?<\/h3>\n\n\n\n<p>Yes \u2014 and it&#8217;s architectural, not incremental. Nano Banana Pro uses a more traditional diffusion pipeline optimized for speed and consistency. 
Nano Banana 2 introduces the semantic parsing layer, light field reconstruction, and material physics module described in this article. The practical difference: Pro is faster for high-volume standard product shots where consistency matters most. Nano Banana 2 excels when scene complexity, material accuracy, and lighting realism are the priority \u2014 luxury brands, editorial content, and any scenario where &#8220;close enough&#8221; isn&#8217;t enough.<\/p>\n\n\n\n<div class=\"wp-block-group is-content-justification-center is-nowrap is-layout-flex wp-container-core-group-is-layout-94bc23d7 wp-block-group-is-layout-flex\" style=\"display:flex;justify-content:center;gap:18px;margin-top:40px;margin-bottom:20px\">\n<a href=\"https:\/\/www.youtube.com\/@weshopai\" target=\"_blank\" rel=\"noopener noreferrer\" style=\"display:inline-block;width:36px;height:36px\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\" width=\"36\" height=\"36\" fill=\"#FF0000\"><path d=\"M23.5 6.19a3.02 3.02 0 0 0-2.12-2.14C19.5 3.5 12 3.5 12 3.5s-7.5 0-9.38.55A3.02 3.02 0 0 0 .5 6.19 31.6 31.6 0 0 0 0 12a31.6 31.6 0 0 0 .5 5.81 3.02 3.02 0 0 0 2.12 2.14c1.88.55 9.38.55 9.38.55s7.5 0 9.38-.55a3.02 3.02 0 0 0 2.12-2.14A31.6 31.6 0 0 0 24 12a31.6 31.6 0 0 0-.5-5.81zM9.75 15.02V8.98L15.5 12l-5.75 3.02z\"\/><\/svg><\/a>\n<a href=\"https:\/\/x.com\/weshopofficial\/\" target=\"_blank\" rel=\"noopener noreferrer\" style=\"display:inline-block;width:36px;height:36px\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\" width=\"36\" height=\"36\"><path d=\"M18.244 2.25h3.308l-7.227 8.26 8.502 11.24H16.17l-5.214-6.817L4.99 21.75H1.68l7.73-8.835L1.254 2.25H8.08l4.713 6.231zm-1.161 17.52h1.833L7.084 4.126H5.117z\"\/><\/svg><\/a>\n<a href=\"https:\/\/www.instagram.com\/weshop.global\/\" target=\"_blank\" rel=\"noopener noreferrer\" style=\"display:inline-block;width:36px;height:36px\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\" width=\"36\" 
height=\"36\"><defs><linearGradient id=\"ig\" x1=\"0%\" y1=\"100%\" x2=\"100%\" y2=\"0%\"><stop offset=\"0%\" style=\"stop-color:#feda75\"\/><stop offset=\"25%\" style=\"stop-color:#fa7e1e\"\/><stop offset=\"50%\" style=\"stop-color:#d62976\"\/><stop offset=\"75%\" style=\"stop-color:#962fbf\"\/><stop offset=\"100%\" style=\"stop-color:#4f5bd5\"\/><\/linearGradient><\/defs><path fill=\"url(#ig)\" d=\"M12 2.163c3.204 0 3.584.012 4.85.07 3.252.148 4.771 1.691 4.919 4.919.058 1.265.069 1.645.069 4.849 0 3.205-.012 3.584-.069 4.849-.149 3.225-1.664 4.771-4.919 4.919-1.266.058-1.644.07-4.85.07-3.204 0-3.584-.012-4.849-.07-3.26-.149-4.771-1.699-4.919-4.92-.058-1.265-.07-1.644-.07-4.849 0-3.204.013-3.583.07-4.849.149-3.227 1.664-4.771 4.919-4.919 1.266-.057 1.645-.069 4.849-.069zM12 0C8.741 0 8.333.014 7.053.072 2.695.272.273 2.69.073 7.052.014 8.333 0 8.741 0 12c0 3.259.014 3.668.072 4.948.2 4.358 2.618 6.78 6.98 6.98C8.333 23.986 8.741 24 12 24c3.259 0 3.668-.014 4.948-.072 4.354-.2 6.782-2.618 6.979-6.98.059-1.28.073-1.689.073-4.948 0-3.259-.014-3.667-.072-4.947-.196-4.354-2.617-6.78-6.979-6.98C15.668.014 15.259 0 12 0zm0 5.838a6.162 6.162 0 1 0 0 12.324 6.162 6.162 0 0 0 0-12.324zM12 16a4 4 0 1 1 0-8 4 4 0 0 1 0 8zm6.406-11.845a1.44 1.44 0 1 0 0 2.881 1.44 1.44 0 0 0 0-2.881z\"\/><\/svg><\/a>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Nano Banana 2 rewrites the rules of AI image generation with genuine semantic fidelity \u2014 the model doesn&#8217;t just render pixels, it understands scenes, lighting physics, and material properties at a level that makes commercial-grade imagery possible from a single 
prompt.<\/p>\n","protected":false},"author":3,"featured_media":109913,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_mi_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[164],"tags":[54],"class_list":["post-109914","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-nano-banana-2","tag-nano-banana-2"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/posts\/109914","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/comments?post=109914"}],"version-history":[{"count":1,"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/posts\/109914\/revisions"}],"predecessor-version":[{"id":109915,"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/posts\/109914\/revisions\/109915"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/media\/109913"}],"wp:attachment":[{"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/media?parent=109914"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/categories?post=109914"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.weshop.ai\/blog\/wp-json\/wp\/v2\/tags?post=109914"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}