Generating LLM-Ready Content

Last modified on March 26, 2026 • 6 min read • 1,153 words
Guide on how to make your Hinode site accessible to large language models using the mod-llm module.
Photo by Growtika on Unsplash

Introduction  

  Note

A fully working implementation of this guide is available as mod-llm on GitHub.

Large language models (LLMs) such as ChatGPT, Claude, and Gemini are increasingly used to answer questions about software products, tools, and services. By default, these models can only access your site’s content through general web crawls, which may be incomplete or out of date. The emerging llms.txt convention addresses this by proposing a standard location and format for machine-readable content.

Hinode’s mod-llm module implements this convention and generates three complementary outputs:

  • /llms.txt — A structured index of all pages grouped by section, with titles, descriptions, and links. LLM agents use this to discover what content is available and where to find it.
  • /[page]/index.md — A clean markdown version of each page, free of HTML markup, navigation chrome, and scripts. These are the files linked from llms.txt and are what an LLM reads when it fetches page content.
  • /llms-components.json (optional) — A machine-readable JSON schema of every Hinode shortcode and content block, including argument definitions, types, defaults, and descriptions. This is primarily useful for Hinode documentation sites, where it enables LLMs and developer tools to understand and suggest correct component usage.
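Taken together, the first two outputs follow the structure proposed by the llms.txt convention: an H1 title, a blockquote summary, and H2 sections containing link lists. The excerpt below is an illustrative sketch of a generated index — the site name, sections, and pages are hypothetical:

```text
# My Site

> Guide and reference documentation for My Site.

## Blog

- [Hello World](https://example.com/blog/hello-world/index.md): A first post.
- [Getting started](https://example.com/blog/getting-started/index.md): Installation walkthrough.

## About

- [About us](https://example.com/about/index.md): Who we are and what we do.
```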

This tutorial guides you through installing mod-llm, configuring the Hugo output formats, customizing per-page metadata, and optionally supporting pages built with content blocks from mod-blocks.

Step 1 - Installing the Module  

mod-llm is a standard Hugo module. Add it as an import to your site’s module configuration in hugo.toml.

[[module.imports]]
  path = "github.com/gethinode/mod-llm"

Next, run the following command to download the module and update your go.mod and go.sum files.

hugo mod get github.com/gethinode/mod-llm

If you vendor your modules (recommended for reproducible builds), update the vendor directory too.

hugo mod vendor

Step 2 - Configuring the Output Formats  

Hugo generates output for each page based on the configured output formats. mod-llm requires two custom formats and supports one optional format. Add the following definitions to your site configuration.

[outputFormats]
  [outputFormats.llmstxt]
    mediaType = "text/plain"
    baseName = "llms"
    isPlainText = true
    notAlternative = true
    rel = "alternate"
    root = true

  [outputFormats.markdown]
    mediaType = "text/markdown"
    baseName = "index"
    isPlainText = true
    isHTML = false
    noUgly = false
    rel = "alternate"

  # Optional: add llmscomponents for Hinode documentation sites
  [outputFormats.llmscomponents]
    mediaType = "application/json"
    baseName = "llms-components"
    isPlainText = true
    notAlternative = true
    root = true

The llmstxt format generates the /llms.txt index at the site root. The markdown format generates an index.md alongside each page, including the homepage at /index.md. The llmscomponents format is optional — it generates /llms-components.json at the site root and is primarily useful for Hinode documentation sites that want to expose their shortcode and component schema to LLMs.

Activate the formats in the [outputs] section of your hugo.toml. Include llmscomponents only if your site is a Hinode documentation site.

[outputs]
  home = ["HTML", "llmstxt", "markdown"]   # add "llmscomponents" for documentation sites
  page = ["HTML", "markdown"]

  Note

If your site already defines [outputs], extend the existing lists rather than replacing them. The HTML output must remain to keep the regular site building correctly.
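For example, a site that already emits an RSS feed for the homepage would extend its lists like this (the RSS entry is illustrative — keep whatever formats your site already defines):

```toml
[outputs]
  home = ["HTML", "RSS", "llmstxt", "markdown"]
  page = ["HTML", "markdown"]
```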

Rebuild your site and verify the generated files.

hugo --gc --minify

You should now find public/llms.txt, public/index.md for the homepage, and an index.md file next to each page’s index.html in the public/ directory. If you enabled llmscomponents, public/llms-components.json will also be present.

Step 3 - Understanding the Markdown Output  

Each page’s index.md is generated by applying Hugo’s markdown output format. The module strips all HTML, shortcode wrappers, navigation elements, and scripts, leaving only clean prose. The output adheres to common markdownlint conventions, with blank lines surrounding headings and lists for consistent structure.

The generation pipeline works as follows. Starting from the page’s raw markdown source (.RawContent), the module keeps all plain markdown as-is and only processes shortcodes. Each shortcode is rendered using a dedicated .md template that produces markdown output instead of HTML. This ensures that, for example, an {{< args >}} shortcode produces a clean markdown table rather than an HTML <table> element.
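As a sketch of what such a template might look like, a hypothetical shortcode template badge.md could emit plain markdown instead of HTML. The shortcode name and parameters below are illustrative, not the module's actual templates:

```go-html-template
{{- /* badge.md: markdown twin of badge.html, emitting plain markdown */ -}}
**{{ .Get "label" }}**{{ with .Get "url" }} ([link]({{ . }})){{ end }}
```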

A typical page output looks like this:

# Page title

Page description or summary.

## Section heading

Content with **bold text**, `inline code`, and [links](https://example.com).

| Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `name` | string | yes |  | Name of the element. |

Step 4 - Customizing Per-Page Metadata  

By default, mod-llm uses the page’s description field for the summary shown in llms.txt. You can override this with a dedicated llm block in the frontmatter.

---
title: My page
description: General description for SEO and social sharing.
llm:
  description: A more precise description written for LLM consumers.
---

To exclude a page from the llms.txt index and from markdown output entirely, set llm.exclude to true. This is useful for pages such as privacy policies or draft content that should not be fed to language models.

---
title: Private page
llm:
  exclude: true
---

Step 5 - Supporting Content Blocks  

Sites built with Hinode’s optional mod-blocks module use content_blocks in the page frontmatter to compose pages from pre-built components such as heroes, card grids, FAQ sections, and article feeds. These pages have no prose body — all content lives in the frontmatter YAML.

When mod-llm detects a content_blocks frontmatter key, it switches to a dedicated rendering path that iterates over each block and produces structured markdown. Each block is rendered as a level-two heading derived from heading.title, prefixed with heading.preheading separated by an em dash when both are present.
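On the input side, such a page might carry frontmatter along the following lines. The block type and field names are illustrative — consult the mod-blocks documentation for the exact schema:

```yaml
---
title: Home
content_blocks:
  - type: services
    heading:
      preheading: Our services
      title: From ideation to realization
    description: >-
      We provide a full range of services to help you
      transform your business.
---
```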

## Our services — From ideation to realization

We provide a full range of services to help you transform your business.

- **[Strategy & Transformation](/services/strategy/)**: We help you shape and realize a compelling digital strategy.
- **[Experience Design](/services/experience/)**: We digitize your processes and products.
- **[Platform Engineering](/services/platform/)**: We design a modern technology stack that scales.

mod-blocks ships a .hugo.md markdown template alongside each component’s standard .hugo.html template. These templates follow the single-responsibility principle: each one renders only the component-specific content. Generic concerns such as the heading, description, and normalization of blank lines are handled centrally by mod-llm.
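A component’s markdown template can therefore stay very small. A hypothetical faq.hugo.md, for instance, would render only the question-and-answer pairs and leave the heading, description, and blank-line normalization to mod-llm (names are illustrative):

```go-html-template
{{- /* faq.hugo.md: renders only component-specific content */ -}}
{{ range .questions }}
**{{ .question }}**

{{ .answer }}
{{ end }}
```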

The articles block goes a step further and fetches real pages from the configured section, rendering them as a native markdown link list.

## Blog — Recent posts

- [Hello World](/blog/hello-world/)
- [Getting started with Hinode](/blog/getting-started/)

To use content blocks, ensure mod-blocks is installed alongside mod-llm.

[[module.imports]]
  path = "github.com/gethinode/mod-llm"
[[module.imports]]
  path = "github.com/gethinode/mod-blocks"

Conclusion  

Your site now exposes machine-readable endpoints that LLM agents, developer tools, and AI-powered search engines can consume directly. The llms.txt index provides discovery and the per-page index.md files provide content. For Hinode documentation sites, the optional llms-components.json schema additionally exposes shortcode and component intelligence.
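To illustrate the consumption side, an agent can split an llms.txt index into sections and extract page links with a few lines of code. This sketch assumes the index structure shown earlier (H2 sections with markdown link lists); it is not part of mod-llm itself:

```python
import re


def parse_llms_txt(text: str) -> dict[str, list[tuple[str, str]]]:
    """Split an llms.txt index into {section: [(title, url), ...]}."""
    sections: dict[str, list[tuple[str, str]]] = {}
    current = None
    for line in text.splitlines():
        if line.startswith("## "):  # H2 headings mark content sections
            current = line[3:].strip()
            sections[current] = []
        elif current and (m := re.match(r"- \[(.+?)\]\((\S+?)\)", line.strip())):
            sections[current].append((m.group(1), m.group(2)))
    return sections


index = """# My Site

## Blog

- [Hello World](https://example.com/blog/hello-world/index.md): A first post.
"""
print(parse_llms_txt(index))
```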

A fully working reference implementation is available as mod-llm on GitHub. The repository includes an example site that demonstrates all features covered in this guide, including content block rendering and i18n support for translated label strings.