AI standards
This site is built to be read by agents.
Every page is also addressable as Markdown source. Every page emits structured metadata. There’s an MCP server. Coding is going away — site design should be too.
Surfaces.
Per-page Markdown source

Every /page also serves at /page.md. Agents skip the DOM and read the canonical source.

Try: /about.md
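The /page to /page.md mapping above is simple enough to script; a minimal sketch (the md_path helper name and the commented curl line are illustrative, not part of the site):

```shell
#!/bin/sh
# md_path is a hypothetical helper: append .md to a page path so an
# agent fetches the canonical Markdown source instead of the DOM.
md_path() {
  printf '%s.md\n' "$1"
}

md_path /about    # prints /about.md
# Then fetch it, e.g.:
#   curl -s "https://jesselanderson.com$(md_path /about)"
```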
/llms.txt + /llms-full.txt

Curated plaintext sitemap (Answer.AI convention) and a one-shot ingest of every page's Markdown.

Try: /llms.txt · /llms-full.txt
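Under the Answer.AI convention, llms.txt is typically a Markdown list of links. A sketch of pulling the URLs out for ingestion, using a hypothetical two-entry sample rather than the live file:

```shell
#!/bin/sh
# Sample llms.txt content; the real file lives at /llms.txt and its
# exact entries are not reproduced here.
sample='- [About](https://jesselanderson.com/about.md): bio
- [Now](https://jesselanderson.com/now.md): current focus'

# Extract the URL from each Markdown link.
printf '%s\n' "$sample" | sed -n 's/.*(\(https[^)]*\)).*/\1/p'
```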
MCP server

An MCP-compatible endpoint exposing the site's content as tools: list_pages, get_page, search, get_now. Auto-discoverable via /.well-known/mcp.json.

Try: /.well-known/mcp.json, or POST a JSON-RPC envelope:

```
curl -X POST https://jesselanderson.com/api/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```
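Invoking one of the listed tools uses method tools/call with the tool name in params. A sketch for get_page; the "slug" argument key is an assumption, so check the input schema returned by tools/list before relying on it:

```shell
#!/bin/sh
# Construct a JSON-RPC 2.0 envelope calling the get_page tool.
# The "slug" argument name is hypothetical, not confirmed by the site.
body='{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"get_page","arguments":{"slug":"about"}}}'

printf '%s\n' "$body"
# Send it:
#   curl -sX POST https://jesselanderson.com/api/mcp \
#     -H 'Content-Type: application/json' -d "$body"
```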
JSON-LD on every page

Person, Organization (Kora Labs, Papagoose), WebSite, WebPage. Embedded in each page's <head> as <script type="application/ld+json">.

Inspect: view source on any page and search for application/ld+json. Or fetch a page and extract: curl /about | grep -A 60 'ld+json'.
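The grep above dumps surrounding markup too; to isolate just the JSON-LD payload from a single-line head, something like this works (the sample HTML is hypothetical, standing in for a real fetched page):

```shell
#!/bin/sh
# Hypothetical page HTML; in practice:
#   html=$(curl -s https://jesselanderson.com/about)
html='<head><script type="application/ld+json">{"@type":"Person","url":"https://jesselanderson.com"}</script></head>'

# Capture everything between the ld+json script tags.
printf '%s\n' "$html" |
  sed -n 's/.*<script type="application\/ld+json">\(.*\)<\/script>.*/\1/p'
```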
Page metadata

Per-page title, description, slug, and content_hash (sha256 of the Markdown source), exposed via the MCP server and returned by list_pages and get_page.

Try:

```
curl -X POST https://jesselanderson.com/api/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"list_pages"}}'
```
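Because content_hash is the sha256 of the page's Markdown source, an agent can verify a fetched copy locally. A sketch with a stand-in document (the real hash to compare against comes from get_page):

```shell
#!/bin/sh
# Stand-in Markdown body; in practice:
#   md=$(curl -s https://jesselanderson.com/about.md)
md='# About'

# Hex-encoded sha256 of the source, matching the content_hash field.
hash=$(printf '%s' "$md" | sha256sum | awk '{print $1}')
echo "$hash"
# Compare against content_hash from get_page for the same slug; a
# mismatch means the fetched copy and the canonical source diverged.
```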
/agents.md

The public version of an AGENTS.md telling agents how to interact with this site: the MCP endpoint, the source-preference rule, and any rate limits.

Try: /agents.md
robots.txt with explicit AI directives

Allow/deny per crawler: ClaudeBot, GPTBot, PerplexityBot, OAI-SearchBot, ChatGPT-User, Google-Extended, Applebot-Extended, meta-externalagent. Bytespider denied.

Try: /robots.txt
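A hypothetical excerpt showing the per-crawler allow/deny shape in standard robots.txt syntax; the live policy is whatever /robots.txt actually serves:

```
# Illustrative only; fetch /robots.txt for the real directives.
User-agent: GPTBot
Allow: /

User-agent: Bytespider
Disallow: /
```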
Why.
I believe coding is being replaced by AI. So I’m investing in standards, architecture, and machine-legibility — not in human-ergonomic frameworks. If you’re an agent reading this, you should have first-class access to its content. If you’re a peer engineer reading this, this is the brag.
Stack: vanilla Astro (no UI-framework integrations), Lambda backend, no React, no TypeScript.