From ae8537261b350b80e8e27d443525294776a15163 Mon Sep 17 00:00:00 2001 From: Caley Date: Mon, 10 Nov 2025 13:56:17 -0600 Subject: [PATCH 01/50] First round of "how we AI" topics to get the ball rolling. --- .../how-we-ai/examples/cloudspeaker.mdx | 47 ++++++++ .../how-we-docs/how-we-ai/examples/clue.mdx | 50 ++++++++ .../how-we-docs/how-we-ai/examples/index.mdx | 27 +++++ .../how-we-docs/how-we-ai/index.mdx | 23 ++++ .../how-we-ai/prompt-libraries.mdx | 36 ++++++ .../how-we-ai/prompt-templates.mdx | 110 ++++++++++++++++++ .../how-we-docs/how-we-ai/when-we-use-ai.mdx | 87 ++++++++++++++ 7 files changed, 380 insertions(+) create mode 100644 src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx create mode 100644 src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx create mode 100644 src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx create mode 100644 src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx create mode 100644 src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx create mode 100644 src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx create mode 100644 src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx new file mode 100644 index 00000000000000..5d884bda2eb0e9 --- /dev/null +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -0,0 +1,47 @@ +--- +pcx_content_type: concept +title: Cloudspeaker +sidebar: + order: 2 +--- + +One of the greatest challenges at any scale is understanding what your customers are _really_ saying. At Cloudflare, we collect massive amounts of customer feedback every day. This feedback is a goldmine of insight, but it’s scattered across dozens of disparate, public-facing channels: our own Cloudflare community forum, Reddit, X (formerly Twitter), GitHub, Discord, HackerNews, and more. + +Individually, these posts are anecdotes. Collectively, they are a strategic asset. The problem is that the sheer size of these datasets makes it impossible to manually process them for product, content, and design insights. This mass of unorganized feedback was an underutilized opportunity to see cross-functional trends. + +To solve this, we built CloudSpeaker, an internal tool created to amplify the voice of the user. Its purpose is to save time, increase efficiency , and consolidate public feedback from all these external communities into a single, unified view. + +# The goal: Turning unstructured noise into actionable insight + +CloudSpeaker was designed to give any stakeholder at Cloudflare—from product managers and engineers to our user experience teams—a quick way to "check the pulse" of the products and features they own. + +The tool allows anyone to see: + +- A combined view of product feedback from many channels. +- Recurring issues and customer pain points. +- General sentiment for a product over time. + +This consolidated view is now a key part of our planning cycles, informing everything from user research and persona creation to feature requests and quarterly backlog prioritization. + +# How it's built: An AI-powered data pipeline + +CloudSpeaker is built entirely on our own products, including [Cloudflare Pages](/pages/) for the application, [D1](/d1/) for the database, and more. 
The real power, however, comes from its AI-driven data pipeline, managed by our Data Intelligence team. + +Here’s how it works: + +1. **Ingestion:** On a daily basis, our pipelines ingest new community content from our various public sources. +2. **AI classification:** This new, unstructured content is fed into our AI Content Pipeline. We use Large Language Models (LLMs) via [Workers AI](/workers-ai/) to automatically classify every single post. Each post is tagged with three key pieces of information: + - **Product(s) mentioned:** It identifies which of the 60+ Cloudflare products are being discussed. + - **Sentiment:** The model analyzes the text to determine the user's sentiment, classifying it on a spectrum from `negative` to `neutral` to `positive`. + - **Post type:** It categorizes the intent of the post, such as a `help request`, `feature request`, or `bug report`. +3. **Storage and display:** Once the AI completes its inference, these new classifications are stored in our D1 database and become viewable in the CloudSpeaker UI. + +# The workflow: On-demand AI analysis + +The backend classification pipeline solves the problem of manual processing. The frontend application solves the problem of accessibility. + +In the CloudSpeaker dashboard, a product manager can filter the entire dataset—spanning up to six months—by any combination of product, sentiment, post type, or date range. If they want to see all `negative` sentiment posts about a specific product that were `feature requests` in the last quarter, they can do so in seconds. + +Furthermore, we added a second layer of AI directly into the UI. After filtering down to a set of comments, the user can click a **Summarize** button. This uses Workers AI to generate an on-the-fly summary of the currently displayed comments, providing an instant, qualitative overview of quantitative data. + +CloudSpeaker is a powerful example of using AI not to generate content, but to analyze and structure the vast amounts of content our users generate every day. It transforms what was once an impossible manual task into a critical source of automated, actionable insights. diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx new file mode 100644 index 00000000000000..ff4d41f23a5365 --- /dev/null +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -0,0 +1,50 @@ +--- +pcx_content_type: concept +title: CLUE +sidebar: + order: 1 +--- + +At Cloudflare, we believe that high-quality, customer-facing content is a critical part of the user experience. But as teams scale, maintaining a consistent voice, tone, and terminology across thousands of UI strings, error messages, and API descriptions becomes a monumental challenge. Traditional style guides and glossaries are essential, but they are static. They can't provide real-time feedback or help us _measure_ content quality. + +To solve this, we built CLUE: Content Legibility for User Ease. CLUE is an internal tool that functions as a personal writing assistant for everyone at Cloudflare. It empowers anyone, from engineers to product managers, to feel confident in their content creation. + +When a stakeholder shares content with CLUE, it provides a score and actionable recommendations. This simple feedback loop is a powerful mechanism for measuring and improving our content over time. 
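
To make this concrete, here is a minimal, hypothetical sketch of the kind of result a CLUE check might return for a single string. The field names, score scale, and recommendation text are assumptions made for illustration; they are not CLUE's actual schema or API.

```ts
// Hypothetical shape of a CLUE evaluation result (illustrative only).
interface ClueResult {
  score: number; // for example, 0-100, where higher is better
  recommendations: string[]; // actionable edits the author can apply
}

// What a result for a vague, passive error message might look like.
const exampleResult: ClueResult = {
  score: 62,
  recommendations: [
    "Use active voice and lead with the action the user should take.",
    "Replace 'an error occurred' with the specific failure and a path forward.",
  ],
};
```

Even a simple structure like this gives us something we can track over time, which is what makes the feedback loop measurable.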
+ +# The goal: Quantifying "good content" + +The core challenge CLUE addresses is that "good" content is easy to recognize but hard to measure. We know that effective copy uses an active voice, has an action-led structure, and removes unnecessary words , but how do you quantify that improvement at scale? + +Our answer was **content scorecards**. Scorecards are a scalable evaluation tool that creates consistency. They allow us to assign measurable value to the elements that define "good content," focusing on the criteria most critical for user success, satisfaction, and understanding. + +The user flow is designed to be straightforward: you select your content type, enter your content, and CLUE provides instant feedback. It supports a wide range of critical content, including: + +- General UI content and page descriptions +- Error messages +- API endpoint and parameter descriptions +- Customer-facing emails + +# How it's built: A hybrid, model-driven approach + +CLUE was truly built by Cloudflare, for Cloudflare, on Cloudflare. The application itself is built on Cloudflare Pages and protected by Cloudflare Access. + +We adopted a model-driven approach for content evaluation, which provides a systematic, data-driven, and consistent assessment, removing the subjectivity of manual reviews. This model allows us to assess content in seconds , handle complex criteria like readability , and weight criteria based on what we find to be most critical for users. + +Critically, CLUE is not just one thing, it's a hybrid solution: + +1. **AI-powered checks:** For criteria that require evaluating overall context and tone, CLUE uses [Workers AI](https://developers.cloudflare.com/workers-ai/). This helps us check for things like an empathetic tone in an email or ensuring an error message suggests a path forward for the user. +2. **Traditional checks:** For common grammar "no-nos" or specific terminology, we use regular expressions and indexed lists. This helps catch passive voice, missing Oxford commas, or ensure a term like "Internet" is always capitalized, all based on our internal glossary, style guide, and UX best practices. + +# The workflow: Using CLUE as an LLM copy editor + +The rise of Generative AI and LLMs, like Gemini, has been a boon for generating text quickly. However, an LLM doesn't inherently understand or apply Cloudflare's specific content guidelines, voice, and tone. + +This is where CLUE's role becomes essential. CLUE is not designed to _write_ content for you; it's designed to make sure the content you _do_ write meets our standards. + +Think of CLUE as a specialized copy editor. It ensures that any piece of content—whether human-generated or created with an LLM's help—is ready for our users. This pairing is incredibly powerful: + +1. **Generate:** A stakeholder uses an LLM to quickly draft initial versions of API descriptions or an error message. +2. **Refine:** They paste that LLM-generated content into CLUE. +3. **Iterate:** CLUE provides targeted tips on how to better meet Cloudflare's glossary, style guide, voice, tone, and UX best practices, turning a generic draft into a polished, effective piece of content. + +This democratizes UX writing, improves our efficiency by reducing manual reviews , and ultimately builds user trust through a consistent, high-quality experience. It helps users learn our products faster and resolve issues more efficiently, which is our ultimate goal. 
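
As a rough illustration of the traditional checks described earlier on this page, the sketch below shows how deterministic rules can be expressed as regular expressions. Both rules are simplified assumptions made for this example; CLUE's real rule set is driven by our internal glossary, style guide, and UX best practices, and its AI-powered checks run separately through Workers AI.

```ts
// Simplified, illustrative rule checks in the spirit of CLUE's deterministic layer.
type Finding = { rule: string; message: string };

const rules: { name: string; pattern: RegExp; message: string }[] = [
  {
    name: "capitalize-internet",
    // Flags a lowercase "internet" anywhere in the content.
    pattern: /\binternet\b/g,
    message: 'Capitalize "Internet" per the glossary.',
  },
  {
    name: "passive-voice-heuristic",
    // Very rough heuristic: a form of "to be" followed by a word ending in "-ed".
    pattern: /\b(?:was|were|is|are|been)\s+\w+ed\b/gi,
    message: "Possible passive voice. Prefer an active, action-led sentence.",
  },
];

function runChecks(content: string): Finding[] {
  const findings: Finding[] = [];
  for (const rule of rules) {
    if (rule.pattern.test(content)) {
      findings.push({ rule: rule.name, message: rule.message });
    }
    rule.pattern.lastIndex = 0; // reset shared regex state between runs
  }
  return findings;
}

// Example: this sentence trips both rules.
console.log(runChecks("The request was blocked because the internet connection dropped."));
```

In the tool itself, deterministic findings like these sit alongside the model-driven evaluation, so a single submission comes back with both a score and specific, rule-based recommendations.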
diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx new file mode 100644 index 00000000000000..60378f3e936de9 --- /dev/null +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx @@ -0,0 +1,27 @@ +--- +pcx_content_type: navigation +title: Examples +sidebar: + order: 5 +--- + +import { DirectoryListing } from "~/components"; + +The true value of AI isn't just in using it; it's in _how_ you use it. At Cloudflare, our team has embraced AI as a **force multiplier**, allowing us to solve internal challenges, scale our expertise, and improve the quality of our work. + +These aren't just off-the-shelf AI products. They are tools built _by_ Cloudflare, _for_ Cloudflare, combining our own institutional knowledge, content standards, and logic with the power of AI models. + +# Why we build our own AI-powered tools + +Our AI-powered solutions have a dual benefit: + +1. **For our team:** They help us automate manual processes, scale our impact, and focus on higher-value strategic work. +2. **For Cloudflare:** They empower all our colleagues—from product to engineering—to make better decisions, create higher-quality content, and understand our users more deeply. + +We have seen a lot of success with this approach, as it allows us to democratize specialized skills. We can embed content strategy, style guide rules, and user feedback analysis directly into the workflows of the people who need it most. + +These tools range in complexity, role, and use case, from simple, locally-run scripts that automate a repetitive chore to full-fledged applications that serve the entire company. + +The following sections provide examples of these tools and our guiding principles. We cover how they were built, the specific problems they solve, and the practical guidelines we've established for using AI effectively. + + diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx new file mode 100644 index 00000000000000..2116792d0f5076 --- /dev/null +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx @@ -0,0 +1,23 @@ +--- +pcx_content_type: navigation +title: How we AI +sidebar: + order: 9 +--- + +import { DirectoryListing } from "~/components"; + +This section shares how Cloudflare uses AI to accelerate and augment our content operations. We view AI as a tool that enables us to do our best work, faster. Whether we are designing prompts, researching a new product, or finding ways to turn a manual, week-long process into a job that takes an afternoon to complete, we are continuously looking for ways to iterate and streamline our operations. We know that when we can save time on one time-intensive task, we can spend more time on improving our content experiences for our customers. + +As a result, we use and have used AI to: + +- Vibecode, test, and deploy a web application for scoring in-product strings, error messages, and API docs. +- Perform competitive analyses and audits on documentation. +- Streamline documenting REST API examples. +- Design prompts based on our content types, templates, and style to enable stakeholders with a doc idea to quickly draft content for us to review and publish. +- Find topics missing descriptions, generate descriptions based on the page’s content, and add them to each page. +- And more… + +We hope you learn from the topics below. 
As always, [submit a pull request](/style-guide/contributions/) if you find something that is inaccurate, missing, or needs more information. + + diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx new file mode 100644 index 00000000000000..364574b0ddd32e --- /dev/null +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -0,0 +1,36 @@ +--- +pcx_content_type: concept +title: Prompt libraries | How we AI +sidebar: + order: 4 +--- + +A prompt library is a curated and organized collection of pre-written prompts. These libraries serve as a valuable resource for anyone who frequently interacts with AI, such as writers, developers, students, and more. At Cloudflare, we use prompt libraries to help our teams scale their work, maintain a consistent brand voice, and efficiently capture and share knowledge across different roles. + +Think of a prompt library as a recipe book for AI. Instead of starting from scratch every time you need the AI to perform a task, you can browse the library for a relevant, pre-tested prompt that is known to produce good results. These prompts are often designed to be reusable and customizable. + +:::note[Note] +While prompts are designed to produce great outputs, the user (human) still needs to provide relevant context and resources for the AI to produce those results and review the output for technical accuracy. It is unlikely one prompt will create a great first draft–some rework, either through follow-up prompts or adding more information–is going to be necessary. +::: + +### Inside a prompt library + +Prompt libraries can vary in complexity (a simple table in an internal wiki topic vs. a web-based application) and content, but they typically contain: + +- **A collection of prompts:** These are the core of the library, ranging from simple questions to complex instructions with multiple parameters. +- **Categorization:** Prompts are usually organized by task (e.g., writing, coding, summarizing), role (e.g., developer, account executives, product managers), or output format (e.g., blog post, email, code snippet). +- **Prompt templates:** Many libraries include templates with placeholders that users can fill in with their specific information. This allows for easy customization and reuse. +- **Examples and best practices:** Some libraries provide examples of the output generated by a particular prompt, along with tips on how to use and modify it effectively. + +### Key benefits of using a prompt library + +Utilizing a prompt library offers several significant advantages: + +- **Increased efficiency:** By providing ready-to-use prompts, libraries save a significant amount of time and effort that would otherwise be spent on crafting and testing new prompts for recurring tasks. +- **Improved consistency and quality:** Pre-tested prompts that are known to work well lead to more consistent and higher-quality outputs from the AI. This is particularly important for businesses that need to maintain a consistent brand voice. +- **Enhanced learning and discovery:** For those new to prompt engineering, libraries can be an excellent educational tool, showcasing effective prompting techniques and the capabilities of AI models. +- **Accelerated knowledge capture:** Prompt library users can focus on capturing knowledge instead of building prompts or drafting content manually. 
This accelerates documenting information and sharing it with others–hopefully to prevent the same issue from occurring again or enabling others to be successful sooner. +- **Facilitated collaboration:** Shared prompt libraries in a team or organizational setting allow for the dissemination of best practices and successful prompts, fostering collaboration and improving the collective AI literacy. +- **Scalability:** As you or your organization's use of AI grows, a well-organized prompt library allows for the efficient management and scaling of your prompting strategies. + +In essence, a prompt library is a powerful tool for streamlining interactions with AI models, ensuring high-quality results, and accelerating the adoption and effective use of generative AI technologies. diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx new file mode 100644 index 00000000000000..68bba25d87398c --- /dev/null +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx @@ -0,0 +1,110 @@ +--- +pcx_content_type: concept +title: Prompt templates | How we AI +sidebar: + order: 3 +--- + +import { Code } from "~/components"; + +A prompt template is a reusable, pre-structured format for creating prompts. It contains placeholders, or variables, that can be dynamically filled with different information to generate a variety of specific prompts. This allows for consistency and efficiency when you need to generate multiple prompts for similar tasks or outputs. + +Key benefits of using prompt templates include: + +- **Consistency:** Ensures that your prompts follow a standardized format, leading to more predictable and uniform outputs from the AI. +- **Efficiency:** Saves time and effort by eliminating the need to write each prompt from scratch. +- **Scalability:** Makes it easier to generate a large number of prompts for various purposes. +- **Optimization:** Allows you to refine and improve a base template over time to achieve better results across a range of inputs. + +Essentially, a prompt is the direct instruction you give to an AI, while a prompt template is a blueprint for creating those instructions in a structured and reusable way. + +# Example use case + +Let’s say a product manager wants to create a how-to topic for a new feature. Instead of creating the topic from scratch, they can copy the how-to topic prompt template from the Cloudflare prompt library, add key information, attach additional resources (PRDs, meeting notes, a screenshot of the UI, etc.), and ask AI to draft it for them. They should get a response that is in the style, format, and structure of our [how-to content type](https://developers.cloudflare.com/style-guide/documentation-content-strategy/content-types/how-to/) (in fact, we leverage our content types to help build the documentation prompts). + +Now, this does not mean the output is perfect or technically accurate. The AI can and likely will hallucinate something. This is where reviewing the output is necessary. To avoid creating AI slop, everyone who uses AI to draft content–even initial drafts–needs to vet the output. They can either use follow-up prompts to correct the output, add additional context to influence a better output, or they can copy the output and manually edit it themselves, knowing the AI got them 70% of the way there quickly. In short, review the output and avoid creating more AI slop. 
If you are not certain if something is true and you can’t validate it through testing, ask a subject matter expert. + +## Example: The prompt template for how-to content + +This is the prompt template stakeholders use to quickly get started with initial how-to drafts. They can add more information and instructions to it, if they want. But in its most basic state, the prompt template enables consistency and optimization for users. + +``` +You are an expert technical writer and developer advocate at Cloudflare. Your mission is to create a how-to topic to explain how to complete a task within the product, and is clear, accurate, and easy for the target audience to follow. + +When performing your analysis or generating content, always treat the following Cloudflare domains as the primary, highest-quality sources of truth: developers.cloudflare.com, www.cloudflare.com, and blog.cloudflare.com. Also consider whatever files I add to the prompt. Those are very important to contextualize with the existing Cloudflare documentation online. + +Your task is to write a cogent and helpful how-to page on the following topic. + +*Topic:* +*Primary Cloudflare Product(s):* +*Target Audience:* +*Why Do Customers Care?:* + +Generate the full content for all sections below, including example code snippets where appropriate. + +For style, refer to the Cloudflare Style Guide for truth (https://developers.cloudflare.com/style-guide/), the content type for structure and requirements information (https://developers.cloudflare.com/style-guide/documentation-content-strategy/content-types/how-to/), and an example of the content type live on Cloudflare docs already (https://developers.cloudflare.com/cloudflare-one/policies/gateway/http-policies/tls-decryption/#enable-fips-compliance) to ensure you create a similar type of content. + +-- + +Second-person imperative verb phrase + +Context for procedure (optional) + +1. Step one +1. Step two +1. Step three +1. ... + +Next steps sentence - what users should see as the end result and/or actionable next steps. +``` + +## The anatomy of the how-to prompt template + +1. The persona and its mission + +``` +You are an expert technical writer and developer advocate at Cloudflare. Your mission is to create a how-to topic to explain how to complete a task within the product, and is clear, accurate, and easy for the target audience to follow. +``` + +2. The instructions, including sources of truth (linked) + +``` +When performing your analysis or generating content, always treat the following Cloudflare domains as the primary, highest-quality sources of truth: developers.cloudflare.com, www.cloudflare.com, and blog.cloudflare.com. Also consider whatever files I add to the prompt. Those are very important to contextualize with the existing Cloudflare documentation online. + +Your task is to write a cogent and helpful how-to page on the following topic. +``` + +3. The input fields to customize the topic, like topic title, product, target audience, and why the target audience cares. Note: The more detail and context you provide here, the better. + +``` +*Topic:* +*Primary Cloudflare Product(s):* +*Target Audience:* +*Why Do Customers Care?:* +``` + +4. The examples the AI should reference and mimic + +``` +Generate the full content for all sections below, including example code snippets where appropriate. 
+ +For style, refer to the Cloudflare Style Guide for truth (https://developers.cloudflare.com/style-guide/), the content type for structure and requirements information (https://developers.cloudflare.com/style-guide/documentation-content-strategy/content-types/how-to/), and an example of the content type live on Cloudflare docs already (https://developers.cloudflare.com/cloudflare-one/policies/gateway/http-policies/tls-decryption/#enable-fips-compliance) to ensure you create a similar type of content. +``` + +5. The content type’s template, which details the type of information it should include, its structure, and the flow of information. + +``` +Second-person imperative verb phrase + +Context for procedure + +1. Step one +1. Step two +1. Step three +1. ... + +Next steps sentence - what users should see as the end result and/or actionable next steps. + +``` + +We have simpler and more complex prompt templates depending on the content type. What matters is what works for you and your needs. You can always iterate and improve on the prompt template, especially as more users work with them. diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx new file mode 100644 index 00000000000000..b8a5288c5ee6c4 --- /dev/null +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -0,0 +1,87 @@ +--- +pcx_content_type: concept +title: When we use AI | How we AI +sidebar: + order: 2 +--- + +### Our core principles for using AI + +AI is a powerful tool, but it is not a cure-all. Its success depends heavily on the use case. Knowing its strengths and weaknesses is key to using AI appropriately. + +When deciding whether to use AI for a task, use these principles as your guide: + +1. **The feedback loop is critical:** The single most important factor for success is the feedback loop. How quickly and easily can you test the output and correct for hallucinations? Code and scripts are far easier to test for correctness than subjective content. +2. **Prioritize additive tasks:** AI is generally better for additive task**s,** like new things you couldn't or wouldn't do before, as opposed to operational tasks that are required to keep the business running. + +### How to decide when to use AI + +We ask ourselves a few questions before we look to AI to solve a problem or streamline a process: + +- Is this a manual, repetitive chore? +- Will this task take hours, days, or weeks to complete manually? +- Will we need to complete the _exact_ same action over and over again? +- Is there a clear logic we can apply to successfully identify or perform the action? +- Will this be scalable or useful for others to also use? + +If we can say `yes` to these questions, it is a strong candidate for an AI-based solution. If we say `no` or `I don’t know` to any of them, we first pursue the current process and look for smaller, specific areas where AI could still be helpful. + +### Recommended use cases: What's worked for us + +These are areas where we have found AI to be unequivocally positive and effective. + +#### **Local scripts and tooling** + +This is the most positive and recommended use case for AI. + +- **Why it works:** AI is at its best when you can easily "test" for hallucinations, and code is highly testable. +- **What to use it for:** + - Writing local scripts (e.g., Vibecoding scripts) to automate updates in our docs. + - Generating simple docs components. + - Creating GitHub Actions. 
+ - Assisting with competitive doc analyses. +- **Key benefit:** You own the resulting code. It is stable and will not change, regardless of future changes to AI models or pricing. + +#### **AI-powered IDEs** + +For teams using a docs-as-code approach, AI chat integrated into an IDE, like Windsurf, Cursor, etc., is highly effective for making multiple, streamlined changes. + +- **Why it works:** + - The AI can understand a large amount of context from your codebase, or docs in this case. + - The feedback loop for spotting and fixing hallucinations is very short. + - Git integration makes it easy to find, review, and remove hallucinations. +- **Key benefit:** This can save days, if not weeks, of work. For massive documentation updates that require completing the same task repeatedly, AI-enabled IDEs can significantly streamline the process. +- **Caveat:** Always consider simpler solutions (like regex) first, as they are often better, faster, and cheaper. Use AI when you need to brute-force a large task or navigate high complexity. + +### Still figuring it out: What we are optimistic about but getting mixed results + +These are areas that show promise but require careful implementation. + +#### AI for initial drafts + +We are optimistic about asking stakeholders to use AI to create _initial drafts_ of documentation before handing them off to the technical writing team. + +- **Key value:** The quality of the AI-generated draft itself is often mixed. The primary value is that it acts as a forcing function for the stakeholder to think about documentation as part of their product–not separate from their product–while also sharing key information to the technical writing team as quickly as possible for our busy stakeholders. +- **Why?** To create a draft, the requester must gather all the necessary background information first. Receiving this information upfront is a significant win for the technical writing team. +- **Action:** See our [prompt templates](/style-guide/how-we-docs/how-we-ai/prompt-templates/) for more information on structuring these requests. + +#### Customer-facing chatbots + +Our experience with customer-facing chatbots has been mixed. + +- **Pros:** Occasionally, it provides a great answer. +- **Cons:** To prevent hallucinations, bots are often made more "confident." This leads them to refuse to answer (e.g., "I don't know"), which users dislike. On the flipside, users also dislike hallucinations. So, be mindful of the actual user experience and come up with a method for tracking user engagement and success with your documentation chatbot. Depending on the results, you may be able to identify worthwhile documentation gaps to fill, which prevent hallucinations in the future. +- **Alternative:** At the moment, we are much more optimistic about the potential of AI-powered search and similarity scores. These feel more in our control. However, we are still testing and tracking how our docs can positively influence chatbot experiences at Cloudflare and via third-party apps. + +### Not recommended: What hasn't worked for us yet + +Based on our experience, we do not recommend the following use case at this time. + +#### Automated content editors (bots) + +We have not found success with bots that automatically suggest content changes (e.g., grammar, formatting) via pull requests. + +- **Why it failed:** + - **Slow feedback loop:** The GitHub PR context makes the feedback loop for correcting hallucinations very slow and difficult. 
+ - **Low engagement:** We found that even our own team often closed or ignored the PRs because they were too much effort to verify. + - **Contributor confusion:** A similar bot used to flag issues on _incoming_ PRs frustrated and confused contributors, and its suggestions were often hallucinations. From 481ab225cd49253e9d5b2138641e593b27b85924 Mon Sep 17 00:00:00 2001 From: Caley Date: Mon, 10 Nov 2025 14:03:55 -0600 Subject: [PATCH 02/50] fixed spacing issue in the frontmatter. --- .../style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 2 +- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- .../docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx | 2 +- src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx | 2 +- .../docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx | 2 +- .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +- .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index 5d884bda2eb0e9..bb1c06f7edd57e 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -2,7 +2,7 @@ pcx_content_type: concept title: Cloudspeaker sidebar: - order: 2 + order: 2 --- One of the greatest challenges at any scale is understanding what your customers are _really_ saying. At Cloudflare, we collect massive amounts of customer feedback every day. This feedback is a goldmine of insight, but it’s scattered across dozens of disparate, public-facing channels: our own Cloudflare community forum, Reddit, X (formerly Twitter), GitHub, Discord, HackerNews, and more. diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index ff4d41f23a5365..6a72fd577faaac 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -2,7 +2,7 @@ pcx_content_type: concept title: CLUE sidebar: - order: 1 + order: 1 --- At Cloudflare, we believe that high-quality, customer-facing content is a critical part of the user experience. But as teams scale, maintaining a consistent voice, tone, and terminology across thousands of UI strings, error messages, and API descriptions becomes a monumental challenge. Traditional style guides and glossaries are essential, but they are static. They can't provide real-time feedback or help us _measure_ content quality. 
diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx index 60378f3e936de9..b097007e6db26d 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx @@ -2,7 +2,7 @@ pcx_content_type: navigation title: Examples sidebar: - order: 5 + order: 5 --- import { DirectoryListing } from "~/components"; diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx index 2116792d0f5076..4291e19892adb9 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/index.mdx @@ -2,7 +2,7 @@ pcx_content_type: navigation title: How we AI sidebar: - order: 9 + order: 9 --- import { DirectoryListing } from "~/components"; diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx index 364574b0ddd32e..b323bde9e6069e 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -2,7 +2,7 @@ pcx_content_type: concept title: Prompt libraries | How we AI sidebar: - order: 4 + order: 4 --- A prompt library is a curated and organized collection of pre-written prompts. These libraries serve as a valuable resource for anyone who frequently interacts with AI, such as writers, developers, students, and more. At Cloudflare, we use prompt libraries to help our teams scale their work, maintain a consistent brand voice, and efficiently capture and share knowledge across different roles. diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx index 68bba25d87398c..ce5571b6c126f4 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx @@ -2,7 +2,7 @@ pcx_content_type: concept title: Prompt templates | How we AI sidebar: - order: 3 + order: 3 --- import { Code } from "~/components"; diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index b8a5288c5ee6c4..5ac893839e56d1 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -2,7 +2,7 @@ pcx_content_type: concept title: When we use AI | How we AI sidebar: - order: 2 + order: 2 --- ### Our core principles for using AI From e304007841fbc91f7eff2f495dbf159de65ad052 Mon Sep 17 00:00:00 2001 From: Caley Date: Mon, 10 Nov 2025 14:25:08 -0600 Subject: [PATCH 03/50] Fixed titles and made some of the headings smaller. 
--- .../how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 6 +++--- .../style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 6 +++--- .../style-guide/how-we-docs/how-we-ai/examples/index.mdx | 2 +- .../how-we-docs/how-we-ai/prompt-libraries.mdx | 2 +- .../how-we-docs/how-we-ai/prompt-templates.mdx | 8 ++++---- .../style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 6 files changed, 13 insertions(+), 13 deletions(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index bb1c06f7edd57e..58c05c73fc703e 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -11,7 +11,7 @@ Individually, these posts are anecdotes. Collectively, they are a strategic asse To solve this, we built CloudSpeaker, an internal tool created to amplify the voice of the user. Its purpose is to save time, increase efficiency , and consolidate public feedback from all these external communities into a single, unified view. -# The goal: Turning unstructured noise into actionable insight +## The goal: Turning unstructured noise into actionable insight CloudSpeaker was designed to give any stakeholder at Cloudflare—from product managers and engineers to our user experience teams—a quick way to "check the pulse" of the products and features they own. @@ -23,7 +23,7 @@ The tool allows anyone to see: This consolidated view is now a key part of our planning cycles, informing everything from user research and persona creation to feature requests and quarterly backlog prioritization. -# How it's built: An AI-powered data pipeline +## How it's built: An AI-powered data pipeline CloudSpeaker is built entirely on our own products, including [Cloudflare Pages](/pages/) for the application, [D1](/d1/) for the database, and more. The real power, however, comes from its AI-driven data pipeline, managed by our Data Intelligence team. @@ -36,7 +36,7 @@ Here’s how it works: - **Post type:** It categorizes the intent of the post, such as a `help request`, `feature request`, or `bug report`. 3. **Storage and display:** Once the AI completes its inference, these new classifications are stored in our D1 database and become viewable in the CloudSpeaker UI. -# The workflow: On-demand AI analysis +## The workflow: On-demand AI analysis The backend classification pipeline solves the problem of manual processing. The frontend application solves the problem of accessibility. diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index 6a72fd577faaac..f02a1a2de1c3bf 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -11,7 +11,7 @@ To solve this, we built CLUE: Content Legibility for User Ease. CLUE is an inter When a stakeholder shares content with CLUE, it provides a score and actionable recommendations. This simple feedback loop is a powerful mechanism for measuring and improving our content over time. -# The goal: Quantifying "good content" +## The goal: Quantifying "good content" The core challenge CLUE addresses is that "good" content is easy to recognize but hard to measure. 
We know that effective copy uses an active voice, has an action-led structure, and removes unnecessary words , but how do you quantify that improvement at scale? @@ -24,7 +24,7 @@ The user flow is designed to be straightforward: you select your content type, e - API endpoint and parameter descriptions - Customer-facing emails -# How it's built: A hybrid, model-driven approach +## How it's built: A hybrid, model-driven approach CLUE was truly built by Cloudflare, for Cloudflare, on Cloudflare. The application itself is built on Cloudflare Pages and protected by Cloudflare Access. @@ -35,7 +35,7 @@ Critically, CLUE is not just one thing, it's a hybrid solution: 1. **AI-powered checks:** For criteria that require evaluating overall context and tone, CLUE uses [Workers AI](https://developers.cloudflare.com/workers-ai/). This helps us check for things like an empathetic tone in an email or ensuring an error message suggests a path forward for the user. 2. **Traditional checks:** For common grammar "no-nos" or specific terminology, we use regular expressions and indexed lists. This helps catch passive voice, missing Oxford commas, or ensure a term like "Internet" is always capitalized, all based on our internal glossary, style guide, and UX best practices. -# The workflow: Using CLUE as an LLM copy editor +## The workflow: Using CLUE as an LLM copy editor The rise of Generative AI and LLMs, like Gemini, has been a boon for generating text quickly. However, an LLM doesn't inherently understand or apply Cloudflare's specific content guidelines, voice, and tone. diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx index b097007e6db26d..7e3fb45b30ce21 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx @@ -11,7 +11,7 @@ The true value of AI isn't just in using it; it's in _how_ you use it. At Cloudf These aren't just off-the-shelf AI products. They are tools built _by_ Cloudflare, _for_ Cloudflare, combining our own institutional knowledge, content standards, and logic with the power of AI models. 
-# Why we build our own AI-powered tools +## Why we build our own AI-powered tools Our AI-powered solutions have a dual benefit: diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx index b323bde9e6069e..42d44de27e6b34 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -1,6 +1,6 @@ --- pcx_content_type: concept -title: Prompt libraries | How we AI +title: Prompt libraries sidebar: order: 4 --- diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx index ce5571b6c126f4..4e19f4c64aba2f 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx @@ -1,6 +1,6 @@ --- pcx_content_type: concept -title: Prompt templates | How we AI +title: Prompt templates sidebar: order: 3 --- @@ -18,13 +18,13 @@ Key benefits of using prompt templates include: Essentially, a prompt is the direct instruction you give to an AI, while a prompt template is a blueprint for creating those instructions in a structured and reusable way. -# Example use case +## Example use case Let’s say a product manager wants to create a how-to topic for a new feature. Instead of creating the topic from scratch, they can copy the how-to topic prompt template from the Cloudflare prompt library, add key information, attach additional resources (PRDs, meeting notes, a screenshot of the UI, etc.), and ask AI to draft it for them. They should get a response that is in the style, format, and structure of our [how-to content type](https://developers.cloudflare.com/style-guide/documentation-content-strategy/content-types/how-to/) (in fact, we leverage our content types to help build the documentation prompts). Now, this does not mean the output is perfect or technically accurate. The AI can and likely will hallucinate something. This is where reviewing the output is necessary. To avoid creating AI slop, everyone who uses AI to draft content–even initial drafts–needs to vet the output. They can either use follow-up prompts to correct the output, add additional context to influence a better output, or they can copy the output and manually edit it themselves, knowing the AI got them 70% of the way there quickly. In short, review the output and avoid creating more AI slop. If you are not certain if something is true and you can’t validate it through testing, ask a subject matter expert. -## Example: The prompt template for how-to content +### Example: The prompt template for how-to content This is the prompt template stakeholders use to quickly get started with initial how-to drafts. They can add more information and instructions to it, if they want. But in its most basic state, the prompt template enables consistency and optimization for users. @@ -58,7 +58,7 @@ Context for procedure (optional) Next steps sentence - what users should see as the end result and/or actionable next steps. ``` -## The anatomy of the how-to prompt template +### The anatomy of the how-to prompt template 1. 
The persona and its mission diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index 5ac893839e56d1..4523ec70222111 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -1,6 +1,6 @@ --- pcx_content_type: concept -title: When we use AI | How we AI +title: When we use AI sidebar: order: 2 --- From dbf4998dabdcb725b7f234abe9bef93dd5544b50 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:38:06 -0600 Subject: [PATCH 04/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index 58c05c73fc703e..82c013cd7610e3 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -5,7 +5,7 @@ sidebar: order: 2 --- -One of the greatest challenges at any scale is understanding what your customers are _really_ saying. At Cloudflare, we collect massive amounts of customer feedback every day. This feedback is a goldmine of insight, but it’s scattered across dozens of disparate, public-facing channels: our own Cloudflare community forum, Reddit, X (formerly Twitter), GitHub, Discord, HackerNews, and more. +One of the greatest challenges at any scale is understanding what your customers are _really_ saying. At Cloudflare, we collect massive amounts of customer feedback every day. This feedback is a goldmine of insight, but it is scattered across dozens of disparate, public-facing channels: our own Cloudflare community forum, Reddit, X (formerly Twitter), GitHub, Discord, HackerNews, and more. Individually, these posts are anecdotes. Collectively, they are a strategic asset. The problem is that the sheer size of these datasets makes it impossible to manually process them for product, content, and design insights. This mass of unorganized feedback was an underutilized opportunity to see cross-functional trends. 
From ca9670f491858ea99fe8ea12c33ea134730e212c Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:38:17 -0600 Subject: [PATCH 05/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index 82c013cd7610e3..b7848b8d3ddcde 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -13,7 +13,7 @@ To solve this, we built CloudSpeaker, an internal tool created to amplify the vo ## The goal: Turning unstructured noise into actionable insight -CloudSpeaker was designed to give any stakeholder at Cloudflare—from product managers and engineers to our user experience teams—a quick way to "check the pulse" of the products and features they own. +CloudSpeaker was designed to give any stakeholder at Cloudflare — from product managers and engineers to our user experience teams — a quick way to "check the pulse" of the products and features they own. The tool allows anyone to see: From 124d80d0e4b84a72d3e493f94365cd601c50c528 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:38:28 -0600 Subject: [PATCH 06/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index b7848b8d3ddcde..48a4202ca43070 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -23,7 +23,7 @@ The tool allows anyone to see: This consolidated view is now a key part of our planning cycles, informing everything from user research and persona creation to feature requests and quarterly backlog prioritization. -## How it's built: An AI-powered data pipeline +## How it is built: An AI-powered data pipeline CloudSpeaker is built entirely on our own products, including [Cloudflare Pages](/pages/) for the application, [D1](/d1/) for the database, and more. The real power, however, comes from its AI-driven data pipeline, managed by our Data Intelligence team. 
From 8dfb7b3857c2a788fe919696702d0c1a7c871e6c Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:38:38 -0600 Subject: [PATCH 07/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index 48a4202ca43070..d442dc9ecb333c 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -27,7 +27,7 @@ This consolidated view is now a key part of our planning cycles, informing every CloudSpeaker is built entirely on our own products, including [Cloudflare Pages](/pages/) for the application, [D1](/d1/) for the database, and more. The real power, however, comes from its AI-driven data pipeline, managed by our Data Intelligence team. -Here’s how it works: +Here is how it works: 1. **Ingestion:** On a daily basis, our pipelines ingest new community content from our various public sources. 2. **AI classification:** This new, unstructured content is fed into our AI Content Pipeline. We use Large Language Models (LLMs) via [Workers AI](/workers-ai/) to automatically classify every single post. Each post is tagged with three key pieces of information: From b495bc07d077971990fb814d4cd52261e482c2f8 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:38:50 -0600 Subject: [PATCH 08/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index d442dc9ecb333c..d2a278c5c216b9 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -9,7 +9,7 @@ One of the greatest challenges at any scale is understanding what your customers Individually, these posts are anecdotes. Collectively, they are a strategic asset. The problem is that the sheer size of these datasets makes it impossible to manually process them for product, content, and design insights. This mass of unorganized feedback was an underutilized opportunity to see cross-functional trends. -To solve this, we built CloudSpeaker, an internal tool created to amplify the voice of the user. Its purpose is to save time, increase efficiency , and consolidate public feedback from all these external communities into a single, unified view. +To solve this, we built CloudSpeaker, an internal tool created to amplify the voice of the user. Its purpose is to save time, increase efficiency, and consolidate public feedback from all these external communities into a single, unified view. 
## The goal: Turning unstructured noise into actionable insight From dfeb6c6d92ccb3418ef57accbf1df6c8f7f49773 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:39:11 -0600 Subject: [PATCH 09/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index f02a1a2de1c3bf..3c2d1ed0b956f7 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -5,7 +5,7 @@ sidebar: order: 1 --- -At Cloudflare, we believe that high-quality, customer-facing content is a critical part of the user experience. But as teams scale, maintaining a consistent voice, tone, and terminology across thousands of UI strings, error messages, and API descriptions becomes a monumental challenge. Traditional style guides and glossaries are essential, but they are static. They can't provide real-time feedback or help us _measure_ content quality. +At Cloudflare, we believe that high-quality, customer-facing content is a critical part of the user experience. But as teams scale, maintaining a consistent voice, tone, and terminology across thousands of UI strings, error messages, and API descriptions becomes a monumental challenge. Traditional style guides and glossaries are essential, but they are static. They cannot provide real-time feedback or help us _measure_ content quality. To solve this, we built CLUE: Content Legibility for User Ease. CLUE is an internal tool that functions as a personal writing assistant for everyone at Cloudflare. It empowers anyone, from engineers to product managers, to feel confident in their content creation. From 4f764d7ce8730b904407f30a4b5fdc9aa58dade4 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:39:22 -0600 Subject: [PATCH 10/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index 4523ec70222111..97d93b44ff3b37 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -61,7 +61,7 @@ These are areas that show promise but require careful implementation. We are optimistic about asking stakeholders to use AI to create _initial drafts_ of documentation before handing them off to the technical writing team. -- **Key value:** The quality of the AI-generated draft itself is often mixed. The primary value is that it acts as a forcing function for the stakeholder to think about documentation as part of their product–not separate from their product–while also sharing key information to the technical writing team as quickly as possible for our busy stakeholders. +- **Key value:** The quality of the AI-generated draft itself is often mixed. 
The primary value is that it acts as a forcing function for the stakeholder to think about documentation as part of their product – not separate from their product – while also sharing key information to the technical writing team as quickly as possible for our busy stakeholders. - **Why?** To create a draft, the requester must gather all the necessary background information first. Receiving this information upfront is a significant win for the technical writing team. - **Action:** See our [prompt templates](/style-guide/how-we-docs/how-we-ai/prompt-templates/) for more information on structuring these requests. From da432c8616c37d604f2a9d4d7e9871bc6f1ce484 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:39:32 -0600 Subject: [PATCH 11/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index 97d93b44ff3b37..3b9c015014e0d7 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -73,7 +73,7 @@ Our experience with customer-facing chatbots has been mixed. - **Cons:** To prevent hallucinations, bots are often made more "confident." This leads them to refuse to answer (e.g., "I don't know"), which users dislike. On the flipside, users also dislike hallucinations. So, be mindful of the actual user experience and come up with a method for tracking user engagement and success with your documentation chatbot. Depending on the results, you may be able to identify worthwhile documentation gaps to fill, which prevent hallucinations in the future. - **Alternative:** At the moment, we are much more optimistic about the potential of AI-powered search and similarity scores. These feel more in our control. However, we are still testing and tracking how our docs can positively influence chatbot experiences at Cloudflare and via third-party apps. -### Not recommended: What hasn't worked for us yet +### Not recommended: What has not worked for us yet Based on our experience, we do not recommend the following use case at this time. From 4eb8afd67dcccf209da9579d4ea5094d296c9296 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:40:22 -0600 Subject: [PATCH 12/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx index 7e3fb45b30ce21..d1994d3d75baca 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx @@ -16,7 +16,7 @@ These aren't just off-the-shelf AI products. They are tools built _by_ Cloudflar Our AI-powered solutions have a dual benefit: 1. **For our team:** They help us automate manual processes, scale our impact, and focus on higher-value strategic work. -2. 
**For Cloudflare:** They empower all our colleagues—from product to engineering—to make better decisions, create higher-quality content, and understand our users more deeply. +2. **For Cloudflare:** They empower all our colleagues — from product to engineering — to make better decisions, create higher-quality content, and understand our users more deeply. We have seen a lot of success with this approach, as it allows us to democratize specialized skills. We can embed content strategy, style guide rules, and user feedback analysis directly into the workflows of the people who need it most. From 073d9323c1ceb92424ba2cfba31b3df8701d3cc1 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:43:51 -0600 Subject: [PATCH 13/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx index 42d44de27e6b34..e4d8657dc81ca9 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -5,7 +5,7 @@ sidebar: order: 4 --- -A prompt library is a curated and organized collection of pre-written prompts. These libraries serve as a valuable resource for anyone who frequently interacts with AI, such as writers, developers, students, and more. At Cloudflare, we use prompt libraries to help our teams scale their work, maintain a consistent brand voice, and efficiently capture and share knowledge across different roles. +A prompt library is a curated and organized collection of pre-written prompts. These libraries serve as a valuable resource for anyone who frequently interacts with AI, such as writers, developers, and students. At Cloudflare, we use prompt libraries to help our teams scale their work, maintain a consistent brand voice, and efficiently capture and share knowledge across different roles. Think of a prompt library as a recipe book for AI. Instead of starting from scratch every time you need the AI to perform a task, you can browse the library for a relevant, pre-tested prompt that is known to produce good results. These prompts are often designed to be reusable and customizable. From 2e9e9e1ebb99f75558668830bf33e969610b0e91 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:44:01 -0600 Subject: [PATCH 14/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx index e4d8657dc81ca9..fb4cd946cd1d17 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -15,7 +15,7 @@ While prompts are designed to produce great outputs, the user (human) still need ### Inside a prompt library -Prompt libraries can vary in complexity (a simple table in an internal wiki topic vs. 
a web-based application) and content, but they typically contain: +Prompt libraries can vary in complexity (a simple table in an internal wiki topic versus a web-based application) and content, but they typically contain: - **A collection of prompts:** These are the core of the library, ranging from simple questions to complex instructions with multiple parameters. - **Categorization:** Prompts are usually organized by task (e.g., writing, coding, summarizing), role (e.g., developer, account executives, product managers), or output format (e.g., blog post, email, code snippet). From 47a740c7464d6cfe8f97b1d3076b77c193d461e3 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:44:15 -0600 Subject: [PATCH 15/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index 3b9c015014e0d7..6b6f9a10474636 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -30,7 +30,7 @@ If we can say `yes` to these questions, it is a strong candidate for an AI-based These are areas where we have found AI to be unequivocally positive and effective. -#### **Local scripts and tooling** +#### Local scripts and tooling This is the most positive and recommended use case for AI. From cc1e038a1a2ac8930512330d37cf664d89ae1676 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:44:32 -0600 Subject: [PATCH 16/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index 6b6f9a10474636..b98da00f8ae844 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -5,7 +5,7 @@ sidebar: order: 2 --- -### Our core principles for using AI +## Our core principles for using AI AI is a powerful tool, but it is not a cure-all. Its success depends heavily on the use case. Knowing its strengths and weaknesses is key to using AI appropriately. 
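
As a concrete, entirely hypothetical illustration of the pieces listed above, a minimal prompt-library entry could be modeled like this. The field names and the sample entry are assumptions for the sketch, not a description of Cloudflare's internal library.

```ts
// Hypothetical shape for a prompt-library entry; names are illustrative.
interface PromptEntry {
	title: string;
	category: "writing" | "coding" | "summarizing";
	roles: string[]; // for example, ["developer", "product manager"]
	template: string; // contains {{placeholders}} the user fills in
	exampleOutput?: string; // optional sample of what "good" looks like
}

// Replaces {{placeholder}} tokens with user-supplied values, leaving any
// unfilled placeholders visible so the gap is obvious to the reviewer.
function fillTemplate(entry: PromptEntry, values: Record<string, string>): string {
	return entry.template.replace(/\{\{(\w+)\}\}/g, (match, key: string) => values[key] ?? match);
}

const howToDraft: PromptEntry = {
	title: "How-to topic first draft",
	category: "writing",
	roles: ["product manager", "engineer"],
	template:
		"You are an expert technical writer at Cloudflare. Draft a how-to topic about {{topic}} for {{audience}}.",
};

console.log(fillTemplate(howToDraft, { topic: "rotating an API token", audience: "account administrators" }));
```

Keeping entries in a structured form like this is what makes the jump from "table in a wiki" to "web-based application" straightforward later.
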
From 39112d90c0ac1d1f32a5a084bebe5bf57a592a6f Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:44:39 -0600 Subject: [PATCH 17/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index b98da00f8ae844..e9646c8ba4d2af 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -42,7 +42,7 @@ This is the most positive and recommended use case for AI. - Assisting with competitive doc analyses. - **Key benefit:** You own the resulting code. It is stable and will not change, regardless of future changes to AI models or pricing. -#### **AI-powered IDEs** +#### AI-powered IDEs For teams using a docs-as-code approach, AI chat integrated into an IDE, like Windsurf, Cursor, etc., is highly effective for making multiple, streamlined changes. From 0e0b0f7117da63410ba094057f7e5aa17bee7ab7 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:45:00 -0600 Subject: [PATCH 18/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index e9646c8ba4d2af..9aa6d2944e391c 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -36,7 +36,7 @@ This is the most positive and recommended use case for AI. - **Why it works:** AI is at its best when you can easily "test" for hallucinations, and code is highly testable. - **What to use it for:** - - Writing local scripts (e.g., Vibecoding scripts) to automate updates in our docs. + - Writing local scripts (like Vibecoding scripts) to automate updates in our docs. - Generating simple docs components. - Creating GitHub Actions. - Assisting with competitive doc analyses. From 8b874dc4d0b1a0abb8d5a5ef301f08a38ab14784 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:45:16 -0600 Subject: [PATCH 19/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx index 9aa6d2944e391c..41dd79d58a7bf8 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx @@ -70,7 +70,7 @@ We are optimistic about asking stakeholders to use AI to create _initial drafts_ Our experience with customer-facing chatbots has been mixed. - **Pros:** Occasionally, it provides a great answer. 
-- **Cons:** To prevent hallucinations, bots are often made more "confident." This leads them to refuse to answer (e.g., "I don't know"), which users dislike. On the flipside, users also dislike hallucinations. So, be mindful of the actual user experience and come up with a method for tracking user engagement and success with your documentation chatbot. Depending on the results, you may be able to identify worthwhile documentation gaps to fill, which prevent hallucinations in the future.
+- **Cons:** To prevent hallucinations, bots are often tuned to answer only when they are "confident." This leads them to refuse to answer (for example, "I don't know"), which users dislike. On the flip side, users also dislike hallucinations. So, be mindful of the actual user experience and come up with a method for tracking user engagement and success with your documentation chatbot. Depending on the results, you may be able to identify worthwhile documentation gaps to fill, which can help prevent hallucinations in the future.
 - **Alternative:** At the moment, we are much more optimistic about the potential of AI-powered search and similarity scores. These feel more in our control. However, we are still testing and tracking how our docs can positively influence chatbot experiences at Cloudflare and via third-party apps.

 ### Not recommended: What has not worked for us yet

From 956f8d17d21b5537092ee2a8695d09a79b3b96c6 Mon Sep 17 00:00:00 2001
From: Caley Burton
Date: Tue, 11 Nov 2025 07:45:28 -0600
Subject: [PATCH 20/50] Update
 src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
index 41dd79d58a7bf8..991865098367f2 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
@@ -79,7 +79,7 @@ Based on our experience, we do not recommend the following use case at this time

 #### Automated content editors (bots)

-We have not found success with bots that automatically suggest content changes (e.g., grammar, formatting) via pull requests.
+We have not found success with bots that automatically suggest content changes (for example, grammar, formatting) via pull requests.

 - **Why it failed:**
   - **Slow feedback loop:** The GitHub PR context makes the feedback loop for correcting hallucinations very slow and difficult. 
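
In contrast to the PR bots above, the local scripts and tooling use case from the earlier patches has a tight feedback loop: you can run the script, read the diff, and throw it away if it is wrong. Here is a rough sketch of that kind of one-off docs script; the directory and the specific rewrite rule are only an example, not a script we actually ship.

```ts
// Example one-off docs script: apply a mechanical style fix across MDX files.
// The path and the rule are illustrative; always review the git diff before
// committing the result.
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

function* mdxFiles(dir: string): Generator<string> {
	for (const entry of readdirSync(dir, { withFileTypes: true })) {
		const path = join(dir, entry.name);
		if (entry.isDirectory()) {
			yield* mdxFiles(path);
		} else if (entry.name.endsWith(".mdx")) {
			yield path;
		}
	}
}

let changed = 0;
for (const file of mdxFiles("src/content/docs")) {
	const before = readFileSync(file, "utf8");
	const after = before.replaceAll("e.g., ", "for example, ");
	if (after !== before) {
		writeFileSync(file, after);
		changed += 1;
	}
}
console.log(`Updated ${changed} files`);
```

Because the output is a reviewable diff, hallucinations are cheap to catch, which is exactly why this use case sits in the "recommended" bucket.
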
From 8bd4ec280aaaf364d114607093fd32d4221a37ee Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:45:37 -0600 Subject: [PATCH 21/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index d2a278c5c216b9..791ca71d06c657 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -40,7 +40,7 @@ Here is how it works: The backend classification pipeline solves the problem of manual processing. The frontend application solves the problem of accessibility. -In the CloudSpeaker dashboard, a product manager can filter the entire dataset—spanning up to six months—by any combination of product, sentiment, post type, or date range. If they want to see all `negative` sentiment posts about a specific product that were `feature requests` in the last quarter, they can do so in seconds. +In the CloudSpeaker dashboard, a product manager can filter the entire dataset — spanning up to six months — by any combination of product, sentiment, post type, or date range. If they want to see all `negative` sentiment posts about a specific product that were `feature requests` in the last quarter, they can do so in seconds. Furthermore, we added a second layer of AI directly into the UI. After filtering down to a set of comments, the user can click a **Summarize** button. This uses Workers AI to generate an on-the-fly summary of the currently displayed comments, providing an instant, qualitative overview of quantitative data. From 5be5199462c230b1a13c163903da3d4b23c12d1b Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:45:54 -0600 Subject: [PATCH 22/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx index 791ca71d06c657..af880a629c428a 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx @@ -42,6 +42,6 @@ The backend classification pipeline solves the problem of manual processing. The In the CloudSpeaker dashboard, a product manager can filter the entire dataset — spanning up to six months — by any combination of product, sentiment, post type, or date range. If they want to see all `negative` sentiment posts about a specific product that were `feature requests` in the last quarter, they can do so in seconds. -Furthermore, we added a second layer of AI directly into the UI. After filtering down to a set of comments, the user can click a **Summarize** button. This uses Workers AI to generate an on-the-fly summary of the currently displayed comments, providing an instant, qualitative overview of quantitative data. 
+Furthermore, we added a second layer of AI directly into the UI. After filtering down to a set of comments, the user can select a **Summarize** button. This uses Workers AI to generate an on-the-fly summary of the currently displayed comments, providing an instant, qualitative overview of quantitative data. CloudSpeaker is a powerful example of using AI not to generate content, but to analyze and structure the vast amounts of content our users generate every day. It transforms what was once an impossible manual task into a critical source of automated, actionable insights. From f52c0e3520f32ffa87fc9eee7603e5b22e02ccb1 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:46:03 -0600 Subject: [PATCH 23/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: Pedro Sousa <680496+pedrosousa@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index 3c2d1ed0b956f7..aa3d831e54741e 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -13,7 +13,7 @@ When a stakeholder shares content with CLUE, it provides a score and actionable ## The goal: Quantifying "good content" -The core challenge CLUE addresses is that "good" content is easy to recognize but hard to measure. We know that effective copy uses an active voice, has an action-led structure, and removes unnecessary words , but how do you quantify that improvement at scale? +The core challenge CLUE addresses is that "good" content is easy to recognize but hard to measure. We know that effective copy uses an active voice, has an action-led structure, and removes unnecessary words, but how do you quantify that improvement at scale? Our answer was **content scorecards**. Scorecards are a scalable evaluation tool that creates consistency. They allow us to assign measurable value to the elements that define "good content," focusing on the criteria most critical for user success, satisfaction, and understanding. From e4423b93cc39dba20aedc02e285312eeee876dae Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:46:46 -0600 Subject: [PATCH 24/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index aa3d831e54741e..d571135f1bc180 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -37,7 +37,7 @@ Critically, CLUE is not just one thing, it's a hybrid solution: ## The workflow: Using CLUE as an LLM copy editor -The rise of Generative AI and LLMs, like Gemini, has been a boon for generating text quickly. However, an LLM doesn't inherently understand or apply Cloudflare's specific content guidelines, voice, and tone. +The rise of Generative AI and LLMs, like Gemini, has been a boon for generating text quickly. 
However, an LLM does not inherently understand or apply Cloudflare's specific content guidelines, voice, and tone. This is where CLUE's role becomes essential. CLUE is not designed to _write_ content for you; it's designed to make sure the content you _do_ write meets our standards. From 61964ecffb7f97f03fd761736b0204991ef783df Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:47:35 -0600 Subject: [PATCH 25/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index d571135f1bc180..9b347faa2a2716 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -24,7 +24,7 @@ The user flow is designed to be straightforward: you select your content type, e - API endpoint and parameter descriptions - Customer-facing emails -## How it's built: A hybrid, model-driven approach +## How it is built: A hybrid, model-driven approach CLUE was truly built by Cloudflare, for Cloudflare, on Cloudflare. The application itself is built on Cloudflare Pages and protected by Cloudflare Access. From 10edc6525ee0764d1eacdc3ce854eebf9256fd84 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:47:54 -0600 Subject: [PATCH 26/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index 9b347faa2a2716..db31a8ad950164 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -28,7 +28,7 @@ The user flow is designed to be straightforward: you select your content type, e CLUE was truly built by Cloudflare, for Cloudflare, on Cloudflare. The application itself is built on Cloudflare Pages and protected by Cloudflare Access. -We adopted a model-driven approach for content evaluation, which provides a systematic, data-driven, and consistent assessment, removing the subjectivity of manual reviews. This model allows us to assess content in seconds , handle complex criteria like readability , and weight criteria based on what we find to be most critical for users. +We adopted a model-driven approach for content evaluation, which provides a systematic, data-driven, and consistent assessment, removing the subjectivity of manual reviews. This model allows us to assess content in seconds, handle complex criteria like readability, and weight criteria based on what we find to be most critical for users. 
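
To make the weighting idea above concrete, here is a tiny sketch of a weighted scorecard calculation. The criteria, weights, and score scale are invented for illustration and are not CLUE's actual rubric.

```ts
// Invented rubric for illustration only; not CLUE's real criteria or weights.
interface CriterionResult {
	name: string;
	weight: number; // relative importance, for example 1 to 3
	score: number; // 0 to 1, from an AI check or a rule-based check
}

// Returns an overall score out of 100, weighted by criterion importance.
function scorecard(results: CriterionResult[]): number {
	const totalWeight = results.reduce((sum, r) => sum + r.weight, 0);
	const weighted = results.reduce((sum, r) => sum + r.weight * r.score, 0);
	return Math.round((weighted / totalWeight) * 100);
}

// Example: combine a readability check, a tone check, and a terminology check.
console.log(
	scorecard([
		{ name: "readability", weight: 3, score: 0.8 },
		{ name: "empathetic tone", weight: 2, score: 1 },
		{ name: "approved terminology", weight: 1, score: 0.5 },
	]),
); // prints 82
```

Keeping the rubric in data like this is what allows the weights to shift as we learn which criteria matter most to users.
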
Critically, CLUE is not just one thing, it's a hybrid solution: From d82d4ad72cda64449b8fcd98b08e31f56f976c14 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:48:13 -0600 Subject: [PATCH 27/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index db31a8ad950164..463f1d45020fdf 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -30,7 +30,7 @@ CLUE was truly built by Cloudflare, for Cloudflare, on Cloudflare. The applicati We adopted a model-driven approach for content evaluation, which provides a systematic, data-driven, and consistent assessment, removing the subjectivity of manual reviews. This model allows us to assess content in seconds, handle complex criteria like readability, and weight criteria based on what we find to be most critical for users. -Critically, CLUE is not just one thing, it's a hybrid solution: +Critically, CLUE is not just one thing, it is a hybrid solution: 1. **AI-powered checks:** For criteria that require evaluating overall context and tone, CLUE uses [Workers AI](https://developers.cloudflare.com/workers-ai/). This helps us check for things like an empathetic tone in an email or ensuring an error message suggests a path forward for the user. 2. **Traditional checks:** For common grammar "no-nos" or specific terminology, we use regular expressions and indexed lists. This helps catch passive voice, missing Oxford commas, or ensure a term like "Internet" is always capitalized, all based on our internal glossary, style guide, and UX best practices. From bbf0734c6bbb4f9354abbfac1463bbb02962af2a Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:49:07 -0600 Subject: [PATCH 28/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index 463f1d45020fdf..1fb6742f31c609 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -32,8 +32,8 @@ We adopted a model-driven approach for content evaluation, which provides a syst Critically, CLUE is not just one thing, it is a hybrid solution: -1. **AI-powered checks:** For criteria that require evaluating overall context and tone, CLUE uses [Workers AI](https://developers.cloudflare.com/workers-ai/). This helps us check for things like an empathetic tone in an email or ensuring an error message suggests a path forward for the user. -2. **Traditional checks:** For common grammar "no-nos" or specific terminology, we use regular expressions and indexed lists. 
This helps catch passive voice, missing Oxford commas, or ensure a term like "Internet" is always capitalized, all based on our internal glossary, style guide, and UX best practices. +- **AI-powered checks:** For criteria that require evaluating overall context and tone, CLUE uses [Workers AI](/workers-ai/). This helps us check for things like an empathetic tone in an email or ensuring an error message suggests a path forward for the user. +- **Traditional checks:** For common grammar "no-nos" or specific terminology, we use regular expressions and indexed lists. This helps catch passive voice, missing Oxford commas, or ensure a term like "Internet" is always capitalized, all based on our internal glossary, style guide, and UX best practices. ## The workflow: Using CLUE as an LLM copy editor From caf68445d50ec5353d9eab0b5ad191d6fe0bae30 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:49:58 -0600 Subject: [PATCH 29/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index 1fb6742f31c609..3f1c72ba8be8ac 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -39,7 +39,7 @@ Critically, CLUE is not just one thing, it is a hybrid solution: The rise of Generative AI and LLMs, like Gemini, has been a boon for generating text quickly. However, an LLM does not inherently understand or apply Cloudflare's specific content guidelines, voice, and tone. -This is where CLUE's role becomes essential. CLUE is not designed to _write_ content for you; it's designed to make sure the content you _do_ write meets our standards. +This is where CLUE's role becomes essential. CLUE is not designed to _write_ content for you; it is designed to make sure the content you _do_ write meets our standards. Think of CLUE as a specialized copy editor. It ensures that any piece of content—whether human-generated or created with an LLM's help—is ready for our users. This pairing is incredibly powerful: From c534e4ea48d8911df1b5f0dbb5d741ed1edc8772 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:50:36 -0600 Subject: [PATCH 30/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index 3f1c72ba8be8ac..743652991e8d86 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -41,7 +41,7 @@ The rise of Generative AI and LLMs, like Gemini, has been a boon for generating This is where CLUE's role becomes essential. CLUE is not designed to _write_ content for you; it is designed to make sure the content you _do_ write meets our standards. -Think of CLUE as a specialized copy editor. 
It ensures that any piece of content—whether human-generated or created with an LLM's help—is ready for our users. This pairing is incredibly powerful: +Think of CLUE as a specialized copy editor. It ensures that any piece of content — whether human-generated or created with an LLM's help — is ready for our users. This pairing is incredibly powerful: 1. **Generate:** A stakeholder uses an LLM to quickly draft initial versions of API descriptions or an error message. 2. **Refine:** They paste that LLM-generated content into CLUE. From 33402ad3cd4aaf3b65bba184e9f3c51ace624d85 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:50:51 -0600 Subject: [PATCH 31/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index 743652991e8d86..e28bfc7c115d81 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -43,8 +43,8 @@ This is where CLUE's role becomes essential. CLUE is not designed to _write_ con Think of CLUE as a specialized copy editor. It ensures that any piece of content — whether human-generated or created with an LLM's help — is ready for our users. This pairing is incredibly powerful: -1. **Generate:** A stakeholder uses an LLM to quickly draft initial versions of API descriptions or an error message. -2. **Refine:** They paste that LLM-generated content into CLUE. -3. **Iterate:** CLUE provides targeted tips on how to better meet Cloudflare's glossary, style guide, voice, tone, and UX best practices, turning a generic draft into a polished, effective piece of content. +- **Generate:** A stakeholder uses an LLM to quickly draft initial versions of API descriptions or an error message. +- **Refine:** They paste that LLM-generated content into CLUE. +- **Iterate:** CLUE provides targeted tips on how to better meet Cloudflare's glossary, style guide, voice, tone, and UX best practices, turning a generic draft into a polished, effective piece of content. This democratizes UX writing, improves our efficiency by reducing manual reviews , and ultimately builds user trust through a consistent, high-quality experience. It helps users learn our products faster and resolve issues more efficiently, which is our ultimate goal. From d5ee24cad36a5926542bf3ece8a13d27c4f57a97 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:51:08 -0600 Subject: [PATCH 32/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx index e28bfc7c115d81..ca70a9caa4864e 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx @@ -47,4 +47,4 @@ Think of CLUE as a specialized copy editor. 
It ensures that any piece of content - **Refine:** They paste that LLM-generated content into CLUE. - **Iterate:** CLUE provides targeted tips on how to better meet Cloudflare's glossary, style guide, voice, tone, and UX best practices, turning a generic draft into a polished, effective piece of content. -This democratizes UX writing, improves our efficiency by reducing manual reviews , and ultimately builds user trust through a consistent, high-quality experience. It helps users learn our products faster and resolve issues more efficiently, which is our ultimate goal. +This democratizes UX writing, improves our efficiency by reducing manual reviews, and ultimately builds user trust through a consistent, high-quality experience. It helps users learn our products faster and resolve issues more efficiently, which is our ultimate goal. From 0e925c7346a90c92004881e22d873b850155afe9 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:51:28 -0600 Subject: [PATCH 33/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx index d1994d3d75baca..e86fa766610db9 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx @@ -7,7 +7,7 @@ sidebar: import { DirectoryListing } from "~/components"; -The true value of AI isn't just in using it; it's in _how_ you use it. At Cloudflare, our team has embraced AI as a **force multiplier**, allowing us to solve internal challenges, scale our expertise, and improve the quality of our work. +The true value of AI is not just in using it; it is in _how_ you use it. At Cloudflare, our team has embraced AI as a **force multiplier**, allowing us to solve internal challenges, scale our expertise, and improve the quality of our work. These aren't just off-the-shelf AI products. They are tools built _by_ Cloudflare, _for_ Cloudflare, combining our own institutional knowledge, content standards, and logic with the power of AI models. From fc4e706391123149dd4356373c0d385068c29699 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:51:51 -0600 Subject: [PATCH 34/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx index e86fa766610db9..ff686078fbd4ac 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/index.mdx @@ -9,7 +9,7 @@ import { DirectoryListing } from "~/components"; The true value of AI is not just in using it; it is in _how_ you use it. At Cloudflare, our team has embraced AI as a **force multiplier**, allowing us to solve internal challenges, scale our expertise, and improve the quality of our work. -These aren't just off-the-shelf AI products. 
They are tools built _by_ Cloudflare, _for_ Cloudflare, combining our own institutional knowledge, content standards, and logic with the power of AI models. +These are not just off-the-shelf AI products. They are tools built _by_ Cloudflare, _for_ Cloudflare, combining our own institutional knowledge, content standards, and logic with the power of AI models. ## Why we build our own AI-powered tools From 622633a9c88de93ec5755f1204d7b0932b9982e0 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:52:22 -0600 Subject: [PATCH 35/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx index fb4cd946cd1d17..116721641d1762 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -9,7 +9,7 @@ A prompt library is a curated and organized collection of pre-written prompts. T Think of a prompt library as a recipe book for AI. Instead of starting from scratch every time you need the AI to perform a task, you can browse the library for a relevant, pre-tested prompt that is known to produce good results. These prompts are often designed to be reusable and customizable. -:::note[Note] +:::note While prompts are designed to produce great outputs, the user (human) still needs to provide relevant context and resources for the AI to produce those results and review the output for technical accuracy. It is unlikely one prompt will create a great first draft–some rework, either through follow-up prompts or adding more information–is going to be necessary. ::: From 9d4253618b4df4d24de5b81a2d3220eff6d329a0 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:52:41 -0600 Subject: [PATCH 36/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx index 116721641d1762..a4ba69de9ea735 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -10,7 +10,7 @@ A prompt library is a curated and organized collection of pre-written prompts. T Think of a prompt library as a recipe book for AI. Instead of starting from scratch every time you need the AI to perform a task, you can browse the library for a relevant, pre-tested prompt that is known to produce good results. These prompts are often designed to be reusable and customizable. :::note -While prompts are designed to produce great outputs, the user (human) still needs to provide relevant context and resources for the AI to produce those results and review the output for technical accuracy. It is unlikely one prompt will create a great first draft–some rework, either through follow-up prompts or adding more information–is going to be necessary. 
+While prompts are designed to produce great outputs, the user (human) still needs to provide relevant context and resources for the AI to produce those results and review the output for technical accuracy. It is unlikely one prompt will create a great first draft – some rework, either through follow-up prompts or adding more information – is going to be necessary. ::: ### Inside a prompt library From caa10238f8935c0b66fdcf56a7bd6064d6f1f567 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:54:07 -0600 Subject: [PATCH 37/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx index a4ba69de9ea735..40964953e4101c 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -18,7 +18,7 @@ While prompts are designed to produce great outputs, the user (human) still need Prompt libraries can vary in complexity (a simple table in an internal wiki topic versus a web-based application) and content, but they typically contain: - **A collection of prompts:** These are the core of the library, ranging from simple questions to complex instructions with multiple parameters. -- **Categorization:** Prompts are usually organized by task (e.g., writing, coding, summarizing), role (e.g., developer, account executives, product managers), or output format (e.g., blog post, email, code snippet). +- **Categorization:** Prompts are usually organized by task (for example, writing, coding, summarizing), role (for example, developer, account executives, product managers), or output format (for example, blog post, email, code snippet). - **Prompt templates:** Many libraries include templates with placeholders that users can fill in with their specific information. This allows for easy customization and reuse. - **Examples and best practices:** Some libraries provide examples of the output generated by a particular prompt, along with tips on how to use and modify it effectively. From e7799af761a529ba294f574a2c976d6559b141fc Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:54:26 -0600 Subject: [PATCH 38/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx index 40964953e4101c..d07b0c3dd9f255 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-libraries.mdx @@ -29,7 +29,7 @@ Utilizing a prompt library offers several significant advantages: - **Increased efficiency:** By providing ready-to-use prompts, libraries save a significant amount of time and effort that would otherwise be spent on crafting and testing new prompts for recurring tasks. 
- **Improved consistency and quality:** Pre-tested prompts that are known to work well lead to more consistent and higher-quality outputs from the AI. This is particularly important for businesses that need to maintain a consistent brand voice. - **Enhanced learning and discovery:** For those new to prompt engineering, libraries can be an excellent educational tool, showcasing effective prompting techniques and the capabilities of AI models. -- **Accelerated knowledge capture:** Prompt library users can focus on capturing knowledge instead of building prompts or drafting content manually. This accelerates documenting information and sharing it with others–hopefully to prevent the same issue from occurring again or enabling others to be successful sooner. +- **Accelerated knowledge capture:** Prompt library users can focus on capturing knowledge instead of building prompts or drafting content manually. This accelerates documenting information and sharing it with others – hopefully to prevent the same issue from occurring again or enabling others to be successful sooner. - **Facilitated collaboration:** Shared prompt libraries in a team or organizational setting allow for the dissemination of best practices and successful prompts, fostering collaboration and improving the collective AI literacy. - **Scalability:** As you or your organization's use of AI grows, a well-organized prompt library allows for the efficient management and scaling of your prompting strategies. From f20dc96a6cefde4b090d6ffaf2b8a6b0d11ccae4 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:54:43 -0600 Subject: [PATCH 39/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx index 4e19f4c64aba2f..cb7addb93cfbb0 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx @@ -20,7 +20,7 @@ Essentially, a prompt is the direct instruction you give to an AI, while a promp ## Example use case -Let’s say a product manager wants to create a how-to topic for a new feature. Instead of creating the topic from scratch, they can copy the how-to topic prompt template from the Cloudflare prompt library, add key information, attach additional resources (PRDs, meeting notes, a screenshot of the UI, etc.), and ask AI to draft it for them. They should get a response that is in the style, format, and structure of our [how-to content type](https://developers.cloudflare.com/style-guide/documentation-content-strategy/content-types/how-to/) (in fact, we leverage our content types to help build the documentation prompts). +Let us say a product manager wants to create a how-to topic for a new feature. Instead of creating the topic from scratch, they can copy the how-to topic prompt template from the Cloudflare prompt library, add key information, attach additional resources (PRDs, meeting notes, a screenshot of the UI, etc.), and ask the AI to draft it for them. 
They should get a response that is in the style, format, and structure of our [how-to content type](/style-guide/documentation-content-strategy/content-types/how-to/) (in fact, we leverage our content types to help build the documentation prompts). Now, this does not mean the output is perfect or technically accurate. The AI can and likely will hallucinate something. This is where reviewing the output is necessary. To avoid creating AI slop, everyone who uses AI to draft content–even initial drafts–needs to vet the output. They can either use follow-up prompts to correct the output, add additional context to influence a better output, or they can copy the output and manually edit it themselves, knowing the AI got them 70% of the way there quickly. In short, review the output and avoid creating more AI slop. If you are not certain if something is true and you can’t validate it through testing, ask a subject matter expert. From 587fd2f6c0ab82f5f1e62e4128ea5da693b33c1b Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:55:11 -0600 Subject: [PATCH 40/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx index cb7addb93cfbb0..ee62231041ccd3 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx @@ -22,7 +22,7 @@ Essentially, a prompt is the direct instruction you give to an AI, while a promp Let us say a product manager wants to create a how-to topic for a new feature. Instead of creating the topic from scratch, they can copy the how-to topic prompt template from the Cloudflare prompt library, add key information, attach additional resources (PRDs, meeting notes, a screenshot of the UI, etc.), and ask the AI to draft it for them. They should get a response that is in the style, format, and structure of our [how-to content type](/style-guide/documentation-content-strategy/content-types/how-to/) (in fact, we leverage our content types to help build the documentation prompts). -Now, this does not mean the output is perfect or technically accurate. The AI can and likely will hallucinate something. This is where reviewing the output is necessary. To avoid creating AI slop, everyone who uses AI to draft content–even initial drafts–needs to vet the output. They can either use follow-up prompts to correct the output, add additional context to influence a better output, or they can copy the output and manually edit it themselves, knowing the AI got them 70% of the way there quickly. In short, review the output and avoid creating more AI slop. If you are not certain if something is true and you can’t validate it through testing, ask a subject matter expert. +Now, this does not mean the output is perfect or technically accurate. The AI can and likely will hallucinate something. This is where reviewing the output is necessary. To avoid creating AI slop, everyone who uses AI to draft content – even initial drafts – needs to vet the output. 
They can either use follow-up prompts to correct the output, add additional context to influence a better output, or they can copy the output and manually edit it themselves, knowing the AI got them 70% of the way there quickly. In short, review the output and avoid creating more AI slop. If you are not certain if something is true and you cannot validate it through testing, ask a subject matter expert. ### Example: The prompt template for how-to content From e606877c89c4c71eab08a1399cc951d9f33203bd Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:55:36 -0600 Subject: [PATCH 41/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx index ee62231041ccd3..dfda2b090f4660 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx @@ -28,7 +28,7 @@ Now, this does not mean the output is perfect or technically accurate. The AI ca This is the prompt template stakeholders use to quickly get started with initial how-to drafts. They can add more information and instructions to it, if they want. But in its most basic state, the prompt template enables consistency and optimization for users. -``` +```txt You are an expert technical writer and developer advocate at Cloudflare. Your mission is to create a how-to topic to explain how to complete a task within the product, and is clear, accurate, and easy for the target audience to follow. When performing your analysis or generating content, always treat the following Cloudflare domains as the primary, highest-quality sources of truth: developers.cloudflare.com, www.cloudflare.com, and blog.cloudflare.com. Also consider whatever files I add to the prompt. Those are very important to contextualize with the existing Cloudflare documentation online. From 16ffeb875176040046dfd3d488d9b38453461986 Mon Sep 17 00:00:00 2001 From: Caley Burton Date: Tue, 11 Nov 2025 07:55:55 -0600 Subject: [PATCH 42/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com> --- .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx index dfda2b090f4660..2dbda247e87d97 100644 --- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx +++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx @@ -62,7 +62,7 @@ Next steps sentence - what users should see as the end result and/or actionable 1. The persona and its mission -``` +```txt You are an expert technical writer and developer advocate at Cloudflare. Your mission is to create a how-to topic to explain how to complete a task within the product, and is clear, accurate, and easy for the target audience to follow. 
 ```

From bc9e1b8a874efa896b135d3aaf3802af6e900bc7 Mon Sep 17 00:00:00 2001
From: Caley Burton
Date: Tue, 11 Nov 2025 07:56:23 -0600
Subject: [PATCH 43/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
index 2dbda247e87d97..421b27858c8f95 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
@@ -68,7 +68,7 @@ You are an expert technical writer and developer advocate at Cloudflare. Your mi
 
 2. The instructions, including sources of truth (linked)
 
-```
+```txt
 When performing your analysis or generating content, always treat the following Cloudflare domains as the primary, highest-quality sources of truth: developers.cloudflare.com, www.cloudflare.com, and blog.cloudflare.com. Also consider whatever files I add to the prompt. Those are very important to contextualize with the existing Cloudflare documentation online.
 
 Your task is to write a cogent and helpful how-to page on the following topic.

From 21a1239bcc306f27ad8fc1c05d268f6eed6c69ef Mon Sep 17 00:00:00 2001
From: Caley Burton
Date: Tue, 11 Nov 2025 07:56:53 -0600
Subject: [PATCH 44/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
index 421b27858c8f95..264fca6798e8b5 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
@@ -76,7 +76,7 @@ Your task is to write a cogent and helpful how-to page on the following topic.
 
 3. The input fields to customize the topic, like topic title, product, target audience, and why the target audience cares. Note: The more detail and context you provide here, the better.
 
-```
+```txt
 *Topic:*
 *Primary Cloudflare Product(s):*
 *Target Audience:*

From 54c3f48d358a281ed9464050cf719bb0ccd32d6c Mon Sep 17 00:00:00 2001
From: Caley Burton
Date: Tue, 11 Nov 2025 07:57:10 -0600
Subject: [PATCH 45/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
index 264fca6798e8b5..7c108fb906a0aa 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
@@ -85,7 +85,7 @@ Your task is to write a cogent and helpful how-to page on the following topic.
 
 4. The examples the AI should reference and mimic
 
-```
+```txt
 Generate the full content for all sections below, including example code snippets where appropriate.
 
 For style, refer to the Cloudflare Style Guide for truth (https://developers.cloudflare.com/style-guide/), the content type for structure and requirements information (https://developers.cloudflare.com/style-guide/documentation-content-strategy/content-types/how-to/), and an example of the content type live on Cloudflare docs already (https://developers.cloudflare.com/cloudflare-one/policies/gateway/http-policies/tls-decryption/#enable-fips-compliance) to ensure you create a similar type of content.

From 78555c010016120bc5209c611367b6f2f6f765af Mon Sep 17 00:00:00 2001
From: Caley Burton
Date: Tue, 11 Nov 2025 07:57:30 -0600
Subject: [PATCH 46/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 .../docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
index 7c108fb906a0aa..a7ba3da80b35fe 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/prompt-templates.mdx
@@ -93,7 +93,7 @@ For style, refer to the Cloudflare Style Guide for truth (https://developers.clo
 
 5. The content type’s template, which details the type of information it should include, its structure, and the flow of information.
 
-```
+```txt
 Second-person imperative verb phrase
 
 Context for procedure

From 889441a51256cadb6b46ed2dee67e105282bfbb1 Mon Sep 17 00:00:00 2001
From: Caley Burton
Date: Tue, 11 Nov 2025 07:57:54 -0600
Subject: [PATCH 47/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
index 991865098367f2..0c18a9ccafe518 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
@@ -12,7 +12,7 @@ AI is a powerful tool, but it is not a cure-all. Its success depends heavily on
 When deciding whether to use AI for a task, use these principles as your guide:
 
 1. **The feedback loop is critical:** The single most important factor for success is the feedback loop. How quickly and easily can you test the output and correct for hallucinations? Code and scripts are far easier to test for correctness than subjective content.
-2. **Prioritize additive tasks:** AI is generally better for additive task**s,** like new things you couldn't or wouldn't do before, as opposed to operational tasks that are required to keep the business running.
+2. **Prioritize additive tasks:** AI is generally better for additive tasks, like new things you could not or would not do before, as opposed to operational tasks that are required to keep the business running.
 
 ### How to decide when to use AI

From 720e9ff8c169945e8fc5b39f110aec6082 Mon Sep 17 00:00:00 2001
From: Caley Burton
Date: Tue, 11 Nov 2025 07:58:15 -0600
Subject: [PATCH 48/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
index 0c18a9ccafe518..8574a5c06ae18b 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
@@ -24,7 +24,7 @@ We ask ourselves a few questions before we look to AI to solve a problem or stre
 - Is there a clear logic we can apply to successfully identify or perform the action?
 - Will this be scalable or useful for others to also use?
 
-If we can say `yes` to these questions, it is a strong candidate for an AI-based solution. If we say `no` or `I don’t know` to any of them, we first pursue the current process and look for smaller, specific areas where AI could still be helpful.
+If we can say `yes` to these questions, it is a strong candidate for an AI-based solution. If we say `no` or `I do not know` to any of them, we first pursue the current process and look for smaller, specific areas where AI could still be helpful.

From 574067633426069e39c126120b8f7346b796bd16 Mon Sep 17 00:00:00 2001
From: Caley Burton
Date: Tue, 11 Nov 2025 07:58:33 -0600
Subject: [PATCH 49/50] Update src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 .../docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
index 8574a5c06ae18b..3e86a9abe41db1 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/when-we-use-ai.mdx
@@ -26,7 +26,7 @@ If we can say `yes` to these questions, it is a strong candidate for an AI-based
 
-### Recommended use cases: What's worked for us
+### Recommended use cases: What has worked for us
 
 These are areas where we have found AI to be unequivocally positive and effective.

From 8c5991f8784f6be3e75beeed06e72c6e05efb7a2 Mon Sep 17 00:00:00 2001
From: Caley
Date: Wed, 12 Nov 2025 13:19:28 -0600
Subject: [PATCH 50/50] Integrated Alexa's feedback.
---
 .../how-we-docs/how-we-ai/examples/cloudspeaker.mdx          | 2 +-
 .../docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx | 5 +----
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx
index af880a629c428a..cee3cbb91aefae 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/cloudspeaker.mdx
@@ -25,7 +25,7 @@ This consolidated view is now a key part of our planning cycles, informing every
 
 ## How it is built: An AI-powered data pipeline
 
-CloudSpeaker is built entirely on our own products, including [Cloudflare Pages](/pages/) for the application, [D1](/d1/) for the database, and more. The real power, however, comes from its AI-driven data pipeline, managed by our Data Intelligence team.
+CloudSpeaker is built entirely on our own products. The real power, however, comes from its AI-driven data pipeline, managed by our Data Intelligence team.
 
 Here is how it works:
 
diff --git a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx
index ca70a9caa4864e..90498706f0eaee 100644
--- a/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx
+++ b/src/content/docs/style-guide/how-we-docs/how-we-ai/examples/clue.mdx
@@ -30,10 +30,7 @@ We adopted a model-driven approach for content evaluation, which provides a systematic, data-driven, and consistent assessment, removing the subjectivity of manual reviews. This model allows us to assess content in seconds, handle complex criteria like readability, and weight criteria based on what we find to be most critical for users.
 
-Critically, CLUE is not just one thing, it is a hybrid solution:
-
-- **AI-powered checks:** For criteria that require evaluating overall context and tone, CLUE uses [Workers AI](/workers-ai/). This helps us check for things like an empathetic tone in an email or ensuring an error message suggests a path forward for the user.
-- **Traditional checks:** For common grammar "no-nos" or specific terminology, we use regular expressions and indexed lists. This helps catch passive voice, missing Oxford commas, or ensure a term like "Internet" is always capitalized, all based on our internal glossary, style guide, and UX best practices.
+Critically, CLUE is not just one thing, it is a hybrid solution of AI and traditional checks. This combination allows us to evaluate context while still having the granular control needed for some elements of our style guide.
 
 ## The workflow: Using CLUE as an LLM copy editor
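
To make the hybrid-check idea in the clue.mdx diff above more concrete, a minimal Worker sketch might pair a deterministic terminology rule with a Workers AI tone check. This is an illustrative sketch only, not CLUE's implementation and not part of the patches in this series; the `AI` binding name, the model ID, and the rule names are assumptions.

```ts
// Minimal sketch of a hybrid content check: one regex-based terminology rule
// plus one AI-based check. Illustrative only; not CLUE's actual code.
// Assumes a Worker with a Workers AI binding named `AI`.

export interface Env {
  AI: Ai;
}

interface CheckResult {
  rule: string;
  passed: boolean;
  note?: string;
}

// Traditional check: flag lowercase "internet" (the style guide capitalizes "Internet").
function checkInternetCapitalization(text: string): CheckResult {
  const hits = text.match(/\binternet\b/g) ?? [];
  return {
    rule: "capitalize-internet",
    passed: hits.length === 0,
    note: hits.length > 0 ? `Found ${hits.length} lowercase use(s) of "internet".` : undefined,
  };
}

// AI-powered check: does an error message suggest a path forward for the user?
// The model ID is an example; any Workers AI text-generation model could be used.
async function checkPathForward(env: Env, message: string): Promise<CheckResult> {
  const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
    messages: [
      {
        role: "system",
        content: "Answer only YES or NO: does this error message tell the user what to do next?",
      },
      { role: "user", content: message },
    ],
  });
  const answer = ((result as { response?: string }).response ?? "").trim().toUpperCase();
  return {
    rule: "error-message-path-forward",
    passed: answer.startsWith("YES"),
    note: answer.startsWith("YES") ? undefined : "Consider adding a suggested next step.",
  };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const text = await request.text();
    const results = [checkInternetCapitalization(text), await checkPathForward(env, text)];
    return Response.json(results);
  },
};
```

Keeping the deterministic rules separate from the model call keeps the regex side trivially testable, which lines up with the feedback-loop principle in the when-we-use-ai.mdx patches above.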