What is documentation?

Documentation is written information that describes and explains a product, system, or service. It can take many forms, such as user manuals, technical guides, and online help resources, and it is typically used to provide information and instructions to users of a product or service and to support its development and maintenance.

Internal documentation is created and used within an organization and is typically not intended for external use. It can include design and implementation plans, technical specifications, and internal processes and procedures. It helps teams within an organization understand and work with a product or service and supports its ongoing development and maintenance.

External documentation, on the other hand, is intended for external stakeholders, such as customers, partners, or users of a product or service. It can include user manuals, online help resources, API documentation, and technical guides, and it supports those users in getting the most out of the product or service.

Both internal and external documentation, when done right, can take your developer experience and user experience to a different level. External documentation is no replacement for a good product, but few good products can succeed in the market without solid documentation.

Types of documentation

External documentation

External documentation refers to the written materials created for users of a software system. It can be divided into several categories:

- End-user documentation: Intended for the end users of a software system, who are typically non-technical individuals. It includes user manuals, help files, and online tutorials that explain how to use the software and troubleshoot common issues.
- Enterprise user documentation: Similar to end-user documentation, but targeted at enterprise users who are responsible for managing and maintaining the software within their organization. It may include information on how to install and configure the software, how to perform maintenance tasks, and how to troubleshoot issues.
- API documentation: Relevant for some products; aimed at developers who extend the product or interact with it programmatically.
- Just-in-time documentation: Created on an as-needed basis rather than being shipped with the software itself. It is often used when the software is highly complex or changes frequently and traditional documentation may not be sufficient. It may include online resources such as FAQs, forums, and wikis, which users can consult when they need help with specific tasks or issues.

Internal documentation

Internal documentation refers to the written materials created for the development team rather than for external users of a software system. It can be divided into several categories:

- Code documentation: Provides detailed information on the components of a software system and how developers can work with them. It might include any information a developer needs to get started with the system, integrate with it, or participate in its development (a minimal illustration follows this list).
- Process documentation: Describes the processes and procedures the development team follows when creating, testing, and maintaining software. It may include information on the development methodology, code review process, and testing procedures.
- Project documentation: Describes the overall goals, requirements, and design of a software project. It may include user stories, acceptance criteria, and technical specifications.
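To make "code documentation" concrete, here's a minimal, illustrative Python sketch: a hypothetical function (the name and behavior are invented for this example, not taken from any real codebase) documented with a docstring that tells a developer what it does, what it expects, and how it can fail.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return `price` reduced by `percent`.

    Args:
        price: Original price; must be non-negative.
        percent: Discount percentage in the range [0, 100].

    Returns:
        The discounted price.

    Raises:
        ValueError: If `price` is negative or `percent` is outside [0, 100].
    """
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)
```

Even a few lines like these spare the next developer from reverse-engineering intent from the implementation.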
In June 2022, Astral Codex Ten made the following bet on generative AI:
We give an image generation model the following 5 prompts:
- A stained glass picture of a woman in a library with a raven on her shoulder with a key in its mouth
- An oil painting of a man in a factory looking at a cat wearing a top hat
- A digital art picture of a child riding a llama with a bell on its tail through a desert
- A 3D render of an astronaut in space holding a fox wearing lipstick
- Pixel art of a farmer in a cathedral holding a red basketball
For each prompt, we generate 10 images. If at least one of the ten images has the scene perfectly correct on at least 3 of the 5 prompts, then ACT wins his bet.
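The scoring rule is simple enough to state in a few lines of Python. This is just a sketch of the rule as described above; the per-image verdicts below are illustrative placeholders, not real evaluation data.

```python
# Placeholder verdicts: 10 per-image judgments ("perfectly correct"?) per prompt.
judgments = {
    "raven":      [False] * 10,
    "cat":        [False] * 9 + [True],
    "llama":      [False] * 10,
    "astronaut":  [False] * 10,
    "basketball": [False] * 9 + [True],
}

# A prompt passes if at least one of its 10 images is perfectly correct.
prompts_passed = sum(any(images) for images in judgments.values())

# ACT wins the bet if at least 3 of the 5 prompts pass.
act_wins = prompts_passed >= 3
print(f"{prompts_passed}/5 prompts passed; bet won: {act_wins}")
```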
At the time, DALL·E 2 failed all 5 prompts:

But when Google released its Imagen model a few months later, ACT claimed the bet was won – that the cat prompt, the llama prompt, and the basketball prompt now succeeded.

Looking closely at the generations, though, we disagreed:
- In Imagen's raven generation, none of the ravens was on the woman's shoulder, and none had a key in its mouth.
- In its llama generation, none had a bell on its tail.
- In its astronaut generation, none of the foxes was wearing lipstick.
- In its basketball generation, none was recognizably a farmer, most of the basketballs weren’t red, and the cathedral was dubious.
Luckily, when the world needed a third-party AI judge, it knew who to turn to.
(Why are we the right judges? Behind the scenes, we’re the world’s largest RLHF and human LLM evaluation platform – training and measuring every next-gen LLM on millions of prompts every day.)


But that was 1.5 years ago. What about now?
Conclusion
We evaluated DALL·E 3 and Midjourney on ACT’s 5 prompts.
- DALL·E 3 met ACT’s criteria on 2 out of 5 (the cat prompt and the farmer prompt). It completely failed on the llama and raven prompts, but came close on the astronaut prompt.
- Midjourney failed all 5 prompts.
Call me Edwin Marcus, but the bet is not yet won.
Methodology
We first gathered 10 DALL·E 3 generations and 10 Midjourney generations for each of the prompts. Then we asked 5 Surgers – our human evaluators – to rate the accuracy of each generation.
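For flavor, here's a minimal sketch of how one might script the collection and aggregation steps. The generation call uses OpenAI's public Images API (which returns one image per request for DALL·E 3, hence the loop); the majority-vote aggregation is our own illustrative stand-in, not our actual rating rubric, and Midjourney generations were gathered separately since it has no comparable public API.

```python
from openai import OpenAI  # assumes the `openai` package and an API key in the env

client = OpenAI()

prompts = [
    "A stained glass picture of a woman in a library with a raven on her "
    "shoulder with a key in its mouth",
    "An oil painting of a man in a factory looking at a cat wearing a top hat",
    "A digital art picture of a child riding a llama with a bell on its tail "
    "through a desert",
    "A 3D render of an astronaut in space holding a fox wearing lipstick",
    "Pixel art of a farmer in a cathedral holding a red basketball",
]

# DALL·E 3 returns a single image per request, so call it 10 times per prompt.
generations = {
    prompt: [
        client.images.generate(model="dall-e-3", prompt=prompt, n=1).data[0].url
        for _ in range(10)
    ]
    for prompt in prompts
}

# Illustrative aggregation: count an image as "perfectly correct" only if a
# majority of the 5 evaluators said so.
def image_passes(ratings: list[bool]) -> bool:
    return sum(ratings) > len(ratings) / 2
```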
Here’s a sample of the DALL·E 3 generations.