#4391 Prompt for Assessing a Document for "Recommended Minimum Content"; +Requirements List
I have a software development document and a list of "recommended minimum content". Provide a good prompt to evaluate the document for the recommended content.
To evaluate a software document against a specific set of content recommendations, you need a prompt that enforces structural rigor and objective mapping.
The following prompt uses a "Compliance Matrix" approach, which is often the most effective way to identify gaps without the AI claiming content is present when it is not.
The Recommended Prompt
System Role: You are an expert Software Quality Assurance Auditor specializing in technical documentation and requirement traceability.
Task: Perform a gap analysis of the provided [Document Title/Type] against the "Recommended Minimum Content" list.
Evaluation Criteria:
Presence: Is the recommended content explicitly addressed?
Sufficiency: Does the detail provided meet the intent of the recommendation?
Location: Where in the document is this content found (Section/Page)?
Output Format: Provide the results in a Markdown table with the following columns:
Recommended Content Item: The name/description of the requirement.
Status: [Fully Met | Partially Met | Missing].
Evidence/Location: Direct quote or section reference.
Gap/Action: If missing or partial, describe exactly what needs to be added.
Constraints:
Do not infer information. If it is not explicitly stated in the text, mark it as "Missing."
If a requirement is "Partially Met," explain what specific nuance or data point is absent.
[ATTACH/PASTE THE RECOMMENDED MINIMUM CONTENT HERE]
[ATTACH/PASTE THE SOFTWARE DOCUMENT HERE]
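If you run this evaluation repeatedly, the prompt above can be assembled programmatically rather than pasted by hand. The sketch below is a minimal illustration; the template text is abbreviated and the function and variable names (`GAP_ANALYSIS_TEMPLATE`, `build_gap_analysis_prompt`) are ours, not a fixed API.

```python
# Minimal sketch: fill the gap-analysis prompt template with the
# recommendation list and the document text. Abbreviated template;
# paste the full prompt text from above in practice.
GAP_ANALYSIS_TEMPLATE = """\
System Role: You are an expert Software Quality Assurance Auditor.
Task: Perform a gap analysis of {title} against the "Recommended Minimum Content" list.

Recommended Minimum Content:
{recommendations}

Document under review:
{document}
"""

def build_gap_analysis_prompt(title: str, recommendations: str, document: str) -> str:
    """Return the assembled prompt. Note: str.format will choke on
    literal braces in the document text; escape them if present."""
    return GAP_ANALYSIS_TEMPLATE.format(
        title=title,
        recommendations=recommendations,
        document=document,
    )
```

Keeping the template in one place makes it easy to version-control the audit prompt alongside the recommendation list itself.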
Tips for Better Results
Define the "Shall": If your recommended list uses specific keywords (e.g., "must," "should," or "shall"), tell the AI to prioritize those as mandatory compliance points.
Segment Large Docs: If the software document is exceptionally long (e.g., a 100-page SRS), process it section by section or focus the AI on the Table of Contents first to map high-level coverage.
Recursive Check: After the AI provides the table, you can follow up with: "Based on the 'Missing' items, suggest a draft for a new Section [X] that would satisfy these requirements."
Does the document you are evaluating follow a specific regulatory standard, or is it an internal team template?
Switch gears a little. I have a document and a list of requirements (including shalls). Provide a prompt to evaluate the document for requirements compliance.
To shift from general content evaluation to strict requirements compliance, the prompt needs to move from "presence of information" to "traceability and verification."
Since you are dealing with "shalls," the AI must act as a logic checker to ensure that for every requirement, there is a corresponding implementation or descriptive proof within the document.
The Requirements Compliance Prompt
System Role: You are a Senior Systems Engineer and Compliance Auditor. Your task is to perform a formal Requirements Traceability Matrix (RTM) analysis between a set of "Shall" requirements and a target Software Document.
Task: Analyze the provided document to determine if it satisfies each individual requirement listed.
Analysis Protocol:
Requirement Extraction: Treat every "Shall" statement as a unique, non-negotiable data point.
Verification Mapping: Search the document for explicit evidence that fulfills the "Shall."
Binary Assessment: A requirement is only "Compliant" if the document explicitly describes how the requirement is met. If the document merely mentions the topic but lacks the specific directive action, mark it as "Non-Compliant."
Output Format: Provide a Markdown table:
| ID | Requirement (Shall Statement) | Compliance Status | Document Reference (Section/Quote) | Findings/Gap Analysis |
|---|---|---|---|---|
| [ID] | [The text of the requirement] | [Compliant / Non-Compliant / Partial] | [Section # or "N/A"] | [Explain why it passes or exactly what is missing] |

Strict Rules:
No Assumptions: If a requirement asks for "redundant power" and the document only mentions "power," mark it as Non-Compliant due to missing redundancy details.
Terminology Match: Flag any discrepancies where document terminology conflicts with requirement terminology.
[PASTE LIST OF REQUIREMENTS/SHALLS HERE]
[PASTE DOCUMENT TEXT HERE]
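Because the prompt pins down a five-column Markdown table, the model's answer can be parsed back into structured records for tracking. This is a hedged sketch assuming the model follows the requested layout exactly; `parse_compliance_table` is an illustrative helper, not part of any library.

```python
def parse_compliance_table(markdown: str) -> list[dict]:
    """Parse the five-column RTM table requested in the prompt above:
    ID | Requirement | Compliance Status | Document Reference | Findings.
    Header and separator rows are skipped."""
    rows = []
    for line in markdown.splitlines():
        if not line.strip().startswith("|"):
            continue
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip malformed rows, the "---" separator, and the header row.
        if len(cells) != 5 or set(cells[0]) <= {"-", " ", ":"} or cells[0] == "ID":
            continue
        rows.append({
            "id": cells[0],
            "requirement": cells[1],
            "status": cells[2],
            "reference": cells[3],
            "findings": cells[4],
        })
    return rows
```

From the parsed rows you can, for example, count Non-Compliant items per review cycle or export the matrix to a tracking spreadsheet.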
Key Adjustments for "Shalls"
The "Gap Analysis" Focus: In formal compliance, the "Findings" column is the most important. It should highlight the specific delta between what the requirement demands and what the text provides.
Logical Negation: You can add a secondary instruction: "Identify any statements in the document that directly contradict a 'Shall' requirement." This is often more critical than finding missing info.
Ambiguity Flagging: If a requirement is met but the language in the document is "weak" (using words like should, may, or can instead of will or shall), instruct the AI to flag this as a Technical Risk.
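The weak-language check can also be run locally before the document ever reaches the model, as a cheap pre-screen. The sketch below is a simple regex scan; the `WEAK_TERMS` list is an assumption for illustration, not a standard vocabulary.

```python
import re

# Assumed list of "weak" modal verbs to flag; tailor to your standard.
WEAK_TERMS = ("should", "may", "can", "might", "could")

def flag_weak_language(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that contain weak modal verbs,
    using word boundaries so 'shall' does not match 'should' etc."""
    pattern = re.compile(r"\b(" + "|".join(WEAK_TERMS) + r")\b", re.IGNORECASE)
    return [(i, line)
            for i, line in enumerate(text.splitlines(), 1)
            if pattern.search(line)]
```

Flagged lines can then be fed to the model with an explicit instruction to classify each as an acceptable statement or a Technical Risk.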
Are these requirements derived from a specific technical standard (like an IEEE or NASA standard), or are they custom project-level requirements?
Given the list of requirements, would it be better to include the list in a single prompt, or use a prompt per requirement?
Deciding between a single batch prompt or a requirement-by-requirement approach depends largely on the context window of the LLM you are using and the density of your "shall" statements.
Given the technical complexity and safety-critical nature of the documents you handle, here is the breakdown of which strategy to choose:
1. The Single Prompt (Batch) Approach
Best for: Short-to-medium lists (under 30 requirements) and high-level mapping.
Pros: Much faster; the AI can see the relationship between requirements; it can identify if one section of the document satisfies multiple "shalls" simultaneously.
Cons: "Attention Sink." Large language models can sometimes skip requirements in the middle of a long list or provide thinner analysis for the final few items.
2. The Per-Requirement (Atomic) Approach
Best for: Safety-critical "shalls" (e.g., NASA Class A/B software), complex logic, or very long requirement lists.
Pros: Maximum "reasoning density" per requirement. The AI focuses 100% of its attention on finding a match for a single specific directive. It is far less likely to miss nuances or subtle non-compliance.
Cons: Very time-consuming if done manually; consumes more tokens/API calls; can lose the "big picture" of how requirements interact.
The "Hybrid" Recommendation (The Gold Standard)
For high-rigor software assurance, a Grouped/Modular Approach is usually the most effective middle ground.
Instead of one or fifty prompts, group your requirements by Functional Area (e.g., all "Error Handling" shalls, then all "Interface" shalls).
Structure the prompt like this:
Load the Document: Feed the document into the system prompt or the first message.
Modular Evaluation:
"Using the document provided above, evaluate the following 5 requirements related to [Specific Subsystem/Class]. Provide a deep-dive analysis for each."
Iterate: Repeat for the next batch.
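The grouped workflow above can be sketched in a few lines: split the requirement list into fixed-size batches and build one modular-evaluation prompt per batch, assuming the document was loaded in an earlier message. Function names here (`batch_requirements`, `build_batch_prompt`) are illustrative.

```python
def batch_requirements(requirements: list[str], size: int = 5) -> list[list[str]]:
    """Split the requirement list into fixed-size groups."""
    return [requirements[i:i + size] for i in range(0, len(requirements), size)]

def build_batch_prompt(subsystem: str, batch: list[str]) -> str:
    """Build one modular-evaluation prompt for a group of requirements.
    Assumes the document itself was provided in a prior message."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(batch, 1))
    return (
        f"Using the document provided above, evaluate the following "
        f"{len(batch)} requirements related to {subsystem}. "
        f"Provide a deep-dive analysis for each.\n\n{numbered}"
    )
```

Iterating over `batch_requirements(...)` and sending one `build_batch_prompt(...)` per turn keeps each analysis focused while preserving the shared document context.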
Decision Matrix
| If your list is... | Use... | Why? |
|---|---|---|
| < 20 requirements | Single Prompt | The model can easily hold the entire matrix in its active "thinking" space. |
| 20–100 requirements | Grouped (5-10 at a time) | Prevents "LLM fatigue" and ensures the analysis of the 80th requirement is as sharp as the 1st. |
| Highly Technical/Safety Critical | Per-Requirement | When a "False Pass" could result in a mission hazard, the extra cost/time of atomic analysis is a necessary safeguard. |
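The decision matrix above reduces to a small rule, shown here as a sketch; the thresholds come straight from the table, and the strategy names are ours.

```python
def choose_strategy(n_requirements: int, safety_critical: bool = False) -> str:
    """Pick a prompting strategy per the decision matrix above:
    safety-critical work gets atomic analysis regardless of list size."""
    if safety_critical:
        return "per-requirement"
    if n_requirements < 20:
        return "single-prompt"
    return "grouped"
```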
Pro Tip: If you are using a model with a massive context window (like Gemini 1.5 Pro), you can provide the entire document and the entire list, but specifically instruct it to: "Process these sequentially and do not summarize. If the output limit is reached, I will prompt you to 'Continue' until all items are addressed."
From <https://gemini.google.com/app/> Google Gemini 3.0 (Fast)