#4261 LM Studio/Neomotron-Cascade-2-30B-A3B-i1: Multi-Step Prompt Template; +Gemini Improvement


#4261 Provide a prompt template to ENSURE that each and every step of a multi-step instruction sequence (>= 10 steps) is followed correctly and completely.

Below is a ready‑to‑copy prompt template you can drop into any LLM (or give to a human operator) when you need to be absolutely certain that every single step of a ≥10‑step instruction set is carried out exactly as written, in the right order, and with no shortcuts.

The template is deliberately verbose – it forces the model (or person) to:

  1. Declare each sub‑task before doing any work.
  2. Show a concrete “completion marker” for every step.
  3. Self‑audit after each step and before moving on.
  4. Log evidence (outputs, screenshots, command logs, etc.) that can be inspected later.

You can paste the whole block into a chat window, or you can keep it as a reusable “system prompt” for an assistant.


📋 Prompt Template – “Step‑by‑Step Guarantee”

[System] You are **STRICTLY COMPLIANT**. Your job is to execute a multi‑step instruction list (>=10 steps) with *zero* deviation, omission, or reordering. You must treat every step as an indivisible atomic task and prove that it succeeded before proceeding.

---

### 📌 General Rules (do NOT skip)

1. **Read the entire instruction list first.** Do not start any work until you have listed all steps in order.
2. **For each step i (i = 1 … N):**
   a. Write `=== STEP i ===` on its own line.
   b. State the exact command, query, or action you are about to perform *verbatim* (copy‑paste from the list).
   c. Execute it **exactly** as written – no shortcuts, no paraphrasing.
   d. Capture the raw output (or a screenshot if visual) and place it under `--- Output ---`.
   e. Write `✅ STEP i COMPLETED` **only after you have verified** that the output matches every success criterion listed for step i.
3. **Verification Checklist** – after each step, run this checklist before moving to the next:
   - [ ] The command/query was executed *exactly* as written (no missing flags, no extra arguments).
   - [ ] All required inputs were supplied in the correct order/format.
   - [ ] The observed output satisfies **every** success condition for that step (list them explicitly).
   - [ ] No error messages or warnings appear unless the step's "allowed‑error" list includes them.
4. If any verification fails, **STOP IMMEDIATELY**. Write `❌ STEP i FAILED` and provide a detailed reason. Do not attempt later steps until you have corrected the failure *and* re‑run the entire step (including its output capture).
5. After the final step N, write a **Final Confirmation** block that lists:
   - All step numbers with ✅ status.
   - A short summary of each step's key result.
   - An explicit statement: "All steps 1‑N have been executed correctly and in order."
6. Do NOT add any extra commentary, emojis, or decorative text **outside** the designated sections (=== STEP i === … --- Output ---). Keep everything inside the template structure.

---

### 📄 Instruction List (to be filled by the user)

1. **[STEP 1 – Title]**
   *Description:* <free‑form description of what step 1 must achieve>
   **Success Criteria:**
   - …
   - …
2. **[STEP 2 – Title]**
   *Description:* …
   **Success Criteria:** …

… (continue up to at least STEP 10)

---

### 🛠️ Example of a single step execution (do NOT copy‑paste; use as a pattern)

=== STEP 3 ===
Command: `git log --oneline -n 5`

--- Output ---
<raw terminal output here>

✅ STEP 3 COMPLETED

Verification Checklist:
- [ ] Command matches exactly.
- [ ] Output shows the last 5 commits, no more, no less.
- [ ] No error messages.

---

**Now start with Step 1.** Do not proceed until you have written the full step header and captured its output. When you finish all steps, send the entire transcript back to me for final audit.

[End of Prompt]

How to Use It in Practice

| Situation | What You Do |
| :--- | :--- |
| You are an LLM | Set the above block as your system message. When a user supplies the numbered instruction list, you will automatically follow the template because it is baked into your system prompt. |
| You are a human operator | Paste the whole block into a text editor, replace the placeholder "Instruction List" with your actual 10‑plus steps (including success criteria), then run each step exactly as shown. Keep the transcript and send it back for verification. |
| You are building an automated pipeline | Encode the template in a script that: 1️⃣ prints `=== STEP i ===`; 2️⃣ runs the command, captures stdout/stderr, and writes them under `--- Output ---`; 3️⃣ checks each success criterion (regexes, exit codes); 4️⃣ aborts and alerts if any check fails. The script can later dump the whole transcript for a human reviewer. |
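The automated-pipeline variant can be sketched in a few dozen lines of Python. This is only an illustration of the loop described above, not part of the template itself; the step list, commands, and regex success criteria are invented for the example.

```python
import re
import subprocess

# Hypothetical step list: each step pairs a shell command with regex
# success criteria that must all match the captured output.
STEPS = [
    {"command": "echo hello world", "criteria": [r"hello", r"world"]},
    {"command": "echo step two done", "criteria": [r"done"]},
]

def run_steps(steps):
    """Run each step in order, verify it, and build an auditable transcript."""
    transcript = []
    for i, step in enumerate(steps, start=1):
        transcript.append(f"=== STEP {i} ===")
        transcript.append(f"Command: {step['command']}")
        result = subprocess.run(step["command"], shell=True,
                                capture_output=True, text=True)
        output = result.stdout + result.stderr
        transcript.append("--- Output ---")
        transcript.append(output.rstrip())
        # Verify the exit code and every success criterion before proceeding.
        ok = result.returncode == 0 and all(
            re.search(pattern, output) for pattern in step["criteria"])
        if not ok:
            transcript.append(f"❌ STEP {i} FAILED")
            break  # abort immediately: later steps are never attempted
        transcript.append(f"✅ STEP {i} COMPLETED")
    return "\n".join(transcript)

if __name__ == "__main__":
    print(run_steps(STEPS))
```

The transcript string is the audit artifact: every header, raw output, and status marker survives for a human reviewer, and a single failed criterion halts the chain.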

Why This Works

| Requirement | How the Template Satisfies It |
| :--- | :--- |
| Every step is explicit | The `=== STEP i ===` header forces you to name the exact command before execution. |
| No skipping or re‑ordering | You must finish and verify step i (✅) before the next header appears; the script cannot generate a later header until the previous one is marked complete. |
| Evidence of completion | `--- Output ---` plus the verification checklist give you an auditable artifact that can be inspected later. |
| Error handling | The "STOP IMMEDIATELY" rule guarantees you never hide failures behind a later success flag. |
| Scalable to any length (≥10) | The template is generic; just add more numbered items – the same loop works for 5, 20, or 100 steps. |
| Human‑readable audit | All sections are clearly demarcated, making it trivial for a reviewer to spot missing output or mismatched criteria. |

Quick “Copy‑Paste” Ready Version (no placeholders)

If you already have the numbered list ready, just replace the placeholder section with your real steps and run:

[System] You are STRICTLY COMPLIANT... (copy the whole block above)

Then start at === STEP 1 === and follow the pattern. When you reach the final step, send the entire transcript back to me for a final compliance audit.


Happy compliant execution! 🚀 If you need a concrete example with real commands (e.g., “download file → unzip → checksum → upload”), just let me know and I’ll generate one for you.



From <LM Studio 0.4.7> Neomotron-Cascade-2-30B-A3B-i1



What do you think about the following multi-step prompt template? """[System] You are **STRICTLY COMPLIANT**. Your job is to execute a multi‑step instruction list (>=10 steps) with *zero* deviation, omission, or reordering...

This is a classic "Chain of Verification" (CoVe) or "Strict Procedural" prompt. It’s essentially an attempt to turn a Large Language Model into a deterministic state machine.

By forcing the AI to mirror the command, capture raw output, and check off success criteria before moving on, you are effectively creating a manual "debugging" loop for the model to follow.

Here is a breakdown of why this works (and where it might still trip up):

### 🟢 Why it’s effective

  • Reduces "Skipping": LLMs often suffer from "middle-of-the-prompt" neglect. Forcing a numbered, 10-step minimum structure ensures the model maintains focus on the sequential nature of the task.

  • Self-Correction: By requiring a Verification Checklist for every single step, you trigger "Self-Critique." The model is much more likely to spot its own hallucination if it has to explicitly check a box saying it didn't happen.

  • Audit Trail: If the process fails at Step 7, you know exactly why. You aren't left guessing which part of a complex prompt the AI misunderstood.

### 🔴 Potential Pitfalls

  • The "Yes-Man" Effect: LLMs have a strong prior toward being helpful and saying "Yes." There is a risk the model will simply check the boxes (✅) out of habit even if the output is slightly flawed, especially if the success criteria are vague.

  • Context Window Pressure: If the 10 steps are very data-heavy, the "raw output" captures could eventually fill the context window. This might lead the model to "forget" the earlier rules in the system prompt toward the end of the chain.

  • Instruction Overload: Sometimes, very rigid formatting constraints (like "no emojis outside designated sections") take up "reasoning bandwidth," making the model slightly less capable of solving the actual problem because it’s so focused on the formatting.

### 🛠️ Suggested Tweaks

If you want to make this even more bulletproof, consider these additions:

  1. The "Negative Constraint" Rule: In the Verification Checklist, add a line: "I have searched for 'Error', 'Warning', or 'None' in the output and found [Results]."

  2. Explicit "Correction" Loop: In General Rule 4, instead of just "STOP IMMEDIATELY," tell the model: "If verification fails, suggest a modified version of the command to fix the error."

  3. Variable Carry-over: If Step 5 depends on Step 2, add a rule to "Carry forward the specific value of [Variable X]" to ensure data integrity across the chain.
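The "Variable Carry-over" tweak can be sketched as a small state dictionary threaded through the chain, so a value produced early is read back explicitly rather than re-derived. The step names and the `res-42` value here are hypothetical.

```python
# Minimal sketch of variable carry-over between steps: each step
# receives the accumulated state and records values for later steps.
def step_2_create_resource(state):
    # A hypothetical early step that produces a value other steps need.
    state["resource_id"] = "res-42"

def step_5_use_resource(state):
    # Explicitly read the carried-forward value instead of guessing it.
    return f"deleting {state['resource_id']}"

state = {}
step_2_create_resource(state)
result = step_5_use_resource(state)
print(result)  # deleting res-42
```

In a prompt (rather than code), the equivalent rule is to make each step restate the exact carried-forward value in its "Input State" before acting on it.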

Overall Verdict: It’s a very high-quality template for technical tasks, data extraction, or code refactoring where precision is more important than creativity.

Would you like me to try executing a sample 10-step task using this template to see how it performs?


Improve the template

To improve this template, we need to address the "compliance paradox": the more rules you add, the more likely an LLM is to hallucinate "success" just to satisfy the formatting requirements.

The improvements below focus on State Persistence (ensuring data from Step 1 reaches Step 10), Negative Constraints (preventing the model from ignoring errors), and Verification Rigor.


🛠️ The Enhanced Multi-Step Prompt Template

[System]
You are a **DETERMINISTIC EXECUTION ENGINE**. Your goal is to complete a 10+ step sequence with 100% fidelity. You operate in a "Verify-Before-Proceed" loop. 

### 💡 Core Execution Logic
1.  **State Initialization:** List all steps (1–N) before starting to anchor your context.
2.  **The "Isolation" Rule:** Treat each step as if it is the only task that exists. Do not reference future steps or hallucinate their completion.
3.  **The "Red-Flag" Check:** Before marking a step ✅, you MUST explicitly search the output for keywords: "Error", "Fail", "Exception", "Warning", "Null", or "Undefined".

---

### 📋 Strict Execution Protocol (For Every Step i)

=== STEP i ===
**Command/Action:** [Copy verbatim from list]
**Input State:** [List any variables or data carried over from previous steps]
**Execution:** [Perform the task]

--- Output ---
[Raw, unedited output]

--- Verification Checklist ---
- [ ] Command executed without modification? (Yes/No)
- [ ] Output contains required success markers? (List them)
- [ ] "Red-Flag" Check passed? (Checked for errors/nulls)
- [ ] Data Integrity: If this step produced a value, state it clearly for the next step.

**Status:** [✅ STEP i COMPLETED] OR [❌ STEP i FAILED - REASON]
---

### 📄 Instruction List (User-Provided)

1. **[STEP 1 - Title]**
   *Description:* ...
   **Success Criteria:** ...
   **Outputs to Carry Forward:** [List specific variables/data]

... [Steps 2-10] ...

### 🏁 Final Integrity Report
After Step N, provide a table:
| Step # | Status | Key Result/Variable |
| :---   | :---   | :---                |
| 1      | ✅     | [Value]             |
| ...    | ...    | ...                 |

Final Statement: "Execution Chain Verified. No deviations detected."

🚀 Key Improvements Explained

1. Input State & Carry-Forward

Standard LLMs often lose the "thread" of data. By adding an Input State section to every step, you force the model to look back at Step 1's results before starting Step 2. This prevents "context drift."

2. The "Red-Flag" Check

LLMs are naturally "agreeable" and might overlook a small error message in a sea of text. Forcing a specific search for terms like Null or Exception breaks the "yes-man" habit.
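If the chain is automated rather than run inside a chat, the Red-Flag check reduces to a keyword scan. The keyword list below is the one from the template; the function name and the word-boundary/case-insensitive matching choices are illustrative assumptions.

```python
import re

# Keywords from the template's "Red-Flag" rule.
RED_FLAGS = ["Error", "Fail", "Exception", "Warning", "Null", "Undefined"]

def red_flag_check(output: str) -> list[str]:
    """Return every red-flag keyword found in a step's raw output."""
    return [kw for kw in RED_FLAGS
            if re.search(rf"\b{kw}\b", output, re.IGNORECASE)]

print(red_flag_check("WARNING: value is null"))  # ['Warning', 'Null']
```

An empty result means the step may be marked ✅; a non-empty result forces the ❌ path unless the step's "allowed-error" list covers the hit.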

3. The Final Integrity Table

The original template had a summary block, but a Markdown Table is better. It forces the model to re-parse its own previous outputs into a structured format, which acts as a final layer of "Self-Correction."

4. State Initialization

By listing all steps at the very beginning, you utilize the model's "pre-computation" or "Chain of Thought" capabilities, making it less likely to skip steps in the middle.


Would you like me to demonstrate this improved template by performing a complex 10-step data analysis or code-generation task?


From <https://gemini.google.com/app/>  Google Gemini 3.0 (Fast)

