#4261 LM Studio/Neomotron-Cascade-2-30B-A3B-i1: Multi-Step Prompt Template; +Gemini Improvement
#4261 Provide a prompt template to ENSURE that each and every step of a multi-step instruction sequence (>= 10 steps) is followed correctly and completely.
Below is a ready‑to‑copy prompt template you can drop into any LLM (or give to a human operator) when you need to be absolutely certain that every single step of a ≥10‑step instruction set is carried out exactly as written, in the right order, and with no shortcuts.
The template is deliberately verbose – it forces the model (or person) to:
- Declare each sub‑task before doing any work.
- Show a concrete “completion marker” for every step.
- Self‑audit after each step and before moving on.
- Log evidence (outputs, screenshots, command logs, etc.) that can be inspected later.
You can paste the whole block into a chat window, or you can keep it as a reusable “system prompt” for an assistant.
📋 Prompt Template – “Step‑by‑Step Guarantee”
<raw terminal output here>
✅ STEP 3 COMPLETED
Verification Checklist:
- Command matches exactly.
- Output shows the last 5 commits, no more, no less.
- No error messages.
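A checklist like this can also be ticked programmatically. The sketch below assumes the sample step was `git log -n 5 --oneline` (the full step list isn't shown in the excerpt above), and the function name is my own illustration:

```python
import re

def verify_git_log_step(command: str, output: str) -> dict:
    """Tick the three checklist boxes for the sample step programmatically."""
    lines = [ln for ln in output.strip().splitlines() if ln.strip()]
    return {
        # Command matches exactly (assumed target command).
        "command_matches": command.strip() == "git log -n 5 --oneline",
        # Output shows the last 5 commits, no more, no less.
        "exactly_five_commits": len(lines) == 5,
        # No error messages.
        "no_errors": re.search(r"\b(error|fatal)\b", output, re.IGNORECASE) is None,
    }
```

A step passes only if every value in the returned dict is `True`, which mirrors the "all boxes checked before proceeding" rule.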
How to Use It in Practice
Why This Works
Quick “Copy‑Paste” Ready Version (no placeholders)
If you already have the numbered list ready, just replace the placeholder section with your real steps and run:
Then start at === STEP 1 === and follow the pattern. When you reach the final step, send the entire transcript back to me for a final compliance audit.
Happy compliant execution! 🚀 If you need a concrete example with real commands (e.g., “download file → unzip → checksum → upload”), just let me know and I’ll generate one for you.
61.41 tokens/second
From <LM Studio 0.4.7> Neomotron-Cascade-2-30B-A3B-i1
What do you think about the following multi-step prompt template? """[System] You are **STRICTLY COMPLIANT**. Your job is to execute a multi‑step instruction list (>=10 steps) with *zero* deviation, omission, or reordering...
This is a classic "Chain of Verification" (CoVe) or "Strict Procedural" prompt. It’s essentially an attempt to turn a Large Language Model into a deterministic state machine.
By forcing the AI to mirror the command, capture raw output, and check off success criteria before moving on, you are effectively creating a manual "debugging" loop for the model to follow.
Here is a breakdown of why this works (and where it might still trip up):
### 🟢 Why it’s effective
- **Reduces "Skipping":** LLMs often suffer from "middle-of-the-prompt" neglect. Forcing a numbered, 10-step minimum structure ensures the model maintains focus on the sequential nature of the task.
- **Self-Correction:** By requiring a Verification Checklist for every single step, you trigger "Self-Critique." The model is much more likely to spot its own hallucination if it has to explicitly check a box saying it didn't happen.
- **Audit Trail:** If the process fails at Step 7, you know exactly why. You aren't left guessing which part of a complex prompt the AI misunderstood.
### 🔴 Potential Pitfalls
- **The "Yes-Man" Effect:** LLMs have a strong prior toward being helpful and saying "Yes." There is a risk the model will simply check the boxes (✅) out of habit even if the output is slightly flawed, especially if the success criteria are vague.
- **Context Window Pressure:** If the 10 steps are very data-heavy, the "raw output" captures could eventually fill the context window. This might lead the model to "forget" the earlier rules in the system prompt toward the end of the chain.
- **Instruction Overload:** Very rigid formatting constraints (like "no emojis outside designated sections") take up "reasoning bandwidth," making the model slightly less capable of solving the actual problem because it's so focused on the formatting.
### 🛠️ Suggested Tweaks
If you want to make this even more bulletproof, consider these additions:
1. **The "Negative Constraint" Rule:** In the Verification Checklist, add a line: "I have searched for 'Error', 'Warning', or 'None' in the output and found [Results]."
2. **Explicit "Correction" Loop:** In Step 4, instead of just "STOP IMMEDIATELY," tell the model: "If verification fails, suggest a modified version of the command to fix the error."
3. **Variable Carry-over:** If Step 5 depends on Step 2, add a rule to "Carry forward the specific value of [Variable X]" to ensure data integrity across the chain.
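Taken together, the tweaks above amount to a small driver loop: strict ordering, one correction retry per step, carried-over state, and a hard stop on unrecoverable failure. A minimal Python sketch, where `execute` and `verify` are stand-ins for whatever actually runs and audits each step (the names and the retry format are my own, not part of the template):

```python
from typing import Callable

def run_chain(steps: list[str],
              execute: Callable[[str, dict], str],
              verify: Callable[[str, str, dict], bool],
              max_retries: int = 1):
    """Run steps strictly in order; on a failed verification, retry once
    with a suggested correction, otherwise stop immediately."""
    state: dict = {}        # variable carry-over across steps (Tweak 3)
    transcript = []
    for i, step in enumerate(steps, start=1):
        cmd, attempt, ok = step, 0, False
        while not ok and attempt <= max_retries:
            output = execute(cmd, state)
            ok = verify(cmd, output, state)
            if not ok:
                attempt += 1
                # Explicit correction loop (Tweak 2): propose a modified command.
                cmd = f"{step}  # corrected attempt {attempt}"
        transcript.append((i, step, "✅" if ok else "❌"))
        if not ok:
            break           # STOP IMMEDIATELY once retries are exhausted
    return transcript, state
```

The transcript it returns doubles as the audit trail: the first ❌ entry is exactly where (and why) the chain halted.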
Overall Verdict: It’s a very high-quality template for technical tasks, data extraction, or code refactoring where precision is more important than creativity.
Would you like me to try executing a sample 10-step task using this template to see how it performs?
Improve the template
To improve this template, we need to address the "compliance paradox": the more rules you add, the more likely an LLM is to hallucinate "success" just to satisfy the formatting requirements.
The improvements below focus on State Persistence (ensuring data from Step 1 reaches Step 10), Negative Constraints (preventing the model from ignoring errors), and Verification Rigor.
### 🛠️ The Enhanced Multi-Step Prompt Template
[System]
You are a **DETERMINISTIC EXECUTION ENGINE**. Your goal is to complete a 10+ step sequence with 100% fidelity. You operate in a "Verify-Before-Proceed" loop.
### 💡 Core Execution Logic
1. **State Initialization:** List all steps (1–N) before starting to anchor your context.
2. **The "Isolation" Rule:** Treat each step as if it is the only task that exists. Do not reference future steps or hallucinate their completion.
3. **The "Red-Flag" Check:** Before marking a step ✅, you MUST explicitly search the output for keywords: "Error", "Fail", "Exception", "Warning", "Null", or "Undefined".
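The "Red-Flag" check in rule 3 is simple enough to express in a few lines of Python, which also makes its behavior concrete (the function name is mine; the keyword list is the one in the rule):

```python
RED_FLAGS = ("Error", "Fail", "Exception", "Warning", "Null", "Undefined")

def red_flag_check(output: str) -> list[str]:
    """Return every red-flag keyword present in a step's raw output.

    Case-insensitive substring match, so 'TypeError' still trips 'Error'
    and 'WARNING:' still trips 'Warning'. An empty list means the check passed.
    """
    low = output.lower()
    return [word for word in RED_FLAGS if word.lower() in low]
```

Substring matching deliberately over-triggers (e.g. on "nullable"); for this protocol a false alarm that forces a second look is cheaper than a missed error.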
---
### 📋 Strict Execution Protocol (For Every Step i)
=== STEP i ===
**Command/Action:** [Copy verbatim from list]
**Input State:** [List any variables or data carried over from previous steps]
**Execution:** [Perform the task]
--- Output ---
[Raw, unedited output]
--- Verification Checklist ---
- [ ] Command executed without modification? (Yes/No)
- [ ] Output contains required success markers? (List them)
- [ ] "Red-Flag" Check passed? (Checked for errors/nulls)
- [ ] Data Integrity: If this step produced a value, state it clearly for the next step.
**Status:** [✅ STEP i COMPLETED] OR [❌ STEP i FAILED - REASON]
---
### 📄 Instruction List (User-Provided)
1. **[STEP 1 - Title]**
*Description:* ...
**Success Criteria:** ...
**Outputs to Carry Forward:** [List specific variables/data]
... [Steps 2-10] ...
### 🏁 Final Integrity Report
After Step N, provide a table:
| Step # | Status | Key Result/Variable |
| :--- | :--- | :--- |
| 1 | ✅ | [Value] |
| ... | ... | ... |
Final Statement: "Execution Chain Verified. No deviations detected."
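The Final Integrity Report is mechanical to produce from per-step records, which is a good candidate for doing it in code rather than trusting the model's recall. A sketch, assuming each record is a `(step, passed, key_result)` tuple (that shape is my own convention, not specified by the template):

```python
def integrity_report(records: list[tuple[int, bool, str]]) -> str:
    """Render the final Markdown table from (step, passed, key_result) tuples."""
    rows = [
        "| Step # | Status | Key Result/Variable |",
        "| :--- | :--- | :--- |",
    ]
    for step, passed, result in records:
        rows.append(f"| {step} | {'✅' if passed else '❌'} | {result} |")
    # Only emit the closing statement when every step actually passed.
    if records and all(passed for _, passed, _ in records):
        rows.append('\nFinal Statement: "Execution Chain Verified. No deviations detected."')
    return "\n".join(rows)
```

Gating the "No deviations detected" line on the records themselves prevents the model (or the harness) from asserting success that the table doesn't support.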
### 🚀 Key Improvements Explained
1. Input State & Carry-Forward
Standard LLMs often lose the "thread" of data. By adding an Input State section to every step, you force the model to look back at Step 1's results before starting Step 2. This prevents "context drift."
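One minimal way to make the Input State and carry-forward explicit in code (the `StepRecord` shape and field names are my own illustration, not part of the template):

```python
from dataclasses import dataclass, field

@dataclass
class StepRecord:
    number: int
    input_state: dict                             # variables visible when the step starts
    outputs: dict = field(default_factory=dict)   # values to carry forward

def advance(state: dict, record: StepRecord) -> dict:
    """Fold a completed step's outputs into the shared state for later steps."""
    return {**state, **record.outputs}

# Step 2 produces a checksum that Step 5 must see unchanged.
state: dict = {}
step2 = StepRecord(number=2, input_state=dict(state), outputs={"checksum": "a1b2c3"})
state = advance(state, step2)
step5 = StepRecord(number=5, input_state=dict(state))
```

Snapshotting `input_state` at each step (via `dict(state)`) is what gives you the audit trail: you can later prove exactly which values Step 5 saw.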
2. The "Red-Flag" Check
LLMs are naturally "agreeable" and might overlook a small error message in a sea of text. Forcing a specific search for terms like Null or Exception breaks the "yes-man" habit.
3. The Final Integrity Table
The original template had a summary block, but a Markdown Table is better. It forces the model to re-parse its own previous outputs into a structured format, which acts as a final layer of "Self-Correction."
4. State Initialization
By listing all steps at the very beginning, you utilize the model's "pre-computation" or "Chain of Thought" capabilities, making it less likely to skip steps in the middle.
Would you like me to demonstrate this improved template by performing a complex 10-step data analysis or code-generation task?
From <https://gemini.google.com/app/> Google Gemini 3.0 (Fast)