#4432 Fine-Tuning a Small Model with NPR 7150.2D and NASA-STD-8739.8B
What would training data look like to fine-tune a small model with NASA NPR 7150.2D?

To fine-tune a small, local Large Language Model (such as an IBM Granite or Nemotron variant) to understand, audit, or trace requirements against NASA NPR 7150.2D (NASA Software Engineering Requirements), your training data needs to be structured, deterministic, and highly specific. Because small models have limited parameter capacity, they struggle with raw, open-ended text ingestion. Instead of feeding them the raw PDF, you must transform the document into instruction-response pairs or context-constrained completion pairs that mimic the exact tasks you want the model to perform. Here is a breakdown of what that training data looks like, how to structure it, and concrete examples.

1. The Core Schema: Supervised Fine-Tuning (SFT)

For a small model to assist with compliance auditing or Requirements Mapping Matrices (RMM), t...
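As a concrete illustration of the instruction-response pairs described above, here is a minimal sketch in Python that builds a few SFT records and serializes them to JSONL, the format most fine-tuning toolchains accept. The `instruction`/`input`/`output` field names follow the common Alpaca-style convention, and the requirement wording and SWE identifiers are illustrative placeholders, not verbatim text from NPR 7150.2D:

```python
import json

# Hypothetical SFT records in the Alpaca-style schema.
# Requirement text and SWE IDs below are illustrative only;
# verify exact IDs and wording against the current NPR 7150.2D.
sft_examples = [
    {
        "instruction": "Identify which NPR 7150.2D requirement governs this activity.",
        "input": "The project shall maintain bidirectional traceability between "
                 "software requirements and test procedures.",
        "output": "This maps to the bidirectional traceability requirement in "
                  "NPR 7150.2D (SWE-052-class; confirm the exact ID in the standard).",
    },
    {
        "instruction": "Audit this statement for compliance with NASA-STD-8739.8B.",
        "input": "Unit tests are run only after integration testing is complete.",
        "output": "Non-compliant as stated: software assurance practice expects "
                  "unit-level verification before or alongside integration, with "
                  "evidence retained for audit.",
    },
]

def to_jsonl(examples):
    """Serialize training records to JSONL: one JSON object per line."""
    return "\n".join(json.dumps(e, ensure_ascii=False) for e in examples)

if __name__ == "__main__":
    print(to_jsonl(sft_examples))
```

Keeping each record a single, self-contained JSON object makes the dataset deterministic and easy to validate before training, which matters more for small models than raw volume.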