Why Neural Networks Forget — And How They Might Stop

A deep investigation into catastrophic forgetting: the mechanisms, the brain's solutions, and ideas for continuous learning

Research Notes · March 2026 · ~40 min read

Contents

  1. The Problem: What Is Catastrophic Forgetting?
  2. The Fundamental Cause: Distributed Representations
  3. Six Identified Mechanisms from the Literature
  4. Why Human Brains Don't Catastrophically Forget
  5. Experience Replay: The Simplest Brain-Inspired Fix
  6. Idea 1: Gradual Freezing — Let Earlier Layers Slowly Solidify
  7. Idea 2: Self-Directed Learning — The Model Teaches Itself
  8. The Model Collapse Problem
  9. Why Pre-Training Doesn't Cause Forgetting (But Fine-Tuning Does)
  10. A Research Proposal: The Five-Piece Continuous Learning System
  11. References

1. The Problem: What Is Catastrophic Forgetting?

When you fine-tune a large language model on a new task — say, medical question answering — something disturbing happens. The model gets good at medical Q&A, but it gets worse at everything else. It might forget how to write code, lose its ability to do basic math, or start producing garbled text for general questions. This isn't gradual decay — it's sudden and severe. A few hundred training steps on medical data can destroy capabilities that took trillions of tokens to build.

This phenomenon is called catastrophic forgetting, first identified by McCloskey & Cohen in 1989. The word "catastrophic" was deliberately chosen — it's not gentle forgetting (like a human slowly losing high school chemistry), it's wholesale destruction of prior knowledge.

Before fine-tuning:           After fine-tuning on medical data:
┌─────────────────────┐       ┌─────────────────────────────┐
│ General knowledge ██│       │ General knowledge ░░░░░░░░░░│  ← degraded
│ Code generation  ██ │       │ Code generation  ░░░░░░░░░░ │  ← degraded
│ Math reasoning   ██ │       │ Math reasoning   ░░░░░░░░░░ │  ← degraded
│ Creative writing ██ │       │ Creative writing ░░░░░░░░░░ │  ← degraded
│ Medical Q&A      ░░ │       │ Medical Q&A      ██████████ │  ← improved
└─────────────────────┘       └─────────────────────────────┘

This is why companies don't just fine-tune GPT-4 on their proprietary data and call it a day. The resulting model would be great at their domain but broken for everything else. Instead, they use techniques like LoRA (which freezes the original weights entirely and adds small trainable adapters) or RAG (which avoids changing the model at all, retrieving relevant information at inference time).

2. The Fundamental Cause: Distributed Representations

The root cause is deceptively simple: neural networks store knowledge in a distributed way across shared weights. A single weight participates in representing many different pieces of knowledge simultaneously. When you update that weight to learn something new, you unavoidably perturb all the old knowledge it was participating in.

Think about it this way. In a database, each fact lives in its own row:

Database storage (localized):
┌──────────────────────────────────────────┐
│ Row 1: "Capital of France" → "Paris"     │  ← independent
│ Row 2: "Capital of Germany" → "Berlin"   │  ← independent
│ Row 3: "Boiling point of water" → "100°C"│  ← independent
└──────────────────────────────────────────┘
Updating Row 1 cannot affect Rows 2 or 3.

In a neural network, knowledge is spread across millions of weights:

Neural network storage (distributed):
┌──────────────────────────────────────────────────┐
│ Weight w₁: participates in "Paris", "French",    │
│            "European capitals", "Eiffel Tower"... │
│ Weight w₂: participates in "Paris", "Berlin",    │
│            "capital cities", "geography"...       │
│ Weight w₃: participates in "boiling", "water",   │
│            "temperature", "Paris" (Paris climate) │
└──────────────────────────────────────────────────┘
Updating w₁ to learn a new fact perturbs EVERYTHING
w₁ was participating in.

There's no way to surgically update only the weights for "medical knowledge" without disturbing "coding ability" and "math reasoning" — because the same weights serve all of them. This is the fundamental structural problem.
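This interference is easy to reproduce in miniature. The sketch below (a toy of my own construction, not from the literature) trains one shared weight matrix on "fact A", then on "fact B", and checks the error on fact A before and after:

```python
import numpy as np

# Toy demonstration: one shared weight matrix serves two "facts".
# Training on fact B perturbs the weights that fact A depends on.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(4, 4))               # shared weights

x_a, y_a = rng.normal(size=4), rng.normal(size=4)  # "fact A"
x_b, y_b = rng.normal(size=4), rng.normal(size=4)  # "fact B"

def err(W, x, y):
    return float(np.mean((W @ x - y) ** 2))

def sgd(W, x, y, steps=200, lr=0.05):
    for _ in range(steps):
        grad = np.outer(W @ x - y, x)              # gradient of the squared error
        W = W - lr * grad
    return W

W = sgd(W, x_a, y_a)              # learn fact A to near-zero error
err_a_before = err(W, x_a, y_a)
W = sgd(W, x_b, y_b)              # now train ONLY on fact B
err_a_after = err(W, x_a, y_a)    # fact A has been damaged
```

There is no row for fact A to hide in: every update for fact B touches the same matrix that stores fact A.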

The irony

The same property that makes neural networks powerful — distributed representations enable generalization (learning about "cats" helps with "dogs" because they share features) — is what makes them vulnerable to forgetting. The overlap that enables generalization is the overlap that causes interference.

3. Six Identified Mechanisms from the Literature

Researchers have identified several specific mechanisms through which catastrophic forgetting operates. These aren't competing theories — they're different aspects of the same underlying problem:

3.1 Weight Interference / Gradient Conflict

The most direct mechanism. When training on task B, the gradient points in a direction that improves B but may move weights away from their optimal values for task A. Research from Tsinghua University showed that forgetting is directly proportional to the negative gradient similarity between old and new tasks — when the gradient directions for two tasks point in opposite directions, learning one actively degrades the other.

Source: Continual Learning's Next Frontier (2026 survey)
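The gradient-conflict claim can be checked numerically. In this sketch (toy quadratic losses of my own construction, not the survey's setup), the two tasks' optima lie in opposite directions from the current weights, so their gradients have cosine similarity -1 and a descent step for task B raises task A's loss:

```python
import numpy as np

w = np.zeros(8)                                           # current weights
opt_a = np.array([1., -1., 2., 0.5, -0.5, 1., 0.5, 2.])   # task A's optimum
opt_b = -opt_a                                            # task B's optimum, deliberately opposed

grad_a = 2 * (w - opt_a)        # gradient of L_A(w) = ||w - opt_a||^2
grad_b = 2 * (w - opt_b)        # gradient of L_B(w) = ||w - opt_b||^2
cos = float(grad_a @ grad_b / (np.linalg.norm(grad_a) * np.linalg.norm(grad_b)))

# A descent step for task B is an ascent step for task A:
w_next = w - 0.1 * grad_b
loss_a_before = float(np.sum((w - opt_a) ** 2))
loss_a_after = float(np.sum((w_next - opt_a) ** 2))
```

Negative cosine similarity between task gradients is exactly the regime in which learning one task actively degrades the other.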

3.2 Representational Drift

As the model trains on new data, the internal representations (hidden states) shift. The representations that downstream layers relied on for old tasks no longer look the same, so those layers produce wrong outputs even though their own weights haven't changed much. It's as if someone rearranged the filing system that all your colleagues depend on — even though the files are still there, nobody can find anything.

Research from the Technical University of Denmark decomposed forgetting into three components, with representational drift being a major contributor, particularly in attention heads within lower layers.

3.3 Loss Landscape Sharpness

When the model sits in a sharp minimum (a narrow valley in the loss landscape), even small weight changes push it out of the valley and destroy performance. Flat minima are more robust — the model can move around without falling off a cliff.

Source: Revisiting Catastrophic Forgetting in LLM Tuning (EMNLP 2024) — showed that loss landscape flatness directly predicts how much forgetting occurs, and proposed sharpness-aware minimization as a mitigation.
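The mitigation named above, sharpness-aware minimization, can be sketched in a few lines. This follows the standard two-step SAM update (perturb the weights toward higher loss, then descend using the gradient at the perturbed point); the toy loss, learning rate, and perturbation radius are illustrative choices of mine, not the paper's settings:

```python
import numpy as np

def loss(w):                 # a sharp quadratic bowl standing in for the real loss
    return float(np.sum(5.0 * w ** 2))

def grad(w):
    return 10.0 * w

def sam_step(w, lr=0.01, rho=0.05):
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # step 1: ascend to a worst nearby point
    g_adv = grad(w + eps)                        # step 2: gradient at the perturbed weights
    return w - lr * g_adv                        # step 3: descend from there

w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w)
```

By optimizing the loss at the worst nearby point rather than at the current point, SAM is biased toward flat minima, which is precisely the property that predicts less forgetting.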

3.4 Geometric Transformation of Features

Research from KU Leuven modeled forgetting as geometric transformations — rotations and scalings — of feature vectors. When new training rotates the feature space, the readout layers (which expect features in the old orientation) break. This provides a precise geometric characterization of what "representational drift" actually looks like mathematically.

3.5 Latent Space Collapse (LLM-specific)

When fine-tuning an LLM on a narrow task (e.g., only medical Q&A), the model's output distribution collapses. It produces less diverse outputs and loses the ability to generate text outside the fine-tuning domain. The model doesn't just forget specific facts — its entire output space contracts.

Source: Latent Space Collapse? Understanding the Effects of Narrow Fine-Tuning (LessWrong)

3.6 Layer-Specific Vulnerability

Not all layers forget equally. Lower layers (closer to the input) are more vulnerable to forgetting than higher layers, particularly the attention heads in early layers. This connects to the intuition that early layers learn general features (syntax, tokenization, basic semantics) that should be preserved, while later layers learn more task-specific features that can adapt.

Source: Layer-wise Importance Matters (EMNLP 2024)

4. Why Human Brains Don't Catastrophically Forget

Human brains also use distributed representations across shared synapses. So why doesn't learning French erase your English? The brain has at least five mechanisms that neural networks lack:

4.1 Two Separate Learning Systems

This is the most important difference. The Complementary Learning Systems (CLS) theory (McClelland, McNaughton & O'Reilly, 1995) proposes:

New information goes into the hippocampus first — quickly, without disturbing the neocortex. Then during sleep, the hippocampus replays recent memories to the neocortex, interleaved with older memories. The neocortex sees a mix of old and new, so it never gets the purely sequential "train only on task B" signal that causes catastrophic forgetting.

A neural network has one system that must do both jobs: learn quickly AND retain old knowledge.

4.2 Sparse Representations

In the neocortex, only about 1-5% of neurons are active for any given input. Different concepts use largely non-overlapping sets of neurons, so updating one doesn't heavily interfere with the other.

In a typical neural network, representations are dense — most neurons participate in most inputs, maximizing interference. LLMs make this worse through superposition: they pack far more concepts into their dimensions than they have dimensions, creating massive overlap.

4.3 Synaptic Consolidation

Over time, important synapses in the brain become physically harder to modify. Frequently-used, important connections literally become structurally stronger. This is the biological version of Elastic Weight Consolidation (Kirkpatrick et al., 2017).

4.4 Continual Mixed Exposure

Humans don't learn tasks sequentially. We don't spend a month learning ONLY French and then switch to ONLY Spanish. We're constantly exposed to a mix of old and new. When we fine-tune an LLM on medical data, we feed it ONLY medical data — 100% biased toward one domain. The brain never experiences this.

4.5 Neurogenesis and Modularity

The brain can grow new neurons. The brain also has functional specialization — learning piano (motor cortex) interferes minimally with visual memory (visual cortex) because they use largely separate hardware.

| Brain Mechanism | AI Equivalent | Status |
|---|---|---|
| Hippocampal replay during sleep | Experience replay / rehearsal | Implemented, but crude |
| Synaptic consolidation | EWC, Synaptic Intelligence | Implemented, partially effective |
| Neurogenesis | Progressive neural networks | Rare in practice |
| Sparse coding | Sparse autoencoders, MoE | Growing adoption |
| Two learning systems | None widely adopted | Major gap |
| Continual mixed exposure | Data mixing during fine-tuning | Done manually, ad hoc |

5. Experience Replay: The Simplest Brain-Inspired Fix

Experience replay is conceptually simple: when training on new data, mix in some old data so the model doesn't forget. It's inspired by hippocampal replay during sleep.

In practice, you keep a buffer of old examples and mix a fraction of them into every batch while training on the new task, so each gradient step carries a counter-signal from the old distribution.

The challenge for LLMs: the "old data" is the pre-training corpus (trillions of tokens). You can't replay all of it. So you need to select a representative subset. SSR (ACL 2024) addresses this by having the model generate synthetic replay data from itself — the model creates its own "study notes" for topics it wants to remember.
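A minimal sketch of the mixing loop (the batch size and replay fraction here are illustrative choices, not values from any of the cited papers):

```python
import random

def replay_batches(new_data, old_buffer, batch_size=8, replay_frac=0.25, seed=0):
    """Yield training batches that blend new-task data with replayed old data."""
    rng = random.Random(seed)
    n_old = int(batch_size * replay_frac)   # e.g. 2 replayed examples per batch of 8
    n_new = batch_size - n_old
    for i in range(0, len(new_data), n_new):
        batch = new_data[i:i + n_new] + rng.sample(old_buffer, n_old)
        rng.shuffle(batch)                  # interleave old and new, like sleep replay
        yield batch

old = [("general", i) for i in range(100)]  # stand-in for the pre-training subset
new = [("medical", i) for i in range(24)]   # stand-in for the fine-tuning data
batches = list(replay_batches(new, old))
```

Every batch now contains a reminder of the old distribution, so the gradient never points purely toward the new task.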

The fundamental limitation of replay

Replay treats the symptom (forgetting) rather than the cause (shared distributed weights). It works, but it's expensive — you're essentially re-training on old data alongside new data, which multiplies the training cost. The brain solves this during sleep, which is metabolically cheap. We don't have an equivalent.

6. Idea 1: Gradual Freezing — Let Earlier Layers Slowly Solidify

The core intuition

As learning progresses, gradually freeze earlier parts of the network. Core concepts are represented in earlier layers (like how convolutional networks learn edges in initial layers — fundamental concepts that shouldn't change). Later layers can remain plastic for new learning. Different learning rates for different parts of the network, with early layers learning increasingly slowly until they're effectively frozen.

This idea has strong support in the literature. It exists under several names:

6.1 Layer-wise Learning Rate Decay (LLRD)

From ULMFiT (Howard & Ruder, 2018). Each layer gets a different learning rate — lower layers get smaller rates, upper layers get larger rates:

η_l = η · ξ^(L-l)    where ξ ≈ 0.9-0.95

Example for a 12-layer transformer with base η = 1e-4, ξ = 0.9:
  Layer 12 (top):     1e-4         ← adapts freely
  Layer 11:           9e-5
  Layer 10:           8.1e-5
  ...
  Layer 1 (bottom):   3.1e-5       ← changes very slowly
  Embedding layer:    2.8e-5       ← nearly frozen

Layer 1 learns at roughly 1/3 the rate of layer 12. This protects foundational features while allowing task-specific adaptation at the top. This is now the standard approach for transformer fine-tuning.
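The schedule is trivial to compute directly from the formula (function name mine; wiring the rates into per-parameter-group optimizer settings is omitted):

```python
def llrd(base_lr, num_layers, xi=0.9):
    """Per-layer learning rates: eta_l = eta * xi**(L - l), top layer first."""
    return {l: base_lr * xi ** (num_layers - l) for l in range(num_layers, 0, -1)}

rates = llrd(1e-4, 12)
# rates[12] == 1e-4 (top layer), rates[11] ~ 9e-5, rates[1] ~ 3.1e-5 (bottom layer)
```

In a framework like PyTorch these rates would typically become one optimizer parameter group per layer.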

6.2 Gradual Unfreezing

Also from ULMFiT. An even more aggressive version: start by training ONLY the final layer. After some steps, unfreeze the next layer down. Continue until all layers are unfrozen. This prevents early layers from getting large, destructive updates before the upper layers have adapted.

Epoch 1:   [frozen][frozen][frozen][frozen][TRAIN]     ← only top layer
Epoch 2:   [frozen][frozen][frozen][TRAIN ][TRAIN]     ← unfreeze one more
Epoch 3:   [frozen][frozen][TRAIN ][TRAIN ][TRAIN]     ← unfreeze one more
Epoch 4:   [frozen][TRAIN ][TRAIN ][TRAIN ][TRAIN]     ← unfreeze one more
Epoch 5:   [TRAIN ][TRAIN ][TRAIN ][TRAIN ][TRAIN]     ← all layers active
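The diagram above amounts to a simple schedule, sketched here as a helper (the one-layer-per-epoch pacing is an illustration of the idea, not ULMFiT's exact recipe):

```python
def trainable_layers(epoch, num_layers):
    """Layer indices (1 = bottom, num_layers = top) that are unfrozen at this epoch."""
    start = max(1, num_layers - epoch + 1)
    return list(range(start, num_layers + 1))

# Reproduce the 5-layer schedule from the diagram:
schedule = [trainable_layers(e, 5) for e in range(1, 6)]
```

Epoch 1 trains only the top layer; by epoch 5 the whole stack is active, after the upper layers have already adapted.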

6.3 Selective Layer Freezing

Explored systematically in 2024: which layers should you freeze? The finding: freezing lower layers preserves general knowledge while allowing upper layers to specialize. But the optimal freezing strategy depends on the task — some tasks need changes in middle layers, not just the top.

6.4 Elastic Weight Consolidation (EWC)

Instead of freezing by layer position, freeze by importance. Kirkpatrick et al. (2017) compute the Fisher Information Matrix to identify which weights matter most for old tasks, then penalize changes to those specific weights. Within a single layer, some weights are frozen hard and others are free to change. This is more fine-grained than layer-level freezing.
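The EWC regularizer itself is a one-liner: penalize movement of each weight in proportion to its Fisher importance for the old task. A sketch with made-up numbers (in practice the Fisher diagonal is estimated from gradients on task A's data):

```python
import numpy as np

def ewc_penalty(w, w_star_a, fisher_diag, lam=1.0):
    """L_EWC = (lambda / 2) * sum_i F_i * (w_i - w*_A,i)^2"""
    return 0.5 * lam * float(np.sum(fisher_diag * (w - w_star_a) ** 2))

w_star = np.array([1.0, -0.5, 2.0])   # weights after finishing task A
fisher = np.array([10.0, 0.1, 5.0])   # importance of each weight for task A (assumed given)
w_new = np.array([1.1, 0.5, 2.0])     # candidate weights while training task B

penalty = ewc_penalty(w_new, w_star, fisher)  # added to task B's loss
```

Note the asymmetry: the small move on the important first weight (F = 10.0) and the large move on the unimportant second weight (F = 0.1) contribute equally here; moving the important weight as far as the unimportant one would cost a hundred times more.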

6.5 LoRA: The Nuclear Option

LoRA freezes ALL layers entirely and adds small trainable adapters (rank-16 matrices). The original weights never change, so there's zero forgetting. The adapters are too small to overwrite the base model's knowledge — they can only steer it. This is effective but limited: the model can learn new tasks but can't deeply modify its core representations.

6.6 Would Slow Continuous Learning Work?

The intuition: if updates are small enough and spread out over time, the model could continuously learn without catastrophic forgetting. This is partially right, but it faces problems: small updates still accumulate into large drift over thousands of steps, slowing every weight equally also slows genuinely new learning, and without an importance signal the model can't tell which weights are safe to move.

The practical sweet spot

The most effective approaches combine ideas: use LLRD (different rates per layer) + EWC-style importance weighting (different rates per weight within each layer) + replay (mix old data to provide a counter-signal). No single technique is sufficient alone.

7. Idea 2: Self-Directed Learning — The Model Teaches Itself

The core intuition: think about how you study

Imagine you're studying for an exam. You don't just randomly read textbooks — you have a strategy for studying:

  1. You read a passage
  2. You write yourself study notes
  3. You quiz yourself on the notes
  4. If the quiz went well, you remember what kind of notes worked
  5. Next time, you write better notes because you learned what helps

That's self-directed learning. You're not just memorizing — you're learning how to learn. The five papers in this section each explore a different piece of this idea, applied to LLMs.

None of these papers solves the full problem on its own. But together, they show the pieces that a complete self-improving system would need. We'll go through each one, explain exactly what it does in plain language, and then in sections 7.7 and 7.8, step back and assess how close (or far) we are from a model that can truly learn on its own without forgetting.

7.1 SEAL: The Model That Learns How to Write Its Own Study Notes (MIT, 2025)

What does SEAL do, in plain English?

SEAL teaches a model to get better at teaching itself. That sentence sounds circular, so let's unpack it with the studying analogy.

When you're studying, there are two different skills involved:

  1. Absorbing material: actually internalizing new information once it's in front of you.
  2. Deciding how to study: turning raw material into the notes and exercises that make absorbing work.

Most fine-tuning methods focus on the first skill — give the model some data, train on it, hope it sticks. SEAL focuses on the second. It teaches the model to figure out what kind of training data to create for itself so that the training is actually effective.

Concretely, SEAL teaches a model to: read a passage, write "study notes" about it (called "implications" — things like restatements, inferences, and consequences of what the passage says), train on those notes using a lightweight fine-tuning method called LoRA (a technique that adjusts a small number of the model's parameters rather than all of them — see section 6 for details), take a quiz, and then use the quiz results to learn what kind of study notes are most helpful.

SEAL learning loop: write study notes, train on them, quiz, keep good notes, discard bad ones, repeat

The concrete pipeline

Here's what happens, step by step:

  1. Read a passage. The model is given new information — say, a paragraph about quantum physics.
  2. Write study notes. The model generates multiple sets of "implications" from the passage. These are its candidate study notes. Each set is a different attempt at summarizing and restating the passage in a way that might be useful for learning.
  3. Train on the notes. For each set of candidate notes, the model fine-tunes a small copy of itself (using LoRA) on those notes. This is like a student who writes notes and then reviews them.
  4. Take a quiz. After training on each set of notes, the model is tested: can it now answer questions about the passage without looking at the passage? The quiz measures whether the notes actually helped the model internalize the knowledge.
  5. Keep the good notes, discard the bad ones. If training on a particular set of notes improved quiz performance, those notes are labeled "good." If quiz performance didn't improve (or got worse), those notes are labeled "bad" and discarded.
  6. Learn from what worked. The model is trained to generate more notes like the "good" ones and fewer like the "bad" ones. This updates the model's note-writing strategy.
  7. Repeat. Next time the model reads a passage, it writes better study notes because it learned from past successes and failures.

The technical name for steps 5-6 is ReST^EM (Reinforced Self-Training via Expectation-Maximization). That name is opaque jargon, so here's what it actually means: generate many candidate study notes, test which ones helped (the "Expectation" step — you're estimating which notes are good), throw away the bad ones and train the note-generation strategy on only the good ones (the "Maximization" step — you're maximizing quality by only learning from successes). Repeat this generate-test-filter-retrain cycle several times. Each time, the model gets better at writing useful notes.
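The generate-test-filter-retrain cycle can be sketched as a toy. Everything here is invented for illustration: in SEAL the "quiz score" is a real LoRA fine-tune followed by an evaluation, stubbed below as a dictionary lookup.

```python
def filter_round(candidate_notes, quiz_score, baseline):
    """One E-step/M-step cycle: keep only the note sets that beat the baseline."""
    # E-step: estimate which candidate study notes actually help
    kept = [notes for notes in candidate_notes if quiz_score(notes) > baseline]
    # M-step: the note-generator would now be fine-tuned only on `kept` (stubbed out)
    return kept

# Hypothetical quiz accuracies for three note-writing strategies:
scores = {"restate key facts": 0.47, "copy the passage verbatim": 0.33,
          "list implications": 0.45}
kept = filter_round(list(scores), scores.get, baseline=0.40)
```

Only the strategies that beat the no-adaptation baseline survive to train the next round's note-writer; "copy the passage verbatim" is filtered out, which mirrors the finding that raw passage text is poor training data.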

Results

Task 1: Knowledge Incorporation (SQuAD). The model reads a passage, generates study notes, trains on them, and is then quizzed without access to the original passage.

| Method | What it does | SQuAD Accuracy |
|---|---|---|
| Base model (no adaptation) | Model just answers from its pretraining knowledge | 32.7% |
| Train directly on the passage | Fine-tune on the raw passage text | 33.5% |
| GPT-4.1 writes the notes | Use a much larger model as a teacher to generate study notes | 46.3% |
| SEAL (self-taught) | Model writes its own notes, trained by trial and error | 47.0% |

Two things stand out. First, training directly on the passage barely helps (32.7% to 33.5%). Raw text is bad training data — the model needs the information rewritten in a form it can absorb, just like a student who reads a textbook but doesn't retain anything until she rewrites it in her own words. Second, SEAL slightly beat GPT-4.1 as a teacher (47.0% vs. 46.3%). A small model that has learned how to write study notes for itself outperformed a model 10-100x larger writing notes for it. The strategy matters more than the intelligence of the teacher.

Task 2: Few-Shot Learning (ARC-AGI). ARC-AGI is a set of visual pattern puzzles — the model has to figure out a transformation rule from a few examples, then apply it. Here, SEAL doesn't just write study notes — it generates specifications for how to augment the training data and chooses its own learning settings (hyperparameters) for each puzzle.

| Method | What it does | ARC-AGI Accuracy |
|---|---|---|
| In-context learning only | Just show the model examples, no training | 0% |
| Test-time training (no outer loop) | Train at test time but with a random strategy | 20% |
| SEAL (with RL outer loop) | Model learns which strategies work, then uses the best ones | 72.5% |
| Oracle (perfect strategy) | If you could always pick the ideal strategy | 100% |

The jump from 20% to 72.5% is the value of learning how to learn, not just learning. The 20% model was training at test time, but it was choosing its training strategy blindly. The 72.5% model was training at test time with a strategy it had learned to be effective.

Models tested

Llama-3.2-1B-Instruct, Qwen2.5-7B, and Qwen2.5-3B — all relatively small models. This is noteworthy: the self-teaching mechanism works even without massive model capacity.

Limitations — what SEAL can't do

SEAL itself is still subject to catastrophic forgetting: applying a sequence of self-edits degrades performance on earlier passages, so the meta-skill of self-teaching doesn't solve the retention problem. It's also expensive: every candidate set of study notes requires its own LoRA fine-tune plus an evaluation, so a single outer-loop step costs many inner training runs. And it needs a quiz with checkable answers to score its notes, which limits it to domains where such tests exist.

Why SEAL matters for the bigger picture

SEAL is the first system that teaches a model the meta-skill of self-teaching. This is a prerequisite for genuine self-directed learning. But it's only one piece. A complete system would also need forgetting prevention (section 6) built into the inner loop, so that each round of self-teaching doesn't destroy what was learned in previous rounds.

Reference: Zweiger et al. (2025) — SEAL: Self-Adapting Language Models


7.2 Self-Instruct: Bootstrapping Instruction-Following from 175 Examples (Wang et al., 2023)

What does Self-Instruct do, in plain English?

Imagine you need to train a new employee, but you only have time to write 175 example tasks for them. You need 50,000+ examples for thorough training. Self-Instruct solves this by having the employee write the rest of the training manual herself, using those 175 examples as inspiration.

More precisely: you start with 175 hand-written examples of tasks (things like "Summarize this paragraph," "Classify this movie review as positive or negative," "Write a poem about autumn"). You show the model a few of these examples and ask it to invent new, different tasks. Then you ask it to generate sample inputs and outputs for those new tasks. You filter out the bad ones, add the good ones to your growing collection, and repeat. By the end, you have 52,000 training examples — generated entirely by the model — that you can use to fine-tune the model (or a different model) to follow instructions.

The concrete pipeline

  1. Start with 175 seed tasks. These are hand-written examples, each consisting of an instruction ("Write a haiku about..."), an input (if needed), and the expected output. These are the only human-written data in the entire pipeline.
  2. Sample 8 tasks from the pool. Pick 8 tasks at random from the growing collection (initially, all 8 come from the 175 seeds).
  3. Ask the model to generate 8 NEW tasks. Show the model the 8 sampled tasks as examples and prompt it: "Here are some tasks. Write 8 more tasks that are different from these." The model invents new tasks inspired by the examples — things like "Rewrite this sentence in passive voice" or "Explain photosynthesis to a 5-year-old."
  4. Check if the new tasks are different enough. Each new task is compared against all existing tasks using a similarity metric called ROUGE-L (which measures word overlap between two texts). If a new task has ROUGE-L similarity of 0.7 or higher with any existing task, it's too similar — it gets thrown out. This prevents the collection from filling up with near-duplicates.
  5. Generate input-output examples for each surviving task. For each new task that passed the filter, the model generates sample inputs and correct outputs. For example, for the task "Classify this email as spam or not spam," it would generate a sample email and the correct classification.
  6. Filter out bad examples. Remove cases where the output is too short, too long, identical to the input, or where the same input produced conflicting outputs.
  7. Add surviving examples to the pool. The new tasks and their examples join the growing collection.
  8. Repeat from step 2. As the pool grows, the model sees more diverse examples when sampling, which leads to more diverse new tasks.
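The novelty filter in step 4 is simple to implement. The sketch below assumes the standard LCS-based ROUGE-L F-measure; the 0.7 threshold is the paper's, the code and tokenization are mine:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(cand, ref):
    """ROUGE-L F-measure between two instruction strings (naive whitespace tokens)."""
    c, r = cand.lower().split(), ref.lower().split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)

def is_novel(task, pool, threshold=0.7):
    """Step 4: reject a generated task that overlaps too much with any existing one."""
    return all(rouge_l(task, old) < threshold for old in pool)

pool = ["Summarize this paragraph in one sentence."]
novel = is_novel("Write a poem about autumn.", pool)                         # passes
dup = is_novel("Summarize this paragraph in one sentence please.", pool)     # rejected
```

Without this check, the pool quickly fills with near-duplicates of the most common task templates.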

Results

| Metric | Value |
|---|---|
| Seed examples (human-written) | 175 |
| Generated examples (model-written) | 52,000 |
| Model used for generation | GPT-3 (vanilla, no instruction tuning) |
| Improvement on Super-NaturalInstructions benchmark | +33% absolute over base GPT-3 |
| Compared to InstructGPT (trained with human feedback) | Within 5% of InstructGPT-001 |

The most striking result: GPT-3 with Self-Instruct nearly matched InstructGPT, a model that was trained with expensive human feedback (RLHF). Self-Instruct achieved similar performance using only self-generated data. The "secret sauce" of instruction-following turned out to be mostly about the format of the training data, not the source.

Why it matters: the Alpaca moment

Self-Instruct directly led to Stanford Alpaca, which became famous for replicating instruction-following ability for roughly $600. The Alpaca team used the Self-Instruct pipeline with OpenAI's text-davinci-003 to generate 52K examples, then fine-tuned Meta's LLaMA-7B model on them. The result behaved qualitatively similarly to text-davinci-003 in many evaluations. This proved that the expensive part of building an instruction-following model wasn't the architecture or the RLHF process — it was the training data. And a model could generate that data for itself.

What it's missing

Self-Instruct is a one-shot pipeline, not a learning loop. You run it once to generate data, train on that data, and you're done. If you tried to iterate — train on the generated data, then use the trained model to generate more data, then train on that — quality would degrade rapidly. Each generation would drift further from the original 175 seeds, becoming more generic, more repetitive, and less diverse. This is an early warning sign of the model collapse problem (discussed in section 8).

The aggressive filtering (the ROUGE-L check, removing duplicates and bad examples) was doing heavy lifting to maintain quality. Without it, the generated tasks quickly converge to a narrow set of common, easy tasks. The model can only generate tasks similar to what it already knows — it cannot bootstrap capabilities it doesn't have.

The fundamental limitation

Self-Instruct can amplify existing capabilities but cannot create new ones. The model generates tasks it already knows how to do — it can't invent tasks that require skills it doesn't have. This sets a ceiling: self-generated data can make a model more polished and better-formatted, but it can't make the model genuinely smarter than it already is.

Reference: Wang et al. (2023) — Self-Instruct: Aligning Language Models with Self-Generated Instructions


7.3 STaR: Learning to Think Step-by-Step from Your Own Mistakes (Zelikman et al., 2022)

What does STaR do, in plain English?

STaR teaches a model to reason better by practicing math problems (or logic problems) and learning from both its successes and its failures. The key trick: when the model gets a problem wrong, it's shown the correct answer and asked "Can you write a step-by-step explanation that would have led to this answer?" This reverse-engineered explanation — called rationalization — becomes training data, allowing the model to learn from problems it couldn't solve on its own.

Think of it like a math tutor who works like this: first, try the problem yourself. If you get it right, great — your work is added to your "good reasoning examples" file. If you get it wrong, the tutor shows you the answer and says "Now write out the steps that would lead to this answer." Your attempt at explaining the right answer — even if it's a bit clumsy — goes into the file too. Then you study the whole file, and try a new set of problems. Each round, you get better.

The concrete pipeline

  1. The model attempts a problem. Given a question (e.g., from the CommonsenseQA benchmark), the model writes out step-by-step reasoning and arrives at a final answer.
  2. Check: is the answer right? Compare the model's final answer to the known correct answer.
  3. If RIGHT: keep the reasoning trace. The model's step-by-step work is saved as a training example. "This is what good reasoning looks like."
  4. If WRONG: rationalize. The model is shown the correct answer and asked: "Can you write reasoning that would have led to this answer?" The model reverse-engineers a step-by-step explanation for the answer it now knows is correct. This rationalized trace is also saved as a training example.
  5. Fine-tune on all kept traces. The model is trained on the combined set — both the naturally correct reasoning (step 3) and the rationalized reasoning (step 4).
  6. Repeat with the improved model. Go back to step 1. The improved model gets more problems right on its first try, and the rationalizations for the remaining failures are higher quality.
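One round of the pipeline above can be sketched as a toy. The "model" here is a stub of my own (a lookup table plus a canned rationalizer); in STaR both roles are played by the LLM itself:

```python
def star_round(problems, answers, attempt, rationalize):
    """Collect training traces for one STaR iteration."""
    traces = []
    for q in problems:
        guess, trace = attempt(q)
        if guess == answers[q]:
            traces.append((q, trace))                        # step 3: keep the natural success
        else:
            traces.append((q, rationalize(q, answers[q])))   # step 4: reverse-engineer from the answer
    return traces  # step 5: fine-tune on all kept traces, then repeat with the improved model

# Stub model: it can already solve one problem, and rationalizes the other.
known = {"2+2": ("4", "2 plus 2 is 4")}
answers = {"2+2": "4", "17*3": "51"}
attempt = lambda q: known.get(q, (None, None))
rationalize = lambda q, a: f"working backward from the hint, {q} = {a}"

traces = star_round(list(answers), answers, attempt, rationalize)
```

Both problems yield a training trace: one from a natural success, one rationalized from the known answer, which is exactly how STaR learns from problems it could not yet solve.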

Why rationalization is the key insight

Without rationalization, the model can only learn from problems it already gets right. This creates a ceiling effect: the model polishes its existing reasoning but can never break through to problems it currently fails on. It's like a student who only reviews material she already understands — she'll ace the easy stuff but plateau on the hard stuff.

Rationalization breaks this ceiling. By giving the model the correct answer and asking it to explain why, the model generates reasoning traces for problems that were previously beyond it. These rationalized traces aren't always elegant — sometimes the model's "explanation" is a bit forced or awkward — but they give the model enough of a foothold to improve. After fine-tuning on rationalized traces, some problems that were previously unsolvable become solvable without the hint. Those new natural successes expand the training set for the next round, creating a virtuous cycle.

The bootstrapping cycle

Each round improves the model in two ways: (1) it generates better reasoning for problems it could already solve, and (2) it rationalizes its way into new problems via the hint of the correct answer. After fine-tuning, some of those previously-impossible problems become solvable without hints, expanding the set of natural successes for the next iteration. The model literally pulls itself up by its own bootstraps.

Results

| Metric | Value |
|---|---|
| Model | GPT-J 6B (a relatively small model) |
| Benchmark | CommonsenseQA |
| Performance achieved | Matched a 30x larger model (~180B parameters) that didn't use STaR |
| Across iterations | Accuracy improved consistently each round |
| Reasoning quality | Visibly improved — early rationales are vague; later ones are precise and logical |

A 6B model matching a 180B model is remarkable. It suggests that reasoning ability is not just about model size — it's about the quality of reasoning examples the model trains on. Self-generated, iteratively refined reasoning traces can be nearly as effective as having 30x more parameters.

What it's missing

STaR requires ground-truth answers to check against. The model can't use this technique on open-ended problems where there's no single correct answer. It also can't generate its own problems — it needs a problem set with known solutions. And like every method in this section, it has no mechanism to prevent forgetting: if you run STaR on math problems and then on science problems, the math reasoning may degrade.

Connection to the bigger picture

STaR demonstrates that a model can teach itself how to think, not just what to answer. The reasoning traces are entirely self-generated. But it needs an external signal (correct answers) to know if its thinking is on track. A fully self-directed system would need to generate its own evaluation criteria — which is what the next paper (section 7.4) begins to address.

Reference: Zelikman et al. (2022) — STaR: Bootstrapping Reasoning With Reasoning


7.4 Autonomous Learning: The Model That Quizzes Itself and Studies What It Got Wrong (ACL 2025)

What does this do, in plain English?

This system solves a problem the previous methods don't address: how does the model figure out what it doesn't know?

Think about how a good student studies. She doesn't re-read the entire textbook cover to cover — that's a waste of time because she already knows half the material. Instead, she quizzes herself first, identifies the topics she's weak on, and then focuses her study time on those specific topics. That's exactly what this system does.

The clever mechanism: give the model a document and have it answer questions about it twice — once with the document open in front of it (open-book), and once from memory (closed-book). The gap between these two scores tells you exactly what the model can understand but hasn't memorized. Then you train specifically on those gaps.

The concrete pipeline

  1. Read a document. The model is given a new document — say, a technical paper about a medical procedure.
  2. Generate questions about it. The model reads the document and creates questions that test understanding of the key concepts. ("What are the three risk factors mentioned?" "Why is early detection important in this context?")
  3. Answer open-book. The model answers each question with the document available. These answers are typically correct because the model can look up the information directly — it's like taking an open-book exam.
  4. Answer closed-book. The model answers the same questions but without the document. These answers reveal what the model actually knows from its training — what's already baked into its weights.
  5. Find the gaps. Compare the open-book and closed-book answers. Wherever they diverge — the model got it right open-book but wrong closed-book — that's a knowledge gap. The model can understand this information (it answered correctly when looking at the document) but hasn't internalized it (it failed when relying on memory alone).
  6. Train on the gaps. Fine-tune the model specifically on the question-answer pairs where it had the largest gap. This focuses learning time on exactly what the model doesn't know, rather than wasting training on material it already has down.
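The pipeline's core (steps 3-6) is a gap filter, sketched below with toy stand-ins for the model's two passes. The `facts`/`memorized` dictionaries and the exact-match `grade` function are invented for illustration; a real system would use an LLM for both passes and a softer answer-matching metric.

```python
def find_knowledge_gaps(questions, open_book, closed_book, grade):
    """Return QA pairs the model understands with the document in front of it
    (open-book correct) but hasn't internalized (closed-book wrong)."""
    gaps = []
    for q in questions:
        with_doc = open_book(q)       # step 3: answer with the document in context
        from_memory = closed_book(q)  # step 4: answer from the weights alone
        if grade(q, with_doc) and not grade(q, from_memory):
            gaps.append((q, with_doc))  # step 6 fine-tunes on exactly these pairs
    return gaps

# Toy stand-ins: `facts` plays the document, `memorized` plays the weights.
facts = {"What are the risk factors?": "smoking, age, hypertension",
         "What is the capital of France?": "Paris"}
memorized = {"What is the capital of France?": "Paris"}  # already internalized

gaps = find_knowledge_gaps(
    list(facts),
    open_book=lambda q: facts[q],
    closed_book=lambda q: memorized.get(q, "I don't recall"),
    grade=lambda q, a: a == facts[q],
)
print(gaps)  # only the un-memorized fact survives as a training target
```

Only questions the model gets right open-book but wrong closed-book become training data, which is what makes the method sample-efficient.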

Results

  vs. traditional fine-tuning (training on the whole document): Autonomous Learning was more effective
  vs. other self-improvement baselines: Autonomous Learning outperformed them
  External labels or reward signals needed: None — entirely self-sufficient
  Sample efficiency: Higher — trains on less data but learns more, because it targets weaknesses

Why it matters

This is the first method in this section that requires no external signal at all. STaR needs correct answers to check against. Self-Instruct needs seed examples written by humans. SEAL needs a quiz with known answers. Autonomous Learning generates its own questions, creates its own answer key (from the open-book pass), and identifies its own weaknesses (from the gap). The entire learning cycle is internally driven.

The self-quizzing principle

This mirrors a well-studied human learning technique called retrieval practice: test yourself on material, identify what you got wrong, and study specifically those topics. Education research shows this is significantly more effective than re-reading, and the same principle applies to models. Targeted practice on weaknesses beats uniform review of everything.

What it's missing

The method assumes the model's open-book answers are correct — that it can understand the document when looking at it, even if it hasn't memorized it. For very complex or novel material that the model can't comprehend even with the document in front of it, the gap signal becomes unreliable (both open-book and closed-book are wrong, so there's no useful gap). And like every method in this section, it doesn't address how to retain newly learned knowledge across multiple learning episodes — forgetting across sequential topics remains unsolved.

Reference: Li et al. (2025) — Autonomous Learning for LLM Domain Adaptation, ACL 2025 Findings


7.5 SELF: The Model That Edits Its Own Essays (Lu et al., 2023)

What does SELF do, in plain English?

SELF teaches a model to do something every good writer does: write a draft, critique it, revise it, and repeat until it's good. Then it trains on the revised version instead of the original draft. Over time, this produces better and better training data, which produces a better model, which produces better critiques, and so on.

Think of a writing workshop where participants read each other's work and give feedback. Except here, the model plays both roles — writer and critic. It writes a response, puts on its "editor hat" and critiques the response ("This explanation is confusing in the second paragraph, and the conclusion doesn't follow from the argument"), then puts its "writer hat" back on and revises the response based on its own critique. The revised version becomes the training data.

The concrete pipeline

SELF works in two phases:

Phase 1: Learning to critique and revise. Before the self-improvement loop can start, the model needs to be decent at two meta-skills:

    • Critiquing: reading a response and identifying specifically what is wrong or weak about it
    • Revising: rewriting a response so that it fixes the problems a critique identifies

These meta-skills are the foundation. Without them, the model's self-critiques would be random and unhelpful, and the whole loop would produce garbage.

Phase 2: The self-improvement loop.

  1. Generate a response. The model writes an answer to a prompt.
  2. Critique the response. The model evaluates its own answer: "What's wrong with this? What could be better?"
  3. Revise based on the critique. The model produces an improved version of its response, addressing the issues it identified.
  4. Train on the revised version. The model is fine-tuned on the improved response, not the original draft.
  5. Repeat. The improved model writes better initial drafts, produces sharper critiques, creates better revisions, and trains on higher-quality data. Each cycle raises the floor.
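The two-hat loop translates almost directly into code. In the sketch below, `StubModel` is a hypothetical stand-in for a model that has finished Phase 1 (its canned strings just mark which hat produced them); the point is the draft-critique-revise-train control flow.

```python
class StubModel:
    """Hypothetical stand-in for an LLM with SELF's Phase 1 meta-skills;
    returns canned strings so the loop below is runnable."""
    def __init__(self):
        self.trained_on = []

    def generate(self, prompt):
        if prompt.startswith("Critique"):
            return "too vague; add a concrete example"
        if prompt.startswith("Revise"):
            return "revised answer with a concrete example"
        return "vague first draft"

    def fine_tune(self, pairs):
        self.trained_on.extend(pairs)  # a real system would run gradient steps

def self_improvement_round(model, prompts):
    """One SELF iteration: draft -> critique -> revise, train on revisions."""
    training_data = []
    for prompt in prompts:
        draft = model.generate(prompt)                                  # step 1
        critique = model.generate(f"Critique this answer:\n{draft}")    # step 2
        revised = model.generate(                                       # step 3
            f"Revise the answer using the critique:\n{draft}\n{critique}")
        training_data.append((prompt, revised))  # the draft is discarded
    model.fine_tune(training_data)                                      # step 4

model = StubModel()
self_improvement_round(model, ["Explain gradient descent"])
print(model.trained_on)
```

Note that the model trains on the revision, never the draft, which is what raises the floor each cycle.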

Bonus: it works at inference time too

An interesting side benefit: once the model has learned the critique-and-revise skills, it can use them at inference time (when answering a user's question) without any additional training. The model generates a response, silently critiques it, silently revises it, and returns the improved version. This is an immediate quality boost that works on every query, not just during training.

Results

  Domains tested:                Mathematics and general tasks
  Improvement across iterations: Measurable and consistent — each round of critique/revision produced higher-quality output
  Compounding effect:            Better model → better critiques → better revisions → better training data → better model
  Human intervention needed:     None after Phase 1 setup

What it's missing

The self-critique ceiling

A model can only critique as well as it can understand the domain. If the model has a blind spot — a systematic error it doesn't recognize — its critique will miss the error, and training on the "revised" version will reinforce the mistake. For example, if the model consistently gets a scientific fact wrong and doesn't know it's wrong, its self-critique will say "looks good!" and the error gets baked deeper into the weights. This is different from model collapse (which narrows the distribution) but equally dangerous: it cements specific errors rather than correcting them. External evaluation — benchmarks, human spot-checks — is necessary to catch blind spots that the model can't see in itself.

Connection to the bigger picture

SELF fills a specific gap in the self-directed learning vision. SEAL (7.1) learns what kind of training data to create. STaR (7.3) learns from its own reasoning traces. Autonomous Learning (7.4) identifies what topics to study. SELF adds the ability to evaluate and improve its own work. A complete self-directed system would combine all of these: identify what to learn (gap detection), generate training material (SEAL), verify the material's quality (SELF's critique), refine it (SELF's revision), and then train on the polished result.

Reference: Lu et al. (2023) — SELF: Self-Evolution with Language Feedback


7.6 The Self-Testing Component: How the Model Catches Its Own Regressions

None of the five systems above (SEAL, Self-Instruct, STaR, Autonomous Learning, SELF) include systematic regression testing — checking whether learning something new broke something old. This is the missing piece that connects self-directed learning to the catastrophic forgetting problem.

In software engineering, regression testing is standard practice: after every code change, you run your test suite to make sure you didn't break anything. The same idea applies to model learning. After every training round, you should check that the model's existing abilities are still intact.

A concrete pipeline for self-testing

  1. Establish a diagnostic benchmark suite. Before any self-directed learning begins, measure the model's baseline scores on a battery of tests covering all domains it should be competent in:
    • General knowledge (e.g., MMLU or a subset)
    • Reasoning (e.g., GSM8K, ARC)
    • Code generation (e.g., HumanEval)
    • Domain-specific benchmarks for any specialized capabilities
    • A canary set — a small, carefully chosen collection of diverse examples that serves as an early warning system (cheap to run, catches obvious problems fast)
  2. Run diagnostics after each learning round. After every fine-tuning step — whether from SEAL self-edits, STaR rationalization, or any other self-training method — run the diagnostic suite. This produces a score vector across all domains.
  3. Detect regression. Compare post-training scores to the baseline and to the previous round's scores. Flag any domain where performance dropped by more than a threshold (e.g., 2% absolute or 5% relative).
  4. Generate targeted replay data. If regression is detected on domain X:
    • Generate replay examples for domain X using the gap-detection method from section 7.4
    • Include these replay examples in the next training round's data mix
    • Increase the importance-based protection (EWC, from section 6) on weights that the diagnostic indicates are critical for domain X
  5. Maintain a capability dashboard. Track scores across all iterations in a structured log. This catches:
    • Slow drift: is math slowly degrading even if each individual drop is below threshold?
    • Domain correlations: does improving medical knowledge always hurt code generation?
    • Early signs of model collapse: is output diversity decreasing across the board?
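Step 3's threshold rule is simple enough to show directly. The sketch below assumes scores are accuracy fractions and uses the 2% absolute / 5% relative thresholds suggested above; the benchmark names and numbers are illustrative, not real results.

```python
def detect_regressions(baseline, current, abs_tol=0.02, rel_tol=0.05):
    """Flag domains whose score dropped more than abs_tol absolute
    or rel_tol relative (the thresholds suggested in step 3)."""
    flagged = []
    for domain, base in baseline.items():
        drop = base - current[domain]
        if drop > abs_tol or (base > 0 and drop / base > rel_tol):
            flagged.append(domain)
    return flagged

# Illustrative scores (accuracy fractions), not real benchmark results.
baseline    = {"mmlu": 0.71, "gsm8k": 0.58, "humaneval": 0.46}
after_round = {"mmlu": 0.70, "gsm8k": 0.51, "humaneval": 0.46}

print(detect_regressions(baseline, after_round))  # gsm8k regressed: replay math
```

Flagged domains would then feed step 4, triggering targeted replay data and stronger EWC protection for that domain.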

Practical considerations

This is the autonomous version of what ML teams do manually

Production ML teams already run regression tests after model updates — they do it manually, with human engineers reviewing dashboard metrics and deciding whether to roll back changes. The self-testing component automates this and integrates it into the learning loop. The model monitors its own health, detects its own regressions, and prescribes its own remediation. Combined with the self-directed learning methods from sections 7.1-7.5, this completes the closed loop: learn, test, detect problems, fix them, repeat.


7.7 How Close Are We? An Honest Assessment

We've now seen five self-directed learning systems plus a proposal for self-testing. The natural question: how close are we to a model that can genuinely learn continuously, on its own, without forgetting?

The answer depends on what you mean by "learn continuously." There's a spectrum from easy problems (that we've mostly solved) to hard problems (that nobody has solved), and most of the excitement in this section falls somewhere in the middle.

[Figure: spectrum from solved (narrow task adaptation with LoRA) to unsolved (human-like continuous learning)]

The spectrum of continuous learning challenges

  Narrow task adaptation ("Fine-tune for medical QA without forgetting general ability")
    Status: Mostly solved
    Best current solution: LoRA (freeze most weights, adjust a few)

  Runtime knowledge ("Answer questions using documents provided at query time")
    Status: Mostly solved
    Best current solution: RAG (retrieval-augmented generation — look it up, don't memorize it)

  Self-generated training data ("Generate your own practice problems and learn from them")
    Status: Works for one round
    Best current solution: Self-Instruct, STaR (but degrades if iterated)

  Self-directed learning strategy ("Figure out what to study and how to study it")
    Status: Early research
    Best current solution: SEAL (learns the meta-skill, but still forgets)

  Multi-domain sequential learning ("Learn medicine, then law, then finance over months without degrading")
    Status: Partially addressed
    Best current solution: Replay + EWC + LoRA (works for a few domains, degrades eventually)

  Human-like continuous learning ("Learn indefinitely from experience, never forget fundamentals, integrate everything")
    Status: Not solved
    Best current solution: No existing solution works at scale

What each approach does well and where it falls short

  SEAL (7.1)
    Strength: Learns the meta-skill of how to teach itself
    Weakness: No forgetting protection; slow; needs a test to score against

  Self-Instruct (7.2)
    Strength: Generates massive amounts of training data from tiny seeds
    Weakness: One-shot only; can't iterate without quality collapse; can't create new capabilities

  STaR (7.3)
    Strength: Teaches reasoning through self-practice with rationalization
    Weakness: Needs ground-truth answers; limited to domains with verifiable solutions

  Autonomous Learning (7.4)
    Strength: Identifies its own knowledge gaps without any external signal
    Weakness: Assumes open-book answers are correct; no retention mechanism

  SELF (7.5)
    Strength: Quality control through self-critique and revision
    Weakness: Can't detect its own blind spots; risks cementing systematic errors

The fundamental barrier: everything is entangled

All five systems share the same underlying vulnerability: the model stores all its knowledge in shared weights. When you learn something new, you change weights. Some of those weights are also responsible for things you learned before. Change them too much, and you forget the old thing. Change them too little, and you don't learn the new thing.

This isn't just an engineering problem that better algorithms can solve — it's a consequence of how neural networks represent knowledge. In a large language model, concepts aren't stored in dedicated locations. They're distributed across millions of weights in complex, overlapping patterns. Researchers call this superposition: the model packs far more concepts into its weight space than it has dimensions, so concepts inevitably share weights. When you update those shared weights for one concept, you perturb every other concept that depends on them.
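A toy example makes the interference concrete. The two "concepts" below share one weight in a three-weight linear model, which is nothing like a real transformer, but the mechanism is the same: fitting task B moves a weight that task A depends on, and task A's error reappears.

```python
def sgd(w, data, lr=0.1, steps=200):
    """Plain SGD on squared error for a linear model y = w . x."""
    for _ in range(steps):
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for i, xi in enumerate(x):
                w[i] -= lr * err * xi
    return w

def loss(w, data):
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2 for x, y in data)

# Both tasks load on the shared middle weight w[1] -- a tiny "superposition".
task_a = [([1.0, 1.0, 0.0], 2.0)]
task_b = [([0.0, 1.0, 1.0], -1.0)]

w = [0.0, 0.0, 0.0]
sgd(w, task_a)
loss_a_before = loss(w, task_a)  # near zero: task A is learned
sgd(w, task_b)                   # "fine-tune" on task B only
loss_a_after = loss(w, task_b), loss(w, task_a)
print(loss_a_before, loss_a_after)  # B is learned, but A's loss has climbed
```

Training on task B never sees task A's data, yet it drags the shared weight to a new value and task A's error returns, which is exactly the shared-weights dilemma described above.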

Why LLMs might be especially vulnerable

Scale intensifies superposition rather than relieving it: a frontier model stores vastly more concepts than it has clean dimensions for, so each weight ends up serving many unrelated concepts at once. The more a model knows, the harder it is to change anything without touching something that matters.

The honest bottom line

We can do narrow task adaptation without forgetting (LoRA). We can do one round of self-improvement (Self-Instruct, STaR). We have early prototypes of self-directed learning strategy (SEAL). But genuine continuous learning — where a model keeps getting smarter over weeks and months without degrading — remains unsolved. The gap between "works in a paper" and "works in production, indefinitely" is large.


7.8 What Would a Real Solution Look Like?

If continuous learning is this hard, what would it take to actually solve it? Research in neuroscience, machine learning, and cognitive science points to four capabilities that a truly continuously-learning system would need. No existing system has all four. Most have one or two.


The four missing pieces

  1. Two learning systems (fast absorber + slow integrator)
     What the brain does: The hippocampus quickly records new experiences. During sleep, these memories are gradually transferred to the neocortex, where they're integrated with existing knowledge. The fast system captures, the slow system consolidates.
     What AI currently has: Nothing comparable in standard LLMs. LoRA is sometimes described this way (adapter = fast, base = slow), but it's a stretch — LoRA adapters are thrown away or merged, not gradually consolidated.
     What's still missing: A mechanism for "sleeping on it" — a process that takes quickly-learned information and slowly, safely integrates it into the model's core knowledge without disrupting what's already there.

  2. Sparse representations (each concept uses few weights)
     What the brain does: Only about 5% of neurons are active for any given stimulus. This sparsity means that different memories use mostly different neurons, so learning one thing rarely interferes with another.
     What AI currently has: Mixture of Experts (MoE) is a partial step — only a fraction of the model's parameters are active for each input. But within each expert, representations are still dense and entangled.
     What's still missing: True concept-level sparsity, where learning "quantum physics" activates a largely different set of weights than learning "contract law," so updating one doesn't affect the other.

  3. Adaptive weight protection (critical weights resist change)
     What the brain does: Synaptic consolidation: synapses that are heavily used become physically stronger and harder to modify. Important memories are literally harder to overwrite at the biological level.
     What AI currently has: EWC (Elastic Weight Consolidation) and similar methods estimate which weights are important and penalize changes to them. This works for a few tasks but degrades as the number of tasks grows — the importance estimates become noisy and conflicting.
     What's still missing: Importance tracking that scales to thousands of tasks and updates continuously, not just between tasks. Per-weight protection that adapts in real time as the model's knowledge structure evolves.

  4. Self-monitoring (detect your own forgetting)
     What the brain does: Metacognition: humans can often tell when they've forgotten something ("I used to know this, but I can't remember it now"). This awareness triggers re-study.
     What AI currently has: SEAL is an early attempt at self-monitoring (the model evaluates whether self-edits helped). The Autonomous Learning gap-detection method (7.4) is another form. But neither operates continuously or at the level of individual knowledge items.
     What's still missing: A system that continuously monitors all of the model's capabilities and detects degradation in real time — not just after each training round, but as an ongoing background process.

Why nobody has combined all four yet

Each of these pieces is individually difficult. Two learning systems require architectural changes to how models are trained and deployed. Sparse representations would require rethinking how information is encoded in transformer weights. Adaptive weight protection at scale requires solving the computational problem of tracking importance across billions of weights and thousands of concepts. Self-monitoring requires the model to maintain a running inventory of everything it knows — which is hard when "what it knows" is distributed across billions of parameters with no clear catalog.

But the deeper challenge is integration: these four pieces interact with each other in complex ways, and a design choice for one constrains what is possible for the others.

Current best systems combine about two of the four pieces (e.g., SEAL combines self-monitoring with self-teaching; LoRA + EWC combines sparse adaptation with weight protection). A system that successfully combines all four and works at the scale of frontier models (70B+ parameters) would be a major breakthrough — arguably as significant as the transformer architecture itself.

The path forward

The most promising near-term direction is probably combining SEAL-style self-directed learning (the model learns what to study) with EWC-style weight protection (important weights resist change) and the self-testing framework from section 7.6 (detect regressions automatically). This would give a system with three of the four pieces — self-monitoring, adaptive protection, and a learning strategy. The hardest piece — truly sparse representations — would likely require architectural innovation beyond what we can bolt onto current transformers.

8. The Model Collapse Problem

The critical risk with self-training

When models train on their own outputs iteratively, the output distribution gradually narrows and degrades. The model loses diversity — rare but valid outputs become rarer with each generation until they disappear entirely. After enough iterations, the model produces only generic, common outputs. This is model collapse.

Shumailov et al. (2023) demonstrated this convincingly, and follow-up work confirmed that LLMs suffer from training on their own outputs. The problem persists as of 2025.

Concretely: if the model generates training data about European capitals, and it already believes (with 95% confidence) that the capital of France is Paris, its self-generated training data will be 95% "Paris." The 5% of cases where it might have produced something unexpected (a discussion of Vichy as the historical capital, or context where "capital" means money) gradually disappear. Each generation amplifies the majority and suppresses the minority, until the model's outputs are completely homogeneous.

Generation 0 (real data):     "A fierce tiger stalked through the dense jungle"
Generation 1 (model output):  "A large tiger walked through the jungle"
Generation 2 (trained on G1): "A tiger was in the jungle"
Generation 3 (trained on G2): "The animal was in the forest"
Generation 4 (trained on G3): "The cat sat on the mat"
                               ↑ total collapse to the most generic output

Solutions: always mix in real human-generated data alongside self-generated data, filter self-generated data aggressively for quality and diversity, and use temperature/sampling strategies that preserve output variety.
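The narrowing dynamic can be reproduced in a few lines. The simulation below repeatedly fits a categorical distribution to a finite sample of its own output, standing in for training on self-generated data; the probabilities and category names are invented, and real collapse involves far richer distributions, but the tail-eating effect is the same.

```python
import random

random.seed(1)

# A long-tailed "output distribution": a few common phrasings and some
# rare but valid ones (probabilities invented for illustration).
dist = {"common": 0.60, "uncommon": 0.30, "rare": 0.09, "very_rare": 0.01}

def resample(dist, n=100):
    """One generation of self-training: draw a finite sample from the model's
    own outputs, then refit the distribution to that sample."""
    outcomes = random.choices(list(dist), weights=list(dist.values()), k=n)
    return {k: outcomes.count(k) / n for k in dist}

for generation in range(8):
    dist = resample(dist)
    # Once a category's probability hits zero it can never come back:
    # the model will never again produce what it has stopped producing.

print(dist)
```

Finite sampling is the culprit: rare categories sometimes draw zero samples, and once their probability hits zero, no later generation can revive them.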

9. Why Pre-Training Doesn't Cause Forgetting (But Fine-Tuning Does)

A natural question: if catastrophic forgetting happens because updating shared weights destroys old knowledge, why doesn't it happen during pre-training? The model trains on 15 trillion tokens — surely the last batch overwrites what was learned from the first batch?

The answer reveals exactly what causes forgetting and how to avoid it:

  Data diversity per batch
    Pre-training: High — each batch is a random mix of code, Wikipedia, books, medical text, legal text, everything
    Fine-tuning: Low — every batch is 100% from one domain (e.g., all medical QA)

  Learning rate
    Pre-training: Decays over training — tiny by the end (maybe 1/100th of peak)
    Fine-tuning: Relatively high (needs to be high enough to learn the new task quickly)

  Total data
    Pre-training: Trillions of tokens — every topic reinforced thousands of times
    Fine-tuning: Thousands to millions of examples — old topics get zero reinforcement

  Distribution shift
    Pre-training: None — data is shuffled, so batch 1 and batch 10,000,000 have the same distribution
    Fine-tuning: Complete — 100% shift from general to narrow domain

Pre-training avoids forgetting by design: shuffled data means every batch looks the same, and decaying learning rates mean late-training updates are tiny. Fine-tuning creates the perfect conditions for forgetting: narrow data, high learning rate, and zero exposure to old topics.

This points directly at the solution

If pre-training doesn't cause forgetting because the data is diverse and shuffled, the obvious fix for fine-tuning is: mix in some pre-training data during fine-tuning. Make each fine-tuning batch 80% new domain + 20% general pre-training data. This is exactly "experience replay" — and it works. Llama 2's RLHF training did this. But it's not a complete solution: you need access to the original pre-training data (often proprietary), you have to decide the right ratio, and 20% of your compute goes to re-learning what the model already knows.
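A sketch of that mixing strategy with the 80/20 ratio from above; the domain data, pool contents, and batch size are placeholders.

```python
import random

random.seed(0)

def replay_batches(new_domain, general_pool, batch_size=10, replay_frac=0.2):
    """Yield fine-tuning batches mixing new-domain data with replayed
    general pre-training data (the 80/20 recipe described above)."""
    n_replay = int(batch_size * replay_frac)
    n_new = batch_size - n_replay
    pool = list(new_domain)          # copy so the caller's list is untouched
    random.shuffle(pool)
    for i in range(0, len(pool) - n_new + 1, n_new):
        batch = pool[i:i + n_new] + random.sample(general_pool, n_replay)
        random.shuffle(batch)        # interleave so replay isn't clumped
        yield batch

medical = [f"med_{i}" for i in range(32)]    # narrow fine-tuning set
general = [f"gen_{i}" for i in range(1000)]  # sampled pre-training data

first = next(replay_batches(medical, general))
print(sum(x.startswith("gen_") for x in first), "replay items in a batch of", len(first))
```

Every batch the model sees is mostly new-domain but never purely so, recreating in miniature the diverse-batch condition that protects pre-training.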

10. A Research Proposal: The Five-Piece Continuous Learning System

Each of the techniques we've discussed addresses one aspect of the forgetting problem. Nobody has combined all of them into one system. Here's what that system would look like, and why it hasn't been built yet.

10.1 The Five Pieces

  1. Gradual freezing (LLRD)
     What it does: Earlier layers learn slowly, later layers adapt freely
     Problem it solves: Protects general knowledge encoded in early layers
     Exists individually? Yes — standard since ULMFiT (2018)

  2. Low-rank updates (LoRA)
     What it does: Freeze all weights, add small trainable adapters
     Problem it solves: Limits the total amount of change, preventing wholesale overwriting
     Exists individually? Yes — widely used since 2021

  3. High-quality curated replay
     What it does: Mix in "textbook quality" data from the pre-training corpus
     Problem it solves: Prevents forgetting by reminding the model of core knowledge (math, science, logic, code)
     Exists individually? Yes — Llama 3 annealing, Phi "textbooks are all you need"

  4. Self-reflection on knowledge gaps
     What it does: The model quizzes itself, identifies what it doesn't know, generates targeted training material
     Problem it solves: Makes learning efficient — study what you DON'T know, not random content
     Exists individually? Yes — Autonomous Learning (ACL 2025), SEAL

  5. Outer RL loop for learning strategy
     What it does: The model learns HOW to teach itself — which study notes work, which don't
     Problem it solves: Optimizes the self-teaching process over time, getting better at learning
     Exists individually? Yes — SEAL (MIT, 2025)

10.2 What's Been Combined (and What Hasn't)

  LLRD + LoRA: Yes — different LoRA ranks per layer; standard practice
  LoRA + replay: Yes — O-LoRA (2024), orthogonal LoRA subspaces with replay
  Replay + EWC: Yes — the most common combination in the continual learning literature
  Self-reflection + fine-tuning: Yes — SEAL, Autonomous Learning
  LLRD + LoRA + replay: Maybe — possibly in internal production systems, no prominent paper
  Self-reflection + LoRA + replay: No
  SEAL RL loop + LLRD + replay: No
  All five combined: No

This is a genuine research gap

Each piece is well-motivated, each exists and works individually, each addresses a different aspect of the problem, and nobody has combined them into one system. That's the definition of a research opportunity. The communities that work on these pieces (continual learning, parameter-efficient fine-tuning, self-improving LLMs) publish in different venues, attend different workshops, and don't cite each other's work. SEAL doesn't cite any continual learning papers. Continual learning papers don't cite SEAL.

10.3 Why Hasn't It Been Done?

  1. The communities don't overlap. The SEAL/self-improvement people are in the "LLM capabilities" community. The EWC/replay/LLRD people are in the "continual learning" community. The LoRA people are in the "parameter-efficient fine-tuning" community. They don't read each other's papers.
  2. Engineering complexity. Each piece alone is a significant engineering effort. Combining five into one system that works requires getting each piece right AND getting their interactions right. The RL outer loop (piece 5) is particularly finicky — reward signals are noisy, training is unstable, and adding it on top of everything else creates a complex optimization landscape.
  3. Evaluation is hard. How do you measure success for continuous learning? Standard benchmarks test performance at a snapshot. You'd need benchmarks that evaluate over time — "learn domain A, then B, then C, show performance on all three after each step." These barely exist at LLM scale.
  4. Compute cost. Running the full system on a 70B model would be extremely expensive. Most academic labs can't afford it. Industry labs could but haven't prioritized it.

10.4 A Concrete Experiment Design

If someone were to run this experiment, here's how:

Setup

Conditions to compare (each adds one piece)

  (a) Naive fine-tuning (no pieces): severe forgetting after each domain
  (b) LoRA only (piece 2): less forgetting (weights frozen), but limited adaptation
  (c) LoRA + replay (pieces 2 + 3): significantly less forgetting, decent adaptation
  (d) LoRA + replay + LLRD (pieces 1 + 2 + 3): better knowledge preservation, especially in early layers
  (e) Full system (all 5 pieces): best overall, with efficient learning, knowledge preservation, and an improving strategy

Measurements after each domain

The prediction

Each component should help incrementally: (a) < (b) < (c) < (d) < (e). But the largest marginal gains would come from pieces 4 and 5 (self-reflection + RL outer loop) on top of the protection mechanisms (pieces 1-3). The protection mechanisms prevent forgetting but don't help with efficient learning of new domains. Self-reflection does — it focuses learning on exactly what the model doesn't know, instead of wasting gradient updates on things it already understands.

The key hypothesis

The combination of "protect old knowledge" (pieces 1-3) and "learn new knowledge efficiently" (pieces 4-5) is more than the sum of its parts. Protection without efficient learning means slow adaptation. Efficient learning without protection means forgetting. Together, the model can learn fast and retain — which is what the brain does with its two-system architecture (fast learning in the hippocampus, slow consolidation into the neocortex).

References

  1. McCloskey & Cohen (1989) — First identification of catastrophic interference in neural networks
  2. McClelland, McNaughton & O'Reilly (1995) — Complementary Learning Systems theory
  3. Kirkpatrick et al. (2017) — Elastic Weight Consolidation (EWC)
  4. Howard & Ruder (2018) — ULMFiT: discriminative fine-tuning and gradual unfreezing
  5. Rolnick et al. (2019) — Experience Replay for Continual Learning
  6. Zelikman et al. (2022) — STaR: Self-Taught Reasoner
  7. Wang et al. (2023) — Self-Instruct
  8. Shumailov et al. (2023) — Model collapse from training on generated data
  9. Lu et al. (2023) — SELF: Self-Evolution with Language Feedback
  10. Del Gaudio et al. (2024) — Self-Synthesized Rehearsal (SSR)
  11. Luo et al. (2024) — Loss landscape sharpness and catastrophic forgetting
  12. Kim et al. (2024) — Layer-wise importance in fine-tuning
  13. Li et al. (2025) — Autonomous Learning for LLM Domain Adaptation
  14. Tack et al. (2025) — SEAL: Self-Adapting Language Models
  15. Continual Learning's Next Frontier (2026 survey) — Comprehensive survey of forgetting mechanisms
  16. Nature: Continual Learning and Catastrophic Forgetting
  17. IBM: What is Catastrophic Forgetting?
  18. LLRD and Fine-tuning Strategies
  19. Self-Evolution in LLMs (Survey)