Snapshot Cannot Be Restored Due to Subsequent Snapshots
March 13, 2026
Sometimes when rolling back an Incus container snapshot you may get an error like: Error: ... cannot be restored due to subsequent snapshot ... Set zfs.remove_snapshots to override. I did. Here is the solution.
Scenario
We sometimes want to roll a system way back. It should be a simple snapshot restore, but instead we get this message.
incus snapshot restore CONTAINER 2026-01-18-1737-FRESH
Error: Snapshot "2026-01-18-1737-FRESH" cannot be restored due to subsequent snapshot(s). Set zfs.remove_snapshots to override
If we list the snapshots, we see that we have taken several since then, so the system is trying to protect us from rolling back too far.
incus snapshot list CONTAINER
+-----------------------------+----------------------+------------+----------+
|            NAME             |       TAKEN AT       | EXPIRES AT | STATEFUL |
+-----------------------------+----------------------+------------+----------+
| 2026-01-18-1737-FRESH       | 2026/01/18 17:37 CST |            | NO       |
+-----------------------------+----------------------+------------+----------+
| 2026-01-18-1738-FRESHER     | 2026/01/18 17:38 CST |            | NO       |
+-----------------------------+----------------------+------------+----------+
| 2026-01-19-0921-DID-THING-1 | 2026/01/19 09:21 CST |            | NO       |
+-----------------------------+----------------------+------------+----------+
| 2026-01-19-1303-DID-THING-2 | 2026/01/19 13:03 CST |            | NO       |
+-----------------------------+----------------------+------------+----------+
| 2026-01-19-1535-DID-THING-3 | 2026/01/19 15:36 CST |            | NO       |
+-----------------------------+----------------------+------------+----------+
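As an aside: if you would rather not set the override at all, the same result can be reached by deleting the in-between snapshots yourself and then restoring. A sketch of that alternative, using the snapshot names from the listing above:

```shell
# Delete each snapshot taken after the one we want to restore,
# newest first (names are from the example listing above).
incus snapshot delete CONTAINER 2026-01-19-1535-DID-THING-3
incus snapshot delete CONTAINER 2026-01-19-1303-DID-THING-2
incus snapshot delete CONTAINER 2026-01-19-0921-DID-THING-1
incus snapshot delete CONTAINER 2026-01-18-1738-FRESHER

# With no subsequent snapshots left, the restore no longer errors.
incus snapshot restore CONTAINER 2026-01-18-1737-FRESH
```

This is more typing, but it makes the destruction of each snapshot an explicit choice rather than a side effect. The rest of this post covers the override route.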
Check your storage pools
incus storage list
+---------+--------+-------------+---------+---------+
|  NAME   | DRIVER | DESCRIPTION | USED BY |  STATE  |
+---------+--------+-------------+---------+---------+
| default | zfs    |             | 43      | CREATED |
+---------+--------+-------------+---------+---------+
Since mine is default, let's run incus storage show default.
incus storage show default
config:
  source: tank/virt/lxd
  volatile.initial_source: tank/virt/lxd
  zfs.pool_name: tank/virt/lxd
description: ""
name: default
driver: zfs
used_by:
...
Let's look closer at the ZFS datasets backing the pool.
zfs list -r tank/virt/lxd
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
tank/virt/lxd                        211G  1.99T    96K  legacy
tank/virt/lxd/buckets                 96K  1.99T    96K  legacy
tank/virt/lxd/containers             204G  1.99T    96K  legacy
...
tank/virt/lxd/containers/CONTAINER  2.66G  1.99T  3.26G  legacy
So the full dataset path is tank/virt/lxd/containers/CONTAINER and our pool is tank/virt/lxd. When referencing the volume in Incus, we use the volume type and name: container/CONTAINER (note the singular "container", unlike the plural "containers" in the dataset path).
To bring it all together, use the volume we just identified and apply the override to just this container's storage volume by running the following.
incus storage volume set default container/CONTAINER zfs.remove_snapshots=true
Prove it works
incus snapshot restore CONTAINER 2026-01-18-1737-FRESH
Be warned: the restore deletes the subsequent snapshots, so take care.
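Once the restore has succeeded, it may be worth removing the override so the subsequent-snapshot safety check applies again next time. A suggested cleanup step:

```shell
# Clear the per-volume override; restores will again refuse to
# destroy subsequent snapshots unless you explicitly allow it.
incus storage volume unset default container/CONTAINER zfs.remove_snapshots
```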
Another Option
Remember you can also copy from a snapshot. So another option would be powering down the container and creating a new container from the old snapshot.
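A sketch of that approach, where CONTAINER-restored is a hypothetical name for the new container:

```shell
# Stop the original so its state is quiescent.
incus stop CONTAINER

# Create a brand-new container from the old snapshot; the original
# container and all of its snapshots are left untouched.
incus copy CONTAINER/2026-01-18-1737-FRESH CONTAINER-restored

incus start CONTAINER-restored
```

This route is non-destructive: you keep the newer snapshots on the original container while testing the rolled-back state side by side.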