TimeFinder from EMC

TimeFinder SnapVX

TimeFinder SnapVX, usually just called SnapVX, was introduced with the VMAX3. SnapVX uses Redirect-On-Write (ROW) technology, which is new to TimeFinder. When an update is made to a source track, the update is asynchronously written to a new location in the SRP and the original data is preserved for the snapshot. Pointers are used to make sure that each copy of the track is associated with the correct data. SnapVX snapshots do not require target volumes and, in nocopy mode, they only use extra space when the source volume is changed. A single source volume can have up to 256 snapshots, and these snapshots also save space by sharing point-in-time (PIT) tracks, or 'snapshot deltas'. The snapshot deltas are stored in the SRP alongside the source volume, and each snapshot has a set of pointers that reference the snapshot deltas needed to preserve its PIT image.

If you set your VMAX3 up with a storage group for each application, or group related storage groups together, then the beauty of this arrangement is that you can snap entire storage groups, or sets of storage groups, with a single command, and the snap will by definition be point-in-time consistent. When the establish command is issued, SnapVX pauses IO to the storage group to ensure that no writes are active while the snapshot is being created. Once the snapshot activation completes, writes are allowed to the source disks again, but the snapshot is a consistent copy of those source disks at T0, the time when the establish command was issued.
Note that only a single storage group can be specified in a single operation. You could schedule multiple snapshots to run at the same time, but they are likely to have slightly different timestamps, which would not be a valid backup for groups of database files. One way to guarantee consistency is to implement these storage groups as cascaded storage groups, then take the snapshot of the parent storage group.
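
As a sketch, the commands below create a parent storage group, add two child storage groups to it, and then snap the parent with a single establish. The group names app_parent_sg and app2_sg are illustrative, and the exact symsg syntax should be checked against your version of Solutions Enabler.

symsg -sid 038 create app_parent_sg
symsg -sid 038 -sg app_parent_sg add sg app1_sg
symsg -sid 038 -sg app_parent_sg add sg app2_sg
symsnapvx -sid 038 -nop -sg app_parent_sg establish -name daily_snap -ttl -delta 7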

Establish

To create a snapshot, you use the symsnapvx establish command. Here, we take a snapshot of our app1_sg storage group, call it daily_snap, and keep it for 7 days.

symsnapvx -sid 038 -nop -sg app1_sg establish -name daily_snap -ttl -delta 7

Snapshot names are case sensitive, can be up to 32 characters long and can contain underscores '_' and hyphens '-'. If you run this command every day, then 7 days' worth of snapshots will be kept. The most recent snapshot is always generation 0, and older snapshots with the same name are renumbered with higher generation numbers.
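
If you schedule the establish through cron, the entry might look like the sketch below. The 05:00 run time is just an assumption to match the restore example further down this page, and the SYMCLI bin directory needs to be on the PATH that cron uses.

0 5 * * * symsnapvx -sid 038 -nop -sg app1_sg establish -name daily_snap -ttl -delta 7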

Link

You can then define a link from the snapshot data to another storage group which is host mapped, to make the snapshot accessible to the host. This makes a consistent PIT copy of an entire application available for offline backup or for testing purposes.
If you define your linked target volume to be in Copy mode, then the snapshot will copy all the data in the background and create a complete set of application data.

To access this snapshot, you need to link a host-mapped target volume to the snapshot data. The links may be created in Copy mode (by adding -copy after the link command) for a permanent copy on the target volume, or in the default Nocopy mode for temporary use. You can add a -gen n parameter to link to an earlier snapshot.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap link -lnsg StorageGroup2
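
As a variation, the sketch below combines the -gen and -copy options described above to take a permanent, full copy of the snapshot from two days ago (generation 2). It only uses options already shown on this page.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 2 link -lnsg StorageGroup2 -copy -nop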

If you already have a snapshot linked to that host, you can switch the link to a different snapshot with the relink command, which unlinks and relinks in a single operation. If the original link was in copy mode, then the newly linked snapshot will just need to copy over the differential data.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 5 relink -lnsg StorageGroup2 -copy

If you want to permanently remove the link between the snapshot and the target storage group, use the unlink command.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 5 unlink -lnsg StorageGroup2

Restore

You can restore your original source volume from a snapshot with the symsnapvx restore command. Restores work by copying back the differential data, so they can be fast if there was little update activity since T0. The second restore command will wind the same storage group back to generation 4. If the snapshots run daily at 05:00 and this is Friday, then the generation 4 snapshot would have run on Monday at 05:00.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap restore
symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 4 restore
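
Once the restored data has been fully copied back, the restore session itself needs to be terminated before the snapshot can be removed. A sketch, using the terminate -restored form covered in the Removing Snapshots section below:

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 4 terminate -restored -nop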

Display

If you run the same snap create command twice, SnapVX will create a new snapshot with the same name, but a different generation number. To get some information about your snaps you can try the following command, which will list all existing snapshots, with generation numbers, space used and expiry dates.

symsnapvx -sid 038 list -sg app1_sg -detail

If you want to find out what link relationships exist, use the list command with the -linked option

symsnapvx -sid 038 list snapvx_devices -linked

Removing Snapshots

If you terminate a snapshot, you remove it from the system completely. You must remove any active link sessions and terminate any restore sessions first. Typical commands are:

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 4 terminate -restored
symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 4 terminate

Cascading Snapshots

It is possible to have snapshots of snapshots of snapshots. You can't exactly take a snap of another snap, but you can get the equivalent by linking through a target LUN. You present a snapshot to a host by linking a target LUN, and you can link several targets to a single snapshot. So you start to use that target LUN for testing, and update the data in the process. Then you get to the point where you would like to take a copy of that data before you do more updates, and you can do this by taking a snapshot of the linked target. You could then link that snapshot to another target LUN, present it to a host, and then snap it again. There is no theoretical limit to how far you could go down this chain. With SnapVX software, this action is referred to as cascading rather than snap of a snap, because the linked target, not the targetless snapshot, is replicated.
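
A minimal sketch of that cascade, reusing names from earlier on this page: daily_snap is linked to the host-mapped group StorageGroup2, the linked target is then snapped, and that new snapshot is linked to a further group. The names test_snap and StorageGroup3 are hypothetical.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap link -lnsg StorageGroup2
symsnapvx -sid 038 -nop -sg StorageGroup2 establish -name test_snap
symsnapvx -sid 038 -sg StorageGroup2 -snapshot_name test_snap link -lnsg StorageGroup3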


Older EMC Snapshot Products

EMC has three older instant copy products:

TimeFinder/Clone

TimeFinder/Clone creates physical volume copies called clones. The clone copies can be in RAID 5 format and do not require that a previous mirror has been established. You can have up to 8 concurrent clone copies. Clone data is immediately accessible from a host, unlike standard BCVs where you need to wait for the copy to complete.

TimeFinder/Clone has two activate modes: -copy and -nocopy. With -copy mode you will eventually have a complete copy of the original disk on the clone, as it was at the point in time the activate command was issued. With -nocopy mode, only updated tracks are copied, and uncopied data is maintained at the clone with pointers. Either option requires that the clone be the same size as the source. In open systems, nocopy is the default and, as all the data is not copied, it cannot be used as a DR position. The create command has a -precopy option that starts the full copy process off before the activate, thus speeding up the process of creating a full copy. In a mainframe setup, the SNAP command automatically starts a full copy process.

Typical TimeFinder/Clone commands are shown in the sketch below.
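
This is only a sketch: it follows the same device group and pairing form as the VP Snap examples further down this page, the device group testdg and the device names are illustrative, and the -copy, -nocopy or -precopy options described above can be added to the create or activate steps as appropriate.

symclone -sid 475 -g testdg create -precopy DEV001 sym ld TARG01 -nop
symclone -sid 475 -g testdg activate DEV001 sym ld TARG01 -nop
symclone -sid 475 -g testdg query
symclone -sid 475 -g testdg terminate DEV001 sym ld TARG01 -nop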

TimeFinder/Snap

TimeFinder/Snap works by creating a new image of a LUN that consists of pointers that reference the data on the original LUN. If any updates are made to the source, the original data is copied to the snap before the source is overwritten. However, the snap does not reserve space for a full disk to cater for updates. Instead, you allocate a 'Save Device', which is a common pool for original data that needs to be copied when updates are made to the primary.
Unlike the other implementations, TimeFinder/Snap is designed for applications that need temporary access to production data, maybe for reporting or testing. It is not designed to be, nor is it suitable for, disaster recovery, as it is completely dependent on the existence of the source data.

The Snap utility can normally create up to 16 independent copies of a LUN, where the target data appears to be frozen at the time each Snap command was issued. You can increase this to 128 copies by issuing the command

SET SYMCLI_MULTI_VIRTUAL_SNAP=ENABLED

TimeFinder/VP Snap

TimeFinder/VP Snap is used where the source volumes use virtual provisioning, which means you can create space-efficient clones. You achieve this efficiency by sharing thin pool extents across multiple VP Snap targets. Before you can use VP Snap you need to set up a thin snap volume group, as described in the TimeFinder/Snap section above.
The commands to establish and activate two VP Snap sessions are:

symclone -sid 475 -g testdg create -vpool DEV001 sym ld TARG01 -nop
symclone -sid 475 -g testdg activate DEV001 sym ld TARG01 -nop
symclone -sid 475 -g testdg create -vpool DEV001 sym ld TARG02 -nop
symclone -sid 475 -g testdg activate DEV001 sym ld TARG02 -nop

Note the '-vpool' parameter; this is the virtual provisioning pool name, and it is what makes the clone use virtual provisioning.
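
To confirm that both VP Snap sessions exist and to see their state, a symclone query against the same device group can be used (a sketch reusing the -sid and group name from the commands above):

symclone -sid 475 -g testdg query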

RecoverPoint

EMC has introduced a replication product called RecoverPoint, which complements TimeFinder and allows you to roll a source volume back to a specific point in time, rather than just to the point when the clone was activated. This can be very useful when recovering a disk from a corruption situation, where you want to get back to the nearest point in time before the corruption occurred. EMC uses the term 'DVR-like' recovery, as once you pick and restore to a point-in-time, it is possible to 'fast-forward' or 'reverse' to another point-in-time if the original one was not suitable.

At a local level, RecoverPoint essentially gives you local continuous data protection (CDP), but you can also extend it to use remote storage and so get continuous remote replication (CRR). To achieve this, RecoverPoint traps every write IO, writes it to the local and remote journals, and then distributes the data to the target volumes. For remote writes, the data is deduplicated and compressed before it is sent to the remote site, to save on bandwidth.

RecoverPoint volume terminology is 'source volumes', the production volumes that are to be copied, and 'replica volumes', the target RecoverPoint volumes. You would normally write disable your replica volumes to ensure that the data is an exact copy of the source. However, it is possible to allow a remote host to get direct read/write access to a replica volume and its associated journal, so it can get access to the data at any point in time. You can also swap the configuration around so the source volume becomes the replica and the replica the source, and swap the hosts so the primary host becomes the standby and the standby the primary.

If you use remote replication, then you need an appliance server, called a RecoverPoint appliance (RPA), to manage it. The appliance sits on the SAN and runs the RecoverPoint software. You need at least 2 RPAs at each site for fault tolerance, and up to 8 can be installed.
RPAs are connected to the SAN by four 4 Gb FC connections and two 1 Gigabit Ethernet connections. The RPA needs to be able to access all the write IOs that originate from the production host, so the RPA ports need to be zoned to the same Symmetrix VMAX front-end adapters (FAs) as are zoned to the production host.
You also need to be able to split the write IOs so they are directed to both local and remote replication volumes. This is done by a Symmetrix VMAX write splitter for RecoverPoint, an enhanced implementation of Open Replicator. You need to have Enginuity 5876 and RecoverPoint 3.5 or higher installed on your VMAX. The splitter simultaneously sends the writes to the local target volumes and the RPA (RecoverPoint Appliance), which then forwards the writes to the target device in the remote array.

RecoverPoint needs a few special volumes that are only visible to the RPA cluster. They hold the RecoverPoint journals and management information required for RecoverPoint replication operations.

If you need to preserve the write order of your IOs, then you need RecoverPoint consistency groups. They are very similar to TimeFinder consistency groups and consist of one or more replication sets.
