TimeFinder from EMC

TimeFinder SnapVX

TimeFinder SnapVX, usually just called SnapVX, was introduced with the VMAX3. SnapVX uses Redirect-On-Write (ROW) technology, which is new to TimeFinder. When an update is made to a source track, the update is asynchronously written to a new location in the SRP, and the original data is preserved for the snapshot. Pointers are used to make sure that each copy of the track maps to the correct data. SnapVX snapshots do not require target volumes, and in nocopy mode they only use extra space when the source volume is changed. A single source volume can have up to 256 snapshots, and these snapshots also save space by sharing point-in-time (PIT) tracks, or 'snapshot deltas'. The snapshot deltas are stored in the SRP alongside the source volume, and each snapshot has a set of pointers that reference the snapshot deltas needed to preserve its PIT image.

If you set your VMAX3 up with a storage group for each application, or group related storage groups together, then the beauty of this arrangement is that you can snap entire storage groups, or sets of storage groups, with a single command, and the snap is by definition point-in-time consistent. When the establish command is issued, SnapVX pauses IO to the storage group to ensure that no writes are active while the snapshot is being created. Once the snapshot activation completes, writes to the source disks are allowed again, but the snapshot remains a consistent copy of those source disks at T0, the time when the establish command was issued.

Establish

To create a snapshot, you use the symsnapvx establish command. Here, we take a snapshot of our app1_sg storage group, call it daily_snap, and keep it for 7 days.

symsnapvx -sid 038 -nop -sg app1_sg establish -name daily_snap -ttl -delta 7
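
The example above snaps a single storage group. If related applications sit in separate storage groups, you may be able to capture them in one consistent operation by listing the storage groups together; this is only a sketch, and it assumes that your Solutions Enabler version accepts a comma-separated list on the -sg option (app2_sg is just an illustrative second storage group).

# app2_sg is an illustrative name; the comma-separated -sg list is an assumption
symsnapvx -sid 038 -nop -sg app1_sg,app2_sg establish -name daily_snap -ttl -delta 7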

Snapshot names are case sensitive, can be up to 32 characters long and can contain underscores '_' and hyphens '-'. If you run this command every day, then 7 days' worth of snapshots will be kept. The current snapshot is always generation 0; each time a new snapshot is taken, the existing snapshots move up one generation number.
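
As a sketch of how the daily rotation might be scheduled, the establish command above could be run from cron. The SYMCLI install path shown here is an assumption, so adjust it for your environment:

# crontab entry: take the daily_snap snapshot at 05:00 every day (binary path is an assumption)
0 5 * * * /usr/symcli/bin/symsnapvx -sid 038 -nop -sg app1_sg establish -name daily_snap -ttl -delta 7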

Link

You can then define a link from the snapshot data to another storage group which is host mapped, to make the snapshot accessible to that host. This makes a consistent PIT copy of an entire application available for offline backup or for testing purposes.
If you define your linked target volume to be in Copy mode, then the snapshot will copy all the data in the background and create a complete, standalone set of application data.

To access this snapshot, you need to link a host mapped target volume to the snapshot data. The links may be created in Copy mode (by adding -copy after the link command) for a permanent, full copy on the target volume, or in the default Nocopy mode for temporary use. You can add a -gen n parameter to link to an earlier snapshot.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap link -lnsg StorageGroup2
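
For example, assuming the same option ordering used elsewhere on this page, a copy mode link to an older generation would look something like this:

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 2 link -lnsg StorageGroup2 -copy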

If you already have a snapshot linked to that host, you can switch the link to a different snapshot or generation with the relink command. If the original link was in copy mode, then the relink only needs to copy over the differential data.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 5 relink -lnsg StorageGroup2 -copy

If you want to permanently remove the link relationship, use the unlink command.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 5 unlink -lnsg StorageGroup2

Restore

You can restore your original source volume from a snapshot with the symsnapvx restore command. Restores work by copying back only the differential data, so they can be fast if there has been little update activity since T0. The second restore command below winds the same storage group back to generation 4. If the snapshots run daily at 05:00 and this is Friday, then the generation 4 snapshot would have been taken on Monday at 05:00.

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap restore
symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 4 restore

Display

If you run the same establish command twice, SnapVX will create a new snapshot with the same name but a different generation number. To get some information about your snapshots you can try the following command, which will list all existing snapshots, with generation numbers, space used and expiry dates.

symsnapvx -sid 038 list -sg app1_sg -detail

If you want to find out what link relationships exist, use the list command with the -linked option.

symsnapvx -sid 038 list -linked

Removing Snapshots

If you terminate a snapshot, you remove it from the system completely. You must unlink any active link sessions and terminate any restore sessions first. Typical commands are:

symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 4 terminate -restored
symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap -gen 4 terminate

Cascading Snapshots

It is possible to have snapshots of snapshots of snapshots. EMC states that there are no architectural restrictions on how deep a set of cascaded relationships can go. To set up a cascaded relationship, simply use an existing linked target as the source of a new snapshot.
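
As a sketch using the names from the earlier examples, you could link the daily_snap snapshot to StorageGroup2 and then take a snapshot of that linked storage group. The snapshot name cascade_snap is just illustrative.

# link the snapshot to a host mapped storage group, then snap the linked storage group (cascade_snap is an illustrative name)
symsnapvx -sid 038 -sg app1_sg -snapshot_name daily_snap link -lnsg StorageGroup2
symsnapvx -sid 038 -sg StorageGroup2 establish -name cascade_snap -ttl -delta 7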


Older EMC Snapshot Products

EMC has three older instant copy products: TimeFinder/Clone, TimeFinder/Snap and TimeFinder/VP Snap.

TimeFinder/Clone

TimeFinder/Clone creates physical volume copies called clones. The clone copies can be in RAID 5 format and do not require that a mirror has previously been established. You can have up to 8 concurrent clone copies. Clone data is immediately accessible from a host, unlike standard BCVs, where you need to wait for the copy to complete.

TimeFinder/Clone has two activate modes: -copy and -nocopy. With -copy you will eventually have a complete copy of the original disk at the clone, as it was at the point in time the activate command was issued. With -nocopy, only updated tracks are copied, and uncopied data is maintained at the clone with pointers back to the source. Either option requires that the clone be the same size as the source. In open systems, nocopy is the default, and as all the data is not copied, the clone cannot be used as a DR position. The create command has a -precopy option that starts the full copy process off before the activate, which speeds up the process of creating a full copy. In a mainframe setup, the SNAP command automatically starts a full copy process.

The TimeFinder/Clone Commands are -

  • Create initiates a session between a standard volume and a clone copy. You can initiate sessions for an entire device group, between two devices in a group, or between two ungrouped devices. The first command below assumes a device group called CLONEDB has already been defined and creates clone sessions to the target devices within the group. The second command initiates a session between two specific devices in the group. The third command uses the -precopy option, so the copy process begins as soon as the clone relationship is established, and -differential, which allows the clone to be refreshed at a later date.

    symclone -g CLONEDB -tgt create
    symclone -g CLONEDB create DEV001 sym ld DEV002
    symclone -g CLONEDB create DEV001 sym ld DEV002 -precopy -differential

  • Activate makes the clone available for read/write and, with the -copy option, starts the data copy process from the standard volume to the clone. The default action is nocopy, which means that only updated tracks are copied over from the source. You can query the status of a clone, including the status of the copy process, with the third command below. The copy status will be either 'CopyInProg' or 'Copied'.

    symclone -g CLONEDB -tgt activate -consistent
    symclone -g CLONEDB activate DEV001 sym ld DEV002
    symclone -g CLONEDB query

  • If the clone was created with the -differential option, it is possible to refresh the clone copy to the current point in time. To do this, issue the recreate and then the activate commands below.

    symclone -g CLONEDB -tgt recreate
    symclone -g CLONEDB -tgt activate -consistent

  • You use RESTORE to recover a volume or group back to its point-in-time state. The restore target can be the original volume or a new volume. You need the -force option if your source volume is in an active RDF session with remote R2 devices. The symclone query command will show the status as 'Restore in Progress' or 'Restored'. Once the restore completes, you need to split the clone before you can re-establish cloning in the normal direction.

    symclone -g CLONEDB -tgt restore -force
    symclone restore DEV001 sym dev 0041
    symclone -g CLONEDB query
    symclone -g CLONEDB split

  • Use terminate to break a clone relationship into discrete volumes. The clone must be in 'Copied' status or the data on it will not be complete.

    symclone -g CLONEDB query
    symclone -g CLONEDB terminate DEV001 sym ld DEV002

TimeFinder/Snap

TimeFinder/Snap works by creating a new image of a LUN that consists of pointers referencing the data on the original LUN. When an update is made to the source, the original data is copied to the snap before the source is overwritten. However, the snap does not reserve space for a full disk to cater for these updates. Instead, you allocate a 'Save Device', which is a common pool that holds the original data that has to be copied when updates are made to the primary.
Unlike the other implementations, TimeFinder/Snap is designed for applications that need temporary access to production data, perhaps for reporting or testing. It is not designed for, nor is it suitable for, disaster recovery, as it is completely dependent on the existence of the source data.

The Snap utility can normally create up to 16 independent copies of a LUN, where the target data appears to be frozen at the time each snap command was issued. You can increase this to 128 copies by setting the environment variable

SET SYMCLI_MULTI_VIRTUAL_SNAP=ENABLED

  • The starting point for defining a snap copy is to set up a device group that contains all the data that you want snapped. The examples below refer to a device group called SNAPDB. Once you have your device group, you need to start the session between a standard volume and a snap copy with a create command. The device numbers are for illustration only; use your own device numbers. addall means add all the ungrouped devices in the specified range, and -vdev means the devices are added as virtual devices.

    symdg create SNAPDB
    symld -g SNAPDB addall -range 00:09
    symld -g SNAPDB addall dev -range 3E:37 -vdev
    symsnap -g SNAPDB create

  • Activate starts the copy-on-write process that preserves the snap copy.

    symsnap -g SNAPDB activate -consistent

  • If you want to 'refresh' your snap copy to make it look like a current copy of the source group, you need to terminate the existing session, then re-establish the snap. This starts a new point in time copy using a differential update.

    symsnap -g SNAPDB terminate
    symsnap -g SNAPDB create
    symsnap -g SNAPDB activate -consistent

  • RESTORE is used to recover a volume back to its point-in-time state. This can be the original volume or a new volume.

    symsnap -g SNAPDB restore

TimeFinder/VP Snap

TimeFinder/VP Snap is used where the source volumes use virtual provisioning, which means you can create space-efficient clones. The efficiency comes from sharing thin pool extents across multiple VP Snap targets. Before you can use VP Snap you need to set up a thin snap device group, as described in the TimeFinder/Snap section above.
The commands to establish and activate two VP snap sessions are:

symclone -sid 475 -g testdg create -vpool DEV001 sym ld TARG01 -nop
symclone -sid 475 -g testdg activate DEV001 sym ld TARG01 -nop
symclone -sid 475 -g testdg create -vpool DEV001 sym ld TARG02 -nop
symclone -sid 475 -g testdg activate DEV001 sym ld TARG02 -nop

Note the '-vpool' parameter: this is the virtual provisioning pool name, and it makes the clone use virtual provisioning.
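
To check the state of the VP Snap sessions, the same query command shown for the standard clone examples above should apply here too.

symclone -sid 475 -g testdg query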

RecoverPoint

EMC also offers RecoverPoint, which allows you to roll a source volume back to a specific point in time, rather than just to the point when a clone was activated. This can be very useful for recovering a disk from a corruption situation, where you want to get back to the point in time nearest to when the corruption occurred. EMC uses the term 'DVR-like' recovery, as once you pick and restore to a point in time, it is possible to 'fast-forward' or 'reverse' to another point in time if the original one was not suitable.

At a local level, RecoverPoint essentially gives you continuous data protection (CDP), but you can also extend it to use remote storage, and so get continuous remote replication (CRR). To achieve this, RecoverPoint traps every write IO, writes it to local and remote journals, and then distributes the data to the target volumes. For remote writes, the data is deduplicated and compressed before being sent to the remote site to save on bandwidth.

RecoverPoint volume terminology is 'source volumes', the production volumes that are to be copied, and 'replica volumes', the target RecoverPoint volumes. You would normally write disable your replica volumes to ensure that the data is an exact copy of the source. However, it is possible to give a remote host direct read/write access to a replica volume and its associated journal, so it can access the data at any point in time. You can also swap the configuration around, so the source volume becomes the replica and the replica the source, and swap the hosts, so the primary host becomes the standby and the standby the primary.

If you use remote replication, then you need an appliance server, called a RecoverPoint Appliance (RPA), to manage it. The appliance sits on the SAN and runs the RecoverPoint software. You need at least two RPAs at each site for fault tolerance, and up to eight can be installed.
RPAs are connected to the SAN by four 4 Gb FC connections and two 1 Gigabit Ethernet connections. The RPA needs to be able to see all the write IOs that originate from the production host, so the RPA ports need to be zoned to the same Symmetrix VMAX front-end adapters (FAs) as are zoned to the production host.
You also need to be able to split the write IOs so they are directed to both the local and the remote replication volumes. This is done by the Symmetrix VMAX write splitter for RecoverPoint, an enhanced implementation of Open Replicator. You need to have Enginuity 5876 and RecoverPoint 3.5 or higher installed on your VMAX. The splitter simultaneously sends the writes to the local target volumes and to the RPA, which then forwards the writes to the target device in the remote array.

RecoverPoint needs a few special volumes that are only visible to the RPA cluster. They hold the RecoverPoint journals and management information required for RecoverPoint replication operations.

If you need to preserve the write order of your IOs, then you need RecoverPoint consistency groups. They are very similar to TimeFinder consistency groups and consist of one or more replication sets.