UHAC (production) - 1TB of data on Celerra (NS Gateway, Integrated, or Unified) / Centera Gen 3 or Gen 4
Objective - Migrate the data to any other target storage (non-EMC). Decommission the existing Celerra and Centera.
Checkpoint / Red flag - archived data sits on the Centera; the question is how to recall that data.
Best Approach
Option (A)
(1) Break the Centera connection from the Celerra using the fs_dhsm command.
***Prerequisite - an equal amount of free space is required on the Celerra to hold the data recalled from Centera***
(2) Once all data is recalled, complete the migration using a host-based migration tool or a backup/restore option. **Prerequisite - a dedicated Windows host with a high-end configuration and domain admin access**
Option (B)
(1) Use a host-based migration tool such as emcopy, xcopy, or robocopy. **Prerequisite - a dedicated Windows host with a high-end configuration and domain admin access**
(2) Map the target drive or assign a LUN to the target drive.
(3) Migrate the data. While copying, the read pass recalls each stubbed file from Centera, so the entire content of the file system lands on the target.
Option (C)
Work directly with the Centera SDK APIs or the SNIA XAM protocol (this is the approach used by Interlock and other vendors offering this style of migration).
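Centera is content-addressed storage: objects are retrieved by a content address derived from the data, not by a path. The toy in-memory sketch below only illustrates that write-once, address-by-content model; the class and method names are hypothetical and are not the real Centera SDK or XAM API.

```python
import hashlib

class ToyClipStore:
    """In-memory stand-in for a content-addressed store (CAS).

    Real Centera access goes through the EMC SDK or SNIA XAM; this toy
    only shows the addressing model an SDK/XAM-based migration tool
    walks when pulling objects off the array.
    """
    def __init__(self):
        self._blobs = {}

    def write(self, data: bytes) -> str:
        # The address is derived from the content itself, so identical
        # bytes always map to the same address (write-once semantics).
        address = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(address, data)
        return address

    def read(self, address: str) -> bytes:
        return self._blobs[address]
```

A migration tool built on the real SDK enumerates clips and copies each object to the new target keyed by its content address, which is why option (C) can bypass the Celerra stubs entirely.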
Option (D)
(1) Break the connection from the Celerra to the Centera.
(2) Take a backup of the stub files / the entire file system contents.
(3) Restore onto the new target storage (with the same CIFS/file-system configuration).
(4) Create a new connection, as on the Celerra, from the new storage to the Centera.
(5) Recall the data onto the new storage.
(6) Once all data is recalled, decommission the Centera.
Note: this depends on the compatibility of the Centera with the new target storage.
Option C is much faster than the other options.
Options A and B are the best approach if the customer has financial constraints.
Thursday, September 15, 2011
Monday, May 30, 2011
Good Config - EMC
The following are good configuration practices:
MirrorView -> the same source CLARiiON can mirror to multiple target CLARiiONs.
SANCopy -> reserve 10-15% of cache in the RLP.
SnapView (clone/snapshot) -> good for a backup copy on the same array.
RAID group -> the same RG should not be used for both SAN and NAS data, and the same RG should not be used for both RLP and data LUNs.
Celerra fs -> the same file system should not be expanded across multiple pools.
Celerra pool -> 4+1/4+2 pools (on FC drives) are the best configuration for performance-sensitive applications.
Celerra iSCSI -> recommend a separate VLAN and a redundant LAN configuration for high availability; iSCSI replication also requires a separate reservation of space for the savvol.
Celerra multiprotocol FS -> mixed-mode file systems are a good choice for multiprotocol access (UNIX access uses UNIX credentials; Windows uses AD for authentication) **if standalone CIFS is not in use**.
Centera -> recommend using PEA files for archiving rather than anonymous access.
Centera -> always lock the nodes, and keep a minimum of two nodes as access nodes.
Friday, March 25, 2011
VNXe - EMC Storage
VNXe - Smart Storage
--------------------
EMC has introduced storage for small and midsized businesses (SMBs); the real gem here is the VNXe series:
*managed with a tool called Unisphere
*easy to install
*easy to manage
*easy to provision
*data protection features
*replication and DR solutions
*multiprotocol support
*best-practice wizards to configure the virtual storage
*easy to deploy storage for VMware-based environments
*detailed resource reports
*detailed component views and reports
*knowledge base and community links included
*easy to load licenses and credentials through a file
VNXe Models
--------------
VNXe3100
VNXe3300
VNX5100/5300/5500/5700/7500
http://www.emc.com/microsites/record-breaking-event/index.htm?pid=home-megalaunch-012511
Thursday, December 23, 2010
DeDup in Celerra
Considerations:
(1) Free space - a production or high-intensity file system should have at least a couple of GB of free space before enabling DeDup. In fact, only a few MB is the stated minimum requirement to enable DeDup, but that little headroom can cause problems.
a. Why? If there is, say, a 10 GB file, DeDup decompresses it, puts the decompressed file in the same location, and then eliminates the duplicate data; so there must be enough free space to hold that decompressed copy.
b. To maintain that free space, consolidate the file system by reducing utilization, or extend it by a couple of GB if the largest file size is unknown.
(2) Performance impact of enabling DeDup
a. DeDup runs in the background and is not as performance-sensitive as replication, because it runs over a few days or for a while rather than continuously.
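The free-space consideration above can be reduced to a rough arithmetic check. This is an illustrative rule of thumb based on the note about holding a working copy of the largest file, not an EMC-documented formula; the function name and the 1.2x margin are assumptions.

```python
def dedup_headroom_ok(fs_free_bytes: int, largest_file_bytes: int,
                      safety_factor: float = 1.2) -> bool:
    """Rough pre-DeDup headroom check: the file system should have
    enough free space to hold a working copy of its largest file,
    plus some margin (safety_factor is an assumed 20% buffer)."""
    return fs_free_bytes >= largest_file_bytes * safety_factor
```

For example, with a 10 GB largest file, roughly 12 GB of free space clears the check, while the few-MB formal minimum clearly does not.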
Tuesday, May 25, 2010
Data Reduction Technology
Data size is increasing! Corporations struggle to contain costs while investing heavily in backup and DR solutions to protect critical data and keep it highly available. The major storage companies' new data reduction technologies help shrink data size, which improves performance, data integrity, and storage utilization, eliminates redundant data, reduces data protection costs, and speeds up remote backups, replication, and disaster recovery.
There are a number of technologies that fall under the classification of data reduction or deduplication techniques.
Both NetApp and EMC provide data reduction technology.
NetApp Inc - its deduplication works at the block level and is the most prominent of the offerings aimed at primary storage.
EMC - Celerra Data Deduplication, which actually performs compression before tackling deduplication of file-based data.
The following table shows four major data reduction technologies, along with the space savings they can be expected to deliver when applied to a file server or NAS data set.
Technology                      "Typical" Space Savings   Resource Footprint
File-level deduplication        10%                       Low
Fixed-block deduplication       20%                       High
Variable-block deduplication    28%                       High
Compression                     40% - 50%                 Medium
File-level deduplication, also known as file single-instancing, provides relatively modest space savings but is also relatively lightweight in terms of the CPU and memory required to implement it.

Fixed-block deduplication provides better space savings but is far more resource-intensive, due to the processing power required to calculate hashes for each block of data and the memory required to hold the indices used to determine whether a given hash has been seen before.

Variable-block deduplication provides slightly better space savings than fixed-block deduplication, but the difference is not significant when applied to file system data. Variable-block deduplication is most effective on data sets that contain misaligned data, such as backup data in backup-to-disk or VTL environments. Its resource footprint is similar to that of fixed-block deduplication: it requires similar amounts of memory and slightly more processing power.

Compression is often considered different from deduplication. However, compression can be described as infinitely variable, bit-level, intra-object deduplication. Technical pedantry aside, it is simply another technique that alters the way data is stored to improve storage efficiency. In fact, it offers by far the greatest space savings of all the techniques listed for typical NAS data, and its resource footprint is relatively modest: it is fairly compute-intensive but requires very little memory.
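The table's "typical" savings translate directly into stored capacity. A simple arithmetic sketch using the percentages above (the 45% midpoint of the 40%-50% range is assumed for compression; the function name is illustrative):

```python
# "Typical" space savings from the table above, as fractions of the
# original data set. 0.45 is the assumed midpoint of 40%-50%.
SAVINGS = {
    "file-level dedup": 0.10,
    "fixed-block dedup": 0.20,
    "variable-block dedup": 0.28,
    "compression": 0.45,
}

def stored_size_gb(original_gb: float, technology: str) -> float:
    """Capacity actually consumed after applying one technology."""
    return original_gb * (1.0 - SAVINGS[technology])
```

So for a 1 TB NAS data set, compression alone would store roughly 550 GB, versus about 900 GB for file-level deduplication.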
Technological Classification
The practical benefits of these technologies depend on several factors:
Point of application - source vs. target
Time of application - inline vs. post-process
Granularity - file vs. sub-file level
Algorithm - fixed-size blocks vs. variable-length data segments
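The hash-and-index mechanic described above for fixed-size blocks can be sketched in a few lines. This is a minimal illustration of the technique, not any vendor's implementation; the function name is hypothetical.

```python
import hashlib

def dedup_fixed_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep one stored copy per
    unique block hash; return (index, refs).

    The index (digest -> block) is exactly what makes fixed-block dedup
    memory-hungry: it must hold an entry for every unique block seen.
    The refs list is the per-file block map used to reconstruct data.
    """
    index = {}   # digest -> stored block
    refs = []    # ordered digests for this data stream
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        index.setdefault(digest, block)   # store only the first copy
        refs.append(digest)
    return index, refs
```

Shifting the data by even one byte misaligns every block boundary and defeats this scheme, which is why variable-length segmentation wins on misaligned data sets such as backup streams.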
Sunday, January 17, 2010
ReplV2 & NAS Shutdown
Celerra Gateway NAS 5.6 / Replication V2
(1) Do I need to stop replication for a NAS gateway planned shutdown?
No, you do not need to stop Replication V2, since it maintains common base checkpoints on both the source and the destination.
(2) How do I shut down a Celerra gateway?
If connected through serial:
- Step 1 - stop all NAS services using the respective Celerra commands, then unmount the respective partitions.
- Step 2 - shut down NAS using server_cpu (if IP replication is configured; otherwise use nas_halt). Before this, the Data Movers should be able to contact each other, and at least one should be in state "5".
**If not connected through serial, follow only step 2.**
**Use the latest procedures in Powerlink.**
Monday, December 28, 2009
Any conversion for the backend CLARiiON attached to a Celerra GW?
Note1 - Check upgrade compatibility for the FLARE code, the DART code, and the NAS GW with the target CX backend (in the EMC ESM).
Note2 - Validate FLARE / DART (in some cases a NAS code upgrade may be required).
Note3 - Check whether the required space is available on the first five drives (the vault drives) for the conversion. This is to accommodate the latest FLARE code and mandatory prerequisites (after accounting for the space used by the NAS OS LUNs).
Note4 - Shutdown procedure for the NAS GW.
Note5 - Conversion procedure for the CX backend.
Note6 - Power-on procedure for the CX and the NAS GW.
Note7 - Validate accessibility and the environment.
EMC has internal procedures that are released on a case-by-case basis. It is mandatory to follow those procedures after the change control process!