Friday, June 30, 2023

Resuming Tech Blog

Please watch this blog for upcoming posts on "Cloud Governance, Cybersecurity, and Public Cloud Data Residency".

Thursday, May 2, 2013

Storage with VPLEX & CISCO (for HA/DR)

·         High Availability Design Principles
o   Intra-site replication for server high availability and rapid disaster-recovery options.
o   Eliminate single points of failure (redundant configurations)
    Components are duplicated so that, should a primary resource fail, a secondary resource can take over its function.
o   Maximize uptime
    The implementation is intended to keep operational and business continuity as close to 100% as possible.
o   Fault tolerance
    The ability to continue operating properly when a hardware or software fault or failure occurs. Reliability is designed in by duplicating critical components such as controllers, switches, connectivity adapters, memory, and disk drives.
o   DR replication
    Provides storage-level replication and the ability to recover data within the agreed RPO/RTO.
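As a rough illustration of the RPO half of that last point, a monitoring script can compare the age of the newest replica against the agreed RPO. This is a hypothetical sketch, not VPLEX-specific code; the function name and thresholds are invented for illustration:

```python
from datetime import datetime, timedelta

def rpo_compliant(last_replica_time, rpo_minutes, now=None):
    """Return True if the newest replica is no older than the agreed RPO."""
    now = now or datetime.utcnow()
    return (now - last_replica_time) <= timedelta(minutes=rpo_minutes)

# Example: a replica taken 10 minutes ago meets a 15-minute RPO;
# one taken 30 minutes ago does not.
now = datetime.utcnow()
print(rpo_compliant(now - timedelta(minutes=10), rpo_minutes=15, now=now))  # True
print(rpo_compliant(now - timedelta(minutes=30), rpo_minutes=15, now=now))  # False
```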

Example architecture diagram (image not included)


Saturday, December 31, 2011

IBM Storage N Series - Overview

IBM Storage N series

OS:
All models run the same operating system (Data ONTAP) across the entire platform and offer a combination of advanced software features, making this one of the industry's most multifaceted storage platforms: comprehensive system management, storage management, onboard copy services, virtualization technologies, and disaster recovery and backup solutions.

Capabilities:
Network File System (NFS),
Common Internet File System (CIFS),
HTTP, FTP and iSCSI,
Fibre Channel Over Ethernet (FCoE)

Model:
N series - appliance model and gateway model
*Gateway model: the backend can be any supported storage
*Appliance model: ships with integrated IBM storage

Solutions:
(1) Heterogeneous unified storage solution - unified access for multiprotocol storage environments.
(2) Versatile - a single integrated architecture designed to support concurrent block I/O and file serving over Ethernet and Fibre Channel SAN infrastructures.
(3) Comprehensive software suite designed to provide robust system management, copy services, and virtualization technologies.
(4) Tune the storage environment to a specific application while maintaining flexibility to increase, decrease, or change access methods with minimal disruption.
(5) Adapts easily to changing storage requirements and reacts quickly. If additional storage is required, you can expand it quickly and non-disruptively. If existing storage is deployed incorrectly, you can reallocate available storage from one application to another quickly and easily.
(6) Maintain availability and productivity during upgrades. If outages are necessary, they can be kept to the shortest time possible.
(7) Easily and quickly implement the upgrade process. Non-disruptive upgrade is possible.
(8) Create effortless backup and recovery solutions that operate in a common manner across all data access methods.
(9) Tune the storage environment to a specific application while maintaining its availability and flexibility.
(10) Change the deployment of storage resources non-disruptively, easily, and quickly. Online storage resource redeployment is possible.
(11) Achieve strong data protection solutions with support for online backup and recovery.
(12) Include added value features such as deduplication to optimize space management.

Thursday, September 15, 2011

CENTERA Recall (Migration Process)

UHAC (production) - 1TB Data Celerra (NSG or Integrated or Unified) / CENTERA Gen3 or 4

Objective - migrate the data to any other target storage (non-EMC), then decommission the existing Celerra and Centera.
Checkpoint / red flag - data is archived in Centera; how do we recall that data?

Best Approach
option (A)
(1) Break the connection from the Celerra to Centera using the fs_dhsm command.
***Prerequisite - an equal amount of free space is required on the Celerra before recalling the data from Centera***
(2) Once all data is recalled, complete the migration using a host-based migration tool or a backup/restore option. **Prerequisite - a dedicated Windows host with a high-end configuration and domain admin access**

option (B)
(1) Use a host-based migration tool such as emcopy, xcopy, or robocopy. **Prerequisite - a dedicated Windows host with a high-end configuration and domain admin access**
(2) Map the source share and the target drive (or assign a LUN to the target).
(3) Migrate the data. While copying, the read operation recalls each archived file from Centera, so the entire contents of the file system are copied.
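The read-and-copy behaviour in step (3) can be sketched in Python. This only illustrates the mechanism (reading each stub forces the archive recall); a real migration would use emcopy/robocopy to preserve ACLs and attributes, and the paths here are hypothetical:

```python
import os
import shutil

def migrate_tree(src_root, dst_root):
    """Copy every file from src_root to dst_root, returning the file count.

    Reading a stub file on the source forces the archive (Centera) recall,
    so the destination receives the full file contents.
    """
    copied = 0
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            # copy2 reads the whole file (triggering recall) and keeps timestamps
            shutil.copy2(os.path.join(dirpath, name),
                         os.path.join(target_dir, name))
            copied += 1
    return copied
```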

option (C)
Work directly with the Centera SDK APIs (the SNIA XAM protocol, *used by Interlock or any other vendor taking this migration approach*).

option (D)
(1) Break the connection from the Celerra to Centera.
(2) Take a backup of the stub files / entire file system contents.
(3) Restore onto the new target storage (with the same CIFS/file-system configuration).
(4) Create a new connection, as on the Celerra, from the new storage to Centera.
(5) Recall the data from the new storage.
(6) Once all data is recalled, decommission the Centera.
Note: this depends on the compatibility of the Centera with the new target storage.

Option C is much faster than the other options.
Options A and B are the best approach if the customer has financial constraints.

Monday, May 30, 2011

Good Config - EMC

The following are good configurations:

MirrorView -> a single source CLARiiON can replicate to multiple target CLARiiON arrays.
SANCopy -> keep a 10-15% cache reserve in the RLP (reserved LUN pool).
SnapView (clone/snapshot) -> good for a backup copy on the same array.
RaidGroup -> the same RG should not be used for both SAN and NAS data, nor for both RLP and data LUNs.

Celerra fs -> the same file system should not be expanded across multiple pools.
Celerra pool -> 4+1 / 4+2 pools (on FC drives) are the best configuration for performance-sensitive applications.
Celerra iSCSI -> a separate VLAN / redundant LAN configuration is recommended for high availability, and iSCSI replication requires a separate reservation of space for the savvol.
Celerra multiprotocol FS -> a mixed-mode file system is a good choice for multiprotocol access (UNIX access uses UNIX credentials, Windows uses AD for authentication) **if no standalone CIFS is in use**.

Centera -> PEA files are recommended for archive access, rather than anonymous.
Centera -> always lock the nodes, and keep a minimum of two nodes as access nodes.

Friday, March 25, 2011

VNXe - EMC Storage

VNXe - Smart Storage
--------------------

EMC has introduced storage for small and midsized businesses (SMBs); the real gem here is the VNXe series.

*managed with a tool called Unisphere
*easy to install
*easy to manage
*easy to provision
*data protection features
*replication and DR solutions
*multiprotocol support
*best-practice wizards to configure virtual storage
*easy to deploy storage for VMware-based environments
*detailed resource reports
*detailed component views and reports
*knowledge base and community links included
*easy to load licenses and credentials from a file

VNXe Models
--------------
VNXe3100
VNXe3300
(Related VNX series: VNX5100/5300/5500/5700/7500)

http://www.emc.com/microsites/record-breaking-event/index.htm?pid=home-megalaunch-012511

Thursday, December 23, 2010

DeDup in Celerra

Consideration:

(1) Free space

a. A production or I/O-intensive file system should have at least a couple of GB of free space before DeDup is enabled. Strictly, only a few MB are required to turn DeDup on, but running with so little headroom can cause problems.

b. Why a problem? If there is a 10 GB file, DeDup decompresses it, writes the decompressed copy in the same location, and then eliminates the duplicate data, so there must be enough free space to hold that decompressed copy.

c. To keep free space available, consolidate the file system by reducing utilization, or add a couple of GB of capacity if individual file sizes are unknown.

(2) Performance

a. DeDup runs in the background and is not as performance-sensitive as replication, since it runs over a few days or longer.
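The free-space consideration above can be made concrete: before enabling DeDup, the file system should have at least as much free space as its largest file, plus some headroom. A hypothetical check (the function name and numbers are illustrative, not a Celerra formula):

```python
def safe_to_enable_dedup(free_space_gb, largest_file_gb, headroom_gb=2.0):
    """Require free space to cover the largest file plus some headroom,
    since the decompressed copy is written alongside the original before
    duplicate data is eliminated."""
    return free_space_gb >= largest_file_gb + headroom_gb

# A 10 GB largest file needs at least 12 GB free with 2 GB headroom
print(safe_to_enable_dedup(free_space_gb=15, largest_file_gb=10))  # True
print(safe_to_enable_dedup(free_space_gb=5, largest_file_gb=10))   # False
```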