IT Provisioning for Business Continuity & Disaster Recovery
Organization: Information Technology (IT) is the division of the Graduate Center responsible for voice, video and data systems and services. The mission of this unit is to promote, facilitate and support the effective use of technology in instruction and learning, in research, and in processing and accessing institutional information. Organizationally, IT comprises three divisions: Administrative Services, Client Services and Systems Services.
Facilities: The Graduate Center has two locations. One location is 365 Fifth Avenue (“GC/Fifth”) and includes data centers located on the second floor directly connected to the CUNY ring. The other, at the GC Advanced Science Research Center (“GC/ASRC”), includes a main data center at ground level as well as a small data center on the 5th floor and is directly connected to the CUNY ring using infrastructure currently overseen by CUNY central office CIS.
Scope of Provisioning: This document outlines the provisioning that Information Technology currently has in place to safeguard ongoing functionality for the select IT systems and services specifically identified herein. The scope and suitability of this provisioning are reviewed on a regular basis as systems and services are decommissioned, added and changed.
Definitions: We distinguish between the phrases “business continuity” and “disaster recovery” by viewing a business continuity plan as essentially a proactive approach to safeguarding ongoing daily operations, while a disaster recovery plan must react to the scope and nature of a calamity. That is, our business continuity provisioning ensures that systems are backed up and can fail over, so that key services stay up and running and business processes remain operational during minor disruptions that are reasonably anticipated. Disaster recovery addresses an unlikely but catastrophic incident that renders the GC/Fifth data center or the GC/ASRC main data center lost in part or in total for an extended period; it may call for wholesale reconstitution of facilities, resources and services, depending on the nature and specifics of the disaster.
Context: For the purposes of this document, IT services are considered to be centered at GC/Fifth; business continuity provisioning is intended to safeguard operations from that perspective, and disaster scenarios are envisioned as deleteriously impacting that location. IT services based at GC/ASRC are not addressed in this document.
Essential IT Services and BC/DR Provisioning
This section outlines the provisioning that Information Technology currently has in place to safeguard ongoing functionality for select essential IT systems and services.
Two types of provisioning, business continuity and disaster recovery, are identified for each of the essential IT services below.
The data center at GC/Fifth consists of one dedicated room.
Business Continuity:
- Access to the GC/Fifth data center is via a restricted card-entry system; security cameras monitor the entrances, and public safety routinely patrols the adjacent hallways.
- The data center is on the second floor, above ground level, and set apart from the general traffic patterns used by the community of individuals occupying the building on a daily basis.
- Supported by individual redundant cooling systems, with continuous monitoring and alerting in place for Facilities and Engineering.
- Supported by individual power systems provisioned with UPS backup, with continuous monitoring and alerting in place for Facilities and Engineering.
- Supported by individual sprinkler systems.
- Supported by temperature, humidity and water alerting, monitored by IT staff and Facilities and Engineering.
- Access to the main GC/ASRC data center is via a restricted card-entry system, and public safety routinely patrols the adjacent hallways.
- The data center is on the ground level and set apart from the general traffic patterns used by the community of individuals occupying the building on a daily basis.
- Supported by individual redundant cooling systems, with continuous monitoring and alerting in place for Facilities and Engineering.
- Supported by individual power systems provisioned with UPS backup, with continuous monitoring and alerting in place for Facilities and Engineering.
- Supported by individual sprinkler systems.
- Supported by temperature, humidity and water alerting, monitored by IT staff and Facilities and Engineering.
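The temperature, humidity and water alerting noted above could, in principle, be driven by a simple polling script such as the sketch below. This is illustrative only; the sensor endpoint, thresholds and notification addresses are hypothetical placeholders, not the monitoring tooling actually deployed by IT or Facilities and Engineering.

```python
"""Illustrative sketch only: polls a hypothetical environmental sensor and
emails an alert when readings leave their safe ranges. The sensor URL,
thresholds, and mail settings are placeholders, not the GC's actual setup."""
import json
import smtplib
import urllib.request
from email.message import EmailMessage

SENSOR_URL = "http://sensor.example.edu/api/readings"   # hypothetical endpoint
THRESHOLDS = {"temperature_c": (15, 27), "humidity_pct": (20, 60)}
ALERT_TO = "datacenter-alerts@example.edu"               # hypothetical address

def check_readings():
    """Fetch current readings and return a list of out-of-range conditions."""
    with urllib.request.urlopen(SENSOR_URL, timeout=10) as resp:
        readings = json.load(resp)
    problems = []
    for metric, (low, high) in THRESHOLDS.items():
        value = readings.get(metric)
        if value is None or not (low <= value <= high):
            problems.append(f"{metric}={value} outside [{low}, {high}]")
    if readings.get("water_detected"):
        problems.append("water detected on data center floor")
    return problems

def send_alert(problems):
    """Email the list of problems to the alert address via a local relay."""
    msg = EmailMessage()
    msg["Subject"] = "Data center environmental alert"
    msg["From"] = "monitor@example.edu"
    msg["To"] = ALERT_TO
    msg.set_content("\n".join(problems))
    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    issues = check_readings()
    if issues:
        send_alert(issues)
```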
Disaster Recovery
- Subject to the nature of the disaster, facilities at GC/ASRC or GC/Fifth may be used as emergency relocation centers for restoration of targeted GC IT services.
The network infrastructure at GC/Fifth and GC/ASRC consists of core switches and related componentry in the data center, connected via fiber risers to multiple IDFs on each floor housing edge switches serving end-user devices. A wireless network infrastructure rides on top of this framework.
Business Continuity:
GC Fifth
- In the data center, core switches, distribution switches, firewalls, server-region switches and switches connecting the internal network to the CUNY ring are all deployed in pairs, with failover provisioning, providing high redundancy.
- From the data center, the IDF on each floor is supported by redundant fiber connections; however, these are encased in the same pathway. The stack of edge switches in each IDF is configured for failover; however, there is a single horizontal path from the edge switch to each individual end-user wall port. There typically are multiple data ports in any given room.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- The CUNY ring provides two paths, in opposite directions, for redundancy. There is a single path from the data center to the external connection to the actual ring. There is no secondary internet connection in place at GC/Fifth.
GC/ASRC
- In the data center, core switches, distribution switches, firewalls, server-region switches and switches connecting the internal network to the CUNY ring are all deployed in pairs, with failover provisioning, providing high redundancy.
- From the data center, the IDF on each floor is supported by redundant fiber connections; however, these are encased in the same pathway. The stack of edge switches in each IDF is configured for failover; however, there is a single horizontal path from the edge switch to each individual end-user wall port. There typically are multiple data ports in any given room.
- The CUNY ring provides two paths, in opposite directions, for redundancy. There is a single path from the data center to the external connection to the actual ring. There is no secondary internet connection in place at GC/ASRC.
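As a rough illustration of the continuous health monitoring referenced above, the sketch below checks that both members of each redundant device pair remain reachable and flags a loss of redundancy. The device names are hypothetical, and the actual monitoring platform in use is not specified here.

```python
"""Illustrative sketch only: checks that both members of each redundant
switch/firewall pair respond to ping, so a silent failure of one member
does not go unnoticed. Hostnames are placeholders, not actual GC devices."""
import subprocess

# Hypothetical redundant pairs (primary, secondary)
PAIRS = {
    "core":     ("core-sw-1.example.edu", "core-sw-2.example.edu"),
    "firewall": ("fw-1.example.edu", "fw-2.example.edu"),
}

def is_up(host: str) -> bool:
    """Return True if the host answers a single ICMP echo request."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],   # Linux ping syntax
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for role, (primary, secondary) in PAIRS.items():
    up = [h for h in (primary, secondary) if is_up(h)]
    if len(up) == 2:
        print(f"{role}: both members reachable")
    elif len(up) == 1:
        print(f"{role}: WARNING, only {up[0]} reachable; redundancy lost")
    else:
        print(f"{role}: CRITICAL, neither member reachable")
```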
Disaster Recovery:
- Subject to the nature of the disaster, facilities at GC/ASRC and GC/Fifth may be used as emergency relocation centers for restoration of targeted GC IT services.
Email for GC and ASRC staff and faculty uses the gc.cuny.edu domain, and is implemented in a Microsoft Exchange environment, hosted locally at GC/Fifth.
Business Continuity
gc.cuny.edu domain (faculty & staff) email
- The Exchange platform is maintained in a dedicated cluster environment housed in the GC/Fifth data center, made up of multiple fault-tolerant servers, provisioned for fail-over redundancy.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- ProofPoint and other supplementary systems apply additional security processing to incoming/outgoing email for the purpose of safeguarding operations; these systems are likewise configured as a cluster of multiple fault-tolerant servers, provisioned for fail-over redundancy.
- Backups are executed daily and stored onsite; backups are copied weekly to GC/ASRC.
- Ongoing preparedness: Monitoring and alerting is in place to ensure that backups were successful. File restores from back-ups are executed routinely.
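The backup monitoring described above amounts to verifying that recent backup files exist both onsite and in the GC/ASRC copy. The sketch below is a minimal illustration of such a check; the directory paths and age limits are hypothetical and do not reflect the actual Exchange backup layout.

```python
"""Illustrative sketch only: verifies that the newest file in a backup
folder (and in its offsite copy) is recent enough. Paths and age limits
are placeholders; they do not reflect the actual Exchange backup layout."""
import time
from pathlib import Path

CHECKS = [
    # (description, directory, maximum allowed age in hours)
    ("onsite daily backup", Path("/backups/exchange/daily"), 26),
    ("GC/ASRC weekly copy", Path("/mnt/asrc/exchange/weekly"), 7 * 24 + 2),
]

def newest_age_hours(folder: Path) -> float:
    """Age in hours of the most recently modified file in the folder."""
    files = [p for p in folder.iterdir() if p.is_file()]
    if not files:
        return float("inf")
    newest = max(p.stat().st_mtime for p in files)
    return (time.time() - newest) / 3600

for label, folder, max_hours in CHECKS:
    age = newest_age_hours(folder)
    status = "OK" if age <= max_hours else "ALERT"
    print(f"{status}: {label} is {age:.1f} hours old (limit {max_hours})")
```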
Disaster Recovery
gc.cuny.edu domain email
- Notwithstanding weekly Exchange backup copies at GC/ASRC, it will take time to operate gc.cuny.edu email from the ASRC:
- Appropriate servers will need to be set up
- Mailboxes will need to be restored from backup
- DNS changes will need to be made
- Given the possibility of migrating email to a CUNY cloud-hosted email solution, dedicating resources to email DR does not seem prudent
Services such as email, file storage and collaboration services for GC students (using the gradcenter.cuny.edu domain), Office 365, Dropbox, CUNYfirst HR and finance services, Blackboard and NetCommunity are externally hosted.
Business Continuity
- GC student email, file storage and collaboration services (using the gradcenter.cuny.edu domain) are implemented in a Microsoft Office 365 environment, hosted externally by Microsoft and controlled by CUNY central office CIS. This is a Microsoft-hosted and supported environment, accessible to users from any internet-connected location.
- CUNYfirst and Blackboard are hosted systems overseen and managed by central office CIS.
- NetCommunity is a system contracted for by the GC Development office and hosted externally by the vendor.
- CUNY OneDrive storage is hosted externally by Microsoft and controlled by CUNY central office CIS. This is a Microsoft-hosted and supported environment, accessible to users from any internet-connected location.
- Dropbox storage is hosted externally by Dropbox and administered by CUNY central office CIS.
Disaster Recovery
- Subject to the nature of the disaster, these externally hosted and supported environments are expected to remain operational and accessible to users from any internet-connected location.
- CUNYfirst, OneDrive, Office 365, Dropbox, and Blackboard are the purview of CUNY central.
Electronic databases underpinning file systems and applications are hosted locally at the GC.
Business Continuity
GC/Fifth
- The MS SQL database environment is maintained in a dedicated cluster environment housed in the GC/Fifth data center, made up of two fault-tolerant servers, provisioned for fail-over redundancy. It is backed up daily and then copied to GC/ASRC. In addition, transaction logs are backed up every 4 hours and transaction log backups are then copied to GC/ASRC.
- There are two fault tolerant servers at GC/Fifth and one at the GC/ASRC for business continuity.
- Although the mariaDB database environment at GC/Fifth is not clustered, it is backed up daily by two independent backups and these two different backups are copied daily to GC/ASRC.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- Ongoing preparedness: Monitoring and alerting is in place to ensure that backups were successful. File restores from back-ups are executed routinely.
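As an illustration of how the stated cadence (daily full backups, transaction log backups every 4 hours) could be verified, the sketch below queries SQL Server's built-in backup history in msdb. The connection string and database names are hypothetical placeholders, and this is not the monitoring tooling actually in use.

```python
"""Illustrative sketch only: queries SQL Server's msdb backup history to
confirm the stated cadence (daily full backups, transaction log backups
every 4 hours). The connection string and database list are placeholders."""
import datetime
import pyodbc  # third-party ODBC driver module; assumed available

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sql.example.edu;Trusted_Connection=yes;")  # hypothetical
DATABASES = ["AppDB1", "AppDB2"]                                # hypothetical

QUERY = """
SELECT MAX(backup_finish_date)
FROM msdb.dbo.backupset
WHERE database_name = ? AND type = ?
"""

LIMITS = {"D": datetime.timedelta(hours=26),  # 'D' = full backup, daily
          "L": datetime.timedelta(hours=5)}   # 'L' = log backup, every 4 hours

with pyodbc.connect(CONN_STR) as conn:
    cursor = conn.cursor()
    now = datetime.datetime.now()
    for db in DATABASES:
        for backup_type, limit in LIMITS.items():
            cursor.execute(QUERY, db, backup_type)
            last = cursor.fetchone()[0]
            if last is None or now - last > limit:
                print(f"ALERT: {db} {backup_type} backup stale (last: {last})")
            else:
                print(f"OK: {db} {backup_type} backup at {last}")
```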
Disaster Recovery
- In a disaster scenario, GC/Fifth SQL servers will fail over to the GC/ASRC node.
- In a disaster scenario, until a standby/passive VM with live updates from GC/Fifth is created at the GC/ASRC, the mariaDB VM will be restored from the latest Veeam backup.
- Future planning: mariaDB cluster nodes will be created at the GC/ASRC with real-time updates. If management makes it a priority, failover/failback of the mariaDB database server to and from the GC/ASRC can be tested on a future GC/Fifth shutdown day.
Electronic file services for GC faculty and staff, such as the S: and R: drives, and SharePoint, are hosted locally at GC/Fifth. (File services for GC students are discussed above.)
Business Continuity
GC/Fifth
- The system for file access is maintained in a dedicated cluster environment housed in the GC/Fifth data center, made up of multiple fault-tolerant virtual servers, provisioned for fail-over redundancy.
- The back-end system for file storage is a SAN system housed in the GC/Fifth data center.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- The file server cluster is backed up daily and copied weekly to GC/ASRC.
- In addition, the S: drive and R: drive (excluding four large “archive” folders) are synchronized daily to a standby file server at GC/ASRC; a sketch of such a sync job follows this list.
- Ongoing preparedness: Monitoring and alerting is in place to ensure that backups were successful. File restores from back-ups are executed routinely.
- SharePoint On-Premise (not part of the O365 offering) is utilized as a file storage option for those wanting to move away from traditional network drive solutions. The SharePoint On-Premise servers are housed in the GC/Fifth data center and are made up of multiple fault-tolerant virtual servers, provisioned for fail-over redundancy. The back-end system utilizes our MS SQL cluster with data-at-rest encryption in place.
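The following is a minimal sketch of the kind of daily synchronization job described above, mirroring the S: and R: shares to a standby server while excluding the large archive folders. The server names, share paths and excluded folder names are hypothetical placeholders.

```python
"""Illustrative sketch only: mirrors the S: and R: shares to a standby file
server while excluding the large archive folders, roughly as described
above. Server names, share paths, and excluded folder names are placeholders."""
import subprocess

SYNC_JOBS = [
    # (source share, destination on the standby server at GC/ASRC)
    (r"\\fileserver-gc\S$", r"\\standby-asrc\S$"),
    (r"\\fileserver-gc\R$", r"\\standby-asrc\R$"),
]

# Hypothetical names for the four large archive folders excluded from the sync
EXCLUDED = ["Archive2015", "Archive2016", "Archive2017", "Archive2018"]

for source, destination in SYNC_JOBS:
    # /MIR mirrors the tree (including deletions); /XD excludes directories;
    # /R and /W limit retries so a locked file does not stall the job.
    cmd = ["robocopy", source, destination, "/MIR", "/R:2", "/W:5", "/XD"] + EXCLUDED
    result = subprocess.run(cmd)
    # Robocopy exit codes of 8 or higher indicate a failure condition
    if result.returncode >= 8:
        print(f"ALERT: sync from {source} failed with code {result.returncode}")
    else:
        print(f"OK: {source} synchronized to {destination}")
```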
GC/ASRC
- The back-end system for file storage is a SAN system housed in the GC/ASRC data center.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- The file server cluster is backed up daily and copied weekly to GC/ASRC.
Disaster Recovery
- Ongoing preparedness: Servers at GC/Fifth are patched and maintained on the same schedule as those at GC/ASRC.
- In a disaster scenario, GPOs that map S: and R: will be changed to use the replica file server at GC/ASRC, as sketched below. If management makes it a priority (and if GC workstations are granted access to the standby server at the ASRC), failover to the ASRC can be tested on a future Shutdown Day.
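For illustration, the sketch below shows the per-workstation equivalent of that GPO change: remapping the S: and R: drive letters to shares on a replica file server. The server and share names are hypothetical.

```python
"""Illustrative sketch only: the per-workstation equivalent of the GPO
change described above, remapping S: and R: to the replica file server at
GC/ASRC. The server and share names are placeholders."""
import subprocess

# Hypothetical replica server shares at GC/ASRC
MAPPINGS = {
    "S:": r"\\replica-asrc\shared",
    "R:": r"\\replica-asrc\research",
}

for drive, share in MAPPINGS.items():
    # Remove any existing mapping for the drive letter, then map the replica.
    subprocess.run(["net", "use", drive, "/delete", "/y"],
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    result = subprocess.run(["net", "use", drive, share, "/persistent:yes"])
    if result.returncode != 0:
        print(f"ALERT: could not map {drive} to {share}")
    else:
        print(f"OK: {drive} now mapped to {share}")
```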
The IT infrastructure (“CUNY ring”) traversed by incoming and outgoing traffic between the internet and the IT infrastructure internal to GC/Fifth, GC/Apt and GC/ASRC is controlled by CUNY central office CIS.
Business Continuity
- GC/Fifth is connected to the CUNY ring.
- GC/ASRC is connected to the CUNY ring.
- The CUNY ring is the purview of CUNY central office CIS.
Disaster Recovery
- The CUNY ring is the purview of CUNY central office CIS.
Resources such as the primary GC website (gc.cuny.edu), Password Reset, Track-IT, SSRS, SharePoint, and OOS server are hosted locally at GC/Fifth.
Business Continuity
- The front-end systems are maintained in a dedicated cluster environment housed in the GC/Fifth data center, made up of multiple fault-tolerant virtual servers, provisioned for fail-over redundancy.
- The back-end databases are MS SQL and mariaDB, discussed elsewhere in this document.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- If content of the front-end web server changes frequently, the server is backed up daily onsite and there is a daily offsite backup copy to GC/ASRC. If content is “static,” the server is backed up weekly and there is a weekly offsite backup copy to GC/ASRC.
- Ongoing preparedness: Monitoring and alerting is in place to ensure that backups were successful. File restores from back-ups are executed routinely.
Disaster Recovery
gc.cuny.edu website
- Ongoing preparedness: Servers at GC/Fifth are patched and maintained monthly.
- In a disaster scenario, if mariaDB servers do not yet exist at GC/ASRC, mariaDB database servers will be restored from Veeam backup, followed by critical web servers, also restored from Veeam backup. Next, new IPs will be assigned and DNS entries updated.
- MS SQL servers are automatically set up to fail over to the second pair in the cluster; in cases where the cluster fails, they will fail over to the third node located at GC/ASRC.
- In the future, we will test Global Load Balancing to reduce the time needed to fail over critical web servers to GC/ASRC.
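As a rough illustration of the DNS step in the recovery sequence above, the sketch below performs a dynamic DNS update pointing restored web servers at new addresses. The zone, record names, addresses and DNS server are hypothetical; in practice the change might instead be made through the DNS management console.

```python
"""Illustrative sketch only: the kind of DNS record update that the DR
steps above would entail once web servers are restored at GC/ASRC with new
IPs. Zone, record names, addresses, and the DNS server are placeholders."""
import dns.query   # dnspython (third-party); assumed available
import dns.update

ZONE = "gc.cuny.edu."
DNS_SERVER = "192.0.2.53"          # hypothetical authoritative server
NEW_RECORDS = {
    "www": "192.0.2.10",           # hypothetical restored web front end
    "apps": "192.0.2.11",          # hypothetical restored application server
}

update = dns.update.Update(ZONE)
for name, address in NEW_RECORDS.items():
    # Replace any existing A record with the new address (TTL 300 seconds)
    update.replace(name, 300, "A", address)

response = dns.query.tcp(update, DNS_SERVER, timeout=10)
print("Update rcode:", response.rcode())   # 0 (NOERROR) indicates success
```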
Resources such as WSUS, SCCM, EPO, application server, software license server, print server, and Active Directory/DNS are hosted locally at GC/Fifth.
Business Continuity
- These systems are housed in the GC/Fifth data center. There are multiple Domain Controllers and DNS servers for redundancy/high-availability. The other servers in this category do not have redundancy/high-availability.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- If content of the server changes frequently, the server is backed up daily onsite and there is a daily offsite backup copy to GC/ASRC. If content is “static,” the server is backed up weekly and there is a weekly offsite backup copy to GC/ASRC.
- Ongoing preparedness: Monitoring and alerting is in place to ensure that backups were successful. File restores from back-ups are executed routinely.
Disaster Recovery
- GC Domain Controller/DNS servers replicate to Domain Controller/DNS servers located at the ASRC.
- Except for the SCCM image server, the WSUS, SCCM (front-end), antivirus EPO and software license servers are backed up daily with an offsite backup copy to GC/ASRC.
- Although three GC/Fifth print servers are backed up, the backups are not copied offsite to GC/ASRC because GC/ASRC has its own printers and print server.
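As an illustration of verifying the domain controller/DNS replication described above, the sketch below simply wraps Microsoft's repadmin tool and surfaces its summary for review. This is a spot-check sketch, not the monitoring actually in place.

```python
"""Illustrative sketch only: a thin wrapper around Microsoft's repadmin tool
to spot-check that the domain controller/DNS replication described above is
healthy. Output parsing is left minimal; operators would review the summary
or feed it into the existing monitoring system."""
import subprocess

# repadmin ships with the AD DS management tools on Windows
result = subprocess.run(["repadmin", "/replsummary"], capture_output=True, text=True)

print(result.stdout)
if result.returncode != 0:
    print("ALERT: repadmin reported a problem; review the summary above")
```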
Resources such as GC Web Services, ASRC Web Service, the CUNY Academic Commons, and the GC Library website, as well as the NML, MLD, and RedMine websites are hosted locally at GC/Fifth.
Business Continuity
- The GC Linux environment is purely virtual and consists of three layers: the mariaDB database back-end, the file system, and the front-end web services layer.
- The mariaDB database back-end is discussed elsewhere in this document.
- The environment is maintained in a dedicated cluster environment housed in the GC/Fifth data center, made up of multiple fault-tolerant virtual servers, provisioned for fail-over redundancy.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- For critical, production Linux-based servers, there are two independent daily backups, and the backups are copied daily to GC/ASRC.
- Ongoing preparedness: Monitoring and alerting is in place to ensure that backups were successful. File restores from back-ups are executed routinely.
Disaster Recovery
- In a disaster scenario, critical VMs can be restored from Veeam backup at GC/ASRC, IPs can be changed and DNS modified. Certain existing supplementary systems may be considered non-essential and will not be restored. See above discussion of mariaDB.
Resources such as AD & GC Guest Account Assignment Systems, Automated GC Student Account Creation System, Apply Yourself/I856/I805/Research Foundation/Ingestion services, Graduate Assistant Tracking System (GATS), AD Account Expiration Notification System and Registrar ASTA ID Generator are hosted locally at GC/Fifth and GC/ASRC.
Business Continuity
GC/Fifth
- Web and Desktop applications are source version controlled and can be recreated from any local or server instance of the code.
- The back-end databases are MS SQL, discussed elsewhere in this document.
- Patches and system updates are applied in a timely manner.
A collection of Apple Mac and Windows PC desktop computers is available at GC/Fifth and GC/ASRC to support faculty, staff and students. These resources are maintained and supported on an ongoing basis.
Business Continuity
GC/Fifth and GC/ASRC
- For desktop computers currently deployed, systems are kept current and protected by way of central management, via SCCM for Windows PCs and Jamf for Apple Macs. SCCM is noted elsewhere in this document. Jamf resides in the cloud.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- WSUS, noted elsewhere in this document, is used to keep PC computers current with Windows updates and patches as well as Microsoft application software updates. Jamf is used to keep Macs current with Apple macOS updates and patches.
- Ongoing preparedness: Critical updates issued by Microsoft are automatically pushed to all Windows desktop computers via WSUS. Critical updates issued by Apple are automatically pushed to all Mac desktop computers via Jamf.
- McAfee EPO, noted elsewhere in this document, is used to keep the end-point security updated on both Windows PCs and Apple Macs. This includes antivirus, full-drive encryption for portable storage and data loss prevention for designated sensitive data (for the latter, Windows only). The same utility is used to provision full-drive encryption on GC laptops.
- Ongoing preparedness: McAfee updates are automatically pushed to desktop computers on a routine basis.
- Ongoing preparedness: A small collection of PCs and Macs is retained in stock (“spares”), as a redundant fail-safe precaution should a currently deployed desktop computer fail beyond immediate repair.
- Standard images for PCs and Macs are maintained in SCCM for Windows and in Jamf for Macs.
- Ongoing preparedness: These images are backed up routinely.
Disaster Recovery
- In a disaster scenario, certain existing supplementary systems may be considered non-essential.
- Subject to the nature of the disaster, facilities at GC/ASRC will be used as emergency relocation centers for staff workstations. Existing desktop computers at GC/ASRC are suitable for such work.
- In progress: DR capability to serve GC members operating at GC/ASRC will be enhanced by migrating ASRC Macs and PCs (from stand-alone status and the ASRC.cuny.adlan domain) to the GC domain.
Telephony service for GC/Fifth is provided via a voice-over-IP system hosted locally at GC/Fifth, using circuits to Verizon and AT&T. Telephony service for GC/Apt is provided via Verizon. Telephony service for GC/ASRC uses infrastructure currently controlled by CUNY central office CIS.
Business Continuity
GC/Fifth
- The telephony system comprises a collection of components; all components are provisioned for redundancy. Additional end-user handsets are also stored on-site.
- Voicemail services enabling a caller to leave a message are maintained in a dedicated cluster environment housed in the GC/Fifth data center, made up of multiple fault-tolerant servers, provisioned for fail-over redundancy. Voicemail services enabling the recipient to retrieve stored messages are not similarly provisioned.
- Ongoing preparedness: System health is monitored continuously and alerts are in place to ensure that components are operational and functioning properly. Patches and system updates are applied in a timely manner.
- Backups are executed daily, stored onsite.
- Ongoing preparedness: Monitoring and alerting is in place to ensure that backups were successful.
- Redundant routes exist to the PSTN; however, these traverse the same pathway between GC/Fifth and the street connection.
GC/ASRC
- This is currently the purview of CUNY central office CIS.
Disaster Recovery
GC/Fifth
- A small number of telephones at GC/Fifth are provisioned independently of the IP network using traditional analog service.
- Telephone services at GC/Apt and GC/ASRC are independent of GC/Fifth and therefore, subject to the nature of the disaster, may remain operational.
- In a disaster scenario, certain existing supplementary systems such as voicemail may be considered non-essential.
- Subject to the nature of the disaster, facilities at GC/ASRC and GC/Apt may be used as emergency relocation centers for staff workstations.
GC/ASRC
- This is currently the purview of CUNY central office CIS.