Changing CUCM hostnames to FQDN hostnames

I recently ran into a scenario that wasn’t really covered in Cisco’s Changing IP Address and Hostname for Cisco Unified Communications Manager documentation. Our CUCM node names were already defined by hostname, but they were not Fully Qualified Domain Names (FQDNs). In this post I will cover the steps I used to add just the domain name to my node definitions, since that process is much less complicated than the full rename procedure.

The process for renaming and/or changing the IP address on your CUCM nodes is well documented, but the guide did not cover my exact situation: node names already defined by hostname, just not fully qualified.

Here were my node definitions in the CUCM Admin GUI before my change. I omitted my IM&P nodes from this screenshot because they were already defined with FQDNs:

[Screenshot: CUCM node definitions before the change (CUCM-NonFQDN)]

The screenshot below, from the Show –> Network section of the OS Administration GUI, shows that the DNS servers and domain name are already configured:

[Screenshot: DNS server and domain configuration (CUCM-DNSDomainInfo-Sanitized)]
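
If you prefer the CLI, you can confirm the same settings and test name resolution from there. Both commands below are standard CUCM OS CLI commands; the output shown is an abridged approximation, and the IP addresses are placeholders:

    admin:show network eth0
    ...
    DNS
    Primary   : 10.0.0.53    Secondary  : 10.0.0.54
    Domain    : example.com
    ...

    admin:utils network host CMFN-01.example.com
    Hostname CMFN-01.example.com resolves to 10.0.0.1

Checking both forward and reverse resolution for every node before you start can save you a lot of grief later.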

If you still have doubts about whether this procedure applies to you, I recommend you engage your VAR or Cisco TAC. Overall, the change is pretty simple, but there are a few things I want to call out:

  1. Even if you suspend your RTMT alerts before the change, you will still receive RTMT alerts after the name updates are completed. This is likely because the suppression is keyed to the old CUCM node names. Just be aware that if you have alerting configured, alerts will fire despite the suppression.
  2. You should plan to recreate any custom RTMT profiles. If you want to venture into unsupported territory, here is a procedure I wrote up that you can try (you are on your own for support): Fix existing RTMT profile after CUCM node rename
  3. There are certificate implications if you are running a secure cluster. Make sure you understand how this change will affect your certificates, and seek additional assistance if you are unsure; a quick way to inventory the certificates in play is shown just after this list. I am not running a secured cluster, which simplified things significantly.
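
To see which certificates you would need to account for, the OS CLI can list everything the node presents. This is a standard CUCM CLI command; the sample output is abridged and will differ on your cluster:

    admin:show cert list own
    tomcat/tomcat.pem: Self-signed certificate generated by system
    CallManager/CallManager.pem: Self-signed certificate generated by system
    CAPF/CAPF.pem: Self-signed certificate generated by system
    TVS/TVS.pem: Self-signed certificate generated by system

If any of these are CA-signed, review Cisco’s documentation on how node name changes affect them before proceeding.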

If you are still with me at this point, what follows is the procedure I used. I am a pretty nice person, but don’t call me if this doesn’t work for you. Test this in a lab before you go changing anything in your production cluster. Be smart; if you are uncomfortable about any of this, I recommend you seek the assistance of an experienced Cisco UC Engineer. Don’t call/email/DM me!

On to the good stuff

  1. Make sure your Informix replication is already normal on all of your CUCM nodes. You can check this via the CLI on your First Node (the publisher) using this command:
    utils dbreplication runtimestate
    If your Informix replication is NOT normal, do NOT proceed! Get this fixed first.
  2. From the CUCM Admin GUI, change the server name definitions one node at a time: System –> Server.
    Add your domain name to the end of each CUCM node entry.
  3. After each node is changed, verify that the ProcessNode table is updating with the new names on all nodes in your cluster using the following SQL command, shown here with sample output from my cluster:

    admin:run sql select name,nodeid from ProcessNode
    name                        nodeid 
    =========================== ====== 
    CMSN-02.example.com         4
    CMSN-01.example.com         3
    CMSN-05.example.com         7
    CMSN-06.example.com         8
    CMFN-01.example.com         2
    CMSN-04.example.com         6
    CMSN-03.example.com         5
    CUP-01.example.com          10     
    CUP-02.example.com          11
    EnterpriseWideData          1
    
  4. After all the node names have been updated, and the ProcessNode table has been verified on each CUCM node in your cluster, you must reset cluster replication using this CLI command. Issue this command on the First Node only!
    utils dbreplication reset all
  5. What follows is a waiting game. My seven-node cluster took about 20 minutes to normalize Informix replication after the replication reset was initiated. Monitor the replication status using the ‘utils dbreplication runtimestate’ CLI command on the First Node (a sample of healthy output appears just after this list). Do not proceed until replication is normal on all of your nodes!
  6. Once the replication status is 2 (Normal) on all nodes, reboot each node in the cluster, one data center at a time. Make sure you understand the impact of rebooting nodes and how your endpoints fail over.
  7. After all nodes have been rebooted, verify that replication status is good on all nodes (the extra check shown after this list is also handy here):
    utils dbreplication runtimestate
  8. Execute your normal CUCM test plan.
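
For reference, here is roughly what healthy ‘utils dbreplication runtimestate’ output looks like once replication has settled. This is an abridged, sanitized approximation; the exact columns vary by CUCM version, and elided fields are marked with ‘...’. The value you care about is (2) Setup Completed on every node:

    admin:utils dbreplication runtimestate
    ...
    Cluster Detailed View from CMFN-01 (7 Servers):

    SERVER-NAME            ...   REPLICATION SETUP (RTMT) & Details
    -----------            ---   ----------------------------------
    CMFN-01.example.com    ...   (2) Setup Completed
    CMSN-01.example.com    ...   (2) Setup Completed
    CMSN-02.example.com    ...   (2) Setup Completed
    ...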
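
As one last sanity check, ‘show network cluster’ (run from any node’s CLI) prints the cluster node list as that node sees it, which is a quick way to confirm the FQDNs took effect everywhere. Again, the output format below is approximate and the addresses are placeholders:

    admin:show network cluster
    10.0.0.1 CMFN-01.example.com CMFN-01 Publisher callmanager DBPub authenticated
    10.0.0.2 CMSN-01.example.com CMSN-01 Subscriber callmanager DBSub authenticated
    ...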

[Screenshot: CUCM node definitions after the change (CUCM-FQDN-Sanitized)]

Voila! That is all there is to it! Good luck and I hope this helps!
