Planet Collab

Unified Communications Guerrilla

Cisco Mobile and Remote Access Troubleshooting Basic Connectivity

The Cisco Mobile and Remote Access (MRA) feature is a "client edge" solution that allows external software and hardware clients to register to enterprise Cisco Unified Communication (UC) solutions without requiring a VPN. Like most things, there are a lot of moving parts working together to create a relatively seamless user experience. And, like most things, the first time you deploy MRA there are a few "gotchas" that can eat up a significant amount of troubleshooting time. 

This blog entry captures procedures I use when troubleshooting or validating a MRA deployment. These procedures can be used to validate the initial deployment or they can be used to troubleshoot connectivity problems for an individual user.


Proper troubleshooting technique requires that you have a thorough understanding of how things should work during normal operations. I presented on the MRA registration process during a NetCraftsmen Cisco Mid-Atlantic User Group (CMUG) meeting last year. If the reader needs a review of the architecture with a walk through of the Jabber client discovery and registration process then a PDF of that presentation is available here:

At a high-level, the MRA registration process follows this flow:
  1. Service Discovery
  2. Service Provisioning
  3. XMPP Registration
  4. SIP Registration
  5. Establish Visual Voicemail connectivity
This blog entry is focused on a scenario where we are using corporate presence services and UCM for call control. We are also roughly following the sequence of transactions that are actually used by a Jabber client. Procedures were originally developed with the 10.x version of Jabber running on Mac OS X and Windows. 

Overview of Process
    Service Discovery

    Upon initialization, the Jabber client enters a "Service Discovery" mode. At this stage, the client is trying to determine whether it is inside or outside of the corporate network. The mechanism used is DNS. Specifically, the Jabber client will query for specific DNS service records (SRV records) based on the assigned service domain.

    The service domain is derived from the Jabber ID (JID) assigned to the end user. For example, if my JID is user@company.com then the service domain is company.com. The service domain is usually specified by the user the first time they attempt to log into Jabber, though it can also be administratively assigned in the jabber-config.xml file. 

    Once the service domain is known, the Jabber client will go through the sub-process of Service Discovery. This starts with DNS SRV queries for:

    • _cisco-uds._tcp.<servicedomain>: Points to the UDS service on a UCM cluster
    • _cuplogin._tcp.<servicedomain>: Legacy record that points to the XMPP service on an IM&P node

    In a MRA scenario, where a client is outside of the corporate network, the above SRV records should not be resolvable. If the client fails to receive a positive response to the UDS/cuplogin queries, it will then send a DNS SRV query for _collab-edge._tls.<servicedomain>. In a properly implemented solution, this query should return one or more records that point to your Expressway Edge (or VCS-E) cluster.

    Assuming everything is configured correctly and fully operational, the Jabber client will attempt to establish a TLS connection to one of your Edge appliances.
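    The discovery sequence above can be sketched from any client machine. In this sketch, company.com stands in for the service domain; the record names are the standard Jabber SRV lookups:

```shell
# Walk the Jabber SRV lookup order (company.com is a stand-in service domain).
# Inside the network the first two records should resolve; outside,
# only the _collab-edge record should.
domain="company.com"
for rec in _cisco-uds._tcp _cuplogin._tcp _collab-edge._tls; do
    name="${rec}.${domain}"
    echo "querying ${name}"
    dig +short srv "${name}"   # empty output means the record did not resolve
done
```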

    Service Provisioning

    Once the client establishes a TLS connection to port 8443 on the Edge appliance, the user credentials are authenticated. At this point, the proxy connection is established and the client will start downloading configuration information from the UCM cluster. This configuration information is used to complete the service registration phases.

    XMPP Registration

    If the Jabber client is provisioned for IM&P presence services, the client will attempt to establish a connection on TCP port 5222. Registration requests are sent to the Edge appliance, which then proxies the transaction through the Core appliance to the IM&P cluster node(s).

    SIP Registration

    If the Jabber client is provisioned as a voice/video soft phone, the client will attempt to establish a connection on TCP port 5061. Registration requests are sent to the Edge appliance, which then proxies them through the Core appliance to the UCM cluster node(s). Successful registration is required for voice/video call functionality.

    Visual Voicemail

    If the Jabber client is provisioned with visual voicemail, the Jabber client will submit registration requests to the Edge appliance using the already established TLS connection on port 8443. The Edge appliance proxies the request through the Core to the REST API on Unity Connection.

    Troubleshooting MRA Initialization Process

    All of these procedures are performed from the client perspective.  

    Service Discovery

    This step is fairly straightforward. We need to determine if the client can resolve the proper DNS SRV records. Using dig or nslookup, verify that the client can resolve the collaboration edge SRV records. For example:


    dig srv _collab-edge._tls.company.com


    nslookup -type=srv _collab-edge._tls.company.com

    It is also a good idea to verify that the client is unable to resolve the UDS and cuplogin SRV records. 

    If the client can resolve the UDS records then the Jabber client will never attempt to connect to the Edge. If the client receives a positive response to the UDS query and/or the client fails to receive a positive response to the Edge discovery then review your external DNS configuration. 

    Service Provisioning

    This troubleshooting step is a little more involved. The web-based API on the Edge appliance uses calls that embed Base64-encoded values, so you need a way to generate Base64 strings (such as openssl). The API calls are also built using specific application hostnames in your environment, so you will need to have that information handy.

    Let's start with the base64 conversion process. On a Mac OSX system, you can use openssl to generate the base64 string. For example:

    echo -n 'human readable string' | openssl base64

    For Windows users, there are several tools that you can download and install. That is a pain, so you may be better off using an online Base64 encoder.

    Now that we have a method to create the base64 values for the API calls, we'll need to compile a set of values that we can use for our testing. Specifically, we'll need to create base64 values for the following strings:

    • String 1: the service domain
    • String 2: the UCM publisher node (for UDS calls)
    • String 3: one of your TFTP service nodes (for service provisioning)
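    As a sketch, with company.com standing in for the service domain and hypothetical node hostnames (cucm-pub.company.com, cucm-tftp.company.com), the three values can be generated in one pass:

```shell
# Encode the three example strings (substitute your own domain and hostnames);
# printf avoids the trailing newline that a bare echo would fold into the value
for s in 'company.com' 'cucm-pub.company.com' 'cucm-tftp.company.com'; do
    printf '%s' "$s" | openssl base64
done
```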

    Basic TLS Connection

    OK, we are now armed with almost all of the tools we need. The last tool is a web browser; I use Google Chrome (tested with version 45.0.2454.101). The first step is to confirm that we can establish the basic TLS connection and can authenticate the user through the Edge appliance. 

    Sticking with our example, the base64 value for String 1 (above) is PW4gY29tcGFueS5jb20K. Now, assume that the Edge appliance hostname is edge.company.com (a placeholder). Armed with this information, we can use our web browser to go to the following URL:

    If everything is provisioned correctly, your browser should render a login window where you will enter the Jabber user ID and assigned password. Similar to the following.

    After a successful login, the browser window should render the XML content that is returned from the Edge appliance. For example:

    At this point, we are testing a few things:

    1. If you are not prompted for a login, receive a connection error, or the connection times out then you most likely have a firewall configuration issue (blocking port 8443).
    2. You should check to see if your browser prompts you with certificate errors or warnings. If there are cert errors then your Jabber client may not be able to complete the service provisioning phase. You should check the certificate on the Edge appliance and verify it is signed by a CA that is in the local client trust store.
    3. Your browser should render the complete XML response that identifies your _cuplogin, _cisco-uds, tftpserver, SIP Edge, and XMPP Edge services. If you don't get a response then you have an issue between the Core and Edge OR your Core is unable to resolve the proper DNS records.
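    The same browser test can be scripted with curl. The URL shape below (a base64-encoded domain as the first path segment, followed by get_edge_config) reflects my understanding of the Expressway provisioning API; edge.company.com and the jabberuser credentials are placeholders:

```shell
# Hedged curl version of the browser test; -k skips certificate validation
# (fine for testing, but remember that cert problems also break Jabber itself)
B64_DOMAIN=$(printf '%s' 'company.com' | openssl base64)
curl -k -u jabberuser \
    "https://edge.company.com:8443/${B64_DOMAIN}/get_edge_config/"
```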

    A clue is revealed in item 3. The internal DNS SRV records we identified during the Service Discovery phase are used by the Core appliance to do its job. So, if you have misconfigurations on your internal DNS, errors will be seen in the XML response above.

    Verify UDS Discovery

    The next step in troubleshooting is to verify that your client can communicate with the UDS service on your UCM cluster. To do this we use the base64 value of String 2 (above). Using our example: 


    The URL we are going to test is:

    As with the previous test, a successful transaction will render an XML response. If you are running this after you completed the basic TLS connection then you won't be challenged for authentication credentials. 

    If you are receiving a response then UDS is operational. If not then the UDS service on the UCM may be experiencing a problem.

    You can also test querying a list of UCM UDS servers:

    Verify TFTP Configurations

    If the previous validation procedures are successful then you have determined that the Jabber client can communicate to the Edge appliance for the purposes of Service Provisioning. Certificates are validated, credentials are validated, and basic UDS functionality is confirmed. 

    The next step that a Jabber client would take is to identify device configurations. As with standard telephony devices, the Cisco TFTP service hosts configuration files that Jabber can download to retrieve device specifications.

    To do this test we use the base64 value of String 2 (above). The URL we will put in our browser to get a list of devices for the Jabber user is:

    You may or may not be prompted to authenticate; if you are, enter the same Jabber user credentials as before. If all goes well, you will receive a list of devices that are associated to the user in UCM (Edit User pages). For example:

    Once you have a list of the devices associated with the user, you can then pull the detailed configuration for a specific device. The Jabber client (or DX80, or whatever you are using for the Edge registration) will identify which device configuration to retrieve by device type. To test this yourself, look at the "name" child node of the device that carries the "Cisco Unified Client Services Framework" model identifier. 

    To test retrieval of the configuration file via the UCM TFTP service we use the base64 value of String 3 (above). Using our example:


    The URL we can test with is:

    Where "devicename" is the name as provided in the UDS device list query in the previous step. A successful response returns XML content with the complete device configuration file. 
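    Both UDS tests can also be scripted. The cucm-uds paths below are a best-effort reconstruction of the UDS REST calls (verify them against your own working browser URLs), and every hostname, username, and the devicename are placeholders:

```shell
# Hedged sketch: list the user's devices through the Edge proxy, then pull a
# device config file. Hostnames, user, and the devicename are placeholders.
B64_UCM=$(printf '%s' 'cucm-pub.company.com' | openssl base64)
B64_TFTP=$(printf '%s' 'cucm-tftp.company.com' | openssl base64)
curl -k -u jabberuser \
    "https://edge.company.com:8443/${B64_UCM}/cucm-uds/user/jabberuser/devices"
curl -k -u jabberuser \
    "https://edge.company.com:8443/${B64_TFTP}/devicename.cnf.xml"
```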

    If you get to this point then the Core/Edge proxy function is fully tested and functional. Next, we need to verify service registration.

    Service Registration (XMPP and SIP)

    We are now done with the funky base64 strings (yay!). To test basic XMPP and SIP connectivity we are going to dumb things down a bit. We can use telnet from a command prompt to verify connectivity to the appropriate ports. 

    For example:

    galactus-2:utils wjb$ telnet edge.company.com 5222
    Trying a.b.c.d...
    Connected to edge.company.com.
    Escape character is '^]'.
    Connection closed by foreign host.
    galactus-2:utils wjb$ telnet edge.company.com 5061
    Trying a.b.c.d...
    Connected to edge.company.com.

    The fact that we received a "Connected" response means that we were able to connect to the Edge device using port 5222 (XMPP) and port 5061 (SIP/TLS). If you receive a Connection Refused response then you may be running into a firewall issue.
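    If telnet isn't available on your machine, bash's built-in /dev/tcp redirection gives an equivalent check; edge.company.com is again a placeholder for your Edge hostname:

```shell
# Hedged alternative to telnet using bash's /dev/tcp (bash-only, not POSIX sh)
host="edge.company.com"
for port in 5222 5061; do
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "${host}:${port} open"
    else
        echo "${host}:${port} closed or filtered"
    fi
done
```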

    This covers all of the basic connectivity tests that you can use to verify or troubleshoot your MRA implementation. Used in conjunction with event logs and validation tools on the Expressway appliances you should be well on your way to buttoning this up and calling it a day.

    Thanks for reading. If you have time, post a comment!

    Dealing with Apple Gatekeeper Unidentified Developer Error when Launching WebEx from Jabber

    A couple of weeks ago I noticed that I was receiving the Apple untrusted developer error message (left) whenever I attempted to use the Meet Now functionality directly from Jabber. 

    Clicking OK brings another error message and WebEx never loads. I finally got around to trying to fix the issue. This article provides the procedures I used to resolve this issue on my Mac.


    I am not sure "when" the problem cropped up nor do I know if it is related to a Jabber upgrade or an upgrade to WebEx Meeting Center. I reload Jabber versions constantly (for testing purposes) and I don't pay close enough attention to MC updates. I know, that isn't helpful but, hey, at least I am honest.

    The version info in my scenario:
    • Meeting Center MC30
    • Jabber Version 11.1.0 (221135)
    • OS X Mavericks 10.9.5

    The Issue

    The issue I experienced occurred one day when I was attempting to initiate my personal meeting room using the "Meet Now" option in my Jabber client. I also ran into the issue when attempting to initiate a conference from a group chat or when attempting to join another user's WebEx conference when prompted that a meeting I was invited to has started.

    So, in other words, WebEx functionality from Jabber was completely busted. Which was annoying because I use it regularly. Of course, it wasn't annoying enough to investigate until earlier this week. What can I say, you get busy. 

    My Fix

    As usual, I should say: YMMV. This is the fix that worked for me. 

    The Gatekeeper control mechanism that is blocking this action is documented by Apple in this knowledge base article. This security mechanism is a good thing and while you can turn it off via System Preferences, I wouldn't do that. Instead, I prefer to selectively enable specific applications to bypass this security control. 

    The trick here is that you have to find the application that Jabber is trying to load. I checked the Applications folder in Finder and I didn't see any WebEx application packages. I also wasn't sure what filename to search for in spotlight, so I didn't bother looking there. 

    I am fond of using the Console app whenever I have application errors like this. I found it to be the easiest way (for me) to find root cause. The procedure I used:

    1. Make sure you have Jabber loaded and are ready to click the Meet Now button under Meetings
    2. Go to Applications > Utilities > Console
    3. Under System Log Queries select All Messages
    4. Click on the Clear Display toolbar function
    5. In the Jabber application, click on Meet Now
    6. Go back to the Console app and you should see a message from CoreServicesUIAgent that corresponds to the error
    7. This message will point you in the right direction
    On my system, the Console message was:

    10/27/15 8:23:44.249 PM CoreServicesUIAgent[85730]: File /Users/myuserid/Library/Application Support/WebEx Folder/T30_MC/Meeting Center failed on loadCmd /Users/myuserid/Library/Application Support/WebEx Folder/T30_MC/Meeting

    My first reaction when I saw the console message was "gotcha". Right behind that was a big "duh", I probably could have guessed the application name if I put 2 seconds into the thought process. Anyway, once you know where the application package is, you can use Finder to navigate to the file. In my case:

    /Users/user/Library/Application Support/WebEx Folder/T30_MC/Meeting

    Use Control+click (or right-click) to bring up the context menu and click on Open. You'll receive a different error:

    From here, you can click on Open to create an exception for this application. You'll likely receive another message conveying how Meeting Center is launched automatically when you start or join a WebEx meeting. That's cool. We only wanted to add the exception to Gatekeeper. 

    Now, whenever I attempt to launch WebEx from Jabber I am taken right into the appropriate meeting room. 


    I haven't tested this with the latest versions of OS X. I did recently upgrade one of my lab machines to El Capitan, so I'll probably test there to see if the issue is presented on that platform. 

    If someone finds out before I do, please post in the comments!

    Thanks for reading. If you have time, post a comment!

    Using SQL to Query SIP Trunks

    This post is in response to a query I received on Twitter:
    @ucguerrilla - Would you have a SQL query in your toolbox to list SIP trunks with ip address, or point me in the right direction?
    This is an interesting question because the tables you need to look at may not be as obvious as seen with other queries where we need to join tables. So, let's take a look at what is involved with this query and possibly touch on some related queries.


    A brief primer is provided in the first blog of this series.


    For this installment we are going to look at how we can generate a recordset that shows individual SIP trunks with the assigned SIP destination(s). Starting with CUCM version 8.5(1), a feature was added that allowed more than one SIP destination to be added to a single SIP trunk. This necessitated a database schema modification in order to preserve referential integrity and keep the database tables optimized. The updated schema introduced some new tables to the database, which we are going to need to answer the original question.

    The queries provided in this entry are going to focus on UCM 10.5 but the queries should also work with UCM 8.5 and later. 
      Moving Parts

      The queries provided will leverage the following database tables:

      • Device: This table contains all of the information concerning devices provisioned on the system. Phones, gateways, trunks, CTI route points, media resources, and CTI ports are common devices. It is also worth noting that Route Lists and Hunt Lists are considered devices, too. 
      • Sipdevice: This table contains data specific to SIP trunk devices such as calling/called IE transformations, normalization script references, preferred codecs, QSIG parameters, and the like.
      • Siptrunkdestination: This table contains destination details for SIP trunk destinations associated with the entries in sipdevice.

      Example: Displaying SIP Trunks with Destinations

      The original question was to provide a way to list all of the SIP trunks with their associated destinations. The basic device information is stored in the device table, as one would expect. This includes the fields you would see on the SIP trunk configuration page under the "Device Information" section (such as MRGL, CSS, etc.). 

      We'll also need to use the sipdevice table, which contains data that is more specific to the SIP trunk parameters. The data in this table is similar to the data in the digitalaccesspri table used when querying MGCP trunks or the h323device table when querying H.323 gateways/trunks. For our basic query, we only use the sipdevice table to map our trunk device to the destination information stored in the siptrunkdestination table. 

      The Query

      A basic query that lists the device name, description, and destination information is provided below: 

      select d.name as device, d.description, std.sortorder, std.address, std.port
      from device d
      inner join sipdevice sd on sd.fkdevice = d.pkid
      inner join siptrunkdestination std on std.fksipdevice = sd.pkid
      order by d.name, std.sortorder

      The d.name field identifies the name of the trunk as provisioned in UCM. I like to pull descriptions (d.description) whenever I can, just to provide some contextual information. Since there can be more than one destination, it is a good idea to pull the sortorder field along with the address and port. The address will either be an IP address, FQDN, or SRV record. 

      We are joining tables where there is a one-to-many relationship. Given that, it is a good idea to use the "order by" clause to ensure everything is presented in a deterministic way. 

      Additional Information

      When I query for SIP trunks, I also grab other information such as:

      • The calling/called IE parameters in the sipdevice table
      • MRGL and CSS information from the device table (with joins to other tables, as needed)
      • Information from the securityprofile table (SIP trunk security profile)
      • Information from the sipprofile table
      You can see what is available in these tables by querying the systables and syscolumns system tables. I provide basic information on how to explore the system tables in this supplemental series entry on the Informix DB.
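      As an illustrative sketch of those extra joins: the foreign-key column names below (fksecurityprofile and fksipprofile on the device table) and the profile tables' name columns are from memory, not verified against the schema, so check them with syscolumns before relying on this:

```sql
-- Sketch only: trunk list with security/SIP profile names.
-- FK and column names are assumptions; verify via syscolumns first.
select d.name as device, d.description,
       secp.name as securityprofile, sp.name as sipprofile,
       std.address, std.port
from device d
inner join sipdevice sd on sd.fkdevice = d.pkid
inner join siptrunkdestination std on std.fksipdevice = sd.pkid
left outer join securityprofile secp on secp.pkid = d.fksecurityprofile
left outer join sipprofile sp on sp.pkid = d.fksipprofile
order by d.name, std.sortorder
```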

      Thanks for reading. If you have time, post a comment!

      Installing Cisco RTMT 10.5 on Apple OS X

      "Could you re-post the article covering installation of RTMT on OS X?" is a request I receive at least once a month. Well, the short answer to the question is: no, I can't repost the original content. I wasn't the original author and I am not willing to post someone else's content without their explicit consent. I doubt @ciscomonkey would mind but it still isn't cool.

      That said, I have an obligation to my readers and there have been enough changes to the RTMT installer to warrant revisiting the whole process. This article provides an updated step-by-step procedure for installing RTMT on Mac OS X. The procedures cover the most recent Cisco UC applications.


      Any work I have done around RTMT on Mac OS X wouldn't have been possible without the work of @ciscomonkey on a blog he used to maintain. That site is gone now but the original process provides a good foundation. So, props given.

      There have been changes along the way. When UCM 8.6 came out, there were some procedural modifications that were needed to avoid issues with loading the installer. Those modifications were originally documented by me here and eventually rolled into the original article. 

      The content provided below is based on the RTMT Linux installer from a CUCM 10.5 system and Mac OS X Mavericks. I also tested with OS X El Capitan, with some mixed results. I have a functional installation on OS X 10.11 but I encountered some procedural gaps.

      For those that have attempted to load RTMT and are running into issues, you may want to jump to the "Troubleshooting" section to see if you can find a solution.

      Enough background, let's get to it. 

      Step 1. Download the Installation File

      You can get the RTMT binary from any CUCM cluster. Simply open a web browser and go to https://publishernode/ccmadmin. Log in as a user with the appropriate permissions and then go to Application > Plugins.

      Using the search facility, find all files that contain the word "Real" (without quotes). Download the Linux binary by clicking on the appropriate download link. Save the file to the appropriate download folder on your Mac. 

      Step 2. Prepare Your Environment

      There are a couple of things to be mindful of and your mileage may vary. I am not a Java guru and RTMT has Java environment dependencies. I don't know them all, just the ones I have run across. 

      Modify Installer Attributes

      The .bin file won't run by default. You need to toggle the executable attribute on the file. To do this, launch a terminal application and go to the directory where you downloaded the binary file (I downloaded it to installfiles/cisco/rtmt/).

      galactus-2:RTMT xx$ cd ~/installfiles/cisco/rtmt/
      galactus-2:RTMT xx$ ls
      CcmServRtmtPlugin-10-5-2.bin CcmServRtmtPlugin-9-1-2.exe
      galactus-2:RTMT xx$ chmod +x ./CcmServRtmtPlugin-10-5-2.bin

      Check the Java Version

      To avoid running into the Unsupported major.minor version error discussed in the "Troubleshooting" section, I recommend you just implement the fix ahead of time. Either way, you'll know if you have the issue soon enough. Take a look at the fix provided at the end of this blog if you want to preemptively address the issue.

      Custom Install Directory

      I usually plan on having multiple RTMT versions installed at any given time. It is annoying but I just accept it as a fact of life. So, I create a custom sub-folder in my Applications folder for Cisco RTMT and then I create another sub-folder for the new version. I do this ahead of time because the installer wizard is going to balk at creating folders due to permissions issues. So, for this version of RTMT I created a new folder (/Applications/Cisco RTMT/JRTMT10.5.2/) in Finder.
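      Creating the folder ahead of time is a one-liner from terminal. The path is the one used in this walkthrough; sudo is needed because /Applications is typically root-owned:

```shell
# Pre-create the custom install folder so the wizard doesn't hit permission
# errors; -p creates the intermediate "Cisco RTMT" folder too
sudo mkdir -p "/Applications/Cisco RTMT/JRTMT10.5.2"
```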

      Step 3. Install RTMT

      To run the installer, use the following command:
      sh ./CcmServRtmtPlugin-10-5-2.bin LAX_VM /usr/bin/java

      The reason for specifying the VM is documented in the RTMT 8.6 blog article I published a while back. I found the installer behavior to be consistent with 9.x and 10.x installers.

      This should launch the installation wizard and you are almost home. 

      Walk through the wizard as you normally would. Note that if you want to use a custom directory, you'll want to create it before selecting the install directory in the wizard. Once the wizard completes the install, click on Finish.

      Step 4. Modify the Run Shell Script

      The run shell script that is installed with RTMT uses a fully qualified path to load the Java binary. The default path isn't the same as on a typical OS X system, so we simply have to edit the shell script to use the correct path. 

      The script is installed to whatever directory you chose. On my system: 

      /Applications/Cisco RTMT/JRTMT10.5.2/Jrtmt

      I recommend backing up the shell script with cp before editing it.

      Then edit the script file using vi, TextWrangler, or whatever floats your boat. We only need to change the very first part of the rather long command. Change the following:

      "./jre/bin/java"  (with quotes)

      to:

      "/usr/bin/java"  (with quotes)

      Save your changes.
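      If you'd rather make the change from the command line, sed can do the swap in place. Here "run.sh" is an assumed name for the RTMT launch script, and the target path /usr/bin/java matches the VM we handed to the installer; adjust both to your system:

```shell
# Example only: patch the launch script in place, keeping a .bak copy.
# "run.sh" is an assumed script name; the install path is from this walkthrough.
cd "/Applications/Cisco RTMT/JRTMT10.5.2/Jrtmt"
sed -i.bak 's#"\./jre/bin/java"#"/usr/bin/java"#' run.sh
```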


      Step 5. Run Shell Script

      I recommend doing this from the command line on the initial test. This will allow you to see any error messages that are generated. From terminal, change to the directory where you installed RTMT and run the shell script.

      If you see the following, you are on the right track. Specify your host FQDN or IP address and click OK.

      You should then be prompted to accept the certificate (unless it is already in your trust store).

      Finally, you will be prompted to authenticate yourself.

      If you run into issues, go to the troubleshooting section below. Also, double check that you followed the steps provided. 

      Step 6. Create Apple Script

      If you want to be able to run the RTMT application without having to load terminal, then create an AppleScript. Load the AppleScript Editor and type in the following:

      do shell script "cd /Applications/Jrtmt; ./"

      The above shows the default installation path, if you used a custom path then change the "cd" command accordingly (as shown in the following image). Run the script to make sure it works. 

      Once you confirm the AppleScript is functional, save the script locally on your machine. When you save the script set the file type as Application. You may need to save it on your desktop and then copy it to your Applications folder due to Applications folder permission constraints. 


      Troubleshooting

      Unsupported major.minor version Error

      There is an error that I ran into (and Google suggests others have, as well) where I received an error similar to the following AFTER the install but during the initial run of the shell script:

      galactus-2:JRtmt xx$ ./
      Exception in thread "main" java.lang.UnsupportedClassVersionError: com/cisco/ccm/serviceability/rtmt/ui/JRtmtMain : Unsupported major.minor version 51.0
      at java.lang.ClassLoader.defineClass1(Native Method)
      at java.lang.ClassLoader.defineClassCond(
      at java.lang.ClassLoader.defineClass(
      at Method)
      at java.lang.ClassLoader.loadClass(
      at sun.misc.Launcher$AppClassLoader.loadClass(
      at java.lang.ClassLoader.loadClass(

      I ran into this error on my Mavericks system. This is due to the fact that the JDK version used to compile the Cisco binary is newer than the Java version loaded in Mac OS X. First, what the hell is major.minor version 51.0? Based on my digging, the version identifiers are defined as:

      J2SE 8 = 52 (0x34 hex)
      J2SE 7 = 51 (0x33 hex)
      J2SE 6.0 = 50 (0x32 hex)
      J2SE 5.0 = 49 (0x31 hex)
      JDK 1.4 = 48 (0x30 hex)
      JDK 1.3 = 47 (0x2F hex)
      JDK 1.2 = 46 (0x2E hex)
      JDK 1.1 = 45 (0x2D hex)

      To verify what is going on, do the following. You'll notice that my Java runtime environment was version 6. RTMT is compiled with version 7 and I believe that is the crux of the issue.

      Check Your Current Java Version:

      galactus-2:JRtmt xx$ java -version
      java version "1.6.0_65"
      Java(TM) SE Runtime Environment (build 1.6.0_65-b14-462-11M4609)
      Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-462, mixed mode)

      Check Your Existing SDK:

      galactus-2:JRtmt xx$ /usr/libexec/java_home -verbose
      Matching Java Virtual Machines (2):
      1.6.0_65-b14-462, x86_64: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
      1.6.0_65-b14-462, i386: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

      The Fix

      There may be other ways to resolve this problem but I went ahead and installed JDK 8. From my understanding, you want to use the JDK vs. the JRE since the JDK will (a) include the JRE and (b) update the necessary symbolic links to the JRE used by OS X. 

      To verify:

      galactus-2:JRtmt xx$ /usr/bin/java -version
      java version "1.8.0_60"
      Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
      Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
      galactus-2:JRtmt xx$ /usr/libexec/java_home -verbose
      Matching Java Virtual Machines (3):
      1.8.0_60, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
      1.6.0_65-b14-462, x86_64: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
      1.6.0_65-b14-462, i386: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

      NOTE: If you had to update Java then you will most likely need to implement the following TzDataManager fix.

      TzDataManager Error: NullPointerException

      This error occurs after you run RTMT and provide your login credentials. So, it gets pretty far into the initialization process (or so it seems) and then pukes on you. The RTMT error is nondescript:

      At the console, you will see something like this:

      2015-10-10 16:44:13,908 [SplashThread] ERROR rtmt.control - TzDataManager:getStrClientTzVersion:[ERROR]:Ex: /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/jre/lib/zi/ZoneInfoMappings (No such file or directory)
      2015-10-10 16:44:13,908 [SplashThread] ERROR rtmt.control - [ERROR] In run thread SplashWindow: java.lang.NullPointerException
      2015-10-10 16:44:13,909 [SplashThread] ERROR rtmt.control - [ERROR] In run thread SplashWindow: java.lang.NullPointerException
      at Source)

      at Source)
      at Source)
      at$SplashWindow$ Source)

      Basically, RTMT is looking for a file "ZoneInfoMappings" in the home environment for the Java version that we are using (i.e. Java 8, in this case). The file isn't there and that breaks things. Actually, the entire directory (zi) where RTMT is looking for the file is missing. At least, this was the issue in my environment. 

      The Fix

      Again, there are probably other (and better) ways to fix this issue. I fixed it by doing the following. I found the "zi" directory in my Java 6 environment and I copied the entire directory to my Java 8 environment.

      galactus-2:lib xx$ /usr/libexec/java_home -verbose
      Matching Java Virtual Machines (3):
      1.8.0_60, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
      1.6.0_65-b14-462, x86_64: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
      1.6.0_65-b14-462, i386: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

      galactus-2:lib xx$ sudo cp -R /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/lib/zi /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/jre/lib/

      On the El Capitan test system, I had no previous Java version and I installed JDK 8. I honestly don't know if I had Java 6 on this system before I upgraded to 10.11. I found that I could work around the issue by creating a [JavaHome]/jre/lib/zi directory and creating a blank ZoneInfoMappings file using touch.

      wolverine:lib xx$ sudo mkdir zi
      wolverine:lib xx$ cd zi
      wolverine:zi xx$ sudo touch ZoneInfoMappings

      The above fix is suboptimal. For the life of me, I couldn't find a clean way to download the ZoneInfoMappings file and all of the time zone information files. I also tested by copying a full "zi" folder from another system. That worked. If you don't have that option, then simply creating the "dummy" file as noted above was also functional (albeit a bit quirky). Perhaps the answer is to download the legacy Java 6 runtime from Apple and then load JDK 8? If someone has a better handle on this, please post in the comments!

      Thanks for reading. If you have time, post a comment!

      Centrally Refreshing Jabber Contact Photos in MRA Deployments

      I have assisted several customers with Jabber deployments lately. Almost all of them were driven by the desire to implement Mobile and Remote Access (MRA). Provisioning MRA is not the topic of this article. Instead, I wanted to touch on an operational task that has annoyed me for some time. 

      The issue is with caching of Jabber contact photos. Windows and Mac Jabber clients are coded to cache various things. I assume this is an effort to minimize network transactions and optimize client performance. One of the elements cached is contact photos. I have no problem with caching but, in my experience, the issue is that old contact photos tend to be permanently cached and I can't easily (i.e. centrally) force a refresh. 

      I played around with some options and found a method that worked in some test environments and my own production environment. Maybe this will work for others. I would be interested in hearing about other options. Please use the comments to enlighten me and others!


      There are several Jabber scenarios and it is possible that other combinations of parameters will behave differently. This article is specifically focused on the following scenario:

      • Jabber for Windows (J4W) 10.5, 11.0
      • Jabber for Mac (J4M) 10.6
      • Cisco Expressway or VCS X8.5.2
      • CUCM 10.5(2)

      This scenario uses Cisco Mobile and Remote Access (MRA), which implies that we are using URLs to serve up contact photos and NOT AD object attributes. Specifically, we have an entry in the jabber-config.xml that specifies a URL that should be used for downloading contact photos. Something like this:
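      A minimal example of the relevant jabber-config.xml entry. The hostname is a placeholder, and the %%uid%% token is substituted by the client with the contact's user ID:

      ```xml
      <?xml version="1.0" encoding="utf-8"?>
      <config version="1.0">
        <Directory>
          <!-- photos.example.com is a placeholder; point this at your photo web server -->
          <UdsPhotoUriWithToken>http://photos.example.com/photo%%uid%%.jpg</UdsPhotoUriWithToken>
        </Directory>
      </config>
      ```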



      The Issue

      The issue is fairly simple. Assume that you have provisioned your MRA environment and you add or change the contact image for one or more Jabber users. What you'll find is that your J4W and J4M clients may not render the new contact photo for an existing contact in their contact list. 

      Now, the user experience will vary depending on the scenario:

        New Contact, New Photo

        Assume that a Jabber user (e.g. jsmith) had a new image added to the repository. Whether the user jsmith had a previous image or not, the behavior in the "New Contact, New Photo" scenario is the same.

        In this scenario, you have a watcher (someone with a contact list in Jabber) that was not watching jsmith prior to the photo change. When this user adds the contact, the new photo should be rendered. Note, I have noticed that if a Jabber user previously searched for the new contact (e.g. jsmith) to place a call (but never added the contact) then it is possible that the old image is still cached.

        Existing Contact, New Photo

        In this scenario, the Jabber user (jsmith) is in another user's contact list. However, jsmith never had a contact photo. So, we were seeing the "grey head" avatar. The admin uploads a new photo to the web server and expects the contact photo to automatically refresh. 

        In my experience, the contact photo does not refresh for Jabber clients until something else happens. For instance, the presence status for jsmith changes or a Jabber user starts a chat with or places a call to jsmith. Logging out and logging back in would probably refresh the contact photo, too. I believe I tested that but my memory may be off. 

        Existing Contact, Changed Photo

        This is the annoying scenario. User jsmith is a contact for another Jabber user and we recently changed the contact photo for Mr. Smith. In my experience, the contact photo for J4W and J4M clients will never refresh. We'll just keep seeing the old photo. I tested this across a multi-day period in a production environment. Contact photos would not refresh. 

        Deleting and re-adding the contact does not do squat, nor does logging out and logging back in. Without some intervention, the contact image doesn't change.

        What About Mobile Clients

        For the sake of clarity, let's talk about the iOS and Android Jabber clients. In my testing (iPhone, iPad, and Android tablet), the Jabber clients on these platforms will automatically refresh contact photos. My assumption is that they don't cache the photos. I did observe that the Jabber watcher needs to do something on their end that forces a screen refresh in the UI. 

        The Fix

        There are basically two fixes that I am aware of. One is a method documented by Cisco and the other is something I experimented with in my lab and one production environment. 

        Wipe the Cache

        The method I have seen documented by Cisco is to basically delete the photo cache stored in the file system of the client OS. The process for Windows clients is documented here:

        For OSX clients, the photo cache is located here:

        /Users/<user>/Library/Application Support/Cisco/Unified Communications/Jabber/CSF/Photo Cache/

        The process is simple. Shut down the Jabber client (completely exit the application). Go to the appropriate folder and delete all of the .jpg or .png files in the photo cache. Then reload the Jabber client and your photos should be refreshed.

        While functional, this method has an elevated suck factor in most environments. If you have an environment where everyone logs out of their workstation at the end of the day, and you have a method to run centralized administrative scripts when they log back in, then maybe you don't care. I don't think this is as common a practice as it used to be. 

        Regardless, this is the only documented method that I have come across when searching for a resolution to this problem.
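        If you do have a way to push scripts to workstations, the wipe itself is trivial to script. Here is a minimal Python sketch (the helper name is mine, not a Cisco tool; Jabber must be fully exited before running it):

        ```python
        import glob
        import os

        def wipe_photo_cache(cache_dir: str) -> int:
            """Delete cached Jabber contact photos (.jpg/.png); returns the number removed."""
            removed = 0
            for pattern in ("*.jpg", "*.png"):
                for path in glob.glob(os.path.join(cache_dir, pattern)):
                    os.remove(path)
                    removed += 1
            return removed

        # Jabber for Mac photo cache location, per the path noted above
        JABBER_CACHE = os.path.expanduser(
            "~/Library/Application Support/Cisco/Unified Communications/Jabber/CSF/Photo Cache")
        ```

        Call wipe_photo_cache(JABBER_CACHE) with Jabber fully exited, then relaunch the client.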

        Change Jabber Config File

        Consider this a candidate solution at this time. I haven't exhaustively tested it but I have tested in a lab environment and my corporate production environment. Basically, I pondered the question of whether the Jabber client would detect that the contact photo URL had changed and that it would respond by retrieving new contact images. My test results were positive.

        The process I used:

        1. Backup the current jabber-config.xml file. You can use a TFTP client to do this or you can go to http://<CMTftpHost>:6970/jabber-config.xml and save the XML document to a text file.

        2. Make the necessary changes in your environment to accommodate a change to the UdsPhotoUriWithToken attribute. For instance, in the example provided earlier, my contact photo URL was

        So, I added a new DNS "A" record. I then updated my web server (IIS) to add the appropriate bindings. If you are modifying the DNS, then you should test DNS resolution from your Expressway-C or VCS-C appliance. Go to Maintenance > Tools > Network utilities > DNS lookup and test DNS resolution.

        3. Update the jabber-config.xml file that you downloaded. Change the contact photo URL. 

        4. Upload the jabber-config.xml file to all of the TFTP servers in your CUCM cluster. 

        5. Restart the TFTP service on all affected TFTP nodes. 

        6. Go to http://<CMTftpHost>:6970/jabber-config.xml and double check your work. 

        Once this is done, I found that if I logout/login or restart the J4W or J4M clients that my contact photos refreshed. Interestingly enough, I also found that when the user with the new contact photo loads their client they still see the old contact photo. Even after the remedy is applied. If they view their own user profile then the contact photo refreshes. 
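        The double check in step 6 can also be scripted. A hedged sketch that pulls the photo URL template out of a jabber-config.xml body (here parsed from an inline sample with a placeholder hostname; in practice you would fetch http://<CMTftpHost>:6970/jabber-config.xml with urllib and pass the response text in):

        ```python
        import xml.etree.ElementTree as ET

        # Inline sample standing in for a downloaded jabber-config.xml;
        # photos.example.com is a placeholder hostname
        SAMPLE = """<config version="1.0">
          <Directory>
            <UdsPhotoUriWithToken>http://photos.example.com/photo%%uid%%.jpg</UdsPhotoUriWithToken>
          </Directory>
        </config>"""

        def photo_uri(xml_text: str) -> str:
            """Return the UdsPhotoUriWithToken value from a jabber-config.xml body."""
            node = ET.fromstring(xml_text).find(".//UdsPhotoUriWithToken")
            return node.text if node is not None else ""

        print(photo_uri(SAMPLE))  # http://photos.example.com/photo%%uid%%.jpg
        ```

        If the printed URL is still the old one after your TFTP restart, the clients will keep serving stale photos.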


        The process I provide above ("Change Jabber Config File") isn't what I would call fully vetted. The concept worked for me and I may test it further but your mileage may vary. I am curious if this approach works for others. Please use the comments below.

        Thanks for reading. If you have time, post a comment!

        Heads Up, Issues with UCM SIP Processing on 10.5(2)SU2 and 11.0

        Users that recently downloaded Cisco Unified Communications Manager (CUCM) 10.5(2)SU2 [10.5(2.12900-14)] or CUCM 11.0 [11.0(1.10000-10)] may have received an email from Cisco alerting them that these versions have been deferred due to some serious defects. If so, then good for you. If you ignored the e-mail then you receive the wag of the finger.

        Maybe you are managing a system affected by the defects we are going to discuss, but you weren't the one to download the files. Well, in that case, you wouldn't have been alerted. In all cases, if you have installed 10.5(2)SU2 or 11.0 and are running SIP, then you want to heed the warnings and look at applying the appropriate fixes. 


        One of the nasty software defects that you could run into if you are using SIP trunks to ITSPs or other call processing systems is CSCuu97800. I have a customer that hit this defect recently. 

        The basic gist is that if a SIP message received by the CUCM causes the CallManager process to resolve an FQDN via DNS, the CallManager process will abnormally terminate. This means that route lists, hunt lists, phones, gateways, media resources, etc. will be immediately impacted. IOW, all hell breaks loose. 

        Are You Impacted?

        It is very easy to determine if you are encountering this issue. According to the defect notes, this defect (and related defect CSCut30176) affects specific versions of CUCM:


        So, check versions on your CUCM as a preliminary check. After this, check to see if your cluster nodes are generating core dumps. To do this, SSH to the console of any CUCM node running the CallManager service and execute this command:

        admin:utils core active list

        Size Date Core File Name
        254020 KB 2015-07-20 09:13:07 core.8414.11.ccm.1437397985
        249276 KB 2015-07-20 10:24:38 core.3855.11.ccm.1437402278
        244924 KB 2015-07-20 08:38:13 core.22333.11.ccm.1437395891
        248900 KB 2015-07-20 10:18:28 core.28705.11.ccm.1437401907
        244228 KB 2015-07-20 11:28:22 core.1786.11.ccm.1437406099
        242040 KB 2015-07-20 10:46:03 core.10190.11.ccm.1437403562
        245716 KB 2015-07-20 12:30:38 core.27040.11.ccm.1437409838
        253316 KB 2015-07-20 09:26:18 core.29177.11.ccm.1437398777
        246008 KB 2015-07-20 12:02:19 core.27950.11.ccm.1437408138
        246908 KB 2015-07-20 08:04:03 core.10454.11.ccm.1437393837
        252348 KB 2015-07-20 09:33:50 core.9115.11.ccm.1437399230

        To verify if you are affected by CSCuu97800 (or related defect CSCut30176) you should analyze one of the files. For example:

        admin:utils core active analyze core.8414.11.ccm.1437397985

        You are prompted with a warning that says core analysis will eat up CPU cycles. If you are doing this during core business hours, then you will most likely not care about the CPU cycles because your system is already compromised. 

        Once the analysis has run, scroll down until you find the section "backtrace - CUCM". Review the trace and compare it to the conditions provided in CSCuu97800. If they line up, then you are definitely running into this bug.


        Simple. Follow the recommendations in the software defect. Take a couple of minutes to review the Readme file of the fix (for 10.5:

        Then install the patch. It will restart the CallManager service. Again, given the nature of this defect, I doubt it matters much that this patch requires a service restart. That said, in a multi-node cluster environment, select specific nodes and patch them first. Then stop the CallManager service on the remaining nodes (allowing devices to fail over to the patched systems). After phones get to a stable node, patch the rest of the cluster.

        We applied the necessary patch and so far, so good. 

        Other Thoughts

        The nature of this kind of defect definitely separates the pro troubleshooters from the amateurs. On the surface, it will appear like the whole world is coming down around you. When that happens I follow a simple rule: Don't chase the symptoms. 

        If you have something like "All of my SIP calls are failing but everything else is working" then, by all means, follow the symptoms and try to isolate your fault domain. If you are seeing multiple, apparently unrelated symptoms across different services and devices, then following one symptom is going to waste time. Start from the "bottom up". 

        1. Check physical: network, compute resources, hypervisor management software (so, v-resources fall here)

        2. Check logical. Focus on LACP, Layer 2 failures/convergence status indicators, Layer 3 routing topology changes, etc.

        3. Check service logs on CUCM [ This is where we got our first clue of a service issue ]

        4. Check specific applications / features

        Once you find a clue, follow it. If you have a team working on the issue, pick one resource and have them start trolling for software defects while other resources are applying a logical troubleshooting methodology. If you are an integrator, line up three people: one to manage the customer, one to focus on the basic troubleshooting / data gathering, and one to work the vendor angle (including TAC). 

        Thanks for reading. If you have time, post a comment!

        Using SQL To Survey Phone Station Line Appearances

        A reader comment on one of the entries in the SQL Query series asks the question:

        I have multiple lines associated to the same phone and i'm trying to write a query to get only Line [1] "main line" ,any help please ??
        We talked about querying line appearances associated with phones in one of the early installments. Now we want to turn some extra knobs to focus on specific data views. I want to provide an example query to address the reader's question and also touch on another, related query to show an example of how we can find anomalous data in our UCM solution. 

        There are lots of ways to look at Device/Line associations. Especially if you get into the business of identifying user/line and Directory URI associations. We won't get into all of that in this installment but I think it is a good thread to follow. So, let's consider this a "Part 1" for the time being.


        In this installment, we are going to kick around a few tables to render data views that will help us identify device and directory number relationships. We'll focus our attention on two primary tables along with a "mapping table": THE mapping table when you are dealing with directory numbers. 

        A brief primer is provided in the first blog of this series.

        The Tables

        The tables we are going to focus on are as follows:
        • Device: This table contains all devices (physical and logical) provisioned in the system. This includes phones, gateways, trunks, route lists, hunt lists, etc. 
        • Numplan: This table contains all digit patterns provisioned in the system. This includes directory numbers, route patterns, translation patterns, etc.
        • Devicenumplanmap: This is a "mapping" table. It is used to store device to numplan mappings. Lots of magic happening here.

        The Queries

        Primary Line Appearance

        The inspiration for this entry comes from the following question posed by a reader:
        How do I write a query to only show the main line (Line 1) on my phones?
        Getting this information is fairly straightforward and it all comes down to the extra bits of data that are stored in the devicenumplanmap table. Here is a sample query:

        select as Phone, d.description, n.dnorpattern as PrimaryDN
        from device d
        inner join devicenumplanmap dmap on dmap.fkdevice=d.pkid
        inner join numplan n on dmap.fknumplan=n.pkid
        where d.tkclass=1 and dmap.numplanindex=1
        order by

        So, we are listing phones with descriptions and a single directory number entry. Since we are using an "inner join" method here, we will only list phones that actually have a primary DN. Listing phones that don't have a DN at all is also a bit of data one may want to see but we'll get back to that.

        Remember that the device table stores ALL devices (not just phones). So, we are limiting our query to only look at phones by using the tkclass field in the device table. The "magic" (if you will) comes from the devicenumplanmap table. Specifically, the numplanindex field is used to identify the DN position on the device. A value of "1" means this is the first line appearance on the phone.  

        Null Line Appearance

        Let's go the other way with this. Assume you wanted to show phones where there is no primary line appearance. It happens all of the time on systems of all shapes and sizes. One of the things I look at when a customer asks us to do some optimization is to look for phones that are sitting there with no DNs. Sometimes this is legit and sometimes it is an absolute mess. 

        Here is one way to get the info we want:

        select as Phone, d.description
        from device d
        where d.tkclass=1 and
        1>(select count(dmap.pkid) from devicenumplanmap dmap where dmap.fkdevice=d.pkid and dmap.numplanindex=1)
        order by

        So, this query will list phones and descriptions for devices that do not have a directory number associated with their primary line appearance (i.e. first button on the phone). Again, we are filtering on type class of 1 (tkclass). This gets us "phones". We are also running an inner select query to get a count for the number of entries in the devicenumplanmap table where the phone exists AND the numplanindex is actually "1" (which means the first line appearance). 

        If the count returned from the inner select is "1" then our criteria fails and that phone isn't listed in the output. If the count is "0" then we know that we have a phone that doesn't have a DN association on the primary line.
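        Both queries are easy to sanity check against a mock schema. The sketch below uses Python's sqlite3 in place of the CUCM Informix database, with invented data; the table and column names mirror the real schema, but this is illustration only:

        ```python
        import sqlite3

        con = sqlite3.connect(":memory:")
        cur = con.cursor()

        # Miniature stand-ins for the device, numplan, and devicenumplanmap tables
        cur.executescript("""
        CREATE TABLE device (pkid TEXT PRIMARY KEY, name TEXT, description TEXT, tkclass INTEGER);
        CREATE TABLE numplan (pkid TEXT PRIMARY KEY, dnorpattern TEXT);
        CREATE TABLE devicenumplanmap (pkid TEXT PRIMARY KEY, fkdevice TEXT,
                                       fknumplan TEXT, numplanindex INTEGER);

        INSERT INTO device VALUES ('d1', 'SEP001122334455', 'Lobby phone', 1);
        INSERT INTO device VALUES ('d2', 'SEP00AABBCCDDEE', 'Spare phone (no DN)', 1);
        INSERT INTO numplan VALUES ('n1', '1001');
        INSERT INTO devicenumplanmap VALUES ('m1', 'd1', 'n1', 1);
        """)

        # Primary line appearances (numplanindex = 1)
        primary = cur.execute("""
        SELECT AS Phone, d.description, n.dnorpattern AS PrimaryDN
        FROM device d
        INNER JOIN devicenumplanmap dmap ON dmap.fkdevice = d.pkid
        INNER JOIN numplan n ON dmap.fknumplan = n.pkid
        WHERE d.tkclass = 1 AND dmap.numplanindex = 1
        ORDER BY""").fetchall()

        # Phones with no DN on the primary line appearance
        no_dn = cur.execute("""
        SELECT AS Phone, d.description
        FROM device d
        WHERE d.tkclass = 1 AND
        1 > (SELECT COUNT(dmap.pkid) FROM devicenumplanmap dmap
             WHERE dmap.fkdevice = d.pkid AND dmap.numplanindex = 1)
        ORDER BY""").fetchall()

        print(primary)  # only the phone with a primary DN
        print(no_dn)    # only the phone without one
        ```

        The inner join drops the DN-less phone from the first result set, while the correlated count subquery surfaces it in the second: exactly the behavior described above.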

        What Else?

        The queries provided in this installment are actually part of a "set" of queries I use when doing an assessment on a customer system. This query set has grown over time because there are many different ways to look at the data in UCM: associations to specific route partitions, shared line appearances, DNs that aren't shared lines, external phone masks, calling search space associations, and so on.

        I have to get back to my regular J.O.B. So, I think maybe I will just throw a few of these in the mix from time to time. If you have a specific data view you are interested in, let me know in the comments!

        Thanks for reading. If you have time, post a comment!

        CentOS Recovery Use Case 5: Downloading the Tomcat Certificate Private Key

        I recently published a blog entry on how one could use the CentOS distribution and Recovery process to access the Cisco UCOS root file system. As noted in the initial blog, this isn't a new revelation. I originally was going to provide a group of use cases in the "primer" but decided that it was a little too long. 

        So, I am breaking the use cases out into individual entries. Who knows, over time this may become another series. For now, let's focus on one of the CentOS recovery use cases: Downloading the Tomcat Certificate Private Key.


        Unlike the other articles in this series, this particular entry is focused on a task that is more "pro-active" than reactive. Anyone who is worth their salt in this game knows that they have to strive for a deeper understanding of how things work if they are going to excel. Deeper than you are going to find in documentation provided by the software manufacturer.

        In our communications arena, this usually means: protocol analysis! Er mer gerd, Perkets! 

        Protocol analysis and packet "sniffing" is probably one of my favorite things. It is right up there with tinkering with the CUCM SQL DB and custom building scripts/apps to automate tasks. 

        The Challenge

        Getting the TFTP, SCCP, SIP, MGCP, etc. packet traces is easy. However, more and more of the communication transactions that Cisco UC applications are fulfilling rely on HTTPS. Transactions between IM&P and the UCM cluster, and Mobile and Remote Access (MRA), are only two of the more interesting things that some of us would like more visibility on. Also, let's not forget that using TLS for the aforementioned communication protocols could also make the underpinnings less transparent.

        So, what is one to do? Well, we can use an application like Wireshark to view the packet traces. Of course, since the HTTPS communication is encrypted, we need to have access to the private keys to decrypt the communication. 

        The process outlined herein covers how to download private keys for self-signed certificates.
          The Procedure

          The CentOS boot process is discussed in a separate blog entry (read that first). To access the private keys, do the following after booting into CentOS:

          Note: It is recommended that you enable the network boot option as part of the CentOS recovery process.

          1. Go to the Tomcat cert directory using the following command:
          cd /usr/local/platform/.security/tomcat/keys/

          2. Execute the following command to create a format for use with Wireshark:
          openssl pkcs8 -nocrypt -in tomcat_priv.pem -out tomcat-rsa-private.key

          3. SFTP the file created in the previous step to your workstation. I use Mac OS X, so SFTP is easily provisioned. If you are using a Windows OS, then you can download a third-party application (maybe FileZilla Server would work).

          Unlike CentOS 5, CentOS 7 doesn't give you a network configuration wizard during the Recovery initialization process. You can provision the network after CentOS 7 is booted up using a process similar to this one.

          Using the Key in Wireshark

          I'll probably provide a more detailed discussion with examples in a separate blog. That said, it would be a little unfair if I failed to give at least a high-level overview of the procedures for loading the RSA key from CUCM into Wireshark. The following procedures work on Mac OS X.

          1. Launch Wireshark 

          2. Go to Edit > Preferences

          3. Go to Protocols > SSL

          4. Click on the configuration option "RSA Keys List"

          5. Click on New to add a new RSA key entry

          6. Enter in the parameters and point to the RSA key file. Click on OK.

          7. Click on Apply/OK

          You should be good to go.

          Thanks for reading. If you have time, post a comment!

          CentOS Recovery Use Case 4: Fixing Errors with Custom Announcements

          I recently published a blog entry on how one could use the CentOS distribution and Recovery process to access the Cisco UCOS root file system. As noted in the initial blog, this isn't a new revelation. I originally was going to provide a group of use cases in the "primer" but decided that it was a little too long. 

          So, I am breaking the use cases out into individual entries. Who knows, over time this may become another series. For now, let's focus on one of the CentOS recovery use cases: Fixing Errors with Custom Announcement Uploads.


          In this blog entry, we are (yet again) working around a software defect. I have seen the issue in UCM version 8.6 and 9.1(2). The problem arises when you are attempting to upload a custom announcement for use with the Hunt List queuing feature. When uploading the prompt, you receive the error "The .wav file could not be translated by the Audio Translator Application".

          The software defect is CSCua90744.

          The Fix

          As with the article covering TFTP custom ring tones, the root cause of this problem is permissions. I used the procedures outlined here to fix a UCM 9.1(2) system in my lab. I compared file permissions on a UCM 10.0 cluster (which had no issue uploading the prompt) with the UCM 9.1(2) system (that exhibited the issue). After applying the fix, I was able to upload the custom announcements.

          From what I could determine, the issue is the permissions setting on the "CustomAnn" directory:

          Broken Example:
          ls -ld /common/log/taos-log-a/cm/tftpdata/
          drwxr-xr-x 2 root root 4096 Mar 11 2014 CustomAnn/

          Working Example:
          ls -ld /common/log/taos-log-a/cm/tftpdata/
          drwxr-xr-- 2 tomcat ccmbase 4096 Mar 11 2014 CustomAnn/

          As you can see, there are three issues:

          1. The owner (user) is wrong
          2. The group owner is wrong
          3. The user/group permissions are wrong

          The Procedure

          The CentOS boot process is discussed in a separate blog entry (read that first). To fix the custom announcement upload issue, do the following after booting into CentOS:

          1. Go to the affected directory using the command: cd /common/log/taos-log-a/cm/tftpdata

          2. Change the ownership attribute using the command:
          chown tomcat:ccmbase CustomAnn

          3. Change the permissions using the command:
          chmod 754 CustomAnn

          4. Check your work:
          ls -ld */

          5. Type exit at the prompt to reboot the VM.

          6. Disconnect the CentOS ISO from your VM guest.

          You will need to repeat this process on the Publisher node and on all Music On Hold nodes in your cluster.
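          As a side note, the 754 mode used in step 3 maps exactly to the drwxr-xr-- string shown in the working example. A quick Python illustration of the octal-to-symbolic mapping:

          ```python
          import stat

          # 0o754 = rwx (owner) / r-x (group) / r-- (other); S_IFDIR marks a directory
          print(stat.filemode(stat.S_IFDIR | 0o754))  # drwxr-xr--
          ```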

          Thanks for reading. If you have time, post a comment!

          CentOS Recovery Use Case 3: Fixing TFTP Custom Ring Tone Issues

          I recently published a blog entry on how one could use the CentOS distribution and Recovery process to access the Cisco UCOS root file system. As noted in the initial blog, this isn't a new revelation. I originally was going to provide a group of use cases in the "primer" but decided that it was a little too long. 

          So, I am breaking the use cases out into individual entries. Who knows, over time this may become another series. For now, let's focus on one of the CentOS recovery use cases: Fixing the TFTP Custom Ring Tone Issues.


          The issue we are going to focus on in this entry is a bug where IP phones are unable to download ring tones. The issue arises as a result of an underlying permissions issue in the OS. We came across the problem when using PCD to upgrade a Cisco UCM cluster from 8.5 to 10.5. 

          The software defect is CSCui42799

          The Fix

          The fix is to change permissions on specific TFTP files. More specifically, the issue is the file ownership.

          For example:

          Broken Example:
          ls -l /usr/local/cm/tftp/Ringlist.xml
          -rwxrwx---. 1 adminsftp download      2657 Apr  2  2008 /usr/local/cm/tftp/Ringlist.xml

          Working Example:
          ls -l /usr/local/cm/tftp/Ringlist.xml
          -rwxrwx---. 1 ctftp ccmbase      2657 Apr  2  2008 /usr/local/cm/tftp/Ringlist.xml

          In the "broken" example, the owner is "adminsftp" but it is supposed to be the user "ctftp". You could paper over this by adding "rwx" permissions for other users with "chmod", but the more correct procedure is to change the owner using "chown".

          The Procedure

          The CentOS boot process is discussed in a separate blog entry (read that first). To fix the TFTP issue, do the following after booting into CentOS:

          1. Go to the appropriate directory using the command: cd /usr/local/cm/tftp/

          2. Change the ownership of Ringlist.xml using the command:
          chown ctftp:ccmbase Ringlist.xml

          3. Change the ownership of the DistinctiveRingList.xml file using the command:
          chown ctftp:ccmbase DistinctiveRingList.xml

          4. Change the ownership of the raw ring tone files using the command:
          chown ctftp:ccmbase *.raw

          5. On my system, there is one ring tone that has different ownership (and I am not affected by the software defect). So, you probably want to fix that, too:
          chown ccmbase:ccmbase CallBack.raw

          6. Type exit at the prompt to reboot the VM.

          7. Disconnect the CentOS ISO from your VM guest.

          You will need to repeat this process on all TFTP nodes in your UCM cluster.

          Thanks for reading. If you have time, post a comment!

          CentOS Recovery Use Case 2: License Expiry Issue

          I recently published a blog entry on how one could use the CentOS distribution and Recovery process to access the Cisco UCOS root file system. As noted in the initial blog, this isn't a new revelation. I originally was going to provide a group of use cases in the "primer" but decided that it was a little too long. 

          So, I am breaking the use cases out into individual entries. Who knows, over time this may become another series. For now, let's focus on one of the CentOS recovery use cases: Fixing the License Expiry Issue.


          I came across this issue a couple of times when doing the "Jump Upgrade" for Cisco UCM clusters. There are a couple of software defects that could bite you if you aren't paying attention:

          Originally, I ran into this issue when there was no fix. Of course, I later ran into it after there was a fix but before I knew about the fix! Which does highlight the most important lesson: it is better to RTFM than it is to CentOS your install!

          Anyway, the issues arise when you are doing the Jump Upgrade and you fail to install the "refresh upgrade" (RU) COP file before doing a DRS restore of a production CUCM 7.1 system. The issue manifests itself with the extremely helpful error message: "Upgrades are prohibited during Licensing Grace Period".

          If you find yourself running into this issue then you can simply delete the text file that the system is using to "detect" the license problem and go about your business. You only need to do this on the Publisher node.


          The CentOS boot process is discussed in a separate blog entry (read that first). To fix the License Expiry issue, do the following after booting into CentOS:

          1. Make a copy of the license expiry file using the command:
          cp /usr/local/platform/conf/licexpiry.txt licexpiry.txt.backup

          2. Remove the license expiry file using the command:
          rm /usr/local/platform/conf/licexpiry.txt

          3. Type exit at the prompt to reboot the VM.

          4. Disconnect the CentOS ISO from your VM guest.
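If you prefer to script the edit rather than type the two commands by hand, the same fix can be wrapped in a small shell function. This is just a sketch of the steps above (the conf path is the one from steps 1 and 2; dry-run it on a lab VM before using it in anger):

```shell
#!/bin/sh
# remove_licexpiry: back up, then delete, the license expiry marker file
# that triggers "Upgrades are prohibited during Licensing Grace Period".
# Run from the CentOS rescue shell after `chroot /mnt/sysimage`.
remove_licexpiry() {
    licfile="$1/licexpiry.txt"
    if [ -f "$licfile" ]; then
        cp "$licfile" "$licfile.backup"   # keep a copy next to the original
        rm "$licfile"
        return 0
    fi
    return 1    # marker file not present; nothing to do
}

# On the real system (Publisher node only):
#   remove_licexpiry /usr/local/platform/conf
```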

          Thanks for reading. If you have time, post a comment!

          Sometimes You Have to Use the Backdoor: Using CentOS to Access Cisco UCOS

          It is the middle of the night and you are in the midst of a change control when you run into a brick wall. The kinda wall that can ruin your entire weekend. At a minimum, you have added at least a few hours to the process and boy you are not happy about that. 

          Sometimes you just need more access than "the man" wants to give you and you don't want to wait for some tech support engineer to get on the phone to do something you can damn well handle on your own. Yes, sometimes you have to reach into the unconventional pocket of your tool belt and break off a little somethin'-somethin'. This series provides the necessary tools to get access to the Cisco UCOS root file system so that you can get the job done. 


          I won't pretend that I am the first one to find a way to "hack" into the Cisco UCOS because I am most certainly not that dude. I am also not the first guy to blog about the process I am getting ready to present. What I am trying to do in this series is consolidate information and present some of the scenarios where one could apply said information. 

          The Issues

          The solution we are going to discuss can solve several issues and I plan to go through a handful of actual "real life" scenarios to demonstrate the same. However, I like to state the "business issue" before I get all self-righteous about a solution. So, the issue (or need) is that sometimes you need to get access under the hood to fix a problem with your CUCM, Unity Connection, UCCX, etc. 

          There are times when the UC application portal (e.g. CCMAdmin, CMPlatform) is inadequate and the limited UCOS shell falls short of addressing your woes. If you have spent any time in the driver seat of a Cisco UC deployment then I don't need to explain the kind of problems that warrant bigger guns. 

          The Disclaimer

          I have to include a disclaimer here because I don't want anyone whining to me about a broken Cisco UCM cluster in the event they do something to shoot themselves in the foot. The disclaimer is: don't shoot yourself in the foot! More to the point, Cisco does not in any way approve of the methods I am going to document in this entry. Further, I am not in any way claiming that everything I am going to present will work in every scenario. If you follow the processes and/or methods provided herein then you, and you alone, take responsibility for any issues that may arise. 

          Look at the bright side - if you save the day, you can also take all of the credit. Yay you!

          Seriously, be careful. I have used these methods on production systems but MOST of the time I use them on lab systems or temporary staging clusters and I am much more cavalier about those systems. If you have a problem, I can't guarantee I can help you out of it. 

          The Process

          We are going to cover the following:
          • Using CentOS to access the root file system of a UCOS application host
          • Some scenarios that demonstrate why you would do such a thing (separate, follow up entries throughout the week)

          Using CentOS

          CentOS (Community ENTerprise Operating System) is a community-supported Linux distribution derived from sources that Red Hat provides for Red Hat Enterprise Linux (RHEL). CentOS aims to be functionally compatible with RHEL and, since Cisco's UCOS is based on RHEL, it is the perfect distro for our purposes. 

          Downloading CentOS

          To do our do, we are going to download ISO images for the purpose of mounting them on virtual machines (VMs) in ESXi (4x/5x). The ISO images we need are located here:

          Now, you will need to pay attention to the distribution versions. That is pretty important, particularly since the latest releases of the Cisco UC applications leverage RHEL x64 architecture. So, if you try to use a CentOS distribution that is built for i386 architectures on a UCM 10.x system (for example) then you'll be a little disappointed.

          I have used the following:
          • CentOS 5.10: I have used this for UCM 6x, 7x, 8x, 9x
          • CentOS 7.0.1406: I have used this for UCM 10.0, 10.5

          Using the CentOS ISO means that we have to shut down the running UCOS VM and then boot from the ISO. So, the first step is to clear the room. Just kidding, the actual first step is to make sure you schedule a proper outage before you do anything. Then, during the outage, you will shut down your VM.

          If you are doing this on a production system then I would give serious consideration to taking a snapshot if you are uncomfortable with the process. Most of the time, I find that tasks where I have had to use this method usually coincide with a lab or off-production staging process. So, I don't bother with snapshots in those instances. 

          You will want to download the CentOS ISO and then upload it to a SAN, NAS, NFS share, or DAS as appropriate for your environment. Make sure you use a datastore that has been added to your ESXi environment.

          Finally, you will want to modify the settings of your VM guest so that you can mount the CentOS ISO on boot. Depending on your environment, you may need to:

          • Set the DVD vHardware to use the datastore ISO
          • Select the option to "Connect at Power On"
          • Modify the VM guest bios boot order to prefer DVD over vHDD

          Booting Up CentOS 5.x

          Once you have your environment set up, use the following process:

          1. Power on the VM guest.

          2. You will be greeted with the CentOS splash screen:

          3. Type in "linux rescue" (without quotes) at the boot: prompt.

          4. You are prompted to choose a language, select English and click OK.

          5. You are prompted to specify your keyboard type, select us and click OK (with language and keyboard, use whatever works for you, I have only tested en_us).

          6. You are asked whether you want to enable network interfaces or not. If you have a need to pull files off of the UCOS host then I recommend enabling the network. For example, later in the week we are going to discuss how to download the private keys (for the purpose of decoding communications to/from the UC host) and it is much easier to SCP/SFTP the files from the host. If you select No then move to Step 7.

          6a. If you are configuring the network, you will be prompted to configure "eth0", select Yes

          6b. At the network configuration screen, disable IPv6 and enable IPv4, then select OK

          6c. When prompted to specify the IP configuration, choose "Manual" and specify a usable IP address before clicking OK

          6d. Set the default gateway and click OK

          7. You will receive a dialog where the Rescuer says it is going to attempt to find your Linux installation and mount the file system. Click on Continue.

          8. If the Rescuer was successful, it will mount your Linux installation under '/mnt/sysimage' and it will provide some instructions on how to access the UCOS file system. If an error occurs then you will receive an error message and, most likely, it will be completely uninformative! Click on OK if all is well. If not then go ahead and shut down the VM and work your way over to Google to do some research.

          9. If all is well then you will be at a command prompt (e.g. sh-3.2#). Type in the command: chroot /mnt/sysimage and hit enter. The command prompt may change (to sh-<ver># or bash-<ver>#).

          10. Use the command ls / (or ls /mnt/sysimage/ if you haven't run the chroot yet) to see if your UCOS file system has been mounted. If you see content in the output then you are in business.
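Before running the chroot in step 9, it can be worth confirming the mount actually succeeded. A minimal check (my own helper, not something the rescue environment provides) just looks for the directories any mounted root file system should have:

```shell
#!/bin/sh
# sysimage_ready: crude check that a directory looks like a mounted root
# file system. Pass the mount point the Rescuer reported (normally
# /mnt/sysimage).
sysimage_ready() {
    [ -d "$1/etc" ] && [ -d "$1/usr" ]
}

# Typical use from the rescue prompt:
#   sysimage_ready /mnt/sysimage && chroot /mnt/sysimage
```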

          Booting Up CentOS 7.x

          If you are using RHEL with a x64 architecture then you will want to use CentOS 7x. The boot up process is different for this CentOS version. Once you have your environment set up, use the following process:

          1. Power on the VM guest.

          2. You will get the CentOS 7 boot menu. Be quick like a bunny here because the boot menu will time out (unlike 5x).

          3. Select the "Troubleshooting" menu option and hit Enter on your keyboard.

          4. Select the "Rescue a CentOS System" from the Troubleshooting menu and hit Enter.

          There will be a pause and the system will ask you to hit Enter to start the installation. DO NOT hit Enter! Just wait for the Rescue Dialog (step 5) to display.

          5. You will receive a dialog where the Rescuer says it is going to attempt to find your Linux installation and mount the file system. Click on Continue.

          6. If the Rescuer was successful, it will mount your Linux installation under '/mnt/sysimage' and it will provide some instructions on how to access the UCOS file system. If an error occurs then you will receive an error message and, most likely, it will be completely uninformative! Click on OK if all is well. 

          7. At this point, you will be at a command prompt (e.g. sh-3.2#). Type in the command: chroot /mnt/sysimage and hit enter. The command prompt may change (to sh-<ver># or bash-<ver>#).

          Where Do We Go From Here

          There are several reasons that one would need to use CentOS to "hack" into the UCOS system. I wanted to provide a few examples but the scenarios will vary and change over time. The good news is that the issues that cause you to go down this road are uncommon. Unless, of course, you are an integrator, in which case you may have to do this every couple of installs. 

          To keep these blog entries from getting too long, I am going to provide the individual use cases over the course of the week. Hopefully, that isn't too annoying for anyone! I'll update the links as the articles are published.

          Use Case #1: Modifying the License Mac on UCM

          Use Case #2: License Expiry Issues during Jump Upgrade Process

          Use Case #3: TFTP Custom Ring Tone Issues

          Use Case #4: Fixing Errors with Hunt List Queuing Announcements

          Use Case #5: Downloading the Tomcat Cert Private Key

          Thanks for reading. If you have time, post a comment!

          CentOS Recovery Use Case 1: Modifying License MAC Addresses

          I recently published a blog entry on how one could use the CentOS distribution and Recovery process to access the Cisco UCOS root file system. As noted in the initial blog, this isn't a new revelation. I originally was going to provide a group of use cases in the "primer" but decided that it was a little too long. So, I am breaking the use cases out into individual entries. Who knows, over time this may become another series. For now, let's focus on one of the CentOS recovery use cases: preserving a license MAC in your lab or staging area.


          With UCM versions prior to UCM 8x, the licenses were keyed to the MAC address of the Publisher (i.e. First) node. I have had instances where I am helping a customer do a "Jump Upgrade" and I am loading their UCM 6x or 7x system in my lab (or theirs, whatever works). Sometimes I need to do some interim upgrades/patches and if I don't have a valid license I could run into trouble. 

          By using CentOS to gain access to the UCOS root file system, you can adjust the MAC address used by your VM so that it matches the production Publisher node. This should clear any licensing roadblocks out of your upgrade path. 


          The CentOS boot process is discussed in a separate blog entry (read that first). Use the following steps to resolve the MAC address licensing issue:

          1. Edit the eth0 configuration file using the following command: vim /etc/sysconfig/network-scripts/ifcfg-eth0

          (1a) If you are unfamiliar with VI, use this command reference

          (1b) Add or change the line: MACADDR=DE:AD:BE:EF:00:01 (substitute your MAC address using ":" delimited fields)

          (1c) Save and close the file using the key sequence ":wq" (without quotes)

          2. Edit the hardware config file using the command: vim /etc/sysconfig/hwconf

          (2a) Find the line that starts with "network.hwaddr" (without quotes)

          (2b) Edit the mac address line to match the new MAC address: network.hwaddr: DE:AD:BE:EF:00:01

          (2c) Save and close the file (:wq)

          3. Type in the exit command.

          4. Type exit in CentOS to reboot.

          5. Disconnect the ISO from your VM guest.

          6. Your system should come on line and licenses (either already restored via DRS or added by you manually) should be valid.
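The vim edits in steps 1 and 2 can also be done non-interactively with sed. This is only a sketch of the same change (the MAC value and file paths are the ones from the steps above; test on a lab VM before touching anything you care about):

```shell
#!/bin/sh
# set_mac: point both config files at the production Publisher's MAC.
# $1 = new MAC, $2 = path to ifcfg-eth0, $3 = path to hwconf
set_mac() {
    if grep -q '^MACADDR=' "$2"; then
        sed -i "s/^MACADDR=.*/MACADDR=$1/" "$2"
    else
        echo "MACADDR=$1" >> "$2"    # add the line if it isn't there yet
    fi
    sed -i "s/^network\.hwaddr:.*/network.hwaddr: $1/" "$3"
}

# On the real system (after chroot /mnt/sysimage):
#   set_mac DE:AD:BE:EF:00:01 \
#       /etc/sysconfig/network-scripts/ifcfg-eth0 \
#       /etc/sysconfig/hwconf
```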

          License MAC In Later Releases

          Starting with UCM 8x, things get a little more interesting. Cisco has the concept of the "License MAC", which is automatically generated by the system and linked to several system parameters. I haven't had to mess with this in conjunction with the CentOS recovery process but my understanding is that there is a script that controls the generation of the license mac. Further, that script is a file you can access if you are using the Rescue mode as we have described above.

          I think the script is located in /usr/local/bin/base_scripts/. The line you want to look for is: FinalString='expr substr "$SHA1sum" 1 12'

          Changing "FinalString" to a literal string (e.g. "deadbeef0001") should do the job. Again, I haven't tested this. I have used the previously described method.

          Thanks for reading. If you have time, post a comment!

          Software Defect Could Affect Custom IP Phone Service URLs

          This is just a quick note on a software defect on Cisco 8800 series IP phones that could break normal operations for custom Cisco IP Phone Service URLs. The issue is documented in Cisco software defect CSCur13256 and may break IP Phone Services running co-resident on a web server (such as Microsoft IIS).


          A few months ago, my team (NetCraftsmen UC&C) ran into an issue with a custom Corporate Directory application that I built for a customer running Cisco Unified Communications Manager (UCM) 8.5. At the time of the original implementation the customer hadn't deployed any Cisco 8800 series phones (i.e. they weren't shipping yet). Over time, the customer procured and deployed several phones. Unbeknownst to us, the Corporate Directory application was not accessible from these phones.

          We became aware of the problem after performing an upgrade of the UC 8.5 system to UC CSR 10.5. After every upgrade, NetCraftsmen runs a standard validation process where we test a full range of functionality on all systems. During this validation process, we found an issue with accessing our custom Corporate Directory application.


          The environment where we encountered the issue:
          • CUCM: 10.5 (but this issue is independent of CUCM version)
          • 8800 Series Phone Firmware: 10.2(1.16)
          • Microsoft IIS (version not relevant)
          • IIS server hosted multiple web pages
          Issue Details

          When we provisioned the custom Corporate Directory, the customer had us deploy the web site on an existing IIS server that was hosting multiple sites. Our application was leveraging a binding based on the FQDN presented in the URL, for example:

          So, in IIS, we created a binding on port 80 (default) to the "" URL. Whenever a client presents this URL in the HTTP GET request, IIS serves up the appropriate application. 

          Unfortunately, the Corporate Directory would never render on the client device. Using IIS W3SVC logs, we were able to determine that the request was never making it to the IIS server. We know the logs were "good" because requests from other Cisco phones and clients were working and generating log entries.

          The issue became clear when looking at the console messages on a Cisco 8800 series phone:

          1893 NOT 19:14:36.524910 CVM-System P5-traceManager MQThread|HttpClientTask:? - Current State = 0
          1894 NOT 19:14:36.526738 CVM-System P5-traceManager MQThread|cip.http.HttpClientConnection:? - Check if #DEVICENAME# or #EMCC# is present in the URL
          1895 DEB 19:14:36.548370 Nov 11 01:14:36 dnsmasq[451]: query[A] from
          1896 DEB 19:14:36.548681 Nov 11 01:14:36 dnsmasq[451]: cached is
          1897 DEB 19:14:36.550023 Nov 11 01:14:36 dnsmasq[451]: cached is
          1898 NOT 19:14:36.560022 CVM-DNS LOOKUP|HttpClientTask:? - Current State = 5
          1899 INF 19:14:36.683356 Nov 11 01:14:36 mtlog: _daisychain_cmd.c 243:register cmd: connected (1)
          1900 INF 19:14:36.712840 Nov 11 01:14:36 mtlog: _daisychain_cmd.c 221:DCU cmd: disconnected (0)
          1901 NOT 19:14:36.828542 CVM-HttpClientThread|cip.http.HttpClientConnection:? - listener.httpSucceed:

          Most of this looks normal but the issue is clearly seen in the last line of the console log excerpt. The phone is using the cached IP address and not the actual FQDN (the way it is supposed to). This means that the bindings created on the IIS server won't engage and the web server will either render the wrong page or simply error out.

          The Fix

          I didn't post this issue when it happened because I got distracted by other things. I recently checked on the software defect and saw that there is a fix available via an Engineering Special (ES). So, I figured I might as well post my notes just in case it helps someone. Obviously, we actually believe that customer service is a "real thing". So, we didn't wait for a software fix on the phones. The workaround we applied was to modify the bindings in IIS.

          We went ahead and preserved the FQDN binding. In addition, we added a customized port binding (e.g. 8080). First, we checked the IIS server using netstat and a port scanner to find an available port. Then we configured IIS bindings to use this port. Finally, we configured the IP Phone Service in UCM to use the port (resetting all 8800 series phones in the process).
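The netstat check in the workaround boils down to a one-line filter. This helper (my own, with a made-up sample in the usage comment) just greps `netstat -an` output for a listener on the candidate port:

```shell
#!/bin/sh
# port_in_use: reads `netstat -an` output on stdin and succeeds if
# something is already listening on the given TCP port.
port_in_use() {
    grep -qE "[:.]$1[^0-9].*LISTEN"
}

# Typical use before adding the new IIS binding:
#   netstat -an | port_in_use 8080 && echo "port taken; pick another"
```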

          Thanks for reading. If you have time, post a comment!

          Checking Peer Firmware Sharing using SQL

          For this installment of the SQL Query Series I am going to keep it short and sweet. I was recently doing implementation planning for a project where we need to update the firmware on a few thousand phones. One of the things we like to do is leverage Peer Firmware Sharing to shorten the time needed to push out firmware upgrades. 

          One of the pre-requisites to leverage Peer Firmware Sharing is to actually verify it is enabled. This is the perfect job for SQL.

          If you aren't sure what Peer Firmware Sharing is then I recommend taking a look at a write up I did on the NetCraftsmen site covering options for distributing phone firmware. It is the optimal way to manage firmware upgrades when you have a need to update firmware for a large number of phones distributed across many WAN sites.

          Whenever I work with customers on UCM upgrades, I always upgrade the phone firmware on the existing cluster as a preparatory task. I find that many customers are cautious when it comes to firmware upgrades. So, we have developed a process to keep the risk and impact relatively low. The Peer Firmware Sharing feature is a fundamental tool in that process.

          By default this device-level feature is disabled. I will validate this parameter as part of my discovery process for upgrade projects. The interesting thing about this parameter is that it is one of the product (or model) specific parameters on the device page. This makes getting at the data from BAT (or other native interface) a challenge.

          It isn't a huge challenge because you can leverage SQL to get a quick report of phones that have this parameter enabled or disabled.

          [Edit: (10/4) Added version-specific information. Thanks to reader G.R. for pointing out the oversight.]

          The Tables

          The table(s) we need to dissect will depend on which version of CUCM you are running. In all instances, we are dealing with one of the more "funky" fields in the database. The CUCM database has a few fields strewn about that have more complex data structures than you see in most fields. Today, we get to play with one of them. Lucky us!

          Pre-8x CUCM Versions

          For versions prior to CUCM 8.x, we only have to deal with one table: device. Further, we only need to look at one field in the device table to answer our burning question. 

          The field we are interested in is the xml field. When you edit a device in CUCM, you will see a bunch of product specific parameters at the bottom of the page. The parameters rendered on a specific device configuration page are based on the hardware model you are attempting to configure. From a database perspective, these parameters are stored as an XML data structure in the device table.

          CUCM 8x and 9x

          Starting with CUCM 8.0, Cisco did some house cleaning and decided to remove the xml field from the device table. Not to worry. The field wasn't completely tossed away. Instead, it was relocated to another table. Actually three tables.

          Presumably, the DB developers were trying to optimize IDS replication (or fix an IDS replication issue) and took an already funky data structure and slapped it into three different tables:

          • devicexml4k
          • devicexml8k
          • devicexml16k 

          The field structure of all three tables is the same: pkid, fkdevice, xml. The only difference is with the xml field data type. In all three tables, the xml field is of type string. The difference is the max allocated size for that field: 4000, 8000, 16000. 

          The Queries

          The XML data is essentially a string and, as a string, it can be parsed using the various wild cards in a where clause. The XML child node that we are interested in is 'peerFirmwareSharing'.

          Let's start with a simple count query. The following two queries will give us a count of devices that have peer firmware sharing enabled or disabled.

          /*Query for count of devices with peer firmware sharing is enabled*/
          select count(pkid)
          from device
          where xml like '%peerFirmwareSharing>1%'

          /*Query for count of devices with peer firmware sharing is disabled*/
          select count(pkid)
          from device
          where xml like '%peerFirmwareSharing>0%'

          /*Sample query for CUCM 8x/9x*/
          select count(d.pkid)
          from device d
          inner join devicexml4k dxml on dxml.fkdevice=d.pkid
          where dxml.xml like '%peerFirmwareSharing>0%'
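The LIKE patterns above are plain substring matches against the stored XML blob, nothing fancier. If you want to convince yourself the pattern is right before running it cluster-wide, you can mimic it with grep against a sample of the field content (the XML fragment in the test is illustrative, not pulled from a real device record):

```shell
#!/bin/sh
# pfs_enabled: mirrors  where xml like '%peerFirmwareSharing>1%'
# by doing the same substring test against an XML fragment.
pfs_enabled() {
    echo "$1" | grep -q 'peerFirmwareSharing>1'
}

sample='<peerFirmwareSharing>1</peerFirmwareSharing>'
pfs_enabled "$sample" && echo "enabled"
```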

          You may find you want to dig deeper. It really depends on the query results and if/how they line up with your expectations. For instance, you may want to see a distribution of devices with peer firmware sharing disabled sorted by device type. In this case, the following query would do the job:

          /*Pre-8x example*/
          select, count(d.pkid)
          from device d
          inner join typemodel tm on d.tkmodel=tm.enum
          where d.xml like '%peerFirmwareSharing>0%'
          group by

          /*Sample query for CUCM 8x/9x*/
          select, count(d.pkid)
          from device d
          inner join devicexml4k dxml on dxml.fkdevice=d.pkid
          inner join typemodel tm on d.tkmodel=tm.enum
          where dxml.xml like '%peerFirmwareSharing>0%'
          group by

          Now, let's suppose you have a situation where most of the devices have peer firmware sharing enabled but there are a few stragglers. In this case, you want a list of devices. The following query is an example of how you can get a list of devices (tweaking fields to suit your needs is fine):

          select, d.description, as phoneModel 
          from device d
          inner join typemodel tm on d.tkmodel=tm.enum
          where d.xml like '%peerFirmwareSharing>0%' and like 'SEP%'
          order by,

          I am adding the " like 'SEP%'" condition to the where clause to avoid dumping device templates. There are other query variations that you may want to tinker with. I think that the above queries express the general ideas. 


          With CUCM 8x and 9x there is a twist. The XML values stored in devicexml4k won't necessarily have a child node for peerFirmwareSharing. At least, that is the case on my 9.1(2) cluster. This is because of the Common Phone Profile configurations. Default values in the phone profile are not duplicated in the XML field of the devicexml4k table. This makes sense. Unnecessary duplication of data is always a poor design choice.

          What to do? Well, if you only have one common phone profile then you can run the queries above and then also inspect the configuration on the phone profile. Just understand that the device level configuration overrides the common phone profile.

          OK. Time to get back to my 9.1 upgrade.

          Thanks for reading. If you have time, post a comment!

          CollabCert's CCIE Collaboration Bootcamp

          Around a year ago I completed a 9 month journey to attain the CCIE Voice. There were a lot of ingredients that contributed to achieving this goal. One of the most valuable ingredients was the CCIE bootcamp program I attended. I am of the opinion that incorporating a bootcamp program into your IE training plan goes a long way to ensuring success. Moreover, my belief is that the effectiveness of an IE bootcamp program is primarily rooted in the abilities and effectiveness of the instructor not the company selling the program.

          The instructor of my IE Voice bootcamp was Vik Malhi. Vik recently launched his new venture, CollabCert, which is a training company specializing in the Cisco CCIE Collaboration track. I was invited to participate in the inaugural bootcamp of this IE training program. The following provides my thoughts on the bootcamp experience.
          The Format

          CollabCert is offering two bootcamps, the CollabCert ILT and the CollabCert Workshop. The ILT is a 5-day bootcamp that is approximately 50% lecture and 50% hands-on. The Workshop is a 5-day immersive experience that is primarily hands-on and geared to helping the candidate work on their speed and efficiency. The Workshop is designed to target candidates who are ready to sit for the exam in the near future. CollabCert also positions the Workshop as a good option for those of us who recently achieved the IE Voice to transition into the world of the IE Collab track.

          I really appreciate the fact that CollabCert is offering two separate 5-day bootcamps as opposed to a single 10-day bootcamp. I think the ILT fits well into a training program where the candidate is 1 - 3 months into their studies. I view the Workshop as a finishing school and should be used 2 - 6 weeks prior to making an attempt at the real exam. 

          The bootcamp I sat in was the ILT. Since most everyone in the bootcamp was already an IE Voice, the lecture time was focused more on what has changed in the blue print. This meant we had more hands-on time. Which is always welcome. 

          Class started at 8:30am PST and "officially" ran to 5:30 or 6pm. However, we all stayed until 8 or 9pm. This is what happens when you gather a group of people who like to dig into protocol traces, debugs, and log files to see what is really happening. Bootcamps are where all the cool nerds hang out!

          The Facility

          The CollabCert facility is located in downtown San Jose, CA. It has convenient access to the VTA so you can easily get to the facility from the airport and surrounding areas. Caltrain is nearby, which is just awesome (I discovered Caltrain on this trip, very happy). If there is a gap in your commuting coverage, you can always do "the Uber".

          There are plenty of hotels, restaurants, and grocery stores in the area. The pre-requisite Starbucks is located 1/2 block away from the CollabCert office. To unwind after a hard day of labbing it up, you can drop by any of the numerous pubs in the area. The way of the IE ninja: Caffeine, Lab, Alcohol, Repeat. Throw in some bacon and 99% of the IT population would mistakenly think they died and went to IT heaven. 

          The class room is set up to support 8 student work areas. There is plenty of room for each student to spread out and be comfortable. The phones are all 9971 and 7965 models. Each 9971 comes with a USB camera so you can host a video call between any two lab "sites", as well as calls between the backbone (emulating a B2B scenario) and any lab "site".

          There are 8 student pods and one instructor pod. The gear in the pods (including the backbone equipment) is built to replicate the official blue print. One of the first things I noticed was that the UC hosts are incredibly responsive. When I started my studies for the IE Voice I used remote rack rentals and one of the serious downsides was performance. That didn't seem to be a factor with the CollabCert pods. These pods scream and they are accessible remotely. Vik built the pods using the Cisco OVAs. He made sure he isn't oversubscribing compute resources, and he decided to use all SSD disks. 

          The Content

          At our bootcamp we worked through a full lab covering a lot of the core concepts you will find in the new blue print. The lab I had went through EMCC, homogeneous video conferencing, CUBE, point-to-point video, Jabber, URI dialing, and the other standard components (CCX, UCMx2, Unity Connection, QoS,  etc.). The lab was also "trunk heavy". There were SIP trunks of various flavors all over the lab. I found the labs interesting and challenging. The lab was clearly relevant to the current blue print. 

          For those that have attended Vik's bootcamps in the past, his lecture style hasn't changed much. He likes to use an electronic whiteboard so that he can save everything that comes up during the discussion. His presentation style is very organic and the content flows based on the questions posed by the students. Vik's style is best suited for candidates who are willing to participate. If you want to sit and have someone lecture at you then you will probably get lost rather quickly. There are no powerpoint slides and all lecture topics come with a combination of live demo and white board illustrations.

          The Wrap

          As I stated earlier, I believe that a bootcamp should be a key ingredient to any IE candidate's recipe for success. What makes a good bootcamp is relevant content, responsive equipment, and an environment that is conducive to learning. A great bootcamp is all about the instructor and his or her ability to help students identify and address their knowledge gaps. 

          I think CollabCert has all the necessary ingredients to be a great learning tool for the IE Collab candidate.

          Just in Case You Were Wondering...

          It is possible that at least one person will ask if I plan to take the IE Collaboration lab. All I can say is that there is something about sitting in an IE bootcamp that just pulls you into the zone. Preparing for the IE was nothing short of addictive. I remember going through withdrawal following my passing attempt. Given how close these two tracks are to each other and how easily I slid back into the rhythm, I would be lying if I didn't say I was tempted.

          However, doing the IE Collab lab is not on the radar today. I have other prof dev goals that I'd rather not put on hold. 


          In the interest of full disclosure, CollabCert did pay for my seat in the bootcamp but I received no payment to write this review nor was I even asked to write a review. All opinions are mine and if I didn't dig on the bootcamp, I would have said so. 

          Thanks for reading. If you have time, post a comment!

          Using SQL to Reconfigure a Dial Plan - Updating Directory Numbers

          I have had this blog entry in the draft folder for quite some time. I decided to dust it off and bring it to the front of the queue after receiving the following query on Twitter:

          @ucguerrilla got one for you.  Trying to update 1xxxx and 3xxxx in pt-Internal to 401xxxxx and 403xxxxx... any idea of sql query? :)
          Can you accomplish this via SQL? Why, yes you can. About a year ago I completely rebuilt a customer's dial plan using 100% SQL. While I won't be discussing the ins and outs of all of that in this entry, I do plan on getting into the mechanics of doing broad changes to digit patterns using SQL.


          A brief primer is provided in the first blog of this series.


          That's right, there is a definite need to provide a disclaimer upfront. The stuff we are going to get into in this blog entry and others associated with this "sub-series" affect the entire system. If you do not know what you are doing or have any doubts in what is actually happening then you should consider pulling someone in who does have the appropriate comfort level. You could really foul up your system if you aren't careful and, as nice as I am, I am not going to claim responsibility if you do foul up your system. That will be 100% on you. 

          If you aren't comfy with the queries we are going to crack open here then you can always rely on the Bulk Provisioning capabilities in UCM (particularly 7x and later). 

          That said, if you have a good handle on what is happening then the methods I am going to discuss are very, very efficient. 

          This Week's Query

          As I mentioned in the intro, I used an approach based 100% on SQL to convert a customer's dial-plan (8,000 phones / 16,000 DNs). We can't cover all of that in this one entry. So, we are going to jump right into how one can modify digit patterns on a global scale. We are going to use the example I received from Twitter as a base but will add a few other examples to provide some additional context.

          The Moving Parts

          If you have followed the SQL Query Series then you are probably familiar with the tables we are going to leverage:

          • Numplan: This table contains ALL digit patterns that can be programmed on the system. This includes directory numbers, route patterns, translation patterns, hunt pilots, etc.
          • RoutePartition: This table contains route partitions (go figure). This is a table referenced directly from numplan that can be used to help you narrow your SQL update command.
          • DeviceNumPlanMap: This table maps the device table to the numplan table. We will use this in a workaround to a common problem with the Informix Update syntax.
          • Device: This table contains all of the information concerning devices provisioned on the system. We will use this in a workaround to a common problem with the Informix Update syntax. Caution: This table contains more than phones, gateways, etc.

          Measure Twice, Cut Once

          If you have ever done work in a trade industry then it is likely that you have heard the term "measure twice, cut once". It means that if you make a cut without being positive about your measurements then you may have to throw away a whole lot of material. 

          The same logic applies with the queries we are going to discuss. It is always a good idea to use a "select" query to double check your data set before you execute an update. You may need to run a couple of different "select" queries so that you are sure you can cleanly and uniquely quantify the dataset you want to update. In general, the from and where clause from the "select" queries you use to measure the dataset will likely be used when you use the "update" query(ies).

          Limitations to Be Aware Of

          Update queries are a little picky. Let's clear up one of the first issues you may come across when attempting to do complicated "update" queries. You may be inclined to use syntax similar to the following:

          UPDATE table1
          SET a.column1 = 'value'
          FROM table1 a, table2 b, table3 c
          WHERE a.key = b.key AND a.column1 like 'value%'

          The objective being to use a composite dataset to uniquely identify the records you want to update. This will NOT work. The reason is due to the fact that the Informix DB used by Cisco is not the Extended Parallel Server (XPS) edition. The XPS edition does allow you to use the "from" clause in the manner shown. This would be really handy but, alas, it is not available to you. However, there are ways to reference table values. Though, they are not as clean and you have to be extra careful. We'll cover an example later.

          The second thing to be aware of is that there is an upper limit to the number of records you can modify using an "update" query. Here is where I fail you because I don't know exactly where the cap is. I know that I ran an update query which should have affected 2,541 records and the query timed out with an error. I have run update queries for chunks slightly larger than 900 records without issue. What I haven't done is try to narrow it down. As a rule of thumb, I'd say that if the "select" query you use to measure returns more than 1,000 records then you will want to come up with a way to deal with the data in smaller chunks. We'll cover this, too.
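          Because the safe chunk size is a guess, one approach is to generate a family of smaller update statements from a template rather than hand-editing each one. The sketch below does that in Python by splitting on the last digit of the pattern; the table and column names come from the numplan examples in this post, but the template and grouping are illustrative, not a tested production tool.

```python
# Sketch: split one large update into smaller batches keyed on the last
# digit of a 5-digit pattern, so each generated query touches fewer rows.
# The 24010 prefix and the [13]* criteria match the example in this post.

TEMPLATE = (
    "UPDATE numplan "
    "SET dnorpattern = (CONCAT('24010', dnorpattern)) "
    "WHERE LENGTH(dnorpattern) = 5 "
    "AND dnorpattern MATCHES '[13]*' "
    "AND SUBSTRING(dnorpattern FROM 5) MATCHES '[{digits}]' "
    "AND tkpatternusage = 2"
)

def batched_updates(groups=("02468", "13579")):
    """Return one update statement per last-digit group."""
    return [TEMPLATE.format(digits=g) for g in groups]

for stmt in batched_updates():
    print(stmt)
```

Splitting by even/odd halves the record count; if a half is still too big, pass more groups (e.g. one per digit) to cut the batches down further.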

          Example: Update as a Function of Route Partition

          Going back to the tweet I mentioned at the beginning of this article we see that the request is to update patterns in a specific partition. In subsequent tweets exchanged with my colleague it became clear that there were actually multiple partitions that were of interest, the patterns had a predictable length, and the patterns all started with the same digit pattern. 

          Before we dissect this further, I'd like to make a comment about "good design". It is a good thing that the requestor is able to easily articulate what parameters should be used to uniquely identify a set of patterns. Usually, this isn't the case and you may need to run several different "update" queries to get the job done. So, having a well thought out dial plan (not just numeric patterns but partitions, css, etc.) can go a long way to optimizing the number of moves you need to make. Food for thought.

          Anyway, for our example let's assume the following:

          • The patterns we care about are 5-digits in length
          • The patterns start with either 1xxxx or 3xxxx
          • The patterns are in one of the following partitions: HQ_Lines_PT, DC_Lines_PT, or NYC_Lines_PT 

          Let's "measure" this first by using a select query:

          select, n.dnorpattern
          from device d
          inner join devicenumplanmap dmap on dmap.fkdevice=d.pkid
          inner join numplan n on dmap.fknumplan=n.pkid
          where LENGTH(n.dnorpattern) = 5 and n.dnorpattern MATCHES '[13]*' and n.tkpatternusage=2 and n.iscallable='t'
          order by, n.dnorpattern

          This is just one way to measure. My logic here is that I want to see the output and pay particular attention to the device names that are dumped. Let's say, for instance, that the DNs I care about are assigned to CTI Ports or CTI Route Points for use with a contact center application. I should recognize patterns that deviate from that device naming convention. If I see other devices (like real IP phones) in my query then I know that my selection criteria is flawed and I should not use it to do an update. 

          Note that I am using two additional parameters in my "where" clause: tkpatternusage and iscallable. The former is a good way to avoid updating patterns for things like route patterns, MWI, translations, etc. A tkpatternusage of "2" means that the pattern is assigned to a device. The "iscallable" flag means that the directory number is marked active. Not critical, just a point of interest. 

          Another measurement:
          select, n.dnorpattern
          from device d
          inner join devicenumplanmap dmap on dmap.fkdevice=d.pkid
          inner join numplan n on dmap.fknumplan=n.pkid
          inner join routepartition rp on n.fkroutepartition=rp.pkid
          where LENGTH(n.dnorpattern) = 5 and n.dnorpattern MATCHES '[13]*' and in ('HQ_Lines_PT', 'DC_Lines_PT', 'NYC_Lines_PT')
          order by, n.dnorpattern

          Here we are also checking the partition assignments. Based on our assumed criteria, the above should basically be the dataset we are interested in updating. Again, I am showing the device to see if there are any anomalies. 

          Other ways to measure would be to use a couple of "select" queries where we are dumping the record counts. This is handy when you are trying to optimize your query (i.e. remove unnecessary criteria in the where clause). You can also use the expected device values for the patterns you are interested in to see if you have something other than expected. For instance, let's say that all of the patterns you are interested in are assigned to CTI ports or CTI route points. If this were the case then you can use the following to check your data set.

          /*First, get a count of patterns matching the criteria*/
          select Count(n.pkid)
          from device d
          inner join devicenumplanmap dmap on dmap.fkdevice=d.pkid
          inner join numplan n on dmap.fknumplan=n.pkid
          inner join routepartition rp on n.fkroutepartition=rp.pkid
          where LENGTH(n.dnorpattern) = 5 and n.dnorpattern MATCHES '[13]*' and in ('HQ_Lines_PT', 'DC_Lines_PT', 'NYC_Lines_PT')

          /*Next, get a count with the same query but focus on CTI devices*/
          select Count(n.pkid)
          from device d
          inner join devicenumplanmap dmap on dmap.fkdevice=d.pkid
          inner join numplan n on dmap.fknumplan=n.pkid
          inner join routepartition rp on n.fkroutepartition=rp.pkid
          where LENGTH(n.dnorpattern) = 5 and n.dnorpattern MATCHES '[13]*' and in ('HQ_Lines_PT', 'DC_Lines_PT', 'NYC_Lines_PT') and (d.tkmodel=72 or d.tkmodel=73)

          /*Finally, let's use a query that is more in-line with our anticipated Update query syntax*/
          select Count(n.pkid)
          from numplan n
          inner join routepartition rp on n.fkroutepartition=rp.pkid
          where LENGTH(n.dnorpattern) = 5 and n.dnorpattern MATCHES '[13]*' and in ('HQ_Lines_PT', 'DC_Lines_PT', 'NYC_Lines_PT')

          Basically, we are looking for anomalies in the counts. Especially in the last query which is "closest" to what we want to use when performing the actual update. There are many ways you can measure your dataset. Some general rules of thumb:

          • Try to find a query that has the fewest number of inter-table dependencies. For instance, in our last "Count" query we are only focusing on numplan and routepartition.
          • When dealing with route partitions, make sure you don't have patterns that are assigned to the "none" partition. Inner joins will exclude these patterns (which means you should probably check specific route partitions in your queries).
          • Similar to the 2nd rule, inner joins assume there are reference values shared between the joined tables. A pattern not assigned to a device won't show up in an inner join. Which means you could accidentally update it in an "update" query (where you can't rely on inner join to protect you).
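          The inner-join caveat in those last two rules is worth seeing concretely. This tiny Python simulation mimics the join logic with made-up table contents: a numplan row with no devicenumplanmap row vanishes from the "measure" query, yet an update keyed only on numplan criteria would still hit it.

```python
# Simulated tables: pkid -> dnorpattern, and a map that only covers pk1.
numplan = {"pk1": "10500", "pk2": "10501"}
devicenumplanmap = {"map1": "pk1"}  # pk2 has no device assignment

# The "inner join" measure query only sees patterns with a mapping row.
joined = [dn for pk, dn in numplan.items()
          if pk in devicenumplanmap.values()]
print(joined)  # ['10500'] -- 10501 never shows up

# But an update keyed only on numplan criteria would touch both rows.
to_update = [dn for dn in numplan.values() if len(dn) == 5]
print(to_update)  # ['10500', '10501']
```

The mismatch between those two lists is exactly the anomaly you are hunting for when you compare record counts.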


          Enough of this damn measuring, let's get to cuttin'. For sake of argument, let's assume our measuring has proven that our assumptions are accurate. This means we only need to test for pattern length, the first digit of the pattern, and the partition assignments. The following gets us where we need to be:

          UPDATE numplan 
          SET dnorpattern = (CONCAT('24010', dnorpattern))
          WHERE (fkroutepartition IN (SELECT pkid FROM routepartition rp WHERE IN ('HQ_Lines_PT', 'DC_Lines_PT', 'NYC_Lines_PT'))) AND (LENGTH(dnorpattern)=5) AND (dnorpattern MATCHES '[13]*')

          So, what is happening here? The "set" command is merely prefixing "24010" in front of the existing dnorpattern. So, a pattern of 10500 becomes 2401010500 and 30500 becomes 2401030500. That is the purpose of the CONCAT function: it concatenates strings. The "where" clause is helping us control the dataset that gets updated. We are using the fkroutepartition field (which is in the numplan recordset) and are testing for a specific set of pkid values in the actual routepartition table. That is the function of "IN": it looks for a value in a delimited set of values.

          Things get weird here. We then have a select in the middle of a where clause. Totally permitted. Again, we are using the IN clause (cuz we are lazy) and the human readable names of the target partitions. We don't stop there because we have other criteria (hence the various "AND" operators). We are checking the length of dnorpattern because we were told that is somehow special and we are using the MATCHES operator to do a wildcard search. You could also use a logical OR with the LIKE operator, for example:  AND (dnorpattern LIKE '1%' OR dnorpattern LIKE '3%') .
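          To make the selection and rewrite logic concrete, here is a small Python simulation of that update. The partition names and the 24010 prefix come from the example above; this only mimics what the SQL does to each row, it does not touch a real cluster.

```python
import re

# Illustrative values from the example in this post.
TARGET_PARTITIONS = {"HQ_Lines_PT", "DC_Lines_PT", "NYC_Lines_PT"}
PREFIX = "24010"

def should_update(pattern, partition):
    # Mirrors the WHERE clause: 5 digits long, starts with 1 or 3,
    # and assigned to one of the target partitions.
    return (
        len(pattern) == 5
        and re.fullmatch(r"[13]\d{4}", pattern) is not None
        and partition in TARGET_PARTITIONS
    )

def rewrite(pattern):
    # Mirrors SET dnorpattern = (CONCAT('24010', dnorpattern))
    return PREFIX + pattern

print(rewrite("10500"))  # 2401010500
print(rewrite("30500"))  # 2401030500
```

Running candidate patterns through a check like `should_update` is another way to "measure twice" before you let the real update loose.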

          Example: Update as a Function of a Device Field

          As noted earlier, the version of Informix we are using on Cisco UCOS platforms does not have the XPS feature set. Therefore, "update" queries are unable to use a "from" clause. This handicaps us a bit. In most cases, I find that using the pattern length, pattern wildcard, and the routepartition covers most of my needs. However, sometimes it may be easier to use a field in a table that is not directly linked to the numplan table. For example, what if we wanted to use the device table for our criteria?


          I am not going to go into the "measuring" approach here. The point should be solidly implanted in your noodle by now. Just make sure you use "select" queries to test your dataset before you update anything.


          Let's continue with our example above but let's change the assumptions a bit:
          • The patterns we care about are 5-digits in length
          • The patterns start with either 1xxxx or 3xxxx
          • The patterns are all assigned to devices where the name starts with LCP555
          A possible solution to this problem would be:

          UPDATE numplan 
          SET dnorpattern = (CONCAT('24010', dnorpattern))
          WHERE (pkid IN (SELECT n.pkid FROM numplan n, devicenumplanmap dmap, device d WHERE n.pkid=dmap.fknumplan AND d.pkid=dmap.fkdevice AND LIKE 'LCP555%')) AND (LENGTH(dnorpattern)=5) AND (dnorpattern MATCHES '[13]*')

          Like the previous example, we are using the "IN" operator. Only this time we are checking the pkid from the numplan record against a list of numplan pkid values in a composite recordset. Probably not the fastest query in the world but still pretty efficient when compared to relying on the GUI.

          You could also drop the pattern tests (length and matches) if you were 100% certain about the device name association. In the spirit of full disclosure, I have not used this particular method on a live data set (I have used it in lab clusters). So, I don't know if there is a different recordset size constraint than what I have previously seen. I find that I can usually leverage the routepartition table to get where I want to go.

          Example: Large Dataset

          Earlier I noted that I have run into a ceiling when running update queries. The first time that happens I guarantee you that your inner Homer will scream "Doh!". If you are running this from the CLI, you will see a blinking cursor for what seems like forever. If you are using an application that leverages the AXL/SOAP API then you see whatever hourglass, beach ball, or other cute 'wait for it...' icon is displayed. Then you will see an error complaining about too many rows in the resulting record set (or something like that). 


          Let's emphasize a previous guideline: definitely check the count of the records returned by your "select" queries. Adjust if your recordset is larger than 1,000 records. Note that this count is just a "best guess" on my part (see my notes earlier in this article).


          Not all is lost. There are many ways to break record sets into smaller chunks. In the dial plan change I am referencing, I had to modify every Directory Number from a 5-digit pattern to an E.164 pattern. The customer owned the entire 10,000 block of a DID range. 

          One way to tackle the data is to go after the "even" numbers first and then the "odd" numbers:

          UPDATE numplan
          SET dnorpattern = (CONCAT('\+120255',dnorpattern))
          WHERE dnorpattern like '5%' AND LENGTH(dnorpattern)=5 AND SUBSTRING(dnorpattern FROM 5) MATCHES '[02468]' AND tkpatternusage=2

          UPDATE numplan
          SET dnorpattern = (CONCAT('\+120255',dnorpattern))
          WHERE dnorpattern like '5%' AND LENGTH(dnorpattern)=5 AND SUBSTRING(dnorpattern FROM 5) MATCHES '[13579]' AND tkpatternusage=2 AND iscallable='t'

          If that doesn't get it for you then you can break it down by looking at the second digit. For instance (note that we are looking at patterns like '51%' in the following examples):

          UPDATE numplan
          SET dnorpattern = (CONCAT('\+120255',dnorpattern))
          WHERE dnorpattern like '51%' AND LENGTH(dnorpattern)=5 AND SUBSTRING(dnorpattern FROM 5) MATCHES '[02468]' AND tkpatternusage=2

          UPDATE numplan
          SET dnorpattern = (CONCAT('\+120255',dnorpattern))
          WHERE dnorpattern like '51%' AND LENGTH(dnorpattern)=5 AND SUBSTRING(dnorpattern FROM 5) MATCHES '[13579]' AND tkpatternusage=2

          Basically, we are using the SUBSTRING function to look at the last digit in a 5-digit number. More accurately, we are looking at the last character in a 5 character string because that is how digit patterns are stored in the numplan table (as strings).
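          The SUBSTRING trick can be sanity-checked away from the cluster. This Python snippet mimics SUBSTRING(dnorpattern FROM 5) on 5-character strings and partitions a sample set by the parity of the final digit (the sample patterns are made up):

```python
def last_char(pattern):
    # Informix SUBSTRING(dnorpattern FROM 5) returns the string from
    # position 5 onward; for a 5-character pattern that is the last digit.
    return pattern[4:]

def split_even_odd(patterns):
    """Partition 5-digit patterns by the parity of their final digit."""
    even = [p for p in patterns if last_char(p) in "02468"]
    odd = [p for p in patterns if last_char(p) in "13579"]
    return even, odd

even, odd = split_even_odd(["50000", "50001", "51234", "51235"])
print(even)  # ['50000', '51234']
print(odd)   # ['50001', '51235']
```

If the even and odd counts from your "select" measurements don't add up to the full record count, something in your criteria is off and you should stop before updating.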

          Closing Notes

          One of the questions you may ask is "how long does this take?". On my lab system, which is a UCS 200M1, record sets of 300 - 400 records take 1.5 - 2 minutes. Record sets of 800 - 900 records take about 8 minutes. I also tested on a UCS 210M2 and the 300 - 400 records took about 1 minute. The larger set (800 - 900) took about 4 minutes. 

          The number of permutations around updating records in the manner described are many. So, you must be extremely careful before you pull the trigger on an update query. If at all possible, test your queries on a lab system until you are comfortable with what you are doing. When doing multiple changes (like updating an entire dial plan) I recommend having your queries thought out beforehand. Then you can run them as a batch.

          Thanks for reading. If you have time, post a comment!

          Cisco Live 2014 Experience

          I can't tell if time moves faster before, during, or after Cisco Live. It has been two weeks since Cisco Live 2014 in San Francisco, yet it feels like I was at Lefty O'Doul's just yesterday. 

          Before it got too far away from me, I wanted to recap my experience. For no reason other than I feel "wrong" if I don't. This year, I think the best way to sum up my Cisco Live 2014 experience is to focus on how, over time, my connection with the Live! event has evolved from being a member of the audience to becoming part of the event.

          That Just Happened

          As the sparkly, tiara wearing and bat-wielding @amyengineer says: "In case you were living under a rock in the networking world, Cisco Live 2014 happened ...". It actually happened two weeks ago and, yes, I feel like a slacker for waiting to write up this blog entry. But, that is how it goes sometimes.

          This year Live! was even bigger and more exciting than last year. Honestly, I was expecting the event to have a lower attendance since it was being held in May. Boy, was I wrong. The "Nerd Herd" was north of 25,000 people. That is an amazing stat and is more than twice the number of attendees in 2009 (the last time we were in San Francisco). I attended in 2009 and it was painfully obvious that in 2014, Live! has outgrown the Moscone Center. The crowds were unreal. 

          As much as I tried to hold on to the moments, Live! came and went in a blink of an eye. At the end of the week, I was lingering around the Social Media Hub watching the crew break down the event with a touch of melancholy. I think Bob McCouch (@BobMcCouch) said it best in a tweet: "Yeah, post-#clus blues. It's a thing." I had the post-CLUS blues for about a week following the event. I experienced the same in Orlando. What is interesting to me is that I used to not give it a second thought. It was done and so was I. 

          Evolution of an Experience

          The first time I went to Live! it was the last year it was called Networkers and my experience was all about the technical sessions. It was the first time I was in Vegas and the only people I knew were my colleagues and Cisco account team members. It was a good time and I immediately saw the value in this event but it wasn't what I would call an "experience". It was just something that happened in my general proximity. I was part of the audience not part of the event itself.

          Community and the CSC

          Over time, my interests at Live! evolved into a more balanced focus on establishing/maintaining relationships and the technical content. In 2011 my Live! experience started to revolve around several "Communities". Especially the Cisco Support Community (CSC, formerly "NetPro").

          In 2011, 2012, and 2014 I was fortunate enough to be selected as a member of the CSC Designated VIP program. One of the perks of membership was that CSC sponsored my attendance at Live!. I totally appreciate that but it isn't the coolest part of the experience. One of the things I look forward to when coming to Live! now is meeting the other VIPs. They are very interesting and talented people. Many of them are low key individuals who simply like to help others solve problems. It is a great community and I am happy to be a part of it. Even if life sometimes affects the degree of participation from time to time.


          Live 2014 - CCIE/NetVet Reception - Only 1/2 the peeps
          The NetVet program is a program extension to the Live! conference where addicts get some added bennies. At least, that is how I characterize it. You get access to a special lounge area (which was really nice this year), a free e-book (that I usually forget about, doh!), priority schedule (really, really key), and you get a nifty lanyard for your badge that lets everyone know that you are an addict. I wear it proud.

          Live 2014: Old Friends, Yeah right!

          Last year I was also able to participate in the NetVet reception with John Chambers. This event requires NetVet status and an active CCIE/CCDE cert. I really enjoyed the experience in 2013. In 2014, it was sort of a bust for me. This has more to do with the number of attendees and the size of the space than anything else. Regardless, being a NetVet certainly enhances the Live! experience. I mean, you get a special lanyard. What's not to like about that! 

          Tech Field Day Roundtable

          This year marked the first time I was selected to be a delegate at the Tech Field Day roundtable at Cisco Live. Let me just say, wow. It is certainly an honor (from my PoV) to be asked to participate. I have a simple rule that ties into the overall theme of this blog entry: Any time you are asked to participate in an event like Tech Field Day, your answer is "yes". Don't even think about it. Sessions scheduled? Who cares, say yes. Not sure if you have anything to contribute? Suck it up, say yes. Feeling sleepy? Are you kidding me? say "YES".

          I wasn't sure I would have anything to contribute and I was fine with that. Just getting the opportunity to sit in on a session with some of the standard delegation is worth it. Catching up with Tom Hollingsworth (@networkingnerd), bonus. Getting to meet Stephen Foskett (@SFoskett), double bonus. Getting to sit side by side with some of my favorite bloggers, twitter rockstars, etc.  Forget about it. 

          I enjoyed the opportunity immensely and I am impressed with how they run the program. If you aren't familiar with Tech Field Day, go check it out. You'll be glad you did. 

          Social Ecosystem

          I consider the communities I belong to or have participated in as part of an overall social ecosystem. Communities or groups operate independently of each other, sometimes they intersect, sometimes they cross-pollinate, and sometimes they collide. A few years ago, there used to be a small group of Twitter folks who would congregate in an ad-hoc space dynamically spawned in the middle of the Live! event. 

          I started to actively participate in the Tweetups and associated social media gatherings in 2012 (IIRC). I enjoy having a real face-to-face conversation with people where the majority of my interaction prior to this human touch was via their tweets or blog or podcast, whatever. That is cool enough, but when you discover that they know who you are and/or follow your blog/tweets/etc.? Well, that is just awesome.

          In 2013 (Orlando) there was a noticeable evolution in the social media experience at Live! The change was so dramatic that it entirely changed the way I view the event. This was even more pronounced this year. In just three (?) years, the thing that started as "Tom's Corner" and was just a small satellite circling a large planet gained incredible mass and became damn near central to the entire experience. I am not kidding.

          Cisco Live 2014 Social Media Group Pic
          I think it is pretty awesome myself. Of course, one can't help but feel small by comparison. Because it is no longer a single Social Media community built around twitter. No, it is now a living Social ecosystem that is comprised of numerous communities spun from the thread of various media channels, thoughts, disciplines, and ideas. Nothing worthwhile is ever about a single individual or group. The reason the Social Media ecosystem you heard about or experienced at Live! has a deep and lasting impact is because it is organic, alive, and very human.

          Wrap It Up...For Now

          Over time, my Live! agenda shifted from focusing on the event itself to focusing on the people at the event. I am no longer a member of the audience. I am part of the event. Connected, in a very real way, to the others in my industry. My part is a small part but it is still more rewarding than any previous experience I have had at the event. 

          Thanks for reading. If you have time, post a comment!

          My Latest Project - Guerrilla Tools Sneak Peek

          I came into 2014 with a goal to blog more frequently than I did in 2013. While I didn't have as lofty a goal as Tom over at his blog, I was, shall we say, inspired. I planned on trolling through my "blog ideas" list to churn out some content. Well, clearly the universe (or fate or whatever) had different plans and I had to adjust priorities. Free time was at a premium and I opted to work on a side project more often than adding content to the blog. Both would have been nice but coding has a calming effect. 

          Yeah, I am that breed of nerd that finds solace tinkering with things like coding to center myself. We all need hobbies. Anyway, the side project I have been working on is starting to evolve into the real boy I hope it to be some day. I think it is far enough along to share with readers.

          Right now, I am just calling the project "Guerrilla Tools". I'll probably rename it but that is a decision for later. The initial version is focused on functionality that complements one of the prominent series in this blog: the SQL Query Series.


          Those who have known me the longest, probably know that I started out doing some software development. In retrospect, it was a pretty short lived stint. I got into protocol analysis and then routing/switching for several years. I still kept my programming tools in the ol' tool belt. I just stepped sideways into scripting languages and building small automation tools. I carried that "skill" (if I may be so bold) into Cisco voice.

          I have several tools that I have built for process automation with Cisco Unified Communications Manager (CUCM). They are functional but they are definitely hacks and not something I would share with the broader community. Shortly after starting this blog, I grabbed onto the idea of taking those tools and porting them to something a little more consumable. 

          The "alpha" version of that idea is taking shape and I have recently crossed several development milestones. There is still a lot to get through but the cool thing about this road is that it is long and twisted.


          Currently, the project is focused on providing functionality for writing and testing SQL queries. It has the ability to load, merge, and save libraries of SQL queries. Syntax highlighting is built in today and I may add a schema browser function later (yeah, definitely later).

          Basic functions like running an ad-hoc query, creating new queries for the library, copy, delete, etc. are present today. 

          Another tool I built years ago had similar functionality but with a very limited UI. One of the "features" of that tool was that I dumped the raw XML data structures (sent and received) to the user interface. I found that useful and added it to this new project.

          Since I am a consultant by day, current and future functionality will be built around the idea of supporting multiple UC environments / clusters. Accounts are created and can be attached to SQL entries in the library for sorting/filtering. 

          Presently, I am developing the application for OS X because that is my platform of choice and that's just how these things go. However, I did pick a development environment that is reasonably positioned for cross-platform builds. So, the intention is to have a Windows flavor as well. 

          What's Next?

          Over time, this project will evolve to incorporate a good portion of the independent scripts and tools I previously built. I am sure I will also add other functionality that I have been wanting to tackle. As I am a one-man show, that will take time. 

          But that is "later". The "next" step is working on some UI tweaks for the current version and beefing up the error routines. Then I want to move on to a "Beta" test. Hopefully, I will get some of my colleagues and contemporaries to be willing victims.

          From there? Well, feedback I receive during Beta will determine whether I go with a more broadly available app or if I take it out into a field and shoot it (Doubtful).

          Thanks for reading. If you have time, post a comment!

          Cisco Live 2014, Is That Really You?

           I am amazed at how fast May has arrived. I am definitely not what I would consider prepared for the annual pilgrimage to the convention that is Live! May is a tough month to be doing the convention thing. Graduations, kids in school, etc. etc..

          Fortunately, I don't need to waste energy thinking about it. This thing is going off whether I am ready or not, and I am looking forward to the ride. I think this will be my 6th or 7th Cisco Live event (formerly Networkers). I guess I could pull out hats and do an official count but I am sure no one cares. 

          This year will be my 2nd Live! event in San Francisco. I am sure this time will be far more rewarding than the first go around. I am really looking forward to seeing some of my colleagues that I haven't seen since Orlando 2013. I am also going to get to spend time with some NetCraft folks that I don't get to see often. We work together but, well, we work and that falls under the category " 'Nuff said".

          The absolute best thing about Live is the first day. The nerd herd, in all of its triumphant glory descending upon the Moscone Center like we own the place. That is what I am looking forward to because it is the sign that Live! has started. A signal that sets the train in motion, gets the gears going -- mind alignment complete. A trumpet sounding an off-key tune that blasts a warning: "THIS house is OUR house.... for about a week, and then we'll leave you be. We promise...."

          Before I get back to the grind I'd like to send a special thanks to Dan Bruhn and the team that runs the Cisco Support Community and the Cisco Designated VIP program. I appreciate the invite to participate in the program and the support in getting to Live! this year. You are all tops in my book.

          If anyone wants to hookup at Live! send me a tweet (@ucguerrilla). I will be there starting Saturday.

          Thanks for reading. If you have time, post a comment!