Planet Collab


Airvine WaveTunnel 60 GHz Indoor Backhaul

By Warcop

Airvine comes to life at Networking Field Day 23! They’ve been operating for quite some time, but it was at NFD23 that we saw their big idea move toward the public stage.

Their big idea is pretty simple, but it won’t be an easy execution. How do I get a network backbone to places where wiring isn’t feasible or even possible? In some scenarios you could force the issue of running wires, but if I can get nearly the same with wireless, then why not do that?

The Airvine WaveTunnel product will be positioned to do exactly that: get an indoor uplink with 60 GHz radios in a dual-ring configuration. It’s not mesh Wi-Fi, and in fact it’s barely Wi-Fi at all. WaveTunnel may have more in common with Ethernet and SONET than anything else. This indoor wireless system has an open range of 100 m and it should easily beam through commercial drywall. I can think of some manufacturing floors where I needed to reach remote IDFs and this would’ve made a lot of sense. Airvine should also be able to build a use case for sports arenas.

Airvine still has challenges to solve before their first customer ship. They have an RF and Ethernet stack to complete, and vendor interoperability will become a sticking point. It’s going to have to interact with spanning tree at some point, and that’s just one place it will get interesting. We’ll find out those details down the road.

Beaming 60 GHz through materials is more of a “Science!” problem than a networking problem. I think Airvine needs to invest in some materials science quality assurance and publish many real-world test cases. The customer education cycle will need real-world examples to immediately push through “Can it beam through X?”. Otherwise their entire sales cycle will be spent talking about concrete, metal, and water.

It’s certainly an interesting road ahead for Airvine and I’m looking forward to following their journey.

Learn more: https://airvine.com/
YouTube: https://youtu.be/-AsaijS-RGM

Getting Lost at Networking Field Day

By Warcop

This year is certainly one for the record books and especially so on the personal front. 2020 has been full of twists, turns, overuse of the word “unprecedented”, crisis management, and modern day urban survival.

What’s in store around the corner? Networking Field Day 23! I’m super excited this go-around for #NFD23 and the opportunity to be fully engaged. It’s been easy to get lost this year with everything going on around us. I’m hoping to get my brain back into high gear, and NFD23 is the super-fuel rocket boost, out-of-this-world content explosion we all need. Well, that I need anyway; your mileage may vary.

So what in the world does this have to do with getting lost? A short while ago I updated the title of this blog to Foggy Bytes. It’s somewhat odd, but it’s a play on a few things we deal with in the context-switching world of networking and information technology, and the brain-fog aftermath. Human context switching is a productivity killer and, generally speaking, we’re having to do it on an hourly basis. We’re dealing with new projects, support issues, business changes, technology changes, and I overheard you shout “network automation” on the other side of this screen. So we’re left, if you will, pushing bytes in and bytes out through the fog of it all. I also relate to the blog title against a backdrop of imposter syndrome. We get the bytes moved, but we’re not exactly sure of all the steps we took to get there or why we were chosen.

I’m also happy to say that my former “no media” policy has expired! Yay! I can have an opinion again! Not that I ever stopped having one, but now I can blog them again. If you’re changing jobs and you’re into any sort of content creation, even casually, be sure to understand any policies you may be subject to at your new position. I may even take some time to revisit NFD21 content and do some “where are they now” hot takes.

Networking Field Day 23 has an impressive lineup and I’m looking forward to hearing from everyone.

NFD23 is like a fresh Computer Shopper so be sure to follow along with us September 29–October 2, 2020 https://techfieldday.com

Subscribe to the channel! https://www.youtube.com/c/Techfieldday

Follow me on Twitter for live hot takes during #NFD23

https://twitter.com/Warcop/

Cisco Collab and Open VMware Tools

By Warcop

Hi! I’m back with a quick take on Cisco Collaboration UCOS 12.5 and switching to open VMware Tools.

There is a long and bumpy history with native VMware Tools on Cisco UCOS collaboration applications. If you have had your UC solution for many years, then it is likely you have bumped into issues along the way. Keeping the native VMware Tools installed, upgraded, and working can be a challenge during upgrades. So I was very excited that in 12.5 you now have the option to move straight to Open VMware Tools.

What are these Open VMware Tools? In short – the better VMware Tools.  The open-vm-tools package is 100% supported by both Cisco and VMware. Moving to open-vm-tools will not take you out of any ‘compliance’ or get you yelled at by TAC.

The first advantage is that you’re decoupling the ESXi version of the tools from the UCOS application.  This means you’ll no longer need the “Check and upgrade VMware Tools before each power on” setting on the guest machine.  The open-vm-tools package is built into CentOS 6/7 (and many other distributions) by default, so you’ll no longer rely on the ESXi side of the house to get this right.  For example, if the systems team upgrades ESXi underneath your collaboration application, you won’t have the additional worry of keeping VMware Tools in sync.

For Cisco this means they simply ship the open-vm-tools package in the CentOS 6/7-based UCOS and can keep it current during application maintenance. I think it’s a win-win for both sides.

So how do we get there? EASY – but you do require a REBOOT. WARNING! You need to ensure that the native VMware Tools are operational prior to switching to open-vm-tools.  You can check the status of VMware Tools from ESXi. If there is a warning about version or operating system selection you need to fix that first. Also, please be sure you’re working with the latest patched version of the UCOS software release if you’re doing VMware Tools maintenance. There are some bugs and field notices that may get you stuck. Patch stuff!

This is primarily geared for 12.5 so as of this post I’m assuming you’re working with 12.5 SU2. VMware will give you the installed and running status for the tools.
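
If you’d rather confirm that from the hypervisor side without opening the vSphere client, here’s a rough sketch from the ESXi shell. This assumes you have shell access to the host, and the VM name and ID are made up, so substitute whatever getallvms actually reports:

vim-cmd vmsvc/getallvms | grep -i cucm
vim-cmd vmsvc/get.guest 42 | grep -i tools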

Check that you have native VMware Tools operational

admin:utils vmtools status

Version: 10.3.10.10540
Type: native VMware Tools

Now, prior to making the switch and the reboot, make sure you or someone else has unchecked the VM Options setting “Check and upgrade VMware Tools during each power on”.

Move the system to permissive.  This relaxes SELinux enforcement (think setenforce).  It isn’t called out as required everywhere, but I’d certainly put it in the “recommended” category when making this switch.  You can easily move the system back to enforcing afterwards.

admin:utils os secure status
OS Security status: enabled
Current mode: enforcing

admin:utils os secure permissive
OS security mode changed to Permissive

Make the switch to open-vm-tools package which will remove the native.

admin:utils vmtools switch open

This will uninstall the native VMware Tools and install the open-vm-tools.
The system will be rebooted automatically.
Do you want to proceed (yes/no) ? yes

The UCOS server will reboot and switch out the native tools for the open-vm-tools package. VMware will still show the tools as installed and running, but the status will read “VMware Tools is not managed by vSphere”.  You can also check again at the UCOS CLI with:

admin:utils vmtools status

Version: 10.1.5.59732
Type: open-vm-tools

Don’t forget to change back to enforcing!

admin:utils os secure enforce

In conclusion, I think this was a good move by Cisco to expose the VMware Tools swap natively through the UCOS CLI.  Getting away from the native tools and having them managed within CentOS is great.

I believe this will remain ‘optional’ for quite a while, but it’s possible this will become a required change for CSR14.

For more information from VMware about Open-VM-Tools check out the repo https://github.com/vmware/open-vm-tools

Hit me up @Warcop on Twitter – Thanks!

_______________________________________

Extra VMware Tools troubleshooting.

Got root? (Recovery ISO | Alt+F2) Make sure you’re working in the active partition by checking timestamps on /mnt/part1 or /mnt/part2.

/usr/bin/vmware-uninstall-tools.pl will remove the tools

If you’re in a situation where you don’t have VMware tools in the active partition, then copy them from the inactive side. It’s likely they’re still over there. Find ‘vmware-tools’ directories on the inactive side and copy them over so that you can run the /usr/bin scripts.

Did you get stuck on a reboot with the only thing on the console being “Probing EDD”? Welcome to Field Notice FN70379, where you’ll need to generate a new initramfs with “/usr/bin/vmware-config-tools.pl -d”.

https://www.cisco.com/c/en/us/support/docs/field-notices/703/fn70379.html

Network Sausage Automation

By Warcop

It’s not hard to debate that sausage has conquered the culinary world. The global popularity of sausage is undeniable. Chorizo, linguica, luganega, wurst… cocktail, wiener, franks, brats… Lamb, pork, chicken, goat *gasp*… Broiled, fried, grilled, smoked, cold, hot… You get the picture? Lots of sausage, lots of varieties, and lots of methods.

Sausage has a primary defining characteristic. The meat must be chopped up. That’s it! This is where I draw an initial parallel to a network. A network moves data. After that, anything goes with sausages, or networks. Maybe you have a network of sausages? Maybe you’re a network engineer at a sausage company?

Sausages are also very regional, sort of like networking vendors. Your home tribe may consume just one type of sausage. Maybe your tribe likes a wide variety of sausages. There is nothing wrong with either food choice, that’s just your taste.

The networking world can easily draw many parallels with sausage making and I could go on for hours about how the “sausage gets made”. Our presenters at Networking Field Day 21 talked a lot about one specific aspect of sausage making — network sausage automation.

So, if you like sausage, do you prefer sausage from a package or fresh sausage from a butcher? Network automation parallels this quite well. I like sausage from a butcher, but if I’m cooking for 500, I may use sausage from a package. It doesn’t mean I’m going to eat sausage from a package 100% of the time. I want both, but for different reasons. So give me network automation for the time I want to do something 500 times, but I don’t want automation for the design. I’m looking at you, [insert vendor here] that’s going to “green field” my data center with your “automation”. No thanks click-button-get-fabric.

Is network automation going to take your job? That depends. Are you a sausage stuffer? If you’re going to stand there and stuff someone else’s sausage all day long, then you ARE replaceable. According to Replaced by Robots – Sausage Stuffer, there is a 98% chance that “Sausage Stuffer” will be replaced by robots. Those automation machines exist and those packaged sausages are likely never touched by human hands. Life pro tip – don’t be a sausage stuffer. Create a recipe, be the chef, get your hands dirty, touch the sausage.

Also, according to Replaced by Robots – Network Engineer, there is a 3% chance of Network Engineers being replaced by robots. I believe that statistic is very accurate. There are too many different kinds of sausage to believe a machine is going to automate your recipe. Unless you build the sausage automation machine. However, that isn’t an easy thing to do. Networking vendors spend millions on scalable platforms to perform all the necessary layers of automation. I hate to say it, but your *nix box with a few playbooks is not a ‘platform of automation’. It will help you package some sausage, but you made that sausage, created the recipes, and you hold them dear, and call them George.

Some Networking Field Day 21 presenters showcased their automation platforms specific to their own solution. Some presenters showcased their multi-vendor approach to a platform. Either way the point was very clear, automation platforms are incredibly hard to build, maintain, resource, and pay for. Entire companies are being built around platform creation and very few, very very few companies are internally well equipped to build a platform. Some companies think they can make their own sausage, package it, sell it, and eat it, but that is a long way from reality. If you’re not already talking the language of cloud native application architecture, maybe stop now and budget a purchase.

I once received some advice from a rocket scientist, “Configure, don’t customize, a boxed product. The moment you customize then you become the owner, the developer.” Wouldn’t that also translate to trying to automate all the ‘boxes’ on your network? Engineers in the field are barely able to keep up with the rapid pace of change from one vendor and you’re going to create a platform that keeps up with the API changes of one vendor, two vendors, three vendors??? Seriously, don’t choke on your own network sausage automation.

I’ve always enjoyed seeing different visions of the sausage making future. Networking Field Day 21 showcased several of those visions and I have some more specific posts on the way. Will network sausage automation become a reality? Well, again, that depends on your tribe, your tastes, and your budget.

Alright, I need to wrap this up. 😉 I hope you’ve enjoyed networking sausage automation. Is wasabi sausage a thing? #sooory

—– A Networking Field Day 21 Reflection —- #NFD21 @TechFieldDay

 

Firefox DNS-over-HTTPS for the Enterprise

By Warcop

A little bit of technology buzz is being generated around Firefox and the “default” implementation of DNS-over-HTTPS (DoH). Mozilla has decided that at the end of this month (Sept. 2019) they will start rolling out DoH as default in the browser. I’m assuming you’re familiar with DoH.

Let’s address the most obvious problems for enterprises. DNS represents a wealth of information gathering within the four walls of a business. Content control, security protections, and split DNS are just a few things to mention. Breaking host-level DNS resolution in the browser is a threat to these protections. The security team should already be accounting for this in their SWOT analysis, as malware has already adopted DoH for C&C.

Mozilla has a plan for this and has set up ways Firefox will fall back to host DNS resolution, per network session (coffee shop, home, vpn), with a few specific checks.

When does it fall back? (I don’t know all the methods and I need to get back on nightly)

  • A canary domain listed in content control systems. If the lookup is triggered Firefox realizes the host is within a content control DNS system. OpenDNS/Umbrella immediately comes to mind with their operation of exampleadultsite.com. Not actually an adult site.
    • The problem for users: blacklists are a mess, standards are lacking, and are they really going to get worldwide participation to have this canary domain honored? You could write a whole blog about the gaps in these systems.
    • By the way – DoH will not be enabled in the UK because of this exact issue.
  • Safe search redirection is triggered, in the example www.google.com resolves to forcesafesearch.google.com
  • Non-standard TLD detection and RFC 1918 responses. This isn’t a bad attempt at triggering the fallback, but many enterprises build split DNS on a valid TLD. IPv6 also throws a wrench in this logic.
  • On Windows and macOS, detecting parental controls enabled in the operating system

So what can you do inside your enterprise?

Blocking Cloudflare is not a good solution even though that may be the initial reaction. Cloudflare obviously does a lot more than DNS and keeping up with blocking a CDN is just a painful management point. Don’t do this.

Today, Firefox may not be included in your desktop system management and this will likely become a future requirement. Users are able to manually enable DoH and specify a custom provider that isn’t Cloudflare.

You can signal non-managed Firefox installations with the canary domain, which would work while machines are on-network. This canary domain “use-application-dns.net” needs to return NXDOMAIN instead of the A or AAAA IP address. Again, to reiterate this will not block a user manually enabling DoH.
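
For example, if your internal resolver happens to be Unbound, returning NXDOMAIN for the canary is a one-line local-zone entry. This is just a sketch; BIND RPZ or your content filtering platform will have an equivalent:

server:
    local-zone: "use-application-dns.net." always_nxdomain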

Running a proxy and/or edge decryption clearly presents you with additional options to block this configuration.

So what’s the most logical step? Welcome to Firefox for Enterprise!
… Sorry, more management, but that isn’t all bad. Consider adding Firefox as an allowed package alongside an enterprise policy.

If you’re doing psexec checks to block Firefox or SWIM then you’re controlling things differently. If you want to allow Firefox then you need to give it a policy. Firefox will check for enterprise policy to disable DoH.

  • Is the Firefox security.enterprise_roots.enabled preference set to true?
  • Is any enterprise policy configured?

Also, if you’re allowing Firefox in the enterprise I hope you are already controlling security.enterprise_roots.enabled. Firefox runs their own root program and you should be controlling those circles-of-trust. Did we not clearly explain the circle of trust to you, Greg?

What would be my recommendation? Deploy Firefox with Windows Group Policy ADMX templates and macOS/Linux file distribution.

Firefox ADMX templates can be found at https://github.com/mozilla/policy-templates/releases

DNSOverHTTPS – Locked

Configure DNS over HTTPS.

  • Enabled determines whether DNS over HTTPS is enabled.
  • ProviderURL is a URL to another provider.
  • Locked prevents the user from changing DNS over HTTPS preferences.

https://github.com/mozilla/policy-templates/blob/master/README.md#dnsoverhttps
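
For macOS/Linux (or anywhere you’re not pushing ADMX), the same policy can be dropped into a policies.json file in the distribution directory of the Firefox install. A minimal sketch that disables DoH and locks the preference:

{
  "policies": {
    "DNSOverHTTPS": {
      "Enabled": false,
      "Locked": true
    }
  }
}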

I hope I covered this information accurately. If I got anything blatantly wrong feel free to call me out and I’ll edit.

Thanks!

________________________________________________

The rabbit hole.
https://github.com/mozilla/policy-templates

Source link: https://support.mozilla.org/en-US/kb/configuring-networks-disable-dns-over-https

Source #1: “If a user has chosen to manually enable DoH, the signal from the network will be ignored and the user’s preference will be honored.”

Source #2: “When any of these checks indicates a potential issue, Firefox will disable DoH for the remainder of the network session, unless the user has enabled the ‘DoH always’ preference as mentioned above.”

Source: Pi-hole is already on it — within 24 hours of this post, handling for this canary domain was merged into development: https://github.com/pi-hole/pi-hole/pull/2915

If you’re looking to fall deep down the rabbit hole of DNS-over-HTTPS issues then I have the link for you. The IETF has published an extensive look at DoH risks and issues at https://datatracker.ietf.org/doc/draft-livingood-doh-implementation-risks-issues/

Information Silos – Hire a Floater

By Warcop

I rather dislike information silos and everything they stand for. I like collaboration, knowledge sharing, and that is likely evident by the fact you’re reading my blog post right now. The “information silo” cuts against my personality and if I keep writing in this paragraph I’ll tell you how I really feel.

So why do they exist? Scale and functional teams create these silos, which is necessary to churn out productive work in a large business. Team X does X, Team Y does Y, and Team Z does Z. If every day at work X, Y, and Z had to get together and discuss their responsibilities, they’d never get anything done. We all need to produce! The traditional management hierarchy makes this even worse with vertical up-down pay, titles, and promotion systems. So if you join X, Y, or Z, you may exist in that silo for a very long time.

Why these silos exist and grow hair becomes an accounting problem. To overcome the information-sharing gaps you need to hire an X-Y translator, an X-Z translator, etc., and all those translation hires become expensive. For every layer and silo you create without the translator, you may still get stuff done and grow, but agility shrinks.

So this is something that I picked up from way back when. It’s the concept of a “floater”, which is used on manufacturing floors, assembly lines, and production lines. I once acted as a floater on a credit card statement production floor. Back before e-statements, the credit card processor was likely responsible for statement production. There was the printing silo, the warehouse silo, the production silo, and the mailing silo, just to name a few. It was the “floater” that kept these silos fed. If the envelope stuffing machines needed more envelopes or more advertisements, you’d go fetch them and share that information quite physically. The key point I’m trying to make is that each silo performed a single production-focused task, and that kept it moving. The printers printed statements, the warehouse organized everything needed, and the mail room silo kept everything moving out to the postal service. It was indeed quite an impressive operation.

What would happen if we expected that production system to operate without floaters or feeders? The warehouse system couldn’t keep things organized if their job was also to accept and run materials to the floor. The stuffing machine operators couldn’t expect to keep their machine running if they had to get their own materials, discuss what they needed, or take the envelopes to the mail room. The mail room couldn’t run at full capacity if they had to go get their own stuff or move their own mail.

The key was the “floater” who could move and operate within each of these silos. They understood key elements of each of these production silos and helped keep the larger machine running. The floater didn’t need to know exactly how to run the stuffing machine, or the printing machine, or the mailing machine, but knew _enough_ to keep it running.

The floater concept is lost among traditional IT silos. We expect everyone in their silo to know what the other silos are doing. The best we can hope for is that people in one silo toss out enough information so that things don’t grind to a halt. In full-speed production, the teams barely have enough time to tackle their own work, much less help the other silos understand what’s going on.

Things get bad when things break down. If one silo starts to slow or fails to operate it can have a severe impact on the rest of the business. The floater can help pace things by relaying information and key pieces of details to keep things on track, slow things down, or speed things up. In the credit card processing line this was a very in-the-face approach. Quite a bit of yelling was involved.

In a small to medium-sized business this is usually handled relatively easily within IT. The IT team may be just a few people, and the day-to-day of working and meeting together is enough to keep information sharing afloat. The problems begin to surface when we get into the small enterprise. We expect the producers in the security team, network team, and compute team to both produce and float. This results in inefficiencies, and the production line starts to get messy.

Hire IT Production Floaters, seriously. It’s a real job title in manufacturing and assembly. I think floaters can help elevate production and at the same time destroy the information technology silos. As the benefits of cross-team collaboration erode the information sharing barriers each silo becomes more efficient. Functionally move someone between security, network, compute, edge, mainframe, and the PMO on a regular basis and I’ll bet it has a positive impact. Let those individuals report back as an outsider looking in. Do they have something negative to say about how the network team deals with the security team? Having been on both sides they should be listened to. Help them pace production and make changes where needed.

For fun — What would that job description look like?

Essential Functions

  • You will be responsible for performing varied tasks including network, security and testing of applications.
  • Work will be performed in different functional areas depending on the highest need at the time.
  • Must be comfortable and willing to have varied work that may change each day or potentially several times throughout the day
  • Demonstrates capability in configuring network devices and securing computing devices
  • Capable of reading and comprehending network architecture diagrams and routing protocol diagrams
  • Responsible for checking deployed devices to meet XYZ security compliance standards

Minimum Qualifications

  • Not be an idiot, willing to learn, change skills, seek out new life, go where no one has gone before
  • Able to read an OEM manual, use an IP subnet calculator, text and walk
  • Be at least 18 years of age
  • Ability to work independently

Equipment/Machinery Used

  • A web browser
  • A terminal
  • Linux, Windows, Mac networking tools

Personal Attributes

  • Ability to speak intelligently, to others smarter than you, in their own area of expertise.
  • Very strong written and oral communication skills
  • Keen attention to detail and not afraid to call anyone out
  • Ability to effectively prioritize and execute tasks in a high-pressure environment
  • Ability to build a team-oriented and collaborative environment
  • Mentally and physically flexible just in case you need to rack something by yourself in RU 42

Hire a floater!

In case you’re still here — let me know what you think!

Cisco Live – A Shared Experience

By Warcop

I’m not sure if you knew this, but I really love complex systems. Nothing gets much more complex than the human interactions and emotions we deal with every day.

Some days, when we work with our routers, switches, and firewalls, they’ll give us some unexpected feedback. What do you do with that feedback? Maybe you’ve written some automation tool or output-processing program to figure out that specific output. Cool, nice, you never want that unexpected outcome to happen again. That’s the bubble computers fit in: don’t do this unless I told you to. Humans, not so much.

That’s one difference compared to when we are interacting with people. You never know what that response is going to look like to your input. Logic in – snark out? Snark in – logic out? Maybe both? How fast can you process that snark to escalate it to another level? Wow, where am I going with this.

Anyways – back to the part where I said I love complex systems. Human interaction is complex and I love observing it. I love being a part of it. I love people. I love shared experiences.

Shared experiences are part of living life. Why do you enjoy eating together? Maybe you don’t. Why do you enjoy watching movies together? Maybe you don’t. Why do you enjoy going to the lake or the beach? Maybe you don’t.

The point is – Cisco Live is a shared experience for the community of people that come there and socialize. Not everyone is there for that aspect and that’s OK. Go kill some sessions, bang out this incredible new automation workflow, awesome! However, that isn’t why I’m there and I know it’s not why several others are there. I’m there for the opportunity to enjoy a shared experience with those that have lived, loved, felt, bled, and cried the same things I have.

So back to this topic of “Why shared experiences?” Why now, Josh? Why wait until Cisco Live 2015 to start attending? The reason is the community. I’ve been watching from afar for a really long time. When Cisco put real value ($$$) into reinforcing the social aspect of the conference, that’s when I wanted to be there.

The social aspect, the shared experience, is why I’m there. The people are more important. Period. We all want these shared experiences in life and Cisco Live is one of the few, if only, places we can take a deep breath and make ridiculous jokes about subject matter only we’d get. You wouldn’t get it.

So where do we go from here? It’s an easy answer for me. I’ll be there, phone, lanyard, loin cloth, and backpack in tow. I’m there to talk to YOU and get to know YOU. We can share shop knowledge or not, I don’t care which, just join the conversation. Sit in the circle and say something snarky, ridiculous, super smart, or whatever.

I’m looking forward to meeting more people and watching the community expand. I’m hedging on, it’s not what you know, it’s who you know.

.. and this was train of thought.. not sure if I ended up where I wanted to.

I should blog more

By Warcop

^topic

Collaboration (insert nothing here)

By Warcop

I’ve been sitting here trying to think of a topic and I’m drawing a blank. So that’s what I’m going to talk about. Nothing. Zero. Null.

My current daily focus centers on connecting people to people, people to robots, and people to data. So it involves a fair amount of knowledge of networking principles, tools, and concepts. However, I also have to lean into the philosophical, “being human” side. So how, as a “Collaboration” guy, do I get people disconnected? How do I build things in such a way that you actually have a chance to get to zero? Silence. Nothing.

It really comes as a series of recommendations because the technology that exists today is focused on keeping you connected 24x7x365 to something. That something could be work, data, or media. Being “disconnected” isn’t a concept that is around much anymore. It used to be a prevalent part of working with technology. Remember the “bring your laptop in for updates” events? How about before laptops? Keeping workers functional when disconnected used to be a very real thing. Now you’re always on and the traditional work day has long drifted away.

The first recommendation is to get in complete control of your notifications. Even if it takes hours to figure this out, you should stop and write down or type out all of the things that “notify” you. Figure out how they notify you, prioritize, and if it’s not a priority then turn it off! Those little red notification bubbles on all those apps are not needed. Remember – if the product you’re using is free then you are the product. Some companies need you “plugged in” to operate. In “priority mode” there should only be 2 or 3 things that notify and interrupt you while working. In “flex mode” feel free to open up those notifications a little more.

The second recommendation is to get in control of your calling and texting notifications. If you have a business number and a personal number, are you using your business number? That’s one way of setting up a barrier between being on and off. You can text with your business number quite a few different ways, and companies should really consider offering that. Business calls, texts, and voicemails flow and stay within the business. The side benefit is it makes it easier for other people to cover your work as needed. If you need to disconnect for vacation, family, or whatever, you can redirect that flow of information. It’s not so easy if the flow of information goes directly to your personal number. I think this is hugely important for getting to zero.

The third recommendation has nothing to do with technology. It has to do with you as a human exercising some deep rooted control over yourself. Tell yourself you’re done with work between certain hours. Tell yourself you’re done with all screens for periods of time. I’ve learned some things firsthand watching my children. Creativity doesn’t fully realize itself when we are constantly connected and feeding our minds. We need to be bored. We need to be off. Zero. Nothing.

I’m not treading any new ground here in anything I’ve said. There are some really awesome articles and research available on these topics. I highly recommend making sure you know how to disconnect without interrupting your work streams. If you’re the single individual in a corporate IT department get on contract with a managed services company to back you up. Everyone needs a break. Take one.

Cisco Communications Manager – Why you no 12.0?

By Warcop

Alright. I can hear you yelling at the screen right now. @Warcop is back again trying to get me to run a dot zero release. Yes. Here I am and I come with reasons and cookies.

Let’s bust a couple of myths.
Myth #1: Dot zero releases are evil and should be sent back to the hell that they came from.
I at least have a couple of readers who feel that way. I know you, and here’s the response. The Call Manager development team is not evil and they’re not out to get you. The code base that is Call Manager is very mature, and portions of the call control code haven’t changed in years. The team that works on dot zero releases is probably the same team that works on all the other releases. So I ask you a question in return: why do you trust the person coding service updates more than the person coding the dot zero?

Myth #2: Call Manager dot zero releases should be considered a beta.
This myth lives in an echo chamber both outside of Cisco and within Cisco. Call Manager releases come on a schedule, and if you pay attention there’s a predictable cadence. A dot zero release is what I consider a fork of an existing .5 release. If you browse the bug tool (yes, I do this for fun) for any release, you’ll likely notice fixes are published for both a .5 and a .0 release. So both release trains are getting the same bug fixes where applicable. So why do you trust 11.5SU5 more than 12.0SU1?

So if they contain some of the same bug fixes, does that mean the code is the same? Don’t leave now! I have proof! The recent work on the Apple Push Notification service has required some fixes to the Call Manager service. So it’s no surprise that the APNS COP file to patch Call Manager applies to BOTH 11.5.1(SU3) and 12.0(1). The file in this patch is the Call Manager binary, and it applies to both versions... ZING! If you’ve applied CSCvf57440 to 11.5(SU5), you’re running the same Call Manager binary as 12.0(1). Saying that a dot zero release is a beta is not accurate. A dot zero release is not described as beta software on the website or in the release notes.

There’s a wealth of reasons why you should be moving to version 12. I’m liking the security updates the most since that is top of mind with nearly everyone. Instead of listing all the features here the datasheet link is: https://www.cisco.com/c/en/us/products/collateral/unified-communications/unified-communications-manager-callmanager/datasheet-c78-739475.html

The CCIE Collaboration lab is also being upgraded to Communications Manager version 12. If it’s good enough for the lab it should be good enough for you!

There are production clusters in the world running version 12 and they’re humming right along. Let’s face the reality that software updates are being released faster than most of us can absorb. So why not change the echo chamber to “Why would you not run the latest code?”

Here’s a quote I remember and reference often: “Updates are evil. Updates are only OK when I don’t read the release notes and click install.” — @swiftonsecurity

Bring the feedback @Warcop on Twitter

Mixed Mode requires an Encryption License

By Warcop

With the release of Communications Manager 11.5SU3 there is a change in licensing specific to encryption. The reason for this change is the need to meet certain export and regulatory requirements. Since the license file will be issued from Cisco directly to a user/customer, they’ll have better control of and visibility into who is enabling mixed mode. I think the visibility part will be important for Cisco to understand how few customers are running mixed mode. Maybe that will encourage them to make some needed changes?

So what happens when you upgrade to 11.5SU3 and you have mixed mode enabled? You’ll get a warning that you’ll need to add an encryption license to Prime License Manager. Mixed mode will not be disabled on your cluster by installing this patch.

How do you get the license? Over at the product upgrade tool at the Cisco site you can use your contract or Spark calling subscription to obtain the $0 license CUCM-PLM-ENC-K9=. Install the issued license on Prime License Manager and synchronize with Communications Manager.

I encourage you to comment on this post as other technical details are discovered. It’s still unknown if there is a time limit on running mixed mode without this installed license file. Thanks!

Reference:
Cisco UCM Release Notes
CUCM Readme

Cisco Expressway – Exporting Certificates

By Warcop

There are multiple reasons you might want to export the certificate from Cisco Expressway. Maybe you need to replace the server, move a certificate from one cluster node to another cluster node, back up the keys, or simply use it somewhere else.

Recently I was setting up a cluster and forgot to generate the key and CSR locally. Using OpenSSL locally in the first place is the best way I’ve found to move, secure, and back up keys without needing an export. However, I forgot and generated the CSR on the Expressway node. I received the signed certificate from the CA and landed in this predicament. Expressway doesn’t have an export button so you have to go digging. Grab the shovel.
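
For reference, the “generate locally” approach I’m describing is nothing more than a couple of OpenSSL commands on your workstation. This is a sketch – the filenames and subject values are placeholders, and you’ll want the subject alternative names to match your cluster FQDNs when you build the real CSR:

openssl genrsa -out expe1.key 4096
openssl req -new -key expe1.key -out expe1.csr -subj "/C=US/O=Warcop/CN=expe1.warcop.net"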

Fortunately you have root access to Cisco Expressway. (If we were only so lucky with Communications Manager.)

I’m not suggesting that any modifications be done with this method. Even though a quick poking around ssl.conf proves it’s not as complex as you’d think. We’re just looking at files.

SSH as root and you’ll find the certs in this directory.

cd /tandberg/persistent/certs

The two files server.pem and privkey.pem are the files you’re looking for. However, for sanity purposes I’ll show you how to verify this is the key you’re looking for. The public key modulus and the private key modulus should match.

If you want to verify the modulus block as part of the cert text then do this:

openssl x509 -in server.pem -text -noout

openssl rsa -in privkey.pem -text -noout

If you want to check it without the cert text:

openssl x509 -in server.pem -modulus -noout

openssl rsa -in privkey.pem -modulus -noout

And the real shortcut is using an md5 to match:

openssl x509 -in server.pem -modulus -noout | openssl md5

openssl rsa -in privkey.pem -modulus -noout | openssl md5

So now you’ve done the comparison, it matches and you want to grab the private key:

cat privkey.pem

Copy pasta the text block and save using your favorite editor. Now you have the files you need to upload to the other nodes using the GUI. Yes, there are ways to automate/move things around underneath without Expressway losing its mind, but this method is simple enough for everyone.

Communications Manager LDAP Groups Caveat

By Warcop

If you’re wanting to LDAP synchronize Active Directory distribution groups for use with Cisco Jabber you’ll want to pay close attention to the ‘Synchronize’ setting. This setting is found on the ‘LDAP Directory’ configuration page. TLDR – If you’re using different synchronization agreements for users and groups the user directory synchronization must also be selected for ‘Users and Groups’.

Why and what’s happening inside the DirSync service?

If the directory synchronization agreement is set for ‘Users Only’ the LDAP search filter looks like this:
[Screenshot: LDAP search filter for a ‘Users Only’ agreement]

If the directory synchronization agreement is set for ‘Users and Groups’ the LDAP search filter looks like this:
[Screenshot: LDAP search filter for a ‘Users and Groups’ agreement]

You’ll notice at the bottom of the second screenshot the ‘memberOf’ attribute request. This is where the user synchronization agreement requests all of the groups the user is a member of. This also means that if this is the first time you’re setting up the agreements, you’ll have to synchronize groups and then users. If you’re adding groups, you have to run a full sync on both agreements.
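
If you want to see that same request outside of DirSync, an ldapsearch against your domain controller will show the memberOf data that a ‘Users and Groups’ agreement asks for. This is a sketch – the bind account, base DN, and hostname are made up:

ldapsearch -x -H ldaps://dc1.warcop.net -D "cn=svc-dirsync,cn=Users,dc=warcop,dc=net" -W -b "ou=Staff,dc=warcop,dc=net" "(objectClass=user)" memberOf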

So again — if you’re using different synchronization agreements for users and groups because you’re looking in different containers both agreements need to be set for Synchronize: ‘Users and Groups’. Obviously if you’re using one synchronization agreement to import both users and groups in a single container you wouldn’t run into this little caveat.

I got tripped up on this recently because the setting would seemingly imply it’s what you’re importing and not the search filter. It took a couple of packet captures to figure it out. PCAP or didn’t happen!

CCM packet cap:
utils network capture eth0 file packetcap1 count 100000 size all
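
Once you stop the capture (Ctrl-C), pull the .cap file off the box so you can open it in Wireshark. If memory serves it lands under activelog platform/cli, so something like:

file get activelog platform/cli/packetcap1.cap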


Cisco Communications Manager – Going Mixed Mode

By Warcop

Introduction

It’s with great enthusiasm I’m coming to you today to talk about Communications Manager encryption! Collaboration and encryption are two of my favorite subjects so bringing them together is exciting. For many years engineers have steered clear of enabling encryption on Communications Manager, but I think it’s time it becomes the default. The position I’m taking isn’t a favored one and I’m hoping the rest of this post may change your mind. Let’s face it — It’s 2016 and security is and has become a really big deal. Ask any CIO or CSO and they’ll tell you encrypted telecommunications inside or outside of their business is important. The ugly truth (and the reason I’m writing this) is that most Communications Manager systems are running completely in the clear.

The Ugly

I’m not going to get into the details of the entire attack surface of the Cisco CUCM solution. The basics are very well understood: the signaling and media are in the clear. It’s easy to access conversations using a simple packet capture with Wireshark and Vomit. Since nearly all Cisco infrastructure devices support some type of packet capture, it is a trivial task to record any phone conversation of your choice. If you want to look at some details of VoIP penetration testing I suggest taking a look at Viproy. You’ll find a wealth of information already generated by people who do this for a living. For example, the Tapberry Pi IP Phone is a fairly intrusive device, but would you ever suspect a tampered IP phone?

Now I’m not going to come out saying that enabling CUCM mixed mode is the end-all solution to this problem. It’s merely there to build a foundation on which all other endpoints, trunks, and associated services can be encrypted. Will mixed mode prevent all of the attack surface areas that are part of penetration testing? Nope, not even close. You have to look at and think about security within the solution as a concept, not a feature. When you’re bolting on new services like Expressway, Conductor, or Contact Center, is it your mode of operation to ignore SSL/TLS, set all SIP to TCP 5060, and just “make it work”? That’s the mode of operation I’d like to see changed among collaboration engineers. Failing back to unencrypted means something is wrong, and I don’t think it’s OK as a long-term mode of operation.

The more this is implemented the more it’ll become a standard.

The Good

The primary fear that’s been instilled in collaboration engineers is the thought of a lost security token. Most of us have been there, doing an upgrade or certificate management and completely forgetting to check if it’s mixed mode. I have news for you – that isn’t the product’s fault, but it was really bad prior to 10.x. I believe it’s just a lingering fear from everything prior to 10.x that makes us glaze over mixed mode. When it does go wrong, it goes wrong very dramatically. If the phone trust chain breaks, you’ll have an operational CUCM with zero phones. Understanding the recovery keys that are now in place is fundamental to restoring operations.

Communications Manager 10.x has helped alleviate the concerns when it comes to losing a security token. It’s now a simple operation to enable services, run the commands, and mixed mode is operational. The focus here is enabling the CTL (Certificate Trust List) on Communications Manager to show there isn’t anything to worry about. We’re also talking about using the embedded “utils ctl” operations and not hardware eTokens. It’s also good to note that phones no longer talk to the Call Manager service for certificate operations, so it’s “out-of-band” certificate management.

How does Communications Manager prevent security token loss? The tokenless CTL approach has been enhanced by including an additional SAST (Site Administrator Security Token) in the CTL file for recovery operations. It just so happens that this additional SAST is the ITLRecovery certificate. The benefit should be clear if you’re familiar with ITL operations. The ITLRecovery certificate does not change during hostname or certificate operations, and there is a procedure for manually backing up this key. So you can take this secondary token offline independent of the disaster recovery system. It’s a virtual to physical operation so you can burn the key to CD or save it in a key vault.

Just to reiterate: when performing the CTL operation from the command line, the CTL file gets signed by the CallManager.pem private key and includes the ITLRecovery certificate as a second SAST. This should remove your fears of lost security tokens with this new tokenless CTL method. There is very clear guidance on how to back up the ITLRecovery key, and I’ll demonstrate it below.

The Activation

I’m going to run through a few things to check before getting to mixed-mode activation. The activation of CTL is based on the assumption all secure by default TVS and ITL operations are working normally. If you want to check all ITL operations this link will walk you through a detailed process. Unified Communications Manager ITL Enhancements in Version 10.0(1)

Certificate management is super important and the last thing you want to happen is a certificate expiring without warning. My prerequisite to any and all certificate operations is to ensure that the certificate monitor is in operation. (I’m making the assumption your CUCM can send e-mails. It better.) My recommendation is to configure the certificate monitor to start notifications 90 days before expiration. The default 29 days is too short, especially three years later when the installers are gone and you’re scrambling to figure out what to do.

I want this thing to annoy all the right people every 7 days. Make it noisy.

Example:

[Screenshot: Certificate Monitor notification configuration]

Now that we have that out of the way it’s important to now check the validity of all current CUCM certificates. Enabling mixed-mode turns on CTL operations and inside that CTL file is the Callmanager.pem. I generally have a rule of thumb that if I’m on a cluster and Callmanager.pem has less than a year to go just spend the extra cycle and reissue it.

So let’s take a look at my lab CallManager.pem and see if it needs any action. If the validity period is less than a year, go ahead and reissue CallManager.pem, ensuring it’s SHA-256 and X.509v3 compliant. That’s an entirely different process, so I’m counting on everything being good with CallManager.pem and ITL operations.

admin: show cert list own

CallManager/CallManager.pem: Certificate Signed by warcop-server1-ca

admin:show cert own CallManager/CallManager.pem

Validity From: Tue Jul 22 16:45:34 EDT 2014

To:   Thu Jul 21 16:45:34 EDT 2019

Great! My lab is still good for a few more years. Let’s backup the ITLRecovery key and you’ll enter your SFTP details to the file.

admin:file get tftp ITLRecovery.p12

Please wait while the system is gathering files info …done.
Sub-directories were not traversed.
Number of files affected: 1
Total size in Bytes: 1717
Total size in Kbytes: 1.6767578
Would you like to proceed [y/n]? y
SFTP server IP: 172.31.255.99
SFTP server port [22]:
User ID: cisco
Password: *****
Download directory: /

Go ahead and save this file to a key vault or burn to CD as this is your primary recovery token for both ITL operations and CTL operations.

Time to activate some services.

Activate the Cisco CTL Provider service on each Cisco Unified Communications Manager server in the cluster.

Activate the Cisco Certificate Authority Proxy service only on the first node in the cluster.

Make sure to activate the CAPF services and CTL provider service before performing any CTL operations. This ensures you do not have to update the CTL again after CAPF is enabled.

Do you have a CTL file already? It’s possible that long ago the cluster was running mixed mode and was changed back to non-secure mode. If this was the case the CTL file left on the TFTP servers may be old and needs deleting. You can check for a ctl file with “show ctl”.

admin:show ctl

Length of CTL file: 0
CTL File not found. Please run CTLClient plugin or run the CLI – utils ctl.. to generate the CTL file.
Error parsing the CTL File.

So we’ve determined that we do not have a CTL file and the “show ctl” command above hints us how to generate it. If you do have a CTL file first make sure you’re completely positive you’ll never need the old one and run “file delete tftp CTLFile.tlv”

The following are the only options for “utils ctl” operations.

  • utils ctl set-cluster mixed-mode
    • Updates the CTL file and sets the cluster to mixed mode.
  • utils ctl set-cluster non-secure-mode
    • Updates the CTL file and sets the cluster to non-secure mode.
  • utils ctl update CTLFile
    • Updates the CTL file on each node in the cluster.

At this point the last remaining step is to activate CTL operation. You’ll need to be prepared to reboot the cluster, and all phones will begin downloading the CTL file.

admin:utils ctl set-cluster mixed-mode

This operation will set the cluster to Mixed mode. Do you want to continue? (y/n): Y
Moving Cluster to Mixed Mode
Cluster set to Mixed Mode
Please Restart the TFTP and Cisco CallManager services on all nodes in the cluster that run these services

Now you can verify that the CTL file has been generated and exists in the TFTP directory.

admin:show ctl

The checksum value of the CTL file:
a0352774cac687c17a253497dd521b89(MD5)
133f24b90202b3b744b754f9b9aceb6007d82223(SHA1)
Length of CTL file: 7200
The CTL File was last modified on Tue Mar 15 21:51:16 EDT 2016

(take a look at the end of this post for a full CTL file output example)

The CTL file is going to contain four records, two of which are the SAST tokens, with the first record being the CallManager.pem signing record and the last being the ITLRecovery record. At this point you have a valid CTL and you’re ready to reboot the cluster. The command indicates that you only need to restart services, but I favor rebooting the cluster to get everything off to a fresh start.

So this is what should display after the reboot when you check the Cluster Security Mode enterprise parameter. (Ignore LBM Security Mode – it’s just in the screenshot.)

[Screenshot: Cluster Security Mode enterprise parameter]

Congratulations! You’ve put down the foundation to enable encryption across a variety of devices, trunks, and services.

With version 11 it’s important to note that the ITLRecovery key validity period has been extended from 5 years to 20 years. If you perform an upgrade to version 11, this key is not automatically extended because that would break things. Once you’re on version 11 you should go through the ITLRecovery key re-issuance procedure. Once the new key is generated it will be valid for 20 years, and then you should perform “utils ctl update CTLFile”.

Hint: Version 11 also introduced something different when you regenerate a CallManager, ECDSA, or Tomcat certificate. Now that HTTPS configuration downloads are available, the TFTP service needs to be deactivated and reactivated. A service restart isn’t enough to have TFTP pull in the new certificate and have the service on port 6972 running with the latest signature.
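
From the CLI that deactivate/reactivate cycle looks roughly like this on each TFTP node (the service name is quoted from memory, so verify it with “utils service list” on your version):

utils service deactivate Cisco Tftp
utils service activate Cisco Tftp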

From an operations standpoint when do I need to run “utils ctl update CTLfile”? Any time you’re performing cluster changing operations such as adding or removing nodes, after restoring cluster nodes, changing IPs or hostnames, and especially anytime you upload or change certificates on the system. I’ve just worked it into my routine to keep the CTL file updated during all certificate operations.

I hope I’ve dispelled some myths about tokenless CTL and you’ll take a second look at enabling mixed-mode. It’s highly unlikely security engineers are going to become familiar with all CUCM security operations so I believe it’s the responsibility of the collaboration engineer to step up. We’re talking a basic understanding of PKI and nothing more than a few keys within the system.

Please feel free to hit me up on Twitter @Warcop if you see something that needs further clarification. Also, feel free to disagree with me!

Important References:

Security Guide for Cisco Unified Communications Manager , Release 11.0(1)

Unified Communications Manager ITL Enhancements in Version 10.0(1)

IP Phone Security and CTL (Certificate Trust List)

Viproy VoIP Penetration Testing and Exploitation Kit

vomit – voice over misconfigured internet telephones

Full CTL File:

admin:show ctl
The checksum value of the CTL file:
f52754c4c647ae3a4f25c4bd24726213(MD5)
296d6a7990cc80ad9bcfa1d747e0a18e34a18df4(SHA1)

Length of CTL file: 7200
The CTL File was last modified on Sat Feb 20 20:08:59 EST 2016

Parse CTL File
—————-

Version:        1.2
HeaderLength:   420 (BYTES)

BYTEPOS TAG             LENGTH  VALUE
——- —             ——  —–
3       SIGNERID        2       105
4       SIGNERNAME      56      CN=cucm1.warcop.net;OU=IT;O=Warcop;L=Atlanta;ST=GA;C=US
5       SERIALNUMBER    19      76:00:00:00:07:F2:F3:96:30:3A:DA:D8:27:00:00:00:00:00:07
6       CANAME          21      CN=warcop-server1-ca
7       SIGNATUREINFO   2       15
8       DIGESTALGORTITHM        1
9       SIGNATUREALGOINFO       2       8
10      SIGNATUREALGORTITHM     1
11      SIGNATUREMODULUS        1
12      SIGNATURE       256
9e  8b  d8  25  78  8f  5d  c0
49  7e  f3  ac  4f  9b  ae  4f
b5  98  98  d  c1  86  30  32
6c  65  78  9b  3a  88  c9  5f
63  5  a2  a6  2d  b9  de  f1
5e  8a  8e  29  cd  84  48  a6
a9  71  86  40  39  20  b2  21
d0  5d  35  e4  9c  69  3b  56
15  c4  cd  f7  93  3c  3  87
eb  56  ba  e1  93  4  14  b3
15  83  69  23  ba  73  e5  20
95  fd  7  21  a4  53  8e  a6
10  3a  3c  e8  85  f0  fc  ee
62  c3  8a  a8  c1  df  e  45
f1  4  8f  1d  ae  46  39  91
d9  2d  c5  6a  8f  5d  3d  e8
54  65  cf  cd  56  d7  16  89
d6  d3  d3  74  91  3d  5c  2a
92  23  3e  d0  a6  40  d2  eb
0  5b  f8  c5  9  e4  aa  3d
39  39  9d  14  6c  7  d7  20
cb  d8  74  64  53  17  2d  3d
ad  8b  e3  c8  fd  b3  63  70
50  a4  15  69  97  c5  e4  a0
f3  bf  78  7c  91  30  fc  41
3e  2e  dd  be  4c  50  3b  60
72  4a  de  76  ee  99  ff  b1
ae  69  c3  b  21  13  f7  b6
94  2  88  fa  d2  e  3b  58
8d  d2  71  f3  d3  93  78  9
9b  98  fe  a2  f5  b9  80  8e
42  44  1b  b  54  88  ea  49
14      FILENAME        12
15      TIMESTAMP       4

CTL Record #:1
—-
BYTEPOS TAG             LENGTH  VALUE
——- —             ——  —–
1       RECORDLENGTH    2       2260
2       DNSNAME         6       cucm1
3       SUBJECTNAME     56      CN=cucm1.warcop.net;OU=IT;O=Warcop;L=Atlanta;ST=GA;C=US
4       FUNCTION        2       System Administrator Security Token
5       ISSUERNAME      21      CN=warcop-server1-ca
6       SERIALNUMBER    19      76:00:00:00:07:F2:F3:96:30:3A:DA:D8:27:00:00:00:00:00:07
7       PUBLICKEY       270
8       SIGNATURE       256
9       CERTIFICATE     1594    0D D1 77 CB E1 C4 64 B3 CB 6E F0 24 69 DE C8 7B DF 47 63 67 (SHA1 Hash HEX)
10      IPADDRESS       4
This etoken was used to sign the CTL file.

CTL Record #:2
—-
BYTEPOS TAG             LENGTH  VALUE
——- —             ——  —–
1       RECORDLENGTH    2       2260
2       DNSNAME         6       cucm1
3       SUBJECTNAME     56      CN=cucm1.warcop.net;OU=IT;O=Warcop;L=Atlanta;ST=GA;C=US
4       FUNCTION        2       CCM+TFTP
5       ISSUERNAME      21      CN=warcop-server1-ca
6       SERIALNUMBER    19      76:00:00:00:07:F2:F3:96:30:3A:DA:D8:27:00:00:00:00:00:07
7       PUBLICKEY       270
8       SIGNATURE       256
9       CERTIFICATE     1594    0D D1 77 CB E1 C4 64 B3 CB 6E F0 24 69 DE C8 7B DF 47 63 67 (SHA1 Hash HEX)
10      IPADDRESS       4

CTL Record #:3
—-
BYTEPOS TAG             LENGTH  VALUE
——- —             ——  —–
1       RECORDLENGTH    2       1100
2       DNSNAME         6       cucm1
3       SUBJECTNAME     53      CN=CAPF-3ca8c4ee;OU=IT;O=Warcop;L=Atlanta;ST=GA;C=US
4       FUNCTION        2       CAPF
5       ISSUERNAME      53      CN=CAPF-3ca8c4ee;OU=IT;O=Warcop;L=Atlanta;ST=GA;C=US
6       SERIALNUMBER    16      4C:F5:F5:88:A1:DB:CB:58:BC:2A:8C:4F:6C:6B:3C:8E
7       PUBLICKEY       140
8       SIGNATURE       128
9       CERTIFICATE     666     D2 BF B8 54 05 0D 62 30 F6 BA 12 98 E0 37 7C 08 E5 CB 64 70 (SHA1 Hash HEX)
10      IPADDRESS       4

CTL Record #:4
—-
BYTEPOS TAG             LENGTH  VALUE
——- —             ——  —–
1       RECORDLENGTH    2       1160
2       DNSNAME         6       cucm1
3       SUBJECTNAME     68      CN=ITLRECOVERY_cucm1.warcop.net;OU=IT;O=Warcop;L=Atlanta;ST=GA;C=US
4       FUNCTION        2       System Administrator Security Token
5       ISSUERNAME      68      CN=ITLRECOVERY_cucm1.warcop.net;OU=IT;O=Warcop;L=Atlanta;ST=GA;C=US
6       SERIALNUMBER    16      69:62:6A:28:E3:76:37:C7:23:75:C2:D9:80:3C:D7:82
7       PUBLICKEY       140
8       SIGNATURE       128
9       CERTIFICATE     696     43 BA 4D 06 24 8D 5A FA 5D FB 6C 28 7A 7F 50 33 BB 72 D8 D2 (SHA1 Hash HEX)
10      IPADDRESS       4
This etoken was not used to sign the CTL file.

The CTL file was verified successfully.
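If you want to double-check the file outside of the CLI, the checksums at the top of that output can be reproduced locally. A minimal sketch in Python, assuming you've already pulled the CTL file from the publisher's TFTP service (the filename and path are placeholders):

import hashlib

# Placeholder: the CTL file fetched with a TFTP client from the publisher
CTL_PATH = "CTLFile.tlv"

with open(CTL_PATH, "rb") as f:
    data = f.read()

print("MD5 :", hashlib.md5(data).hexdigest())    # compare with the MD5 line above
print("SHA1:", hashlib.sha1(data).hexdigest())   # compare with the SHA1 line above
print("Length:", len(data), "bytes")             # compare with "Length of CTL file"

If the local hashes line up with what "show ctl" reports, you should be looking at the same file the phones are downloading.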

Cisco Collaboration Certificates and Security

By Warcop

A lot has changed with CUCM certificate management and requirements. I think it’s only starting to sink in that CUCM is an application and should be treated like any other application that requires certificates. Continuous development has extended to on-prem apps. 

PKI isn’t new and certificates are not new. We’re not talking about bleeding edge technology and the issues that I’m seeing are related to misunderstanding fundamental technologies. Subtract CUCM from the equation and SSL/TLS/PKI principles apply regardless of the product underneath. 

Just because we as voice engineers haven’t “done it this way” before doesn’t mean we should “keep doing it this way”.

I usually start with an assessment of the client and their ability to manage internal PKI. If my assessment is that they haven't built their PKI correctly or don't understand the concepts, I immediately divert to a managed PKI discussion. For example, a single SHA1 root CA that also acts as the issuing CA is a thumbs down. If they're also lacking MDM or ISE, it's pretty evident that adding a single management CA isn't going to accomplish the end goal.

What is the end goal? Easy: nobody clicks through a certificate warning, regardless of device. If you as an administrator are clicking through certificate warnings, you should seriously consider fixing that.

Do PKI right or let the professionals do it for you. 

You can save yourself a lot of time, effort, and management headache if you’ll use 3rd party verified certificates internally and externally. The reality is you’re going to spend a fraction of the cost of internal PKI management. 

This also means your servers should operate in a domain that is resolvable from the Internet. The service domain and server fields of the collaboration applications should be FQDNs within a domain you own. This is a strong recommendation from Cisco and has been a personal recommendation for years. Microsoft has been advocating the same thing forever, but it's been largely ignored.

Let's keep it real and stop using IP addresses to define connection points in and out of applications. As network engineers we've all complained when developers hard-code IP addresses, yet as voice engineers we're doing the same thing. Use fully qualified names and SRV records to connect services together. TLS relies on the naming context of each service connection point, and IP addresses in certificates are not acceptable.
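As a quick sanity check on the SRV side of this, you can query the records Jabber uses for service discovery directly. A rough sketch using Python and the dnspython package; the domain below is a placeholder for whatever domain your clients sign in with:

import dns.resolver   # pip install dnspython

DOMAIN = "example.com"   # placeholder sign-in domain

for name in ("_cisco-uds._tcp." + DOMAIN, "_collab-edge._tls." + DOMAIN):
    try:
        answers = dns.resolver.resolve(name, "SRV")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(name, "-> not found")
        continue
    for record in answers:
        # The SRV targets should be FQDNs that match the names on the server certificates.
        print(name, "->", record.target, "port", record.port)

If the targets come back as short hostnames or IP addresses instead of FQDNs that match your certificates, that's the first thing to fix.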

It's also not reasonable to accept wide use of wildcard certificates. What wildcards are intended for and what they're being used for these days are two drastically different things. If your issuing authority will re-key a wildcard into a single-issue SAN certificate, that's a step in a better direction, but I'm not a fan of that either. You're putting all your eggs in one basket with that wildcard.

My final soapbox is secure clusters. A large majority of CUCM clusters are non-secure, meaning they use unencrypted signaling and media. Yes, token management hasn't always been the easiest, but this should be a default configuration change moving forward. Version 11 of CUCM has introduced a lot of enhancements in this area. It's still PKI we're talking about, just a different key ring. SIP signaling and media encryption inside the network should be just as important as outside the network. More and more we're doing Ethernet handoffs to carriers for the WAN, and in reality your voice communications are exposed on those connections unless you're also running an encrypted WAN. How many of us are running an encrypted WAN?

Encryption for telecommunications should be a high priority for all enterprises and engineers responsible for telecommunications. It’s been a high exposure area for quite a long time and the conversation needs to shift. We’re not behind the firewall anymore. 


Cisco UCM 11 and LDAP Group Filtering

By Warcop

Introduced in UCM version 11 is the ability to synchronize groups from Active Directory. The primary driver is to have Active Directory groups available in the Cisco Jabber contact list.

One problem this brings up is that if you're synchronizing from the base DN, you'll import all of the security groups as well. So you'll need an effective filter to get only the groups you want. Generally speaking, distribution groups are an ideal target for what you want represented in UCM and Jabber. The more granular you get, the more administration you'll need in Active Directory.

I'll first outline the LDAP filters that look for security groups and distinguish the different group types. Each group type has a bit value, and the filters match on the combined bitwise value.

All Security Groups with a type of Global
(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483650))

All Security Groups with a type of Domain Local
(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483652))

All Security Groups with a type of Universal
(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483656))

Values for the different group types:
Global = 2
Domain Local = 4
Universal = 8
Security Group = 2147483648
Distribution Group = no value

Using the above information we can then build LDAP filters to import only distribution groups. Since a distribution group doesn't set the security bit, you have to add a NOT operator to the query.

All Global Distribution Groups
(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2)(!(groupType:1.2.840.113556.1.4.803:=2147483648)))

All Domain Local Distribution Groups
(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=4)(!(groupType:1.2.840.113556.1.4.803:=2147483648)))

All Universal Distribution Groups
(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=8)(!(groupType:1.2.840.113556.1.4.803:=2147483648)))
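If the bitwise filters look like magic, the math behind them is simple: the matching rule OID 1.2.840.113556.1.4.803 is a bitwise AND, and the values being tested are just the flags listed above OR'd together. A small Python sketch of the same logic:

# groupType flags from the list above
GLOBAL, DOMAIN_LOCAL, UNIVERSAL = 0x2, 0x4, 0x8
SECURITY = 0x80000000   # 2147483648; distribution groups never set this bit

def describe(group_type: int) -> str:
    scope = {GLOBAL: "Global", DOMAIN_LOCAL: "Domain Local", UNIVERSAL: "Universal"}
    kind = "Security" if group_type & SECURITY else "Distribution"
    name = next((label for bit, label in scope.items() if group_type & bit), "Unknown")
    return name + " " + kind + " group"

print(describe(2147483650))   # SECURITY | GLOBAL -> "Global Security group"
print(describe(8))            # UNIVERSAL only    -> "Universal Distribution group"

(Active Directory stores groupType as a signed 32-bit value, so security groups can display as negative numbers in some tools; the filter values above treat it as unsigned.)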

Now what if you want both distribution and security groups, but only certain ones? Depending on how much additional administration you want in Active Directory, you can pick a custom attribute. All you have to do is find a custom attribute that isn't in use and populate it. Exchange 2010 SP2 and higher introduces five new multivalued attributes, but in this example we're still using a standard custom attribute.

I recommend running some queries to determine which custom attributes are currently in use and then picking the next available one. You can use whatever value you like, as long as it's descriptive enough to indicate why it's being used. In the examples below I've used "CiscoUCM" as the value to indicate the system that's querying on it.

All Groups with Custom Attribute 1 – (Note the LDAP property is named extensionAttribute1)
(&(objectCategory=group)(extensionAttribute1=CiscoUCM))

Only Universal Distribution Groups with Custom Attribute 1
(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=8)(extensionAttribute1=CiscoUCM)(!(groupType:1.2.840.113556.1.4.803:=2147483648)))
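Before pointing the UCM LDAP directory configuration at a filter like that last one, it's worth testing it directly against AD. A rough sketch using Python and the ldap3 package; the domain controller, bind account, and base DN are all placeholders:

from ldap3 import Server, Connection, SUBTREE

FILTER = ("(&(objectCategory=group)"
          "(groupType:1.2.840.113556.1.4.803:=8)"
          "(extensionAttribute1=CiscoUCM)"
          "(!(groupType:1.2.840.113556.1.4.803:=2147483648)))")

server = Server("dc01.warcop.net")                      # placeholder domain controller
conn = Connection(server, user="WARCOP\\svc-ldap",      # placeholder bind account
                  password="********", auto_bind=True)

conn.search(search_base="DC=warcop,DC=net",             # placeholder base DN
            search_filter=FILTER,
            search_scope=SUBTREE,
            attributes=["cn", "groupType"])

for entry in conn.entries:
    print(entry.cn, entry.groupType)

If the result set is exactly the groups you tagged with the custom attribute, the same filter should be safe to drop into the UCM LDAP directory agreement.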

I personally prefer the most granular approach, based on universal distribution groups and a custom attribute. This way the control lives with the source information, and the synchronized Cisco UCM/IM&P data is kept to a minimum.

Happy filtering!

Cisco Jabber 11 Links and Videos

By Warcop

Cisco Jabber version 11 is significant in many ways and it's difficult to outline all of the details in a single post. First and foremost, Cisco is continuing to show that cross-platform feature parity, mobile-first, and secure-first are leading parts of the strategy.

Enterprise groups, persistent chat, and WebEx CMR escalation are three features significant enough to call out. There are some future integrations that will be better shown than blogged about.

There is a lot of work happening back at Cisco and I’m sure the Jabber team backlog is growing. In the meantime I highly recommend taking a look at the videos below to become a little more familiar with Cisco Jabber 11.

Check out these new Jabber 11 Videos

The release notes detailing the new features:

[Image: Jabber_Blue]

SSL Errors and WordPress custom domains

By Warcop

I've recently been informed that, since I'm using a custom domain, some browsers are giving an SSL warning when trying to visit my blog. I'm not 100% sure WordPress is able to fix this, because it would imply allowing per-site private keys and certificates, and most likely they don't support that.

In the meantime I'm NOT going to update my links to point to the HTTP URL. If your browser is warning about the site, I recommend you try it without HTTPS.

I'll look into this issue further, and if necessary I'll drop the custom domain and head back to a wordpress.com-named blog to eliminate the domain validation SSL errors. SSL is more important than the name associated with the site.

Thanks! (I’m still recovering from Cisco Live so there hasn’t been a wrap-up post. I’ve also been very busy at work.)

Cisco UC and VMware Latency Sensitivity

By Warcop

With VMware 5.5 showing up everywhere, including Cisco collaboration-specific hosts, I figured it was time to look into the latency sensitivity setting. Please don't run off and turn this on without a proper understanding of what you're changing on your UC cluster, especially if you're not following co-residency guidelines. (That was a disclaimer, and why wouldn't you follow co-res guidelines anyway?)

When scoping or designing a UC cluster that involves Unity Connection, there has always been an extra core reserved for the VMware scheduler. With VMware 5.5 and the ability to specify latency sensitivity, we no longer need to leave a core sitting idle.

This matters because of core oversubscription and the co-residency guidelines published by Cisco. For example, if your host has 12 cores, you can now safely lay out the VM reservations across all 12 cores instead of 11. This only applies if you have an active Unity Connection VM on the host. All other VMs on the host must have latency sensitivity set to "normal"; only the Unity Connection VMs get set to "high".

So now you're wondering, how do I cut it on? First, it has to be changed while the machine is powered off, and if you have the vSphere Web Client it's as simple as "VM Options | Advanced settings | Edit | Latency Sensitivity = High".

[Screenshot: vsphere_web_latency_sens – Latency Sensitivity setting in the vSphere Web Client]

OK – what's this yellow exclamation point and what is about to happen? The warning is letting you know this should only be done on a VM that has a CPU reservation. If you used the OVA to deploy Unity Connection and haven't changed the vCPU counts or reservations, you're in the clear to proceed.

Since most UC clusters and collaboration-specific ESXi hosts aren't managed by vCenter, you need a way to turn on latency sensitivity without the vSphere Web Client. The steps below do just that, and I've noted the resulting .vmx entry after them.

  • Using the vSphere client edit the settings of the Unity Connection VM.
  • Click Options | Advanced-General | Configuration
  • First scan to see if “sched.cpu.latencySensitivity” is in the configuration parameters.
  • If the setting does not exist click “Add Row”
    • Name = sched.cpu.latencySensitivity
    • Value = high
  • Click OK | OK
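Whichever method you use, the end result should be the same entry in the VM's configuration. This is roughly what I'd expect to see in the .vmx afterwards (the path is a placeholder):

# <datastore>/unity-connection-vm/unity-connection-vm.vmx  (placeholder path)
sched.cpu.latencySensitivity = "high"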

[Screenshot: vsphere_gui_latency_sens – sched.cpu.latencySensitivity configuration parameter in the vSphere client]

Don’t forget to power it back on. 🙂

Cisco Jabber Certificate Warning Again?

By Warcop

There has been a flurry of activity lately when it comes to Jabber and certificate warnings. Jabber adoption is growing, and the client is being exposed to a lot of different installations of Communications Manager and IM & Presence.

Not only do we have the adoption rate skyrocketing, but we're also seeing the development cycle becoming extremely fast. In the last two months I believe I've counted four minor releases on just one train of Jabber. This presents several challenges between the development and the deployment of Cisco Jabber: the technical information, deployment recommendations, and caveats aren't being consumed fast enough in the field to understand the nuances of getting it to work. All of this is a great effort by Cisco, but I do believe there could be a better way of getting technical notes out to the field.

Certificate warnings just aren't acceptable. If your workaround is telling someone to 'Click Accept', then you're doing something wrong. We're dealing with a client application and parts of the CUCM system that in the past rarely needed to be touched. If Communications Manager and Presence were installed for telephony only, there's likely a lot still missing before these certificate warnings will go away.

Here is the latest information straight from Cisco regarding the certificates you need to adjust depending on the products you have deployed.

Required Certificates for On-Premises Servers – on-premises servers present the following certificates to establish a secure connection with Cisco Jabber:

  • Cisco Unified Presence / Cisco Unified Communications Manager IM and Presence Service – HTTP (Tomcat) and XMPP (cup, cup-xmpp, cup-xmpp-s2s, tomcat)
  • Cisco Unified Communications Manager – HTTP (Tomcat) and CallManager certificate for secure SIP call signaling and secure phone & CTI connection validation (callmanager, tomcat)
  • Cisco Unity Connection – HTTP (Tomcat)
  • Cisco WebEx Meetings Server – HTTP (Tomcat)
  • Cisco VCS Expressway / Cisco Expressway-E – server certificate (used for HTTP, XMPP, and SIP call signaling)

So let's break this list down into something a little more understandable. On Communications Manager this involves uploading your CA certificates to callmanager-trust and tomcat-trust. Once those are uploaded, you generate a CSR for callmanager.pem and tomcat.pem and get them signed. Even though you can sign all your CUCM nodes with one SAN certificate, there are several caveats with that approach, and my personal recommendation is to continue signing each CUCM node with its own certificate.

On IM & Presence you need to upload valid CA certificates to cup-trust, cup-xmpp-trust, and tomcat-trust. Once that is complete, you generate the CSRs and sign cup, cup-xmpp, cup-xmpp-s2s, and tomcat.

On Unity Connection you’ll only need to upload and sign the tomcat certificate.

On Expressway-CORE you'll need to upload the valid CA certificate and generate a server certificate. On Expressway-EDGE you'll 100% need a publicly signed server certificate. If you're not going to use Expressway for B2B communications and every device using Jabber MRA has managed certificates, you MIGHT get away with a private CA-signed certificate, but I still wouldn't recommend it.

These certificates need to be x509v3 compliant with server authentication, client authentication, and IPsec usages. If you're looking for the exact settings needed, I have the certificate template extensions below. All of the certificates can be signed by a Microsoft CA with an x509v3-compliant template.

Version: V3
SignatureAlgorithm: SHA1withRSA (1.2.840.113549.1.1.5)

Extension: ExtKeyUsageSyntax (OID.2.5.29.37)
Critical: false
Usage oids: 1.3.6.1.5.5.7.3.1, 1.3.6.1.5.5.7.3.5, 1.3.6.1.5.5.7.3.2,

Extension: KeyUsage (OID.2.5.29.15)
Critical: false
Usages: digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment

The callmanager certificate also lists keyAgreement, but that usage is mutually exclusive with keyEncipherment per RFC 3280. The keyCertSign usage is also not needed because this isn't a self-signed certificate. Your callmanager certificate is perfectly valid without these two key usages. My x509v3 template typically includes nonRepudiation so the same template can be used for VMware servers.

A final reminder is if you truly want to support all client types without doing managed PKI go ahead and save yourself a lot of headache and get public CA signed certificates for all of this.
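One way to confirm what a node is actually presenting is to pull the certificate off the wire and look at the names and usages directly. Here's a minimal sketch using Python and the cryptography package; the hostname is a placeholder, and 8443 is the Tomcat HTTPS port:

import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

HOST, PORT = "cucm-node1.domain.com", 8443   # placeholder node name

pem = ssl.get_server_certificate((HOST, PORT))        # fetch the presented certificate
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject:", cert.subject.rfc4514_string())
try:
    san = cert.extensions.get_extension_for_oid(ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
    print("SANs:", san.get_values_for_type(x509.DNSName))
except x509.ExtensionNotFound:
    print("SANs: none")
try:
    eku = cert.extensions.get_extension_for_oid(ExtensionOID.EXTENDED_KEY_USAGE).value
    print("EKUs:", [oid.dotted_string for oid in eku])
except x509.ExtensionNotFound:
    print("EKUs: none")

If the subject or SAN list doesn't contain the FQDN your clients connect to, that's your warning right there.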

So you’ve done all of that and you still get a certificate warning?

There is a good possibility you have the certificates correct but you're still getting these prompts. Let's understand this issue a little deeper before I get to the long term fix. SSL certificate validation is based on the client and server agreeing on the subject name or SAN name. For example, if you have a valid and trusted certificate on a web server and you browse to https://10.1.1.0, you're going to get a certificate warning from your browser. This is because you told the browser you wanted a secure connection to 10.1.1.0, but the server sends back a certificate with the name webserver.domain.com.

The same thing is happening underneath Jabber between CUCM, CUPS, and Expressway. When Jabber connects to CUCM via UDS or CTI, the server may be responding with an IP address even though you've requested the FQDN. SO WAIT – isn't that the reverse of what you just said? Absolutely it is, but the same principle applies. The CUCM and CUPS "System > Server" setting is giving you this problem. So while Jabber tries to connect to CUCM via cucm-node1.domain.com, the UDS/CTI response points at https://10.1.1.20, which is the IP address of the node.
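To make that mismatch concrete, here's a small sketch of the same check the client is performing, using Python's ssl module. The FQDN and IP are the hypothetical ones from the example above, and it assumes the signing CA is already in the local trust store:

import socket, ssl

def check(server_name: str, port: int = 8443) -> None:
    ctx = ssl.create_default_context()   # verifies the chain and the hostname
    try:
        with socket.create_connection((server_name, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=server_name):
                print(server_name, "-> certificate name matches")
    except ssl.SSLCertVerificationError as err:
        print(server_name, "->", err.verify_message)

check("cucm-node1.domain.com")   # matches the CN/SAN on the certificate
check("10.1.1.20")               # the cert only carries the FQDN, so this fails

Jabber is effectively doing the equivalent of that second call whenever a service response hands it an IP address instead of the FQDN.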

So now what – I can't just go and change that "System > Server" value without breaking the whole cluster! This is where some additional pain points come into play. The "System > Server" value controls quite a few communications between cluster nodes and endpoints, and it's essentially written into the phone configuration files that get downloaded. Over the years the Cisco SRND and deployment guides have flip-flopped on the recommendation here, and there have been several bugs along the way with both hostname and FQDN. Now we're at a point where FQDN is the standard, and any remaining bugs will need to be fixed to support FQDN. My first recommendation is to make sure you're on the latest versions of CUCM and CUPS; that's super important if you're serious about the Jabber deployment.

Some have opted to put the hostname or IP address from the "System > Server" values into the SAN fields of the certificates uploaded to the servers. While this sounds like an OK workaround, that particular SSL certificate is still invalid per the CA/Browser Forum. It may "work", but at this point it's more of a hack. You can read the full details of the CA/Browser Forum change at https://www.digicert.com/internal-names.htm

I know this post got a little long, but it's really important stuff for getting your deployment up to the new standards, using valid certificates, and keeping certificate warning prompts away from your users.

Thanks and feel free to comment or reach me on Twitter!
