Low-Cost Hardware Hunter - Citrix Netscaler

Citrix have End-of-Lifed a large selection of their 'NetScaler' load balancer products, so they are appearing on the clearance/auction markets at very good prices. As the devices require a very expensive license to operate as a 'NetScaler' appliance, they are typically being sold for scrap because the merchants cannot assure the operability of the device as originally intended. I've seen them typically selling for $100 to $150 (excluding shipping, fees, etc.).

The good news for home-lab enthusiasts is that under the covers they are just Supermicro SuperServer products with some custom branding. Even better news is that there is absolutely no lock-down on the hardware - you can just flash OpenWRT onto the drive or a USB stick and it boots up first time.

The products in question are the NetScaler MPX 8000 series. Due to Citrix's 'pay as you grow' licensing program, ALL of the hardware products are full-spec and were only performance-limited by software restrictions, meaning no matter which 8000 series device you get, you will receive:

  • 1U Server Chassis with front-facing network and console ports
  • Hot-swap dual power supply (usually only provided with 1 PSU installed, unless you're very lucky)
  • Intel Xeon E3 Series Processor (typically E3-1275 v2)
  • 32GB of ECC DDR3 RAM
  • Dual rear-mounted 2.5" SATA Drive Bays
  • 256GB SSD (sometimes sellers remove these for confidentiality)
  • Supermicro Remote Management Controller (remote network console access) via the front-facing 'LOM' port (see the quick IPMI example below).
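
If you haven't used a Supermicro LOM before: it's standard IPMI, so once the LOM port has an IP you can reach the console and power controls from any machine on the network with ipmitool. A rough sketch only - the IP is a placeholder and ADMIN/ADMIN is just the usual Supermicro factory default, so yours may well differ:

# Serial-over-LAN console via the front 'LOM' port (standard Supermicro IPMI)
# 192.168.1.50 is a placeholder; ADMIN/ADMIN is only the usual factory default
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sol activate

# power control and sensor readings work the same way
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sensor list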

The only major model difference comes with the network ports: they are specced with either just 6x 1GbE ports, OR 6x 1GbE ports AND 2x 10GbE SFP+ ports. Both network cards are Intel igb and ixgbe hardware, long supported by OpenWRT.

There is also normally a Cavium Nitrox SSL accelerator card, but these are mostly useless as they need custom drivers and don't support the latest SSL standards (rip it out!).

The chassis also features an integrated LCD display, which I'm pleased to report is just a completely standard OEM Matrix Orbital LK162-12 RS232 LCD, which can be easily controlled using serial commands for custom use - it even has a 5-button keypad which simply sends back ASCII characters on the serial bus, making it easy to capture and integrate into scripts (a quick read sketch follows the display script below).

Simple script to display data on the LCD

#!/bin/sh

dev="/dev/ttyS1"

stty -F "$dev" 19200                    # set the serial port speed to match the LCD
echo -ne '\xFE\x42\x00' > "$dev"        # turn on the backlight
echo -ne '\xFE\x58' > "$dev"            # clear the screen
echo -ne '\xFE\x48' > "$dev"            # home the cursor
echo -ne 'OpenWRT 23.05.2' > "$dev"     # print first line
echo -ne '\xFE\x47\x01\x02' > "$dev"    # move cursor to column 1, row 2
echo -ne "$(ifstatus lan | jsonfilter -e '@["ipv4-address"][0].address')" > "$dev"   # print the LAN IP on the second line
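
And the keypad side, as mentioned above - a minimal sketch (assuming the same /dev/ttyS1 port and 19200 baud; the exact byte each button sends can vary, so dump them first and build your own mapping from that):

#!/bin/sh

dev="/dev/ttyS1"

stty -F "$dev" 19200 raw -echo          # raw mode so each key press arrives as a single byte

while true; do
    dd if="$dev" bs=1 count=1 2>/dev/null | hexdump -C   # print the raw byte for each button press
done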

The fan controller is very kind for home use, throttling back to nearly inaudible levels when under low load, which is rare for server-grade products.

The only downside is there is just one externally accessible USB2 port; however, there is a spare USB3 header on the board which is active, and 2 USB3 ports can be added for just a few $$.
Main Board and PCIe Cards


Network Ports (front facing)

SATA Hot-swap bays (only one is wired, but there are plenty of spare SATA headers on the board)


The 1/1 to 1/5 labelling on the GE ports might suggest that there is an internal switch chip in use, and that there might be only one GE link to the mainboard for those ports. The sixth port is perhaps connected to a second Ethernet controller.

The CPU is a 4C8T 3.5 GHz part (3.9 GHz turbo) from 2012.

A bootlog from a recent Linux kernel would be interesting.

I don't believe this is the case. The network ports are on two discrete PCIe cards, each x8. It's more to do with the way the ports were named for the NetScaler software; there are 3 controllers and the ports were numbered as follows:
0/1 - onboard controller
1/1 - 1/6 - Intel igb controller
10/1 - 10/2 - Intel ixgbe controller

When booted in OpenWRT the ports are simply numbered eth0 to eth9.
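
If you want to check the mapping on your own unit, a quick sysfs walk (nothing NetScaler-specific, just standard Linux) prints each interface with its driver and PCI address:

for iface in /sys/class/net/eth*; do
    name=$(basename "$iface")
    drv=$(basename "$(readlink "$iface/device/driver")")   # igb, ixgbe or e1000e
    pci=$(basename "$(readlink -f "$iface/device")")       # PCI bus address, e.g. 0000:05:00.0
    echo "$name  driver=$drv  pci=$pci"
done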

I'll drop a bootlog here at some point today.

Here we go

Onboard is an Intel e1000e (there are two ports but one is hijacked by the LOM module)

[   14.081045] e1000e: Intel(R) PRO/1000 Network Driver
[   14.086095] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[   14.092209] e1000e 0000:11:00.0: Disabling ASPM L0s L1
[   14.097686] e1000e 0000:11:00.0: Interrupt Throttling Rate (ints/sec) set to simplified (2000-8000 ints) mode
[   14.107723] e1000e 0000:11:00.0: Interrupt Mode set to 1
[   14.174197] e1000e 0000:11:00.0 0000:11:00.0 (uninitialized): registered PHC clock
[   14.245613] e1000e 0000:11:00.0 eth8: (PCI Express:2.5GT/s:Width x1) ac:1f:6b:e6:de:74
[   14.253640] e1000e 0000:11:00.0 eth8: Intel(R) PRO/1000 Network Connection
[   14.260717] e1000e 0000:11:00.0 eth8: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
[   14.267598] e1000e 0000:12:00.0: Disabling ASPM L0s L1
[   14.273053] e1000e 0000:12:00.0: Interrupt Throttling Rate (ints/sec) set to simplified (2000-8000 ints) mode
[   14.347448] e1000e 0000:12:00.0 0000:12:00.0 (uninitialized): registered PHC clock
[   14.416741] e1000e 0000:12:00.0 eth9: (PCI Express:2.5GT/s:Width x1) ac:1f:6b:e6:de:75
[   14.424774] e1000e 0000:12:00.0 eth9: Intel(R) PRO/1000 Network Connection
[   14.431849] e1000e 0000:12:00.0 eth9: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF

6x 1GbE Ports are Intel igb

[    7.531131] igb: Intel(R) Gigabit Ethernet Network Driver
[    7.536666] igb: Copyright (c) 2007-2014 Intel Corporation.
[    7.597993] igb 0000:05:00.0: added PHC on eth0
[    7.602655] igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
[    7.609629] igb 0000:05:00.0: eth0: (PCIe:5.0Gb/s:Width x4) 00:e0:ed:8e:c6:9d
[    7.616969] igb 0000:05:00.0: eth0: PBA No: 104900-000
[    7.622205] igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[    7.685410] igb 0000:05:00.1: added PHC on eth1
[    7.690080] igb 0000:05:00.1: Intel(R) Gigabit Ethernet Network Connection
[    7.702562] igb 0000:05:00.1: eth1: (PCIe:5.0Gb/s:Width x4) 00:e0:ed:8e:c6:9e
[    7.709893] igb 0000:05:00.1: eth1: PBA No: 104900-000
[    7.715152] igb 0000:05:00.1: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[    7.777869] igb 0000:05:00.2: added PHC on eth2
[    7.782543] igb 0000:05:00.2: Intel(R) Gigabit Ethernet Network Connection
[    7.789527] igb 0000:05:00.2: eth2: (PCIe:5.0Gb/s:Width x4) 00:e0:ed:8e:c6:9f
[    7.796847] igb 0000:05:00.2: eth2: PBA No: 104900-000
[    7.802101] igb 0000:05:00.2: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[    7.865344] igb 0000:05:00.3: added PHC on eth3
[    7.869988] igb 0000:05:00.3: Intel(R) Gigabit Ethernet Network Connection
[    7.876991] igb 0000:05:00.3: eth3: (PCIe:5.0Gb/s:Width x4) 00:e0:ed:8e:c6:a0
[    7.884339] igb 0000:05:00.3: eth3: PBA No: 104900-000
[    7.889602] igb 0000:05:00.3: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[    7.954013] igb 0000:07:00.0: added PHC on eth4
[    7.958660] igb 0000:07:00.0: Intel(R) Gigabit Ethernet Network Connection
[    7.965663] igb 0000:07:00.0: eth4: (PCIe:5.0Gb/s:Width x4) 00:e0:ed:8e:c6:a1
[    7.972995] igb 0000:07:00.0: eth4: PBA No: 104900-000
[    7.978240] igb 0000:07:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[    8.039886] igb 0000:07:00.1: added PHC on eth5
[    8.044537] igb 0000:07:00.1: Intel(R) Gigabit Ethernet Network Connection
[    8.051543] igb 0000:07:00.1: eth5: (PCIe:5.0Gb/s:Width x4) 00:e0:ed:8e:c6:a2
[    8.058833] igb 0000:07:00.1: eth5: PBA No: 104900-000
[    8.064078] igb 0000:07:00.1: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)

10GbE Ports are Intel ixgbe

[   11.280784] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[   11.286982] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[   12.595110] ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 4, Tx Queue count = 4 XDP Queue count = 0
[   12.605514] ixgbe 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[   12.614162] ixgbe 0000:01:00.0: MAC: 2, PHY: 1, PBA No: FFFFFF-0FF
[   12.620463] ixgbe 0000:01:00.0: 00:e0:ed:8e:71:b9
[   12.626246] ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
[   13.935108] ixgbe 0000:01:00.1: Multiqueue Enabled: Rx Queue count = 4, Tx Queue count = 4 XDP Queue count = 0
[   13.945554] ixgbe 0000:01:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[   13.954214] ixgbe 0000:01:00.1: MAC: 2, PHY: 1, PBA No: FFFFFF-0FF
[   13.960504] ixgbe 0000:01:00.1: 00:e0:ed:8e:71:ba
[   13.966244] ixgbe 0000:01:00.1: Intel(R) 10 Gigabit Network Connection

Full dmesg


Interesting.
However, I can only wonder about the power consumption of all these server products...

Do the 10GbE ports support 2.5GbE and 5GbE?

While that's something only supersebbo can tell, I wouldn't count on that. 2.5GBASE-T and 5GBASE-T are rather new standards, invented because the older 10GBASE-T remains expensive and power hungry (heat dissipation), as well as having much stricter requirements on the cables (while 2.5GBASE-T gets pretty far with old cat-5). Given the age (roughly a decade) of these systems and their focus on enterprise networks, I wouldn't assume that they'd support 2.5GBASE-T, only 10GBASE-T and 1000BASE-T (and 100BASE-T, probably 10 MBit/s as well). But that's only guessing on my part.


That will be entirely down to what the Intel ixgbe driver supports; the 10GbE ports are just on a PCIe card, which I believe is just an X520-DA2 variant. It's all mainstream hardware inside (Supermicro, Intel, Cavium) - the only component which actually has a Citrix part number on it is the Citrix logo LED!

This post (despite the title) seems to confirm that later versions of the ixgbe Linux driver DO support 2.5Gbit and 5Gbit connections. Obviously you'd also have to find an SFP+ module compatible with 2.5/5Gbit connections. Just for info, officially Citrix only supported fibre (LR/SR) connections on the 10Gbit ports, but this was likely only for heat dissipation reasons. Any SFP compatible with the Intel boards should work.
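
If anyone wants to test on their own box, ethtool shows what the card/driver combination actually advertises. It isn't in the default OpenWRT image, and I'm assuming here that the SFP+ ports come up as eth6/eth7 - check your own dmesg:

opkg update && opkg install ethtool             # ethtool isn't in the default image

ethtool eth6 | grep -A5 'Supported link modes'  # lists the speeds the first SFP+ port advertises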


It may also depend on the actual card used here; just saying that I wouldn't assume that it works (given the age and enterprise target audience) - good if it does, but that's not guaranteed (and compatibility with 2.5/5 GBit/s SFP+ modules is hit or miss anyway, even in contemporary SFP+ cages).

I have two of them; they work flawlessly with pfSense and ESXi. The 10gig NICs are similar to the Intel X520, so 1 or 10 gig speeds. Mine have a Samsung 840 Pro 250GB SSD and dual PSUs. I haven't tried OpenWRT but I'm sure it will work nicely.
