Updating the Testbed - External Infrastructure

Between the setup of the original testbed and the beginning of the update process, some NAS vendors also approached us to evaluate rackmount units with 10 GbE capability. This meant that the ZyXEL GS2200-24 switch that we had been using in our testbed would no longer pass muster. Netgear graciously accepted our request to participate in the testbed upgrade process by providing the ProSafe GSM7352S-200, a 48-port Gigabit L3 managed switch with built-in 10 GbE.

In the first version of the testbed, we had let the ZyXEL GS2200-24 act as a DHCP relay and configured the main router (Buffalo AirStation WZR-D1800H) to provide DHCP addresses to all the NAS units, machines, and VMs connected to the switch. In essence, it was a live network, and the VMs and the NAS under test could also access the Internet. With the GSM7352S, we decided to isolate the NAS testbed completely.

The first port of the Netgear ProSafe GSM7352S is connected to the ZyXEL switch and serves as the management port. The switch acts as a DHCP client and obtains a management IP address from the Buffalo router. We configured ports 1 through 12 to remain part of the default VLAN; clients connected to these ports obtain their IP addresses (of the form 192.168.1.x) via relay from the main router. Ports 13 through 50 were made members of a second VLAN, and a DHCP server issuing addresses of the form 192.168.2.x was associated with this VLAN. No routes were set up between the 192.168.1.x and 192.168.2.x subnets.
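The key property of this VLAN split is that the live and test subnets are disjoint with no route between them. As a quick sanity check (the specific addresses below are illustrative, not the testbed's actual assignments), Python's `ipaddress` module can confirm that the two /24 networks never overlap and that an address in the test VLAN is not part of the live subnet:

```python
import ipaddress

# Hypothetical recreation of the two subnets described above.
live_net = ipaddress.ip_network("192.168.1.0/24")      # default VLAN, ports 1-12, relayed from main router
isolated_net = ipaddress.ip_network("192.168.2.0/24")  # second VLAN, ports 13-50, switch's own DHCP server

# Example address a NAS under test might receive from the VLAN's DHCP server.
nas_ip = ipaddress.ip_address("192.168.2.50")

print(nas_ip in isolated_net)            # True: the NAS sits in the test VLAN's subnet
print(nas_ip in live_net)                # False: it has no presence on the live subnet
print(live_net.overlaps(isolated_net))   # False: the two /24 ranges are fully disjoint
```

With no inter-VLAN routes configured on the L3 switch, the disjoint address ranges mean machines on the live network have no path to the NAS or the test VMs.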

The GbE port associated with the host OS of our testbed workstation was connected to Port 2 of the ProSafe GSM7352S. Therefore, we were able to log into the workstation via Remote Desktop from our main network. The NAS under test was connected to ports 47 and 48, which were then set up for aggregation via the switch's web UI. In the case of NAS units with 10 GbE ports, the plan is to connect them to ports 49 and 50 and aggregate them in a similar way.

All the VMs and the NAS itself are on the same subnet and can talk to each other while remaining isolated from the external network. Since each VM is also connected to the host OS's internal management network (every VM has an address on the 10.0.0.x internal subnet in addition to its 192.168.2.x address on the switch), we were able to run all the benchmarks within the isolated network from the Remote Desktop session on the host OS.
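Because each VM is dual-homed, management traffic and benchmark traffic separate naturally by destination subnet. The sketch below (with hypothetical VM addresses, not the testbed's real ones) illustrates how the source address for a given destination falls out of simple subnet membership:

```python
import ipaddress

MGMT_NET = ipaddress.ip_network("10.0.0.0/24")       # internal host-only management network
BENCH_NET = ipaddress.ip_network("192.168.2.0/24")   # isolated benchmark VLAN on the switch

# Hypothetical addresses for one VM's two virtual NICs.
vm_addrs = [ipaddress.ip_address("10.0.0.7"), ipaddress.ip_address("192.168.2.107")]

def source_for(dest, addrs):
    """Pick the VM address that shares a subnet with the destination."""
    dest_ip = ipaddress.ip_address(dest)
    for net in (MGMT_NET, BENCH_NET):
        if dest_ip in net:
            for addr in addrs:
                if addr in net:
                    return addr
    return None  # destination is outside both subnets: unreachable by design

print(source_for("192.168.2.50", vm_addrs))  # 192.168.2.107: NAS traffic uses the switch-facing NIC
print(source_for("10.0.0.1", vm_addrs))      # 10.0.0.7: management traffic stays on the internal network
```

Any destination outside both subnets returns `None`, mirroring the testbed's deliberate lack of a route out of the isolated network.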



Comments

  • Andrew911tt - Thursday, November 29, 2012 - link

    From what I understand, the OCZ RevoDrive Hybrid is being used just as a PCIe-to-SATA converter. Is that correct?

    I understand the changes that you made to the external network setup, but my question is: why did you make this change?
  • ganeshts - Thursday, November 29, 2012 - link

    1. Yes, and we also got 100 GB of NAND as a new drive for the host OS to access

    2. Our previous external network setup (ZyXEL switch) had only 24 ports. With 12 VMs, we had plenty of spare ports for the management port and for the NAS units. When moving to 25 VMs, we ran out of ports in the switch. The second reason is that we are planning to evaluate 10 GbE NAS units in the future, and it is important to have a switch capable of 10 GbE for that purpose.
  • Andrew911tt - Thursday, November 29, 2012 - link

    I understand what you did, but why did you create the separate subnets and isolate them from the Internet, unlike in the first setup?
  • ganeshts - Friday, November 30, 2012 - link

    We wanted to eliminate unnecessary / unintended traffic from the machines on the live network (192.168.1.x) to the NAS or even the VMs themselves.
  • SunLord - Friday, November 30, 2012 - link

    Why are you using a stupid Revo? You should have gotten an SAS HBA and used 5.25" to 4 x 2.5" bay adapters; then you could have put in up to 20 2.5" SSDs and an optical drive.
  • SunLord - Friday, November 30, 2012 - link

    Something like this is what I meant for the 4 x 2.5" adapter.

  • Flunk - Friday, November 30, 2012 - link

    Or simply hang extra bays from the roof of the case.
  • Plifzig - Friday, November 30, 2012 - link

    So, were all the SATA ports occupied? Or were they just all taken? Sounds like they were occupied.

    And also taken.
  • KranZ - Friday, November 30, 2012 - link

    Were you using the default 1500-byte MTU, or did you bump the interfaces and VMs up to 9000-byte MTUs?
  • kenyee - Friday, November 30, 2012 - link

    Could you guys please test these things for noise/heat w/ more drives when you test cases?
    E.g., the Nanoxia Deep Silence review recently. Looks like it'd be perfect for something like your SOHO NAS. It was tested w/ an SSD and no hard drives :-P
    The case in this review had a hard drive card.
    If you have so many slots, why would you not load it up?
    And if you're using a camera like the D800 w/ 50MB RAW files and trying to do video w/ terabytes of raw footage, you're going to load it up w/ hard drives...
