27 September 2013

VMware vCloud: deploying vShield App on nested ESXi



Before getting into the subject, it is worth being clear about what "nested virtualization" is: a nested ESXi is an ESXi installed inside a virtual machine which is in turn created on a physical ESXi, in other words, "a virtualized hypervisor":


We start from the assumption that we already have vShield Manager deployed and connected to vCenter.

Let's get down to business. The simplest way to deploy vShield App as part of the VMware vCloud Director installation is to do it in graphical mode:

-From the vShield tab of the host, connected through the vSphere Client
-From the vShield Manager console over https (the default user is "admin" and the password is "default")

Remember that one vShield App must be deployed per host in the environment, just like vShield Endpoint and vShield Data Security; but only one instance of vShield Manager per vCenter.

From the vSphere Client: we select a host and, from the vShield tab, check the Apps to install and click Install:


We configure the IPs, the network and the Datastore where we want the App to be deployed:
NOTE: be careful about deploying the App on a host where vCenter itself resides! You can do a vMotion first and then resume the installation, since the process could cause network outages.
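If the vCenter VM does sit on the host you are about to deploy to, one quick way to move it out of the way is a vMotion from PowerCLI. A minimal sketch, assuming an open PowerCLI session and hypothetical names (vcenter01, esx02.lab.local) in place of your own:

    # vMotion the vCenter VM to another host before installing the vShield App on this one
    Connect-VIServer -Server vcenter01.lab.local     # hypothetical vCenter address
    Move-VM -VM "vcenter01" -Destination (Get-VMHost -Name "esx02.lab.local")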

  
During the installation the process may hang at this point; even though no error message ever appears, we can check the App's console:



Since vShield App is a virtual machine, from its console we can see the error message:


This is where we find the gap that must be closed to run virtual machines requiring an x86-64 CPU on a virtual (nested) ESXi: in the settings of the nested ESXi VM we must enable the option "Expose hardware assisted virtualization to the guest OS", as shown in the image:
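For reference, the same flag can also be toggled outside the GUI. A minimal PowerCLI sketch, assuming vSphere 5.1 / virtual hardware version 9 or later and a hypothetical VM name ("nested-esxi-01"); the equivalent manual change is adding vhv.enable = "TRUE" to the VM's .vmx file:

    # Enable nested hardware-assisted virtualization on the nested ESXi VM
    # (apply with the VM powered off, as noted below)
    $vm   = Get-VM -Name "nested-esxi-01"            # hypothetical VM name
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.NestedHVEnabled = $true                    # same effect as vhv.enable = "TRUE" in the .vmx
    $vm.ExtensionData.ReconfigVM($spec)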


This parameter must be changed with the VM powered off; we then reboot the nested ESXi and relaunch the App installation. We may still run into an installation error message and have to do an "uninstall" of the App to clean up the remains of the failed installation, and then launch the "install" again.

In another post we will see how to manually uninstall vCloud agents from the ESXi hosts.

Once the install has been relaunched, we can see from the console that the installation has finished correctly and it now prompts us directly for the login:


We now have the vShield App and Endpoint deployed; the vShield Data Security App must be deployed separately. ...voila!



Documentation, more than 180 pages in this PDF ...a lot to read:

vShield Administration Guide 5.1: http://www.vmware.com/pdf/vshield_51_admin.pdf
Contents:
     -vShield Manager 5.1
     -vShield App 5.1
     -vShield Edge 5.1
     -vShield Endpoint 5.1



25 September 2013

Understanding networks in vCloud Director - Part 1/2

To use the product comfortably, you must be clear about the network types that we have in vCloud Director.


We have basically three types:

External network: a port group on a switch (distributed, standard or Nexus). It is the VLAN and IP segment (public or private) allocated in the physical network.

Organization network: created automatically when the Provider VDC is created; it is for the organization's use only and can be one of three types:
1- directly connected to an external network
2- routed, connected through a vShield Edge that has two IPs (one on the external network side and one on the organization network side) to route the traffic between the vApp and the external network
3- not connected to any external network

vApp network: this network is created automatically when the vApp is created. It can work in two ways (see the PowerCLI sketch after this list):
-Not connected to the organization network
-Routed, connected to the organization network
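To see these three layers from the command line, here is a minimal sketch with the vCloud Director PowerCLI cmdlets; the cmdlet names are the 5.1-era ones and may differ in other versions, and the cell address is a placeholder:

    # Connect to the vCloud Director cell (hypothetical address)
    Connect-CIServer -Server vcd.lab.local

    Get-ExternalNetwork                  # external networks (backed by vSphere port groups)
    Get-OrgNetwork                       # organization networks: direct, routed or isolated
    Get-CIVApp | Get-CIVAppNetwork       # the vApp networks of each vApp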


....In the next post, Part 2/2, we will review more concepts like fencing, isolated or routed networks.

17 September 2013

HP Smart Update Manager 6.0 (HP SUM 6)


HP Smart Update Manager is a product which updates firmware and software on HP ProLiant servers, and firmware on HP Integrity servers. HP SUM has a browser-based GUI, as well as a scriptable interface using the legacy command line interface, input file, and console (technology preview) modes.

User Guide:
http://www.hp.com/support/HP_SUM_UG_en

Technical White Paper:
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA4-6947ENW&cc=us&lc=en

Download "HP Smart Update Manager version 6.00 - ISO":
http://h17007.www1.hp.com/us/en/enterprise/servers/products/service_pack/hpsum/index.aspx




11 September 2013

Do you know what NUMA is? really? - Part 1

This post is about the NUMA concept, because many people talk about NUMA without really knowing what it is or how to use it.

In short, NUMA refers to server platforms with more than one system bus, where different memory banks are dedicated to different processors.

See an example with 2 buses (2 sockets), each accessing its own memory banks locally and accessing the remaining memory banks remotely when interleaving is disabled in the BIOS:


Each CPU (socket) + local memory = NUMA node

In the past, servers had a single bus while CPU clock speeds kept increasing and memory consumption kept growing; that is why nowadays servers have more than one bus and you must populate the memory banks to match each bus.

If the memory is not populated correctly and distributed evenly across the nodes, the OS can stop responding and display a purple screen (PSOD) with the following NUMA node error message:



In a NUMA architecture, processors may access local memory quickly and remote memory more slowly. This can dramatically improve memory throughput as long as the data are localized to specific processes (and thus processors). On the downside, NUMA makes the cost of moving data from one processor to another, as in workload balancing, more expensive.  The high latency of remote memory accesses can leave the processors under-utilized, constantly waiting for data to be transferred to the local node, and the NUMA connection can become a bottleneck for applications with high-memory bandwidth demands.

However, an advanced memory controller allows a node to use memory on all other nodes, creating a single system image. When a processor accesses memory that does not lie within its own node (remote memory), the data must be transferred over the NUMA connection, which is slower than accessing local memory. Memory access times are not uniform and depend on the location of the memory and the node from which it is accessed, as the technology’s name implies.
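As a rough illustration of why locality matters, here is a back-of-the-envelope sketch; the latency figures and the remote-access fraction are invented for the example, not measurements:

    # Blended memory latency when part of a workload's memory accesses hit a remote node
    $localLatencyNs  = 80      # assumed local-node access latency
    $remoteLatencyNs = 140     # assumed remote-node latency across the NUMA interconnect
    $remoteFraction  = 0.30    # assume 30% of accesses land on remote memory

    $avgLatencyNs = (1 - $remoteFraction) * $localLatencyNs + $remoteFraction * $remoteLatencyNs
    $avgLatencyNs              # ~98 ns on average, versus 80 ns if everything stayed local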

Memory interleaving refers to the way the system maps its memory addresses to the physical  memory locations in the memory channels and DIMMs. Typically, consecutive system memory addresses are staggered across the DIMM ranks and across memory channels in the following manner:

>Rank Interleaving. Every consecutive memory cache line (64 bytes) is mapped to a different DIMM rank.
>Channel Interleaving. Every consecutive memory cache line is mapped to a different memory channel.

Disabling Node Interleaving = NUMA enabled
Lastly, the NUMA option is only available on Intel Nehalem and AMD Opteron platforms.
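If you want to check how many NUMA nodes an ESXi host actually exposes, a minimal PowerCLI sketch (the host name is a placeholder; the same information is available with "esxcli hardware memory get" in a host shell):

    # Query the host's memory/NUMA information through esxcli (classic Get-EsxCli interface)
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.lab.local")
    $esxcli.hardware.memory.get()    # reports physical memory and the NUMA node count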


The next post, "Part 2", will review NUMA usage on VMware.

10 September 2013

Sizing Storage

Once you have a clear idea of who is who in the complicated I/O latency world (review the other two posts here and here)... you need to think about the general rules and best practices for sizing storage for your IOPS needs.


GAVG (Guest Average Latency): total latency as seen from vSphere.
KAVG (Kernel Average Latency): time an I/O request spent waiting inside the vSphere storage stack.
QAVG (Queue Average Latency): time spent waiting in a queue inside the vSphere storage stack.

DAVG (Device Average Latency): latency coming from the physical hardware, HBA and storage device.

Bad performance if:

High Device Latency: Device Average Latency (DAVG) consistently greater than 20 to 30 ms may cause a performance problem for your typical application.
High Kernel Latency: Kernel Average Latency (KAVG) should usually be 0 in an ideal environment, but anything greater than 2 ms may be a performance problem.
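To keep an eye on these counters without opening esxtop, a minimal PowerCLI sketch; the counter names are vSphere 5.x performance counter IDs, and the vCenter and host names are placeholders:

    # Pull recent device (DAVG-like) and kernel (KAVG-like) read latency samples for one host
    Connect-VIServer -Server vcenter01.lab.local
    $esx = Get-VMHost -Name "esx01.lab.local"

    Get-Stat -Entity $esx -Realtime -MaxSamples 12 `
        -Stat "disk.deviceReadLatency.average","disk.kernelReadLatency.average" |
        Sort-Object Timestamp |
        Select-Object Timestamp, MetricId, Instance, Value   # Value is in milliseconds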

And now, what about the storage?

The typical workload shown in this picture applies to most virtual machines, but other types, such as databases, are quite different and will need special configurations like dedicated RAID groups, SSD cache pools, etc.

When planning and sizing an infrastructure it is very important to bear in mind not only the storage array itself: it is a mix of the RAID configuration, the disk types (SSD, SAS, SATA), the storage protocol (FC, FCoE, iSCSI or NFS), the networking configuration (cabling, switches, VLANs) and the VMware configuration (storage adapters, datastore configuration: VMFS, RDM, advanced parameters), and so on.
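As a starting point for the IOPS part of the exercise, here is a back-of-the-envelope sketch of the usual rule of thumb; all the figures below are assumptions for the example, not recommendations:

    # Classic rule-of-thumb sizing: back-end IOPS = reads + (writes x RAID write penalty)
    $frontendIops = 2000     # assumed IOPS generated by the VMs
    $readRatio    = 0.7      # assumed 70% reads / 30% writes
    $raidPenalty  = 4        # write penalty: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6

    $backendIops = ($frontendIops * $readRatio) + ($frontendIops * (1 - $readRatio) * $raidPenalty)
    $iopsPerDisk = 180       # rough figure for a 15k rpm SAS spindle
    [math]::Ceiling($backendIops / $iopsPerDisk)   # ~22 spindles for this example workload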


09 September 2013

The HP Power Advisor utility (free)

HP has created the HP Power Advisor utility, which provides accurate and meaningful estimates of the power needs of HP ProLiant and Integrity servers.

     The HP Power Advisor utility is a tool for calculating power use of the major components within a rack to determine power distribution, power redundancy, and battery backup requirements for computer facilities. Power Advisor allows you to configure each individual server or node. You can then duplicate the server configuration as often as necessary to populate an enclosure, and then duplicate it to populate a rack. The result is that you can build a complete data center quickly by duplicating a rack.

Easy to install, easy to use (drag & drop), friendly interface....a great free tool!!!







05 September 2013

Visual ESXTOP GUI for VMware

VMware visualEsxtop is a tool from VMware Labs to view the output of the CLI command esxtop in a windowed mode (hehehe), with a friendly option to generate some charts.

1- Download it from http://labs.vmware.com/flings/visualesxtop and unzip the files:



2- Add the Java bin folder to the PATH under System Properties - Advanced System Settings - Advanced - Environment Variables - System Variables: "PATH=C:\Program Files (x86)\Java\jre6\bin;"
(Be careful with these changes, and back up any values you modify in case you need to restore them.)
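If you prefer to do this from a shell, a minimal PowerShell equivalent (run as Administrator; the Java path is the one used in this post and may differ on your machine):

    # Append the Java bin folder to the machine-wide PATH (persistent change)
    $javaBin = "C:\Program Files (x86)\Java\jre6\bin"
    $current = [Environment]::GetEnvironmentVariable("Path", "Machine")
    [Environment]::SetEnvironmentVariable("Path", "$current;$javaBin", "Machine")

    # Or, for the current PowerShell session only:
    $env:Path += ";$javaBin"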


3- Start it from CMD or double-click "visualEsxtop.bat", then connect to an ESXi host or to a vCenter Server from the menu (File - Connect to Live Server), or start it from PowerCLI:
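For example, from a PowerCLI or PowerShell prompt (the folder below is simply wherever you unzipped the tool in step 1):

    # Launch visualEsxtop from the folder where it was unzipped (assumed path)
    Set-Location "C:\Tools\visualEsxtop"
    & .\visualEsxtop.bat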




This will open a new window with the GUI:

Now you can review all the metrics from the different tabs:

...And generate some charts, for example the % CPU Ready:






01 September 2013

VMware vSphere & ESXi 5.5 new maximums

- 320 logical CPUs per ESXi host
- 4 TB of memory per ESXi host
- 16 NUMA nodes per host
- 4096 vCPUs per host
- Support for 40 Gbps physical network adapters
- 62 TB VMDK or RDM disks
- 16 Gb Fibre Channel end-to-end support
- Hardware version 10

Version comparison:

                      ESXi 5.1    ESXi 5.5
Logical CPUs          160         320
Memory                2 TB        4 TB
NUMA nodes            8           16
vCPUs per host        2048        4096
Network adapters      10 Gbps     40 Gbps
Virtual disk size     2 TB        62 TB
Hardware version      9           10