Channel: Intel Communities : All Content - Wired Ethernet
Viewing all 4566 articles

Ubuntu 14.04.3 NVMUpdate XL710


Hi All,

I downloaded the NVMUpdate tool for Intel XL710 adapters for Linux and tried to run it with logging enabled. The log contains these messages:

Intel(R) Ethernet NVM Update Tool

NVMUpdate version 1.25.20.03

Copyright (C) 2013 - 2015 Intel Corporation.

./nvmupdate64e -l log.txt

Warning: Unsupported base driver version 1.2.48 (min required 1.3.19)

Warning: Not supported version of base driver for Intel(R) Ethernet Converged Network Adapter X710-4

Warning: Unsupported base driver version 1.2.48 (min required 1.3.19)

Warning: Not supported version of base driver for Intel(R) Ethernet Converged Network Adapter X710

Warning: Unsupported base driver version 1.2.48 (min required 1.3.19)

Warning: Not supported version of base driver for Intel(R) Ethernet Converged Network Adapter X710

Warning: Unsupported base driver version 1.2.48 (min required 1.3.19)

Warning: Not supported version of base driver for Intel(R) Ethernet Converged Network Adapter X710

Config file read.

Inventory

 

However, I had already upgraded the i40e driver to 1.2.48 beforehand, and I don't see any driver newer than 1.2.48 on download.intel.com.

What should I do?
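A quick way to confirm what the tool is complaining about is to compare the loaded i40e version against the required minimum. This is a minimal sketch; the interface name eth0 is a placeholder, and the loaded version is hard-coded from the log above:

```shell
# Show the i40e driver version the kernel has loaded (if the device exists):
ethtool -i eth0 2>/dev/null | grep '^version' || true
modinfo i40e 2>/dev/null | grep '^version' || true

# This NVMUpdate release requires i40e >= 1.3.19; sort -V compares
# dotted version strings numerically:
required=1.3.19
loaded=1.2.48   # substitute the version reported by ethtool/modinfo
if [ "$(printf '%s\n' "$required" "$loaded" | sort -V | head -n1)" = "$required" ]; then
    echo "i40e $loaded meets the $required minimum"
else
    echo "i40e $loaded is older than $required; install a newer i40e before updating the NVM"
fi
```

At the time of this post, newer i40e source releases were typically published through Intel's e1000 project on SourceForge rather than download.intel.com; either way, nvmupdate64e will keep refusing to run until the loaded driver reports at least 1.3.19.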


Gigabyte Z97-D3H + Intel X520-SR2 fails to enable VFs


Hi, I am trying to create VFs with an Intel X520-SR2 on a Gigabyte Z97-D3H.

I enabled Intel Virtualization Technology and VT-d in the BIOS settings and added the boot parameter "intel_iommu=on".

I am using the PF driver (ixgbe version 4.0.3) and the VF driver (ixgbevf version 2.16.1) on Ubuntu 14.10 with kernel 3.16.0-23.

After inserting the ixgbe and ixgbevf modules, I fail to see any VFs with the lspci -n command.

Can anybody help me? Thank you!
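For comparison, these are the two usual ways to ask ixgbe for VFs; a sketch under stated assumptions (the PF name eth2 and the VF count of 2 are placeholders). If VT-d/IOMMU is working, dmesg should record the SR-IOV setup and the VFs appear as extra PCI functions:

```shell
# (a) Older method: reload ixgbe with the max_vfs module parameter
#     (2 VFs per PF port).
modprobe -r ixgbevf ixgbe
modprobe ixgbe max_vfs=2

# (b) Newer sysfs method (kernel >= 3.8); "eth2" is a placeholder PF name.
echo 2 > /sys/class/net/eth2/device/sriov_numvfs

# On success the 82599 VFs show up as additional PCI functions:
lspci -nn | grep -i 'virtual function'
# and the kernel log records the IOMMU and SR-IOV setup:
dmesg | grep -iE 'dmar|iommu|sr-iov' | tail -n 20
echo "done"
```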

Issue Programming I210 Flash


Hi,

 

I am having difficulties programming the I210 flash. In lanconf, the option to write to flash is greyed out.

If I try to update the NVM image from a file in lanconf, I get an error saying the device ID is 0531 but the file ID is 1533.

The flash I am using is an SST25VF040B. The security_enable strap is enabled with a 3.3 kΩ pull-down.

If I place a pre-programmed flash device onto the board in place of the empty device, everything works fine. However, pre-programming is not an option, as we would really like to be able to program them on-board.

 

I really appreciate any help on this matter.

 

Please let me know if you require any further information.

 

Kyle.

I211/I217-V Windows 10 LACP teaming fails


Hello,

 

After the update to Windows 10 (x64, Build 10240), the creation of a teaming group (static or IEEE 802.3ad) with an I211 + I217-V NIC fails.

Drivers have been upgraded to the latest available version, and multiple reinstallations with reboots didn't help either. Whenever the group-creation wizard is used and a group name (I tried several), the adapters, and LACP have been selected, a Windows pop-up appears telling me that group creation has failed.

However, Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some kind of configuration does seem to happen.

Using Windows 7 SP1 x64, the exact same setup worked flawlessly for months, so Windows 10 or the driver is the likely culprit.

 

Is anyone experiencing similar problems and/or is this a known bug? Feedback on this issue is greatly appreciated.

 

Thanks in advance!

 

Kind regards,

Famaku

Intel i354 2.5 GbE dual adapter and SuperMicro blade chassis: Network Cable Unplugged


The problem:

 

"Network cable disconnected" error after installing the PROSet software suite on SuperMicro blade nodes with the B1SA4-2750F mainboard and two Intel i354 2.5 GbE backplane network adapters.

I need the teaming and VLAN features from the PROSet suite, which is why I'm using that software. If only the drivers are installed, both NICs work fine, but Windows 8.1 has no teaming feature (tested with PowerShell as well).

I also tested Windows 10 and Windows Server 2012/2012 R2 with the same result: if only the drivers are added (whether installed from the software suite or from Device Manager), both NICs work fine, no matter how many times I restart the machine. Windows 10 has teaming via PowerShell, but can't define VLANs...

If the PROSet suite is installed, both NICs change their status to "Network Cable Unplugged". The only workaround I've found so far (and it doesn't always work) is to right-click the adapter => Properties => Configure; after that, the adapter's status returns to normal, but if the machine is restarted, the networks are disconnected again...

I've installed many different versions of the Intel PROSet software and tested unplugging/replugging the blade nodes, as well as changing the blade node positions, with no result.

 

Any resolution?

Intel NIC drivers 19.3: huge 6000+ DPC latency spikes every few seconds


Hi, I would like to report that the newly released Intel NIC driver version 19.3 causes a huge 6000+ DPC latency spike every few seconds.

 

my specs

 

Intel(R) 82579LM Gigabit Network Connection

 

Windows 7 SP1 32-bit + latest Windows updates

 

I downgraded to the previous Intel NIC driver version, 19.1, and the problem is gone.

Intel 82579V issues on Windows 10


Hello everyone,

 

Since I upgraded my PC from Windows 7 Pro to Windows 10 Pro several days ago, I have had trouble getting my on-board network card running. My hardware is:

 

Mainboard: ASUS P8Z68-V Pro/Gen3

On-Board NIC: Intel 82579V Gigabit

 

Let me try to explain my problem. The LAN adapter disappears completely every few seconds; if I have Device Manager open, I can see it refresh every few seconds, and in the Network and Sharing Center I can see the LAN adapter disappearing. In the event log I see the following error (translated from German):

 

- A reset operation was started for the network adapter "Intel....". While the hardware is being reset, network connections are unavailable. Reason: the network driver requested a reset operation. The network adapter has been reset 150 times since the last initialization.

 

The original error message can be found in the attachment.

 

As for the driver, I am using the latest Windows 10 driver I found on the Intel page (screenshot attached):

 

- driver date: 15.02.2015

- driver version: 12.12.140.22

 

I found a suggestion in another community that disabling EEE would solve the issue, but it did not.

 

Really hope you guys can help me out.

 

Thanks.

SR-IOV: Using both PF and VFs


Hey,

 

I created 2 VFs on a 10 Gigabit port (Intel 82599 10 Gigabit NIC).

 

Should only the VFs be used, as two 5-Gigabit virtual ports?

 

Or is the PF also usable? If it is, and I use both the PF and the VFs, does each of them get 3.33 Gigabit of throughput?

 

Can I assign both PF and VFs to a single VM?

 

What is the recommended use?

 

Thanks,

Shaham


Windows 10 Enterprise Preview


Has anyone been able to get multiple VLANs to work in the Windows 10 previews?

I have tried multiple builds with the last three versions of PROSet and cannot enable the VLANs after creating them.

I can create them just as I do in 8.1 Pro, but I cannot enable them.

I am using an 82579LM NIC in a Lenovo ThinkCentre.

x540-at2 - Cannot Set Link Speed


Hello,

 

Need some help figuring out whether I am missing something. I've been trying to set the 'Speed and Duplex' value under Link Speed to 1 Gbps, but I get an error message: SetSetting failed.

I've tried the same with and without a network cable connected to the port.

Comments and suggestions appreciated. Thanks!

 

Screenshot shown below:

 

LinkSpeed.png

VLAN creation on Windows 10 Enterprise TP


Hello, there.

 

This morning I upgraded my fully functional Windows 8.1 Enterprise installation to Windows 10 Technical Preview. Before that, I downloaded the Intel Network Adapter Driver from this website, version 20.1, for Windows 10 64-bit. After the driver installation, I had the VLANs tab in the network card properties. However, I'm unable to create a VLAN. The network card is automatically disabled, and then I receive an error message saying this (translated from French):

 

One or more vlans could not be created. Please check the adapter status and try again.


The window freezes and I have to force-close it. The 802.1 option is, of course, enabled in the Advanced options tab. The event viewer always shows the same error when I try to create a VLAN:


Faulting application name: NCS2Prov.exe, version: 20.1.1021.0, time stamp: 0x554ba6a4

Faulting module name: NcsColib.dll, version: 20.1.1021.0, time stamp: 0x554ba57d

Exception code: 0xc0000005

Fault offset: 0x0000000000264064

Faulting process ID: 0x19d4

Faulting application start time: 0x01d0ada33fd50576

Faulting application path: C:\Program Files\Intel\NCS2\WMIProv\NCS2Prov.exe

Faulting module path: C:\WINDOWS\SYSTEM32\NcsColib.dll

Report ID: eefb5842-9220-4bad-93d3-774828c5736e

Faulting package full name:

Faulting package-relative application ID:

 

I already tried uninstalling all the packages and drivers related to the network card. I deleted phantom network cards and then cleaned up the registry. I tried setting some compatibility options on the executable, with no success. I tried reinstalling the driver with driver signature enforcement disabled, and disabling IPv4/IPv6 on the network card before trying to add a VLAN... I tried everything I found on Google.

 

Could someone help me, please?

Intel 82579V Issue


<><> Apologies if this has been posted in the wrong place <><>

 

Ok I'll keep this brief.

 

Problem:- The LAN regularly disconnects and then reconnects 30 seconds later.

 

Symptoms:- I lose the connection to the LAN / Internet for around 30 seconds.

 

Background:- I have had this problem with both P8P67 B2 and P8P67 Pro boards. I have several other computers connected to the switch (not a hub) and they are working fine. I have replaced the cable to no avail, and have even used the same cable in several other computers, where it works fine.

 

Config:-

 

Study

 

Netgear 8 port SWITCH 10/100/1000
Server
VOIP
PC

 

The switch is connected to the lounge, hard-wired via an outdoor shielded Cat6 cable.

 

Lounge

 

Netgear 8 port SWITCH 10/100/1000
Router
XBOX 360
Wii
<><><>

 

Message in System Event Logs;-

 

<><>
Warning message - date time - source = e1cexpress
Event ID = 27

 

Intel 82579V Gigabit Network Connection
- Network link is disconnected
<><>

 

Then it states it is connected again.

 

Have also tried the following;-

 

Remove Kaspersky 2011
Ensure ALL power management, even in the OS, is disabled
Use IPv4 instead of IPv6 in prefix policies
Disable native IPv6
Disable tunnel IPv6
Disable IPv6

 

netsh interface tcp set global rss=disabled
netsh interface tcp set global autotuninglevel=disabled
netsh int ip set global taskoffload=disabled

 

Disabled SNP;-
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
EnableTCPChimney=dword:00000000
EnableTCPA=dword:00000000
EnableRSS=dword:00000000

 

I have tried the driver from the Asus motherboard CD, the Asus website, your website, and Windows Update, all to no avail.

 

Please help.

I217-LM - no RSS possible under Win 8.1 Pro x64 - driver setting without effect

$
0
0

Hi,

 

I'm using an ASRock Rack EPC612D8A-TB motherboard with two Intel onboard NICs (I210 (Ethernet 2) and I217-LM (Ethernet)), running Windows 8.1 Pro x64 with all Windows Updates installed and the latest Intel Ethernet driver package (20.2.3001.0). My problem is with the I217-LM; according to the Intel spec sheet, it does support RSS (cf. the figure on page 2).

In the I217-LM's advanced driver settings there is an option to enable and disable RSS. However, Windows itself always reports that the I217-LM is not RSS-capable.

i217-lm-no-rss-womac.png

The same option in the I210's driver settings has the expected effect: with the PowerShell command Get-SmbClientNetworkInterface you can see the RSS capability change from True to False and vice versa.

 

Can anyone tell me why the I217-LM is not getting the RSS feature?

 

Further system details:
CPU: Xeon E5-1620-V3

RAM: 2 x 16 GiB Crucial DDR4-2133 ECC

The motherboard does not have a later BIOS/UEFI release than the one installed.

 

Thank you very much for your help!

Intel i219-V network issue


I have an issue with my onboard network card on an ASUS Z170 Pro Gaming mainboard. I get disconnects all the time, whereas I had no problems at all before I got this mainboard with the Intel NIC. The error "e1cexpress, event ID 27" appears in the Windows event log. It is definitely not a router or Internet provider problem, as everything works fine with other devices connected to the same network.


I tried many things posted in other threads here that described similar issues, but the problem persists.

The Intel i219-V is not even listed in the Intel Download Center, so I only found up-to-date drivers on other computer manufacturers' websites. I also tried turning the Ethernet energy-saving feature off, but it does not help; at most it delays the problem a little, as it now occurs only every 3 or 4 hours.

Since I also play online games, this is an absolute no-go, as it has already kicked me out of games a few times while I was playing.


I am using Windows 10 Pro. Currently I have the drivers installed that came with the mainboard (from the driver CD).

Maybe other people with this mainboard have the same issue and have found a solution. Please help.



FreeBSD and Intel XL710 10G

$
0
0

Hi all,

I've installed FreeBSD 10.2-STABLE on a server with two E5-2643 v3 CPUs (HyperThreading on) and an Intel XL710 4x10G SFP+ card.

First, I updated the FreeBSD drivers to 1.4.0 (from download.intel.com).

I see the following strange thing: every active ixl interface creates 24 queues (6 cores x 2 (HT) x 2 CPUs) but uses only 16-17 of them:

irq284: ixl0:q0                164383663       1941

irq285: ixl0:q1                371238730       4384

irq286: ixl0:q2                378286557       4468

irq287: ixl0:q3                365073427       4312

irq288: ixl0:q4                371116376       4383

irq289: ixl0:q5                372589584       4400

irq290: ixl0:q6                361879025       4274

irq291: ixl0:q7                354607200       4188

irq292: ixl0:q8                223602267       2641

irq293: ixl0:q9                199067474       2351

irq294: ixl0:q10               212598000       2511

irq295: ixl0:q11               202534854       2392

irq296: ixl0:q12               212050675       2504

irq297: ixl0:q13               209106917       2469

irq298: ixl0:q14               201452403       2379

irq299: ixl0:q15               203896634       2408

irq300: ixl0:q16                76328643        901

irq301: ixl0:q17                    6030          0

irq302: ixl0:q18                    5433          0

irq303: ixl0:q19                    6804          0

irq304: ixl0:q20                    6098          0

irq305: ixl0:q21                    6603          0

irq306: ixl0:q22                    6476          0

irq307: ixl0:q23                    7141          0

irq309: ixl1:q0                161169757       1903

irq310: ixl1:q1                402042077       4748

irq311: ixl1:q2                399166615       4714

irq312: ixl1:q3                389702886       4602

irq313: ixl1:q4                383371508       4528

irq314: ixl1:q5                388621686       4590

irq315: ixl1:q6                385533771       4553

irq316: ixl1:q7                390478220       4612

irq317: ixl1:q8                232313544       2743

irq318: ixl1:q9                248387076       2933

irq319: ixl1:q10               233942388       2763

irq320: ixl1:q11               237794942       2808

irq321: ixl1:q12               227292626       2684

irq322: ixl1:q13               222151566       2623

irq323: ixl1:q14               234209020       2766

irq324: ixl1:q15               217878026       2573

irq325: ixl1:q16                80177041        947

irq326: ixl1:q17                      83          0

irq327: ixl1:q18                      74          0

irq328: ixl1:q19                     201          0

irq329: ixl1:q20                      98          0

irq330: ixl1:q21                      95          0

irq331: ixl1:q22                      91          0

irq332: ixl1:q23                      87          0

 

# top -aSCHP

 

 

last pid: 28661;  load averages:  7.06,  6.35,  6.23                                                                                                                                                                      up 0+23:35:07  17:15:12

391 processes: 31 running, 215 sleeping, 145 waiting

CPU 0:   0.0% user,  0.0% nice,  0.0% system, 39.4% interrupt, 60.6% idle

CPU 1:   0.0% user,  0.0% nice,  0.0% system, 48.8% interrupt, 51.2% idle

CPU 2:   0.0% user,  0.0% nice,  0.0% system, 42.1% interrupt, 57.9% idle

CPU 3:   0.0% user,  0.0% nice,  0.0% system, 40.2% interrupt, 59.8% idle

CPU 4:   0.0% user,  0.0% nice,  0.4% system, 41.3% interrupt, 58.3% idle

CPU 5:   0.0% user,  0.0% nice,  0.0% system, 37.0% interrupt, 63.0% idle

CPU 6:   0.0% user,  0.0% nice,  0.0% system, 35.8% interrupt, 64.2% idle

CPU 7:   0.0% user,  0.0% nice,  0.0% system, 39.0% interrupt, 61.0% idle

CPU 8:   0.0% user,  0.0% nice,  0.0% system, 22.0% interrupt, 78.0% idle

CPU 9:   0.0% user,  0.0% nice,  0.0% system, 26.0% interrupt, 74.0% idle

CPU 10:  0.0% user,  0.0% nice,  0.0% system, 17.7% interrupt, 82.3% idle

CPU 11:  0.0% user,  0.0% nice,  0.0% system, 19.3% interrupt, 80.7% idle

CPU 12:  0.0% user,  0.0% nice,  0.4% system, 25.2% interrupt, 74.4% idle

CPU 13:  0.0% user,  0.0% nice,  0.0% system, 23.6% interrupt, 76.4% idle

CPU 14:  0.0% user,  0.0% nice,  0.0% system, 22.4% interrupt, 77.6% idle

CPU 15:  0.0% user,  0.0% nice,  0.0% system, 26.8% interrupt, 73.2% idle

CPU 16:  0.0% user,  0.0% nice,  1.2% system,  1.6% interrupt, 97.2% idle

CPU 17:  0.0% user,  0.0% nice,  0.8% system,  0.0% interrupt, 99.2% idle

CPU 18:  0.0% user,  0.0% nice,  1.2% system,  0.0% interrupt, 98.8% idle

CPU 19:  0.0% user,  0.0% nice,  0.4% system,  0.0% interrupt, 99.6% idle

CPU 20:  0.0% user,  0.0% nice,  3.5% system,  0.0% interrupt, 96.5% idle

CPU 21:  0.0% user,  0.0% nice,  1.6% system,  0.0% interrupt, 98.4% idle

CPU 22:  0.0% user,  0.0% nice,  0.8% system,  0.0% interrupt, 99.2% idle

CPU 23:  0.0% user,  0.0% nice,  2.0% system,  0.0% interrupt, 98.0% idle

 

# netstat -I ixl0 -w1 -h

            input           ixl0           output

   packets  errs idrops      bytes    packets  errs      bytes colls

      235K     0     0       126M       300K     0       321M     0

      233K     0     0       114M       297K     0       312M     0

      232K     0     0       116M       300K     0       315M     0

      227K     0     0       108M       297K     0       316M     0

 

Why doesn't the network adapter utilize all of its queues?
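If the goal is instead to keep interrupts off the mostly idle cores entirely, the ixl driver exposes loader tunables that cap the queue count per port rather than allocating one queue per logical CPU. A sketch only; the tunable name is an assumption from the ixl(4) driver of this era, so verify what your build actually exposes with `sysctl -d hw.ixl` before relying on it:

```shell
# /boot/loader.conf -- illustrative fragment; reboot to apply.
hw.ixl.max_queues="16"   # cap queues per port (assumed tunable; check sysctl -d hw.ixl)
```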


Mechanical dimensions for E10G42BFSR


Hi,

May I know the mechanical dimensions of this Ethernet card (E10G42BFSR)?
The datasheet we downloaded from the Intel website does not include the overall height profile that we need.

The dimensions given cover only the length and width; there is no overall height, which we need in order to know whether the heatsink will protrude beyond the bracket.

We are concerned that the height of the heatsink on the Intel server adapter card might obstruct our chassis, as we are installing it in a 1U system.

Please advise on the mechanical dimensions of the card. Thanks.

 

Best Regards,

Chong

Procedure to downgrade NVM on X710


Please let me know the procedure to downgrade the NVM on an X710.

NIC-rich but CPU-poor?


Recently we got a few new servers, all with identical configurations. Each has dual E5-2620 v3 2.4 GHz CPUs, 128 GiB of RAM (8 x 16 GiB DDR4 DIMMs), one dual-port 40G XL710, and two dual-port 10G SFP+ mezzanine cards (i.e., 4 x 10G SFP+ ports). All of them run CentOS 7.1 x86_64. The XL710s are connected to the 40G ports of QCT LY8 switches using genuine Intel QSFP+ DACs. All 10G SFP+ ports are connected to Arista 7280SE-68 switches, but using third-party DACs. All systems have so far been only minimally tuned:

  • In each BIOS, the pre-defined "High Performance" profile is selected; furthermore, Intel I/OAT is enabled and VT-d is disabled (we don't need to run virtual machines; these servers are for HPC applications).
  • In each CentOS installation, the active tuned-adm profile is set to network-throughput.

 

After the servers were set up, we used iperf3 to run long tests among them. So far, we have observed consistent packet drops on the receiving side. An example:


[root@sc2u1n0 ~]# netstat -i

Kernel Interface table

Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg

ens10f0  9000 236406987      0      0 0      247785514      0      0      0 BMRU

ens1f0    9000 363116387      0  2391 0      2370529766      0      0      0 BMRU

ens1f1    9000 382484140      0  2248 0      2098335636      0      0      0 BMRU

ens20f0  9000 565532361      0  2258 0      1472188440      0      0      0 BMRU

ens20f1  9000 519587804      0  4225 0      5471601950      0      0      0 BMRU

lo      65536 19058603      0      0 0      19058603      0      0      0 LRU



We have also observed iperf3 retransmits at the beginning of a test session and often, though less frequently, during a session. Two examples:


40G pairs:


$ iperf3 -c 192.168.11.100  -i 1 -t 10

Connecting to host 192.168.11.100, port 5201

[  4] local 192.168.11.103 port 59351 connected to 192.168.11.100 port 5201

[ ID] Interval          Transfer    Bandwidth      Retr  Cwnd

[  4]  0.00-1.00  sec  2.77 GBytes  23.8 Gbits/sec  54    655 KBytes  

[  4]  1.00-2.00  sec  4.26 GBytes  36.6 Gbits/sec    0  1.52 MBytes  

[  4]  2.00-3.00  sec  4.61 GBytes  39.6 Gbits/sec    0  2.12 MBytes  

[  4]  3.00-4.00  sec  4.53 GBytes  38.9 Gbits/sec    0  2.57 MBytes  

[  4]  4.00-5.00  sec  4.00 GBytes  34.4 Gbits/sec    7  1.42 MBytes  

[  4]  5.00-6.00  sec  4.61 GBytes  39.6 Gbits/sec    0  2.01 MBytes  

[  4]  6.00-7.00  sec  4.61 GBytes  39.6 Gbits/sec    0  2.47 MBytes  

[  4]  7.00-8.00  sec  4.61 GBytes  39.6 Gbits/sec    0  2.88 MBytes  

[  4]  8.00-9.00  sec  4.61 GBytes  39.6 Gbits/sec    0  3.21 MBytes  

[  4]  9.00-10.00  sec  4.61 GBytes  39.6 Gbits/sec    0  3.52 MBytes  

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval          Transfer    Bandwidth      Retr

[  4]  0.00-10.00  sec  43.2 GBytes  37.1 Gbits/sec  61            sender

[  4]  0.00-10.00  sec  43.2 GBytes  37.1 Gbits/sec                  receiver

 

82599-powered 10G pairs:

 

$ iperf3 -c 192.168.15.100 -i 1 -t 10

Connecting to host 192.168.15.100, port 5201

[  4] local 192.168.16.101 port 53464 connected to 192.168.15.100 port 5201

[ ID] Interval          Transfer    Bandwidth      Retr  Cwnd

[  4]  0.00-1.00  sec  1.05 GBytes  9.05 Gbits/sec  722  1.97 MBytes  

[  4]  1.00-2.00  sec  1.10 GBytes  9.42 Gbits/sec    0  2.80 MBytes  

[  4]  2.00-3.00  sec  1.10 GBytes  9.42 Gbits/sec  23  2.15 MBytes  

[  4]  3.00-4.00  sec  1.10 GBytes  9.42 Gbits/sec    0  2.16 MBytes  

[  4]  4.00-5.00  sec  1.09 GBytes  9.41 Gbits/sec    0  2.16 MBytes  

[  4]  5.00-6.00  sec  1.10 GBytes  9.42 Gbits/sec    0  2.17 MBytes  

[  4]  6.00-7.00  sec  1.10 GBytes  9.42 Gbits/sec    0  2.18 MBytes  

[  4]  7.00-8.00  sec  1.10 GBytes  9.42 Gbits/sec    0  2.22 MBytes  

[  4]  8.00-9.00  sec  1.10 GBytes  9.42 Gbits/sec    0  2.27 MBytes  

[  4]  9.00-10.00  sec  1.10 GBytes  9.42 Gbits/sec    0  2.34 MBytes  

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval          Transfer    Bandwidth      Retr

[  4]  0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec  745            sender

[  4]  0.00-10.00  sec  10.9 GBytes  9.37 Gbits/sec                  receiver


Looking around, I ran into a 40G NIC tuning article on the DOE Energy Sciences Network (ESnet) fasterdata site, which says: "At the present time (February 2015), CPU clock rate still matters a lot for 40G hosts. In general, higher CPU clock rate is far more important than high core count for a 40G host. In general, you can expect it to be very difficult to achieve 40G performance with a CPU that runs more slowly than 3GHz per core." We don't have such fast CPUs: the E5-2620 v3 is a mid-range CPU from the Basic category, not even the Performance category. So,

  • Are our servers too rich in NICs but under-powered CPU-wise?
  • Is there anything we can do to get these servers to behave at least reasonably, and especially to stop dropping packets?


BTW, a few days ago we updated all servers to the most recent stable Intel i40e and ixgbe drivers, but we have not yet run the set_irq_affinity script. Nor have we tuned the NICs (e.g., adjusting the rx-usecs value). The reason is that each server runs two highly concurrent applications which tend to use all the cores, and we are afraid that using the set_irq_affinity script might negatively impact our applications' performance. But if Intel folks consider running the script beneficial, we are willing to try.
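For reference, the receive-side knobs usually tried first are IRQ affinity, interrupt moderation, and ring sizing. The sketch below is illustrative only (the interface name is taken from the netstat output above; set_irq_affinity is the helper script shipped in the scripts/ directory of Intel's i40e/ixgbe source tarballs), and the values are starting points, not tested recommendations:

```shell
# Pin each queue's IRQ to cores on the NIC's local NUMA node:
./set_irq_affinity local ens1f0

# Let the driver adapt interrupt moderation to the offered load:
ethtool -C ens1f0 adaptive-rx on

# Enlarge the RX ring so short bursts are buffered instead of dropped:
ethtool -G ens1f0 rx 4096

# Deepen the kernel's per-CPU input backlog:
sysctl -w net.core.netdev_max_backlog=250000
echo "applied illustrative receive-side tuning"
```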

 

Regards,

 

-- Zack


