Channel: Intel Communities : All Content - Wired Ethernet

i40e Ethernet Connection XL710 Network Driver - version 1.5.10-k 2.6.32-696 not loading correctly


Has anyone run into a similar issue? After a yum update to kernel 2.6.32-696.3.2.el6.x86_64 (or any 2.6.32-696 kernel), my bond2 interface stops working correctly.

After that I am unable to set the speed settings and unable to ping anything. This causes my NFS shares to stop working, as they are mounted via that NIC.

When I roll back to 2.6.32-642.13.1.el6.x86_64, everything starts working right away.

 

From dmesg it looks like the kernel is unable to detect that we are using 10 Gbps cards. How do I proceed with reporting this bug?
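Before filing the bug, it helps to show the regression side by side from the two kernels' boot logs. A minimal sketch (the helper name is mine; paths follow the /tmp/dmesg-&lt;kernel&gt; convention used in the transcripts below):

```shell
# Count "NIC Link is Up" lines in a saved boot log.  On the working
# kernel this prints one line per port; on the 2.6.32-696 kernels it
# prints 0, which is the regression in a nutshell.
count_link_up() {
    grep -c 'NIC Link is Up' "$1"
}

# Usage (illustrative):
#   count_link_up /tmp/dmesg-2.6.32-696.3.2.el6.x86_64
#   count_link_up /tmp/dmesg-2.6.32-642.13.1.el6.x86_64
```

Attaching both full dmesg captures plus the modinfo output shown below gives the maintainers everything they need to bisect between driver 1.4.7-k and 1.5.10-k.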

 

======================================================

2.6.32-696.3.2.el6.x86_64

======================================================

# modinfo i40e

filename:       /lib/modules/2.6.32-696.3.2.el6.x86_64/kernel/drivers/net/i40e/i40e.ko

version:        1.5.10-k

license:        GPL

description:    Intel(R) Ethernet Connection XL710 Network Driver

author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>

srcversion:     B5DC8E286FEFB9414076D56

alias:          pci:v00008086d00001588sv*sd*bc*sc*i*

alias:          pci:v00008086d00001587sv*sd*bc*sc*i*

alias:          pci:v00008086d000037D4sv*sd*bc*sc*i*

alias:          pci:v00008086d000037D3sv*sd*bc*sc*i*

alias:          pci:v00008086d000037D2sv*sd*bc*sc*i*

alias:          pci:v00008086d000037D1sv*sd*bc*sc*i*

alias:          pci:v00008086d000037D0sv*sd*bc*sc*i*

alias:          pci:v00008086d000037CFsv*sd*bc*sc*i*

alias:          pci:v00008086d000037CEsv*sd*bc*sc*i*

alias:          pci:v00008086d00001587sv*sd*bc*sc*i*

alias:          pci:v00008086d00001589sv*sd*bc*sc*i*

alias:          pci:v00008086d00001586sv*sd*bc*sc*i*

alias:          pci:v00008086d00001585sv*sd*bc*sc*i*

alias:          pci:v00008086d00001584sv*sd*bc*sc*i*

alias:          pci:v00008086d00001583sv*sd*bc*sc*i*

alias:          pci:v00008086d00001581sv*sd*bc*sc*i*

alias:          pci:v00008086d00001580sv*sd*bc*sc*i*

alias:          pci:v00008086d00001574sv*sd*bc*sc*i*

alias:          pci:v00008086d00001572sv*sd*bc*sc*i*

depends:        ptp

vermagic:       2.6.32-696.3.2.el6.x86_64 SMP mod_unload modversions

parm:           debug:Debug level (0=none,...,16=all) (int)

 

# grep i40e /tmp/dmesg-2.6.32-696.3.2.el6.x86_64

i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 1.5.10-k

i40e: Copyright (c) 2013 - 2014 Intel Corporation.

i40e 0000:0b:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16

i40e 0000:0b:00.0: setting latency timer to 64

i40e 0000:0b:00.0: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0

i40e 0000:0b:00.0: MAC address: <REDACTED>

i40e 0000:0b:00.0: irq 85 for MSI/MSI-X

i40e 0000:0b:00.0: irq 86 for MSI/MSI-X

i40e 0000:0b:00.0: irq 87 for MSI/MSI-X

i40e 0000:0b:00.0: irq 88 for MSI/MSI-X

i40e 0000:0b:00.0: irq 89 for MSI/MSI-X

i40e 0000:0b:00.0: irq 90 for MSI/MSI-X

i40e 0000:0b:00.0: irq 91 for MSI/MSI-X

i40e 0000:0b:00.0: irq 92 for MSI/MSI-X

i40e 0000:0b:00.0: irq 93 for MSI/MSI-X

i40e 0000:0b:00.0: irq 94 for MSI/MSI-X

i40e 0000:0b:00.0: irq 95 for MSI/MSI-X

i40e 0000:0b:00.0: irq 96 for MSI/MSI-X

i40e 0000:0b:00.0: irq 97 for MSI/MSI-X

i40e 0000:0b:00.0: irq 98 for MSI/MSI-X

i40e 0000:0b:00.0: irq 99 for MSI/MSI-X

i40e 0000:0b:00.0: irq 100 for MSI/MSI-X

i40e 0000:0b:00.0: irq 101 for MSI/MSI-X

i40e 0000:0b:00.0: irq 102 for MSI/MSI-X

i40e 0000:0b:00.0: irq 103 for MSI/MSI-X

i40e 0000:0b:00.0: irq 104 for MSI/MSI-X

i40e 0000:0b:00.0: irq 105 for MSI/MSI-X

i40e 0000:0b:00.0: irq 106 for MSI/MSI-X

i40e 0000:0b:00.0: irq 107 for MSI/MSI-X

i40e 0000:0b:00.0: irq 108 for MSI/MSI-X

i40e 0000:0b:00.0: irq 109 for MSI/MSI-X

i40e 0000:0b:00.0: irq 110 for MSI/MSI-X

i40e 0000:0b:00.0: PCI-Express: Speed 8.0GT/s Width x8

i40e 0000:0b:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA

i40e 0000:0b:00.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16

i40e 0000:0b:00.1: setting latency timer to 64

i40e 0000:0b:00.1: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0

i40e 0000:0b:00.1: MAC address: <REDACTED>

i40e 0000:0b:00.1: irq 111 for MSI/MSI-X

i40e 0000:0b:00.1: irq 112 for MSI/MSI-X

i40e 0000:0b:00.1: irq 113 for MSI/MSI-X

i40e 0000:0b:00.1: irq 114 for MSI/MSI-X

i40e 0000:0b:00.1: irq 115 for MSI/MSI-X

i40e 0000:0b:00.1: irq 116 for MSI/MSI-X

i40e 0000:0b:00.1: irq 117 for MSI/MSI-X

i40e 0000:0b:00.1: irq 118 for MSI/MSI-X

i40e 0000:0b:00.1: irq 119 for MSI/MSI-X

i40e 0000:0b:00.1: irq 120 for MSI/MSI-X

i40e 0000:0b:00.1: irq 121 for MSI/MSI-X

i40e 0000:0b:00.1: irq 122 for MSI/MSI-X

i40e 0000:0b:00.1: irq 123 for MSI/MSI-X

i40e 0000:0b:00.1: irq 124 for MSI/MSI-X

i40e 0000:0b:00.1: irq 125 for MSI/MSI-X

i40e 0000:0b:00.1: irq 126 for MSI/MSI-X

i40e 0000:0b:00.1: irq 127 for MSI/MSI-X

i40e 0000:0b:00.1: irq 128 for MSI/MSI-X

i40e 0000:0b:00.1: irq 129 for MSI/MSI-X

i40e 0000:0b:00.1: irq 130 for MSI/MSI-X

i40e 0000:0b:00.1: irq 131 for MSI/MSI-X

i40e 0000:0b:00.1: irq 132 for MSI/MSI-X

i40e 0000:0b:00.1: irq 133 for MSI/MSI-X

i40e 0000:0b:00.1: irq 134 for MSI/MSI-X

i40e 0000:0b:00.1: irq 135 for MSI/MSI-X

i40e 0000:0b:00.1: irq 136 for MSI/MSI-X

i40e 0000:0b:00.1: PCI-Express: Speed 8.0GT/s Width x8

i40e 0000:0b:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA

i40e 0000:0b:00.0: eth8: already using mac address <REDACTED>

i40e 0000:0b:00.1: eth9: set new mac address <REDACTED>

 

# ethtool -i bond2

driver: bonding

version: 3.7.1

firmware-version: 2

bus-info:

supports-statistics: no

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: no

 

# cat /proc/net/bonding/bond2

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

 

 

Bonding Mode: fault-tolerance (active-backup)

Primary Slave: None

Currently Active Slave: None

MII Status: down

MII Polling Interval (ms): 100

Up Delay (ms): 0

Down Delay (ms): 0

 

 

Slave Interface: eth9

MII Status: down

Speed: Unknown

Duplex: Unknown

Link Failure Count: 0

Permanent HW addr:  <REDACTED>

Slave queue ID: 0

 

 

Slave Interface: eth8

MII Status: down

Speed: Unknown

Duplex: Unknown

Link Failure Count: 0

Permanent HW addr:  <REDACTED>

Slave queue ID: 0

 

 

ethtool -s bond2 speed 10000 duplex full autoneg off

Cannot set new settings: Operation not supported

  not setting speed

  not setting duplex

  not setting autoneg
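For what it's worth, the bonding master itself never supports ethtool -s (note that ethtool -i bond2 above reports driver: bonding); speed and duplex have to be queried or forced on the slave interfaces instead. A small sketch for pulling the slave names out of the /proc entry (the helper name is mine):

```shell
# List the slave interfaces of a bond by parsing its /proc entry, so
# each one can be checked with ethtool individually.
bond_slaves() {
    awk -F': ' '/^Slave Interface:/ { print $2 }' "$1"
}

# Usage (as root; bond2/eth8/eth9 are the interfaces from this post):
#   for s in $(bond_slaves /proc/net/bonding/bond2); do
#       ethtool "$s" | grep -E 'Speed|Duplex|Link detected'
#   done
```

That said, in this case the slaves themselves report Speed: Unknown, so the problem is below the bonding layer, in the i40e link bring-up.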

 

======================================================

2.6.32-642.13.1.el6.x86_64

======================================================

# modinfo i40e

filename:       /lib/modules/2.6.32-642.13.1.el6.x86_64/kernel/drivers/net/i40e/i40e.ko

version:        1.4.7-k

license:        GPL

description:    Intel(R) Ethernet Connection XL710 Network Driver

author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>

srcversion:     B91F227B49241127F18771D

alias:          pci:v00008086d00001588sv*sd*bc*sc*i*

alias:          pci:v00008086d00001587sv*sd*bc*sc*i*

alias:          pci:v00008086d000037D2sv*sd*bc*sc*i*

alias:          pci:v00008086d000037D1sv*sd*bc*sc*i*

alias:          pci:v00008086d000037D0sv*sd*bc*sc*i*

alias:          pci:v00008086d00001587sv*sd*bc*sc*i*

alias:          pci:v00008086d00001589sv*sd*bc*sc*i*

alias:          pci:v00008086d00001586sv*sd*bc*sc*i*

alias:          pci:v00008086d00001585sv*sd*bc*sc*i*

alias:          pci:v00008086d00001584sv*sd*bc*sc*i*

alias:          pci:v00008086d00001583sv*sd*bc*sc*i*

alias:          pci:v00008086d00001581sv*sd*bc*sc*i*

alias:          pci:v00008086d00001580sv*sd*bc*sc*i*

alias:          pci:v00008086d0000157Fsv*sd*bc*sc*i*

alias:          pci:v00008086d00001574sv*sd*bc*sc*i*

alias:          pci:v00008086d00001572sv*sd*bc*sc*i*

depends:        ptp

vermagic:       2.6.32-642.13.1.el6.x86_64 SMP mod_unload modversions

parm:           debug:Debug level (0=none,...,16=all) (int)

 

grep i40e /tmp/dmesg-2.6.32-642.13.1.el6.x86_64

i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 1.4.7-k

i40e: Copyright (c) 2013 - 2014 Intel Corporation.

i40e 0000:0b:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16

i40e 0000:0b:00.0: setting latency timer to 64

i40e 0000:0b:00.0: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0

i40e 0000:0b:00.0: MAC address: <REDACTED>

i40e 0000:0b:00.0: irq 85 for MSI/MSI-X

i40e 0000:0b:00.0: irq 86 for MSI/MSI-X

i40e 0000:0b:00.0: irq 87 for MSI/MSI-X

i40e 0000:0b:00.0: irq 88 for MSI/MSI-X

i40e 0000:0b:00.0: irq 89 for MSI/MSI-X

i40e 0000:0b:00.0: irq 90 for MSI/MSI-X

i40e 0000:0b:00.0: irq 91 for MSI/MSI-X

i40e 0000:0b:00.0: irq 92 for MSI/MSI-X

i40e 0000:0b:00.0: irq 93 for MSI/MSI-X

i40e 0000:0b:00.0: irq 94 for MSI/MSI-X

i40e 0000:0b:00.0: irq 95 for MSI/MSI-X

i40e 0000:0b:00.0: irq 96 for MSI/MSI-X

i40e 0000:0b:00.0: irq 97 for MSI/MSI-X

i40e 0000:0b:00.0: irq 98 for MSI/MSI-X

i40e 0000:0b:00.0: irq 99 for MSI/MSI-X

i40e 0000:0b:00.0: irq 100 for MSI/MSI-X

i40e 0000:0b:00.0: irq 101 for MSI/MSI-X

i40e 0000:0b:00.0: irq 102 for MSI/MSI-X

i40e 0000:0b:00.0: irq 103 for MSI/MSI-X

i40e 0000:0b:00.0: irq 104 for MSI/MSI-X

i40e 0000:0b:00.0: irq 105 for MSI/MSI-X

i40e 0000:0b:00.0: irq 106 for MSI/MSI-X

i40e 0000:0b:00.0: irq 107 for MSI/MSI-X

i40e 0000:0b:00.0: irq 108 for MSI/MSI-X

i40e 0000:0b:00.0: irq 109 for MSI/MSI-X

i40e 0000:0b:00.0: irq 110 for MSI/MSI-X

i40e 0000:0b:00.0: PCI-Express: Speed 8.0GT/s Width x8

i40e 0000:0b:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA

i40e 0000:0b:00.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16

i40e 0000:0b:00.1: setting latency timer to 64

i40e 0000:0b:00.1: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0

i40e 0000:0b:00.1: MAC address: <REDACTED>

i40e 0000:0b:00.1: irq 111 for MSI/MSI-X

i40e 0000:0b:00.1: irq 112 for MSI/MSI-X

i40e 0000:0b:00.1: irq 113 for MSI/MSI-X

i40e 0000:0b:00.1: irq 114 for MSI/MSI-X

i40e 0000:0b:00.1: irq 115 for MSI/MSI-X

i40e 0000:0b:00.1: irq 116 for MSI/MSI-X

i40e 0000:0b:00.1: irq 117 for MSI/MSI-X

i40e 0000:0b:00.1: irq 118 for MSI/MSI-X

i40e 0000:0b:00.1: irq 119 for MSI/MSI-X

i40e 0000:0b:00.1: irq 120 for MSI/MSI-X

i40e 0000:0b:00.1: irq 121 for MSI/MSI-X

i40e 0000:0b:00.1: irq 122 for MSI/MSI-X

i40e 0000:0b:00.1: irq 123 for MSI/MSI-X

i40e 0000:0b:00.1: irq 124 for MSI/MSI-X

i40e 0000:0b:00.1: irq 125 for MSI/MSI-X

i40e 0000:0b:00.1: irq 126 for MSI/MSI-X

i40e 0000:0b:00.1: irq 127 for MSI/MSI-X

i40e 0000:0b:00.1: irq 128 for MSI/MSI-X

i40e 0000:0b:00.1: irq 129 for MSI/MSI-X

i40e 0000:0b:00.1: irq 130 for MSI/MSI-X

i40e 0000:0b:00.1: irq 131 for MSI/MSI-X

i40e 0000:0b:00.1: irq 132 for MSI/MSI-X

i40e 0000:0b:00.1: irq 133 for MSI/MSI-X

i40e 0000:0b:00.1: irq 134 for MSI/MSI-X

i40e 0000:0b:00.1: irq 135 for MSI/MSI-X

i40e 0000:0b:00.1: irq 136 for MSI/MSI-X

i40e 0000:0b:00.1: PCI-Express: Speed 8.0GT/s Width x8

i40e 0000:0b:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA

i40e 0000:0b:00.0: eth8: already using mac address <REDACTED>

i40e 0000:0b:00.0: eth8: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

i40e 0000:0b:00.1: eth9: set new mac address <REDACTED>

i40e 0000:0b:00.1: eth9: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

 

# ethtool -i bond2

driver: bonding

version: 3.7.1

firmware-version: 2

bus-info:

supports-statistics: no

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: no

 

# cat /proc/net/bonding/bond2

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

 

 

Bonding Mode: fault-tolerance (active-backup)

Primary Slave: None

Currently Active Slave: eth8

MII Status: up

MII Polling Interval (ms): 100

Up Delay (ms): 0

Down Delay (ms): 0

 

 

Slave Interface: eth8

MII Status: up

Speed: 10000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr:  <REDACTED>

Slave queue ID: 0

 

 

Slave Interface: eth9

MII Status: up

Speed: 10000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr:  <REDACTED>

Slave queue ID: 0


Issue with X710 and XL710 on Dell PowerEdge server + RedHat 7.2


Hi Intel community,

We have a serious problem with Intel cards (X710 and XL710).

The Linux server (Dell PowerEdge 630) does not see them at all under RedHat 7.2. I don't see the interfaces under Linux ifconfig -a.

I have installed the latest drivers (ixgbe-5.1.3 and i40e-2.0.26) but did not succeed in updating the firmware (if that is the problem).

 

Here below the output of my server:

  • From "modinfo" I get the following output:

 

[root@TBOS ~]# modinfo i40e

filename: /lib/modules/3.10.0-327.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko

version: 2.0.26

license: GPL

description: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver

author: Intel Corporation, <e1000-devel@lists.sourceforge.net>

rhelversion: 7.2

srcversion: F49696A466EC36F89F8FE86

alias: pci:v00008086d0000158Bsv*sd*bc*sc*i*

alias: pci:v00008086d0000158Asv*sd*bc*sc*i*

alias: pci:v00008086d000037D3sv*sd*bc*sc*i*

alias: pci:v00008086d000037D2sv*sd*bc*sc*i*

alias: pci:v00008086d000037D1sv*sd*bc*sc*i*

alias: pci:v00008086d000037D0sv*sd*bc*sc*i*

alias: pci:v00008086d000037CFsv*sd*bc*sc*i*

alias: pci:v00008086d000037CEsv*sd*bc*sc*i*

alias: pci:v00008086d0000374Csv*sd*bc*sc*i*

alias: pci:v00008086d00001588sv*sd*bc*sc*i*

alias: pci:v00008086d00001587sv*sd*bc*sc*i*

alias: pci:v00008086d00001589sv*sd*bc*sc*i*

alias: pci:v00008086d00001586sv*sd*bc*sc*i*

alias: pci:v00008086d00001585sv*sd*bc*sc*i*

alias: pci:v00008086d00001584sv*sd*bc*sc*i*

alias: pci:v00008086d00001583sv*sd*bc*sc*i*

alias: pci:v00008086d00001581sv*sd*bc*sc*i*

alias: pci:v00008086d00001580sv*sd*bc*sc*i*

alias: pci:v00008086d00001574sv*sd*bc*sc*i*

alias: pci:v00008086d00001572sv*sd*bc*sc*i*

depends: ptp,vxlan

vermagic: 3.10.0-327.el7.x86_64 SMP mod_unload modversions

parm: debug:Debug level (0=none,...,16=all) (int)

[root@TBOS ~]#

 

  • From ifconfig -a, we don't see the ports at all!

 

  • When I want to update the firmware, I get
      • [root@TBOS Linux_x64]# ./nvmupdate64e

Intel(R) Ethernet NVM Update Tool

NVMUpdate version 1.28.19.4

Copyright (C) 2013 - 2016 Intel Corporation.

WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.

Inventory in progress. Please wait [*|........]

Num Description Ver. DevId S:B    Status

=== ======================================== ===== ===== ====== ===============

01) Intel(R) Ethernet Converged Network             1572 00:004 Access error

    Adapter X710

Tool execution completed with the following status: Device not found

Press any key to exit.

 

  • Dmesg output:

[root@TBOS ~]# dmesg| grep i40

[ 3.549403] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.0.26

[ 3.549405] i40e: Copyright(c) 2013 - 2017 Intel Corporation.

[ 3.578751] i40e 0000:04:00.0: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10

[ 3.578754] i40e 0000:04:00.0: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.

[ 3.822134] i40e 0000:04:00.0: MAC address: 3c:fd:fe:0c:cb:e0

[    3.835115] i40e 0000:04:00.0: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

[ 3.835118] i40e 0000:04:00.0: DCB init failed -53, disabled

[ 3.835166] i40e 0000:04:00.0: irq 91 for MSI/MSI-X

…..

[ 3.836118] i40e 0000:04:00.0: irq 148 for MSI/MSI-X

[ 4.050907] i40e 0000:04:00.0: Added LAN device PF0 bus=0x04 dev=0x00 func=0x00

[ 4.050912] i40e 0000:04:00.0: PCI-Express: Speed 8.0GT/s Width x8

[ 4.080877] i40e 0000:04:00.0: Features: PF-id[0] VFs: 32 VSIs: 34 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF VxLAN Geneve NVGRE PTP VEPA

[ 4.094861] i40e 0000:04:00.1: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10

[ 4.094864] i40e 0000:04:00.1: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.

[ 4.339689] i40e 0000:04:00.1: MAC address: 3c:fd:fe:0c:cb:e2

[    4.349612] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

[ 4.349615] i40e 0000:04:00.1: DCB init failed -53, disabled

[ 4.349684] i40e 0000:04:00.1: irq 150 for MSI/MSI-X

……

[ 4.350710] i40e 0000:04:00.1: irq 207 for MSI/MSI-X

[ 4.499464] i40e 0000:04:00.1: Added LAN device PF1 bus=0x04 dev=0x00 func=0x01

[ 4.499469] i40e 0000:04:00.1: PCI-Express: Speed 8.0GT/s Width x8

[ 4.529440] i40e 0000:04:00.1: Features: PF-id[1] VFs: 32 VSIs: 34 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF VxLAN Geneve NVGRE PTP VEPA

[ 4.543425] i40e 0000:04:00.2: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10

[ 4.543427] i40e 0000:04:00.2: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.

[ 4.989984] i40e 0000:04:00.2: MAC address: 3c:fd:fe:0c:cb:e4

[ 5.000011] i40e 0000:04:00.2: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

[ 5.000014] i40e 0000:04:00.2: DCB init failed -53, disabled

[ 5.000085] i40e 0000:04:00.2: irq 208 for MSI/MSI-X

…..

[ 5.001379] i40e 0000:04:00.2: irq 265 for MSI/MSI-X

[ 5.228749] i40e 0000:04:00.2: Added LAN device PF2 bus=0x04 dev=0x00 func=0x02

[ 5.228754] i40e 0000:04:00.2: PCI-Express: Speed 8.0GT/s Width x8

[ 5.263716] i40e 0000:04:00.2: Features: PF-id[2] VFs: 32 VSIs: 34 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF VxLAN Geneve NVGRE PTP VEPA

[ 5.277703] i40e 0000:04:00.3: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10

[ 5.277705] i40e 0000:04:00.3: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.

[ 5.566417] i40e 0000:04:00.3: MAC address: 3c:fd:fe:0c:cb:e6

[ 5.576408] i40e 0000:04:00.3: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

[ 5.576411] i40e 0000:04:00.3: DCB init failed -53, disabled

[ 5.576487] i40e 0000:04:00.3: irq 266 for MSI/MSI-X

……

[ 5.577839] i40e 0000:04:00.3: irq 323 for MSI/MSI-X

[ 5.725374] i40e 0000:04:00.3: Added LAN device PF3 bus=0x04 dev=0x00 func=0x03

[ 5.725383] i40e 0000:04:00.3: PCI-Express: Speed 8.0GT/s Width x8

[ 5.755339] i40e 0000:04:00.3: Features: PF-id[3] VFs: 32 VSIs: 34 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF VxLAN Geneve NVGRE PTP VEPA

[   67.004064] i40e 0000:04:00.0: removed PHC from p2p1

[   67.040316] i40e 0000:04:00.0: Deleted LAN device PF0 bus=0x04 dev=0x00 func=0x00

[   70.039865] i40e 0000:04:00.1: removed PHC from p2p2

[   70.070096] i40e 0000:04:00.1: Deleted LAN device PF1 bus=0x04 dev=0x00 func=0x01

[   73.240777] i40e 0000:04:00.2: removed PHC from p2p3

[   73.279308] i40e 0000:04:00.2: Deleted LAN device PF2 bus=0x04 dev=0x00 func=0x02

[   74.650315] i40e 0000:04:00.3: removed PHC from p2p4

[   74.690215] i40e 0000:04:00.3: Deleted LAN device PF3 bus=0x04 dev=0x00 func=0x03

[root@TBOS ~]#

 

  • From lspci | egrep -i "Network|Ethernet" we see the following output:
    • 04:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
    • 04:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
    • 04:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
    • 04:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
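Note that the dmesg above actually shows the ports coming up (as p2p1..p2p4) and then being deleted again about a minute later, which would explain why ifconfig -a shows nothing. One way to confirm from a saved kernel log that every PF that was added was later removed (helper name is mine; log path is illustrative):

```shell
# Compare "Added LAN device" vs "Deleted LAN device" lines in a saved
# kernel log; equal non-zero counts mean the driver bound the ports
# and then tore them down again.
pf_add_del() {
    printf 'added=%s deleted=%s\n' \
        "$(grep -c 'Added LAN device' "$1")" \
        "$(grep -c 'Deleted LAN device' "$1")"
}

# Usage:
#   dmesg > /tmp/dmesg.txt && pf_add_del /tmp/dmesg.txt
```

If the counts match, the question becomes what unloaded or unbound the driver after boot (a second module load, a rename service, or the NVM tool itself), rather than why the hardware is invisible.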

 

Please help ...

 

Thanks

X710-4 NVM Tool Reports "Update not found"


Hi, I have several X710-DA4 adapters that I purchased at different times, and for some of them I was able to grab the latest firmware (5.05) and upgrade them. nvmupdate64e and ethtool show this on the good ones:

 

driver: i40e

version: 1.6.42

firmware-version: 5.05 0x8000289d 1.1568.0

bus-info: 0000:85:00.2

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.

Inventory in progress. Please wait [.........*]

Num Description                               Ver. DevId S:B    Status

=== ======================================== ===== ===== ====== ===============

01) Intel(R) Ethernet Converged Network       5.05  1572 00:004 Up to date

    Adapter X710-4

02) Intel(R) I350 Gigabit Network Connection  1.99  1521 00:129 Update not

                                                                available

03) Intel(R) Ethernet Converged Network       5.05  1572 00:133 Up to date

    Adapter X710-4

 

On the other box, it will not let me upgrade:

 

driver: i40e

version: 2.0.23

firmware-version: 4.10 0x800011c5 0.0.0

bus-info: 0000:01:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.

Inventory in progress. Please wait [|.........]

 

Num Description                               Ver. DevId S:B    Status

=== ======================================== ===== ===== ====== ===============

01) Intel(R) Ethernet Converged Network       4.10  1572 00:001 Update not

    Adapter X710-4                                              available

02) Intel(R) I350 Gigabit Network Connection  1.99  1521 00:129 Update not

                                                                available

03) Intel(R) Ethernet Converged Network       4.10  1572 00:130 Update not

    Adapter X710-4                                              available

 

Does anyone know what's wrong?
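One thing worth checking: the update tool only offers an upgrade when the card's current NVM image is listed as replaceable in the config file shipped with the package. On recent NVM update packages that file is nvmupdate.cfg with REPLACES lines per device stanza; treat the exact field names as an assumption for your tool version. A quick grep, as a sketch (helper name is mine):

```shell
# Show the device and REPLACES lines from the NVM update config; if
# the card's current image ID is not listed under its device stanza,
# the tool reports "Update not available".
cfg_replaces() {
    grep -i -E 'DEVICE|REPLACES' "$1"
}

# Usage (run in the unpacked NVM update package directory):
#   cfg_replaces nvmupdate.cfg
```

A 4.10 image with firmware-version ending in 0.0.0 (as shown above) often indicates an OEM-specific NVM, which the generic retail package will refuse to touch; the vendor's own update package would be needed in that case.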

Is GT desktop adapter supported in Windows 10


Hi,

Just checking: does the GT Desktop Adapter support Windows 10? Thanks,

Re: Is GT desktop adapter supported in Windows 2012


Hi

 

How about Windows 2012?

 

thanks

Intel Gigabit CT & CT2 Desktop Adapter


What are the differences between them?

 

Intel Gigabit CT Desktop Adapter

and

Intel Gigabit CT2 Desktop Adapter

XL710 Stops Receiving Packets After a Particular PPPoE Packet


Hi everyone,

 

We are using XL710 hardware on a Linux 4.1.20 kernel.

 

We have an issue where the following behavior is noticed.

 

  • When we send simple loopback traffic to the XL710, it works fine.
  • When a specific PPPoE packet is sent from an external port to the XL710, we notice on Linux that the XL710 driver has no response; no interrupt is raised for this received packet.
  • After the above condition, the XL710 stops receiving any packets.

 

Here are the packet contents for a good packet and the failing packet.

 

I can send any number of good packets, and the XL710 is able to receive them.

After I send a single failing packet, the XL710 stops receiving packets. In fact, it does not receive even the good packets after this.

 

 

Good Packet:

 

Frame 2: 128 bytes on wire (1024 bits), 124 bytes captured (992 bits) on interface 0

    Interface id: 0 (\\.\pipe\view_capture_172-27-5-51_6_89_07182017_154149)

    Encapsulation type: Ethernet (1)

    Arrival Time: Jul 18, 2017 15:41:13.680293000 India Standard Time

    [Time shift for this packet: 0.000000000 seconds]

    Epoch Time: 1500372673.680293000 seconds

    [Time delta from previous captured frame: 0.496532000 seconds]

    [Time delta from previous displayed frame: 0.496532000 seconds]

    [Time since reference or first frame: 0.496532000 seconds]

    Frame Number: 2

    Frame Length: 128 bytes (1024 bits)

    Capture Length: 124 bytes (992 bits)

    [Frame is marked: False]

    [Frame is ignored: False]

    [Protocols in frame: eth:ethertype:mpls:pwethheuristic:pwethcw:eth:ethertype:vlan:ethertype:vlan:ethertype:pppoes:ppp:ipcp]

Ethernet II, Src: Performa_00:00:02 (00:10:94:00:00:02), Dst: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

    Destination: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

        Address: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Source: Performa_00:00:02 (00:10:94:00:00:02)

        Address: Performa_00:00:02 (00:10:94:00:00:02)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Type: MPLS label switched packet (0x8847)

MultiProtocol Label Switching Header, Label: 1 (Router Alert), Exp: 0, S: 1, TTL: 64

    0000 0000 0000 0000 0001 .... .... .... = MPLS Label: Router Alert (1)

    .... .... .... .... .... 000. .... .... = MPLS Experimental Bits: 0

    .... .... .... .... .... ...1 .... .... = MPLS Bottom Of Label Stack: 1

    .... .... .... .... .... .... 0100 0000 = MPLS TTL: 64

PW Ethernet Control Word

    Sequence Number: 0

Ethernet II, Src: Performa_00:00:03 (00:10:94:00:00:03), Dst: Superlan_00:00:01 (00:00:01:00:00:01)

    Destination: Superlan_00:00:01 (00:00:01:00:00:01)

        Address: Superlan_00:00:01 (00:00:01:00:00:01)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Source: Performa_00:00:03 (00:10:94:00:00:03)

        Address: Performa_00:00:03 (00:10:94:00:00:03)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Type: 802.1Q Virtual LAN (0x8100)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 100

    000. .... .... .... = Priority: Best Effort (default) (0)

    ...0 .... .... .... = CFI: Canonical (0)

    .... 0000 0110 0100 = ID: 100

    Type: 802.1Q Virtual LAN (0x8100)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 100

    000. .... .... .... = Priority: Best Effort (default) (0)

    ...0 .... .... .... = CFI: Canonical (0)

    .... 0000 0110 0100 = ID: 100

    Type: PPPoE Session (0x8864)

PPP-over-Ethernet Session

    0001 .... = Version: 1

    .... 0001 = Type: 1

    Code: Session Data (0x00)

    Session ID: 0x0001

    Payload Length: 74

Point-to-Point Protocol

    Protocol: Internet Protocol Control Protocol (0x8021)

PPP IP Control Protocol

    Code: Configuration Request (1)

    Identifier: 2 (0x02)

    Length: 10

    Options: (6 bytes), IP address

        IP address: 0.0.0.0

            Type: IP address (3)

            Length: 6

            IP Address: 0.0.0.0

 

Failing Packet:

 

Frame 1: 128 bytes on wire (1024 bits), 124 bytes captured (992 bits) on interface 0

    Interface id: 0 (\\.\pipe\view_capture_172-27-5-51_6_89_07182017_154149)

    Encapsulation type: Ethernet (1)

    Arrival Time: Jul 18, 2017 15:41:13.183761000 India Standard Time

    [Time shift for this packet: 0.000000000 seconds]

    Epoch Time: 1500372673.183761000 seconds

    [Time delta from previous captured frame: 0.000000000 seconds]

    [Time delta from previous displayed frame: 0.000000000 seconds]

    [Time since reference or first frame: 0.000000000 seconds]

    Frame Number: 1

    Frame Length: 128 bytes (1024 bits)

    Capture Length: 124 bytes (992 bits)

    [Frame is marked: False]

    [Frame is ignored: False]

    [Protocols in frame: eth:ethertype:mpls:pwethheuristic:pwethcw:eth:ethertype:vlan:ethertype:vlan:ethertype:pppoes:ppp:ipcp]

Ethernet II, Src: Performa_00:00:02 (00:10:94:00:00:02), Dst: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

    Destination: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

        Address: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Source: Performa_00:00:02 (00:10:94:00:00:02)

        Address: Performa_00:00:02 (00:10:94:00:00:02)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Type: MPLS label switched packet (0x8847)

MultiProtocol Label Switching Header, Label: 1 (Router Alert), Exp: 0, S: 1, TTL: 64

    0000 0000 0000 0000 0001 .... .... .... = MPLS Label: Router Alert (1)

    .... .... .... .... .... 000. .... .... = MPLS Experimental Bits: 0

    .... .... .... .... .... ...1 .... .... = MPLS Bottom Of Label Stack: 1

    .... .... .... .... .... .... 0100 0000 = MPLS TTL: 64

PW Ethernet Control Word

    Sequence Number: 0

Ethernet II, Src: Performa_00:00:03 (00:10:94:00:00:03), Dst: Superlan_00:00:01 (00:00:01:00:00:01)

    Destination: Superlan_00:00:01 (00:00:01:00:00:01)

        Address: Superlan_00:00:01 (00:00:01:00:00:01)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Source: Performa_00:00:03 (00:10:94:00:00:03)

        Address: Performa_00:00:03 (00:10:94:00:00:03)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Type: 802.1Q Virtual LAN (0x8100)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 100

    000. .... .... .... = Priority: Best Effort (default) (0)

    ...0 .... .... .... = CFI: Canonical (0)

    .... 0000 0110 0100 = ID: 100

    Type: 802.1Q Virtual LAN (0x8100)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 100

    000. .... .... .... = Priority: Best Effort (default) (0)

    ...0 .... .... .... = CFI: Canonical (0)

    .... 0000 0110 0100 = ID: 100

    Type: PPPoE Session (0x8864)

PPP-over-Ethernet Session

    0001 .... = Version: 1

    .... 0001 = Type: 1

    Code: Session Data (0x00)

    Session ID: 0x0001

    Payload Length: 74

Point-to-Point Protocol

    Protocol: Internet Protocol Control Protocol (0x8021)

PPP IP Control Protocol

    Code: Configuration Request (1)

    Identifier: 2 (0x02)

    Length: 10

    Options: (6 bytes), IP address

        IP address: 20.6.0.23

            Type: IP address (3)

            Length: 6

            IP Address: 20.6.0.23

 

Regards,

Sadashivan

intel pro/1000 pt bricked?


I was trying to install an Intel PRO/1000 PT dual-port card into an Ubuntu Linux server computer and ran into the "NVM Checksum is Invalid" message.  I tried to run the BootUtil utility on the card, but now only ports 1 & 2 show and the MAC addresses are gone.  Is this card salvageable, or is it bricked?  I ordered another single-port card, but does anyone know the proper way to handle this issue?

 

Brian


i350 initial bring up


I'm on a project developing with the I350.

 

Is it possible to access an empty EEPROM through PCIe?

 

I am wondering whether a factory-default I350 can bring up the PCIe link with an empty EEPROM.

Ethtool hooks in Intel X710 i40e Driver. Contact for Driver Expert please?


Dear Intel Ethernet Driver Experts,

 

The following is the good news.

 

In a nutshell, in the field there was a problem with MTU that was blocking progress.

This is with respect to the X710 and the i40e driver.

 

Running ethtool -r <interface> cleared the MTU block.

I wanted to learn which hooks Intel provides in the X710 i40e driver for ethtool -r.
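For reference, ethtool -r issues the ETHTOOL_NWAY_RST request, which the kernel dispatches to the driver's .nway_reset callback in its ethtool_ops table; for i40e that callback lives in i40e_ethtool.c (the exact function name varies by driver version, so take this as a pointer rather than gospel). To locate the hook in a kernel or out-of-tree driver source, a sketch (helper name is mine):

```shell
# Find the ethtool -r hook (.nway_reset) in a driver source file.
# The usage path below is the in-kernel location of the i40e driver.
find_nway_reset() {
    grep -n 'nway_reset' "$1"
}

# Usage (against a kernel source tree):
#   find_nway_reset drivers/net/ethernet/intel/i40e/i40e_ethtool.c
```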

 

 

A contact for a driver expert would be helpful.

 

Who is the expert contact please?

 

Much appreciated.

X540-T2 issue on Windows 2012 R2 on Dell R610


Dear all,

 

I have a Intel Nic 10 Gb X540-T2 on a Dell R610 with Windows 2012 R2.

It seems that it does not work well. I downloaded the latest drivers (22.4.0.1, dated 16/06/2017) from the Intel website, but in Windows Device Manager I see another driver version and date.

 

The attached screenshot shows this.

Why? How can I resolve this?

 

Thanks in advance

 

 

 

Mario

 

WDM_Intel_X540-T2.jpeg

Intel PRO 1000 CT Desktop adapter - WOL in Windows 10 does not work


Dear all,

 

the Intel PRO/1000 CT Desktop Adapter (EXPI9301CT) supports WOL.

But it does not work when you try to use it on a Windows 10 machine.

I think the reason is the driver, because Intel does not provide a driver for Windows 10.

The inbox driver is used instead.

The Intel driver has its own tab for power management, where the WOL options can be set.

The Windows inbox driver only has the default Windows settings.

 

Does anybody have an idea how to use WOL with the PRO/1000 CT adapter?

 

Thanks.

i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts


Hi!

 

There is a dual E5-2690v3 box based on  Supermicro SYS-2028GR-TR/X10DRG-H, BIOS 1.0c, running Ubuntu 16.04.1, w. all current updates.

It has an XL710-QDA2 card, fw 5.0.40043 api 1.5 nvm 5.04 0x80002537, driver 1.5.25 (the stock Ubuntu i40e driver 1.4.25 resulted in a crash), that is planned to be used as an iSCSI initiator endpoint. But there seems to be a problem: the log file fills up with "RX driver issue detected" messages, and occasionally the iSCSI link resets as the ping times out. This is a critical error, as the mounted device becomes unusable!

 

So, Question 1: Is there something that can be done to fix the iSCSI behaviour of the XL710 card? When testing the card with iperf (2 concurrent sessions, the other end had a 10G NIC), there were no problems. The problems started when the iSCSI connection was established.

 

Question 2: Is there a way to force the card to work in PCI Express 2.0 mode? The server downgraded the card once after several previous failures and then it became surprisingly stable. I cannot find a way to make it persist though.

 

Some excerpts from log files (there are also occasional TX driver issues, but much less frequently than RX problems):

 

 

[  263.116057] EXT4-fs (sdk): mounted filesystem with ordered data mode. Opts: (null)

[  321.030246] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[  332.512601] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

..lots of the above messages...

[  481.001787] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[  487.183237] NOHZ: local_softirq_pending 08

[  491.151322] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

..lots of the above messages...

[ 1181.099046] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1199.852665]  connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295189627, last ping 4295190878, now 4295192132

[ 1199.852694]  connection1:0: detected conn error (1022)

[ 1320.412312]  session1: session recovery timed out after 120 secs

[ 1320.412325] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412331] sd 10:0:0:0: [sdk] killing request

[ 1320.412347] sd 10:0:0:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK

[ 1320.412352] sd 10:0:0:0: [sdk] CDB: Write Same(10) 41 00 6b 40 69 00 00 08 00 00

[ 1320.412356] blk_update_request: I/O error, dev sdk, sector 1799383296

[ 1320.412411] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412423] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412428] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412433] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412438] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412442] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412446] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412451] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412455] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412460] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412464] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412469] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412473] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412477] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412482] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412486] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412555] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412566] Aborting journal on device sdk-8.

[ 1320.412571] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412576] JBD2: Error -5 detected when updating journal superblock for sdk-8.

[ 1332.831851] sd 10:0:0:0: rejecting I/O to offline device

[ 1332.831864] EXT4-fs error (device sdk): ext4_journal_check_start:56: Detected aborted journal

[ 1332.831869] EXT4-fs (sdk): Remounting filesystem read-only

[ 1332.831873] EXT4-fs (sdk): previous I/O error to superblock detected

 

Unloading the kernel module and modprobe-ing it again:

 

[ 1380.970732] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 1.5.25

[ 1380.970737] i40e: Copyright(c) 2013 - 2016 Intel Corporation.

[ 1380.987563] i40e 0000:81:00.0: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

[ 1381.127289] i40e 0000:81:00.0: MAC address: 3c:xx:xx:xx:xx:xx

[ 1381.246815] i40e 0000:81:00.0 p5p1: renamed from eth0

[ 1381.358723] i40e 0000:81:00.0 p5p1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

[ 1381.416135] i40e 0000:81:00.0: PCI-Express: Speed 8.0GT/s Width x8

[ 1381.454729] i40e 0000:81:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA

[ 1381.471584] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

[ 1381.605866] i40e 0000:81:00.1: MAC address: 3c:xx:xx:xx:xx:xy

[ 1381.712287] i40e 0000:81:00.1 p5p2: renamed from eth0

[ 1381.751417] IPv6: ADDRCONF(NETDEV_UP): p5p2: link is not ready

[ 1381.810607] IPv6: ADDRCONF(NETDEV_UP): p5p2: link is not ready

[ 1381.820095] i40e 0000:81:00.1: PCI-Express: Speed 8.0GT/s Width x8

[ 1381.826141] i40e 0000:81:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA

[ 1647.123056] EXT4-fs (sdk): recovery complete

[ 1647.123414] EXT4-fs (sdk): mounted filesystem with ordered data mode. Opts: (null)

[ 1668.179234] NOHZ: local_softirq_pending 08

[ 1673.994586] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1676.871805] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1692.833097] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1735.179086] NOHZ: local_softirq_pending 08

[ 1767.357902] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1803.828762] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

 

After several failures, the card loaded in PCI-Express 2.0 mode. It became stable then:

 

Jan  1 18:44:35  systemd[1]: Started ifup for p5p1.

Jan  1 18:44:35  systemd[1]: Found device Ethernet Controller XL710 for 40GbE QSFP+ (Ethernet Converged Network Adapter XL710-Q2).

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.5028] devices added (path: /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net/p5p1, iface: p5p1)

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.5029] locking wired connection setting

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.5029] get unmanaged devices count: 3

Jan  1 18:44:35  avahi-daemon[1741]: Joining mDNS multicast group on interface p5p1.IPv4 with address xx.xx.xx.xx.

Jan  1 18:44:35  avahi-daemon[1741]: New relevant interface p5p1.IPv4 for mDNS.

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.5577] device (p5p1): link connected

Jan  1 18:44:35  avahi-daemon[1741]: Registering new address record for xx.xx.xx.xx on p5p1.IPv4.

Jan  1 18:44:35  kernel: [11572.541797] i40e 0000:81:00.0 p5p1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

Jan  1 18:44:35  kernel: [11572.579303] i40e 0000:81:00.0: PCI-Express: Speed 5.0GT/s Width x8

Jan  1 18:44:35  kernel: [11572.579309] i40e 0000:81:00.0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.

Jan  1 18:44:35  kernel: [11572.579312] i40e 0000:81:00.0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.

Jan  1 18:44:35  kernel: [11572.617328] i40e 0000:81:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 48 RX: 1BUF RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA

Jan  1 18:44:35  kernel: [11572.635294] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

Jan  1 18:44:35  kernel: [11572.917343] i40e 0000:81:00.1: MAC address: 3c:xx:xx:xx:xx:xx

Jan  1 18:44:35  systemd[1]: Reloading OpenBSD Secure Shell server.

Jan  1 18:44:35  systemd[1]: Reloaded OpenBSD Secure Shell server.

Jan  1 18:44:35  kernel: [11572.921344] i40e 0000:81:00.1: SAN MAC: 3c:xx:xx:xx:xx:xx

Jan  1 18:44:35  NetworkManager[1911]: <warn>  [1483289075.9656] device (eth0): failed to find device 14 'eth0' with udev

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.9671] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/13)

Jan  1 18:44:35  kernel: [11572.976596] i40e 0000:81:00.1 p5p2: renamed from eth0

 

Kind regards,

 

jpe

What is VC_CRT_x64 version 1.02.0000?


What installs it, and is it related to Intel Network Connections 21.1.30.0?  We are very security conscious and want to remove this software if it has no purpose. I am wondering if it was installed by an older version of Intel Network Connections and is no longer needed. Is this just a registry entry that can be removed?

X540-AT2 speed problem


Hi!

I have an Intel S2600WT motherboard with two X540-AT2 Ethernet adapters. With auto-negotiation the link speed is only 100 Mbps; when I force 1000 Mbps on the switch or in the Ethernet adapter properties, the link is lost. My OS is Windows Server 2012 R2, and my switch is a Cisco 6509 running IOS version 15.1(2)SY9. I also tried a Cisco 2960G with IOS version 15.0(2)SE8, with the same result.


Flow Director configuration not working


Hi:

I am using the i40e_4.4.0 driver for the XL710 network card. Currently, I am trying to loop back the connections.

For this purpose, I had to set the two ports to promiscuous mode. Then, using my application, I created custom UDP packets.

 

For the Rx Queue setting, I have set the flow director as:

 

ethtool -N ens1f0 flow-type udp4 dst-port 319 action 3 loc

ethtool -N ens1f1 flow-type udp4 dst-port 319 action 3 loc

 

Essentially, I want all packets with this dst-port to be forwarded to queue 3. I can also see that the rule has been inserted.

 

But, as seen in the attached picture, the flow director is not able to match the incoming packet. Thus, it does not forward the incoming packet to my desired queue.

 

proc-interrupts.png

 

Is this error due to the promiscuous mode that I had set on the NIC ports?

 

I am not sure what's creating this issue. Also, I have verified that the incoming packet is destined for Port 319.

 

I will be able to provide other details, if needed !

 

I would appreciate any help.

 

Thanks !

Determine transceiver module serial number from i40e (XL710) pf driver ?


How can the transceiver module's serial number be extracted in the PF driver? And is it possible to extract the transceiver serial number in the VF driver?

IES api install problem & HNI Driver


Hi all,

 

I am setting up the test environment for the FM10000. I downloaded the IES (Intel Ethernet Switch Software) API and tried to install it on Ubuntu 16.04 LTS. I followed the guidelines in the documents. Generally, what I do is cd to ies/src and type the command

" sudo make install PLATFORM=rubyRapids REF_PLATFORM=libertyTraili INSTALL_DIRECTORY=/home/brayn/Documents".

When I go back and check my folder (/home/brayn/Documents), there is nothing installed in it, so I assume the install failed? Below is the message shown on the terminal.

 

brayn@brayn-Ultra-27:~/Documents/ies/src$ sudo make install PLATFORM=rubyRapids REF_PLATFORM=libertyTraili INSTALL_DIRECTORY=/home/brayn/Documents

make[1]: Entering directory '/home/brayn/Documents/ies/src'

/bin/mkdir -p '/usr/local/lib'

/bin/bash ../libtool   --mode=install /usr/bin/install -c   libFocalpointSDK.la libLTStdPlatform.la '/usr/local/lib'

libtool: install: /usr/bin/install -c .libs/libFocalpointSDK-4.1.3_0378_00314560.so /usr/local/lib/libFocalpointSDK-4.1.3_0378_00314560.so

libtool: install: (cd /usr/local/lib && { ln -s -f libFocalpointSDK-4.1.3_0378_00314560.so libFocalpointSDK.so || { rm -f libFocalpointSDK.so && ln -s libFocalpointSDK-4.1.3_0378_00314560.so libFocalpointSDK.so; }; })

libtool: install: /usr/bin/install -c .libs/libFocalpointSDK.lai /usr/local/lib/libFocalpointSDK.la

libtool: install: /usr/bin/install -c .libs/libLTStdPlatform-4.1.3_0378_00314560.so /usr/local/lib/libLTStdPlatform-4.1.3_0378_00314560.so

libtool: install: (cd /usr/local/lib && { ln -s -f libLTStdPlatform-4.1.3_0378_00314560.so libLTStdPlatform.so || { rm -f libLTStdPlatform.so && ln -s libLTStdPlatform-4.1.3_0378_00314560.so libLTStdPlatform.so; }; })

libtool: install: /usr/bin/install -c .libs/libLTStdPlatform.lai /usr/local/lib/libLTStdPlatform.la

libtool: install: /usr/bin/install -c .libs/libFocalpointSDK.a /usr/local/lib/libFocalpointSDK.a

libtool: install: chmod 644 /usr/local/lib/libFocalpointSDK.a

libtool: install: ranlib /usr/local/lib/libFocalpointSDK.a

libtool: install: /usr/bin/install -c .libs/libLTStdPlatform.a /usr/local/lib/libLTStdPlatform.a

libtool: install: chmod 644 /usr/local/lib/libLTStdPlatform.a

libtool: install: ranlib /usr/local/lib/libLTStdPlatform.a

libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/sbin" ldconfig -n /usr/local/lib

----------------------------------------------------------------------

Libraries have been installed in:

   /usr/local/lib

 

 

If you ever happen to want to link against installed libraries

in a given directory, LIBDIR, you must either use libtool, and

specify the full pathname of the library, or use the `-LLIBDIR'

flag during linking and do at least one of the following:

   - add LIBDIR to the `LD_LIBRARY_PATH' environment variable

     during execution

   - add LIBDIR to the `LD_RUN_PATH' environment variable

     during linking

   - use the `-Wl,-rpath -Wl,LIBDIR' linker flag

   - have your system administrator add LIBDIR to `/etc/ld.so.conf'

 

 

See any operating system documentation about shared libraries for

more information, such as the ld(1) and ld.so(8) manual pages.

----------------------------------------------------------------------

make[1]: Nothing to be done for 'install-data-am'.

make[1]: Leaving directory '/home/brayn/Documents/ies/src'

 

Can anyone with experience help solve this problem?

 

Also, I can't find the HNI (Host Network Interface) driver mentioned in the guideline. Where should I download it?  Thanks!

Issue with setting smp_affinity on ixgbe cards


Hi,

I am using a Dell PowerEdge R730 with dual Xeons, 22 cores each, and 6 ixgbe-compatible cards, on which I am running Linux with ixgbe driver version 4.4.0-k, on kernel versions 4.7.10 and 4.9.6.
I am loading the ixgbe modules at boot time, bringing up the interfaces, and setting smp_affinity for the cards using the set_irq_affinity script, so all the RxTx IRQs are distributed across the available cores.
The problem is that, randomly but quite often, the smp_affinity setting fails, and I need to re-run the script manually one or more times for the desired settings to be applied. On several occasions the settings were not applied at all, and it took several reboots for the script to start working again.
The problem appears not only at random times but also on random NIC controllers, so I am excluding failed hardware, since I have also swapped the NICs.

I added some debug messages to track the affinity setting in the Linux kernel, and it turns out that most of the time when the setting fails, the affinity-setting function irq_do_set_affinity returns EBUSY, but sometimes it returns ENOSPC.

More investigation showed that whenever EBUSY was returned, the problem could be overcome by re-running the script. But if the error returned was ENOSPC, it took several reboots for the problem to disappear.

In order to provide some more details on the system, I am attaching two text files with the output of modinfo for ixgbe and of lspci on the machine.

Difference in DPDK and Native IXGBE driver support for 82599 NIC


Hello All,

 

We have been trying to make unicast promiscuous mode work on RHEL 7.3 with the latest native ixgbe driver (ixgbe-5.1.3), but it seems that unicast promiscuous mode is not enabled for 82599-series NICs in the native driver.

I can see an explicit check in the ixgbe_sriov.c code where, before enabling promiscuous mode, it checks whether the NIC is 82599EB or older and, if so, returns.

 

Adding snippet below:

        case IXGBEVF_XCAST_MODE_PROMISC:

                if (hw->mac.type <= ixgbe_mac_82599EB)

                        return -EOPNOTSUPP;

 

 

                fctrl = IXGBE_READ_REG(hw, IXGBE_FCTRL);

                if (!(fctrl & IXGBE_FCTRL_UPE)) {

                        /* VF promisc requires PF in promisc */

                        e_warn(drv,

                               "Enabling VF promisc requires PF in promisc\n");

                        return -EPERM;

                }

 

 

                disable = 0;

                enable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |

                         IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;

                break;

 

But when I look at the corresponding code in DPDK 16.11, I can see that support has been added for the 82599 NIC family. The feature seems to have been implemented using the IXGBE_VMOLR_ROPE flag.

 

Relevant snippet from DPDK code:

uint32_t

ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)

{

        uint32_t new_val = orig_val;

 

        if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)

                new_val |= IXGBE_VMOLR_AUPE;

        if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)

                new_val |= IXGBE_VMOLR_ROMPE;

        if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)

                new_val |= IXGBE_VMOLR_ROPE;

        if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)

                new_val |= IXGBE_VMOLR_BAM;

        if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)

                new_val |= IXGBE_VMOLR_MPE;

 

        return new_val;

}

 

 

So, can you please let us know why there is such a difference between supported NICs, and whether similar functionality can be ported to the native ixgbe driver?

 

Other setup details

 

Kernel version

# uname -r

3.10.0-514.el7.x86_64

 

LSPCI output

# lspci -nn | grep Ether | grep 82599

81:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)

81:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)

81:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)

 

# ethtool -i eth2

driver: ixgbe

version: 5.1.3

firmware-version: 0x61bd0001

expansion-rom-version:

bus-info: 0000:81:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

 

Regards

Pratik
